By David Dixon
July 31, 2024
Recently, I wanted to look at turning my website into a cloud native application: one designed to meet demands like auto-scaling, self-healing, rolling updates, and rollbacks.
I also wanted to use it as an opportunity to explore what Amazon Web Services (AWS) offers to achieve this.
The website I am working with serves static content; there isn’t much to it besides some HTML and CSS.
Since we’re currently running on an EC2 instance, the first step is to containerize the site.
The image below gives an overview of the process: transforming our basic HTML site into a container image and shipping it to an Amazon ECR repository. Let’s jump in!
To set up the HTML for our site, I have some dummy code below. You can use this, or feel free to use your own:
<!DOCTYPE html>
<html>
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>
#!/bin/bash
sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo service docker start
sudo systemctl enable docker
sudo usermod -a -G docker ec2-user
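One caveat worth noting: the `usermod` group change only takes effect on a new login session. After running the script above, either reconnect over SSH or refresh the group in the current shell:

```shell
# Apply the new docker group membership without logging out
# (newgrp starts a subshell with the updated group)
newgrp docker

# docker commands should now work without sudo
docker ps
```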
#!/bin/bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo yum update -y
sudo yum install unzip -y
unzip awscliv2.zip
sudo ./aws/install
ssh -i your.pem ec2-user@EC2-IP
docker --version
aws --version
aws configure
vim index.html
"PASTE HTML in the index.html file"
:wq (write and exit vim)
vim Dockerfile
#Nginx base image
FROM nginx:alpine
#Copy the website files to the nginx html directory
COPY . /usr/share/nginx/html
#Expose port 80 (the default nginx:alpine config serves plain HTTP)
EXPOSE 80
#Start nginx when the container launches
CMD ["nginx", "-g", "daemon off;"]
docker build -t myimagename .
docker image ls
docker run -d -p 80:80 myimagename
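With the image built and running locally, it needs to be tagged and pushed to ECR. A sketch of the commands, assuming a private ECR repository named `my_nginx` in `us-east-1` (substitute your own account ID and region):

```shell
# Create the ECR repository (one-time setup; the name is an assumption)
aws ecr create-repository --repository-name my_nginx --region us-east-1

# Authenticate the local Docker client to your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the full repository URI, then push it
docker tag myimagename <account-id>.dkr.ecr.us-east-1.amazonaws.com/my_nginx:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/my_nginx:latest
```

The repository URI (shown in the ECR console after creation) is what we’ll reference later in the Kubernetes deployment manifest.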
Let’s take a look at an overview of what we want to achieve in the next section. At this point, we have our image tagged and pushed to ECR.
We’re going to set up our client machine to interact with the control plane of the EKS cluster we will create.
The cluster will live in a custom VPC and run two separate nodes behind an Elastic Load Balancer.
We will use the EC2 machine created in Part 1 to interact with the cluster.
A breakdown of the steps to accomplish this is defined below.
We should be able to reuse some of the steps from part 1 (IAM, client machine, SSH), but they are important nonetheless.
Ok, let’s get started on part 2. Remember the IAM you created with AdministratorAccess in the AWS IAM console? We are going to be using this for our next steps as well.
We’ll also just reuse our same EC2 instance from part 1 to connect to our EKS Cluster.
1. IAM (again?)
If you’ve already done this on the EC2 instance from part 1, then don’t sweat it; jump to step 2. If not, navigate to the security credentials interface.
Create a user with Administrator Access.
Once this is done, save the security credentials in a secure location.
You’ll need the Access Key and Secret Key moving forward when we perform the AWS CLI config.
2. EC2 Instance (Client)
Let’s set up the client machine. We will use it to manage our K8s cluster.
You should already have an EC2 instance from part 1. If not, run the following on your Amazon Linux EC2 instance:
#!/bin/bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo yum update -y
sudo yum install unzip -y
unzip awscliv2.zip
sudo ./aws/install
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
aws --version
eksctl --version
kubectl version --client
aws configure
eksctl create cluster --name=mycluster \
--region=us-east-1 \
--zones=us-east-1a,us-east-1b \
--nodegroup-name mynodegroup \
--node-type=t2.micro \
--nodes=2 \
--nodes-min=2 \
--nodes-max=4 \
--managed
aws eks update-kubeconfig --region us-east-1 --name mycluster
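Once the kubeconfig is updated, it’s worth confirming that kubectl can actually reach the new cluster before deploying anything:

```shell
# List the worker nodes; both should report STATUS "Ready"
# within a few minutes of the cluster finishing creation
kubectl get nodes

# Show the control plane endpoint kubectl is talking to
kubectl cluster-info
```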
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginxdeply
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginxpod
  template:
    metadata:
      labels:
        app: mynginxpod
    spec:
      containers:
      - name: mynginx
        image: {image URI}  # e.g. public.ecr.aws/a1b2c3d4/my_nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mynginxserv
spec:
  selector:
    app: mynginxpod
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer
kubectl apply -f deploy.yaml
kubectl get deploy
kubectl get rs
kubectl get pod
kubectl get svc
If all jobs completed without error, get the load balancer’s DNS name and paste it in your browser (also check your security groups).
You should be able to now view your application!
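You can also verify from the terminal. A quick sketch that pulls the load balancer’s DNS name straight from the service (the service name matches the manifest above):

```shell
# Extract the ELB hostname from the service's status field
ELB=$(kubectl get svc mynginxserv \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Request the page; this should return the HTML from our image
curl "http://$ELB"
```

Note that the ELB can take a couple of minutes to become resolvable after the service is created, so an initial failure here isn’t necessarily a problem.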
Great job! We took a quick look at how we can leverage AWS ECR and EKS, along with containers, to build a scalable, cloud native website.
This approach can be leveraged for webapps, microservices, and other applications that need all the features that a cloud native design provides.
Afterwards if you want to remove your cluster, you can run the following eksctl command:
eksctl delete cluster --name mycluster --region us-east-1