Easy Start Into Kubernetes With EKS Auto Mode and Eksctl

As a Solutions Architect, my area of interest has always been Automation and how it can help developers be more effective and make their lives easier. Many developers are embracing Kubernetes as an environment for their applications. However, setting up a vanilla Kubernetes cluster in the cloud might be quite challenging for a developer who is not keen on doing it the “hard way”. Local setup, with various tools, is easier but takes resources from a developer’s computer. Wouldn’t it be great if we had a simple way to spin up and tear down a Kubernetes cluster in the cloud in a matter of minutes and immediately use it? Now, with Amazon EKS Auto Mode, it is possible! Let’s find out how.

Overview of a solution

Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in both the AWS Cloud and on-premises data centers. There are several methods of deploying an EKS cluster; you can find some of them in the documentation or in the launch blog. Today I will use eksctl, a tool that simplifies the deployment of an EKS cluster.

Prerequisites

Before you start, you need to ensure that you have:

- An AWS account and credentials with permissions to create an EKS cluster and its supporting resources
- The AWS Command Line Interface (AWS CLI) installed and configured
- eksctl installed
- kubectl installed
- (Optional) eks-node-viewer installed, to visualize how cluster nodes are used
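If any of these tools are missing, one way to install them on macOS is with Homebrew. Treat this as a sketch: package names and taps can differ by platform and change over time, so check each tool's own installation instructions.

brew install awscli eksctl kubectl      # AWS CLI, eksctl, and kubectl
brew install aws/tap/eks-node-viewer    # eks-node-viewer from the AWS Homebrew tap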

Walkthrough

Creating an EKS cluster

  1. Configure AWS credentials.
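Any of the standard AWS credential mechanisms will do. As a minimal sketch, you can set up a default profile interactively with the AWS CLI and export the Region you want the cluster in (the Region below is only an example; the variable is reused later with aws eks update-kubeconfig):

aws configure
export AWS_REGION=us-east-1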

  2. Create an EKS Auto Mode cluster with minimal required parameters:

eksctl create cluster --name=easy-cluster --enable-auto-mode
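If you prefer a declarative setup, eksctl can also create the cluster from a config file. Below is a minimal sketch of an equivalent ClusterConfig; the Region is an example and the autoModeConfig schema may evolve, so double-check the eksctl documentation before relying on it:

cat <<EOF > easy-cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: easy-cluster
  region: us-east-1
autoModeConfig:
  # Let EKS Auto Mode manage compute, storage, and load balancing
  enabled: true
EOF
eksctl create cluster -f easy-cluster.yaml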
  3. Wait up to 15 minutes, and here it is: a brand-new Kubernetes cluster. Task accomplished; we can start using the cluster right away. Could it be any easier?
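While you wait, you can check the provisioning status from another terminal; the cluster is ready once the reported status becomes ACTIVE:

aws eks describe-cluster --name easy-cluster --query cluster.status --output text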

But is it fully functional? Let’s deploy a sample web application.

Deploying a sample application

  1. Get access to the EKS cluster:
aws eks --region $AWS_REGION update-kubeconfig --name easy-cluster
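To confirm that kubectl now points at the new cluster, you can inspect the active context as a quick sanity check (the exact context name depends on your account and Region):

kubectl config current-context
kubectl cluster-info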
  2. Get the EKS cluster nodes:
kubectl get nodes
eks-node-viewer
  3. You should receive “No resources found”. So, we not only get an easy setup, but also don’t waste resources on EC2 nodes until we deploy a workload.

eks-node-viewer-initial

  4. Deploy a sample load-balanced workload to the EKS Auto Mode cluster using these instructions or the code snippet below:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
EOF

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "0.5"
EOF

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
EOF

cat <<EOF | kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  namespace: game-2048
  labels:
    app.kubernetes.io/name: LoadBalancerController
  name: alb
spec:
  controller: eks.amazonaws.com/alb
EOF

cat <<EOF | kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-2048
            port:
              number: 80
EOF
  5. Check that all resources have been deployed properly. It might take 3-5 minutes to deploy the Application Load Balancer (ALB):
kubectl get nodes -o wide
kubectl get pods -n game-2048
kubectl get svc -n game-2048
kubectl get ingress -n game-2048
Then wait until the ALB starts serving traffic and print the application URL:
URL=http://$(kubectl get ingress -n game-2048 -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
while [[ $(curl -s -o /dev/null -w "%{http_code}" $URL/) != "200" ]]; do echo "Service not yet available ..." && sleep 5; done
echo $URL
  6. Now you have an up-and-running EKS cluster with your workloads. You can also scale the number of replicas in the deployment to 10 and back to zero to see how EKS cluster nodes are created and destroyed.

eks-node-viewer-5

kubectl scale deployment deployment-2048 -n game-2048 --replicas=10

eks-node-viewer-10

kubectl scale deployment deployment-2048 -n game-2048 --replicas=0

eks-node-viewer-0
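If you want to watch nodes appear and disappear while scaling, you can keep a watch running in a second terminal (eks-node-viewer gives a richer view of the same information):

kubectl get nodes -w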

Cleaning up

  1. Delete deployed resources:
kubectl delete namespace game-2048
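Namespace deletion is asynchronous. If you want to block until the namespace (and the ALB created for the ingress) is gone before deleting the cluster, one option is to wait for it explicitly:

kubectl wait --for=delete namespace/game-2048 --timeout=300s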
  2. Delete the EKS cluster:
eksctl delete cluster --name=easy-cluster

Make sure that the CloudFormation stack for the EKS cluster has been deleted completely.
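One way to verify this is to list the stacks related to the cluster and confirm that nothing is left behind or stuck in a failed delete; the name filter below assumes the default eksctl stack naming for a cluster called easy-cluster:

aws cloudformation list-stacks \
  --stack-status-filter CREATE_COMPLETE DELETE_IN_PROGRESS DELETE_FAILED \
  --query "StackSummaries[?contains(StackName, 'easy-cluster')].[StackName,StackStatus]" \
  --output table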

Conclusion

In this blog post, I demonstrated how you can deploy an Amazon EKS cluster in Auto Mode using eksctl. This setup not only gives you the ability to start fast, but also delivers a ready-to-use EKS cluster.

Developers can create an EKS cluster when they need it, with minimal effort, and delete it when it’s no longer needed, reducing expenses.

Let’s go build!
