AWS EKS

EKS Overview

Control Plane

EKS is Amazon Elastic Kubernetes Service, a managed Kubernetes service in the cloud (although AWS also offers on-premises options). Managed EKS means AWS manages the control plane of the K8s cluster: it provisions and maintains the master nodes and installs all the control plane processes (API Server, Scheduler, Controller Manager and etcd). AWS also helps with scaling and backups. All of this runs in an AWS-managed VPC.

Security and best practices for the control plane are managed by AWS. Integration with other AWS services (S3, IAM, Secrets Manager, Elastic Load Balancing) is easy.

Data Plane

The worker nodes live in the data plane, in your own VPC, and are the only nodes you need to manage yourself.

We can set up worker nodes in different ways (see the eksctl sketch after this list):

  • Self-managed nodes
    • Provision the EC2 instances you want to use as worker nodes manually
    • Install the node processes yourself: kubelet, kube-proxy and a container runtime
    • You will need to update/patch the servers yourself
    • Register each node with the control plane
  • Managed node group
    • AWS automates the provisioning and lifecycle of the EC2 nodes
    • This uses an EKS-optimized AMI
    • Node group operations (create, update, terminate) are single AWS/EKS API calls
    • Nodes are part of an Auto Scaling Group managed by EKS
  • Fargate
    • Serverless architecture: you don't have to provision or maintain EC2 worker nodes
    • When you deploy resources to the K8s cluster, Fargate provisions compute for them on demand
    • Based on the pods' resource requests it figures out the optimal sizing
    • You only pay for what you use
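
As a minimal sketch (the cluster, node group and profile names are hypothetical), the second and third options can be set up against an existing cluster with eksctl:

    # Managed node group: EKS provisions the EC2 instances and their
    # Auto Scaling Group from an EKS-optimized AMI.
    eksctl create nodegroup \
      --cluster mycluster1 \
      --name mymanagednodes \
      --node-type t3.medium \
      --nodes 2 --nodes-min 1 --nodes-max 3 \
      --managed

    # Fargate profile: pods in the matching namespace run on serverless
    # compute, with no EC2 instances to manage.
    eksctl create fargateprofile \
      --cluster mycluster1 \
      --name myfargateprofile \
      --namespace default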

Create EKS Cluster

The cluster needs the following to run (see the AWS CLI sketch after this list):

  • A cluster name and K8s version
  • An IAM role for the cluster (privileges such as provisioning nodes and accessing storage and secrets)
  • A VPC and subnets to run the cluster in
  • A security group (to allow traffic to and from the cluster)
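
A sketch with the AWS CLI (the role ARN, subnet IDs, security group ID and K8s version below are placeholders):

    aws eks create-cluster \
      --name mycluster1 \
      --kubernetes-version 1.29 \
      --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
      --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc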

Create Worker Nodes

The high-level steps are (sketched with the AWS CLI after this list):

  • Create node group
  • Specify instance type
  • Define min/max number of nodes you want
  • Specify the EKS cluster to connect to
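
The same steps sketched with the AWS CLI (the node role ARN and subnet IDs are placeholders):

    aws eks create-nodegroup \
      --cluster-name mycluster1 \
      --nodegroup-name mynodegroup1 \
      --node-role arn:aws:iam::111122223333:role/eks-node-role \
      --subnets subnet-aaaa subnet-bbbb \
      --instance-types t3.medium \
      --scaling-config minSize=1,maxSize=3,desiredSize=2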

Connect To Cluster

This is from our local machine.

There are several ways to do this (the kubectl connection step is sketched after this list):

  • AWS Console
    • AWS UI using EKS and the configuration wizard
    • This is long-winded: you need to create the cluster, create the worker nodes, and set up kubectl locally
    • You also have to provision the VPC, subnets and routing
  • eksctl
    • Sets up the cluster with a single command
    • This will provision all you need (Control plane, VPCs, subnets, worker nodes)
  • Infrastructure as Code (Terraform)
    • Define the infrastructure configuration in code
    • Deploy by using Terraform/Pulumi
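
Whichever way the cluster was created, pointing kubectl at it comes down to one command (cluster name and region reuse the eksctl example below):

    # Write/update the cluster entry in ~/.kube/config.
    aws eks update-kubeconfig --name mycluster1 --region us-east-1

    # Verify the connection.
    kubectl get nodes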

eksctl

There are several commands you can see with eksctl --help. You can also get help for the subcommands, e.g. eksctl create --help and eksctl create cluster --help. A few useful follow-up commands are sketched after the list below.

  • eksctl create cluster --name mycluster1 --nodegroup-name mynodegroup1 --region us-east-1 --node-type t2.micro --nodes 2 will create the cluster with the given name, node group, region, etc. This will take several minutes.
    • The VPC and subnet names will be prefixed with eksctl-
    • This will update your kubeconfig (in ~/.kube/config)
  • eksctl delete cluster --name mycluster1
    • This will also delete the other associated resources like the VPC
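
A few follow-up commands once the cluster exists (names match the example above):

    # Inspect what eksctl created.
    eksctl get cluster --region us-east-1
    eksctl get nodegroup --cluster mycluster1 --region us-east-1

    # Resize the node group after creation.
    eksctl scale nodegroup --cluster mycluster1 --name mynodegroup1 --nodes 3 --region us-east-1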

kubectl

Basic kubectl commands (a combined example follows the list); also see Raspberry Pi Cluster - Kubernetes

  • kubectl config view Will show the cluster config in ~/.kube/config
  • kubectl get nodes Will list the EC2 instance nodes in the cluster
  • kubectl get pods Will list all of the pods that are available in the cluster, their names, whether they are ready or not, their status, restarts and age
  • kubectl logs <pod name> Will print the logs from a given pod name.
  • kubectl describe pod <pod name> Will print detailed information about the pod, such as its containers, volumes, events and other metadata.
  • kubectl exec <pod name> -- <command> Will run the specified command in the container of the pod.
  • kubectl exec <pod name> -- env Will use the exec command, and run the env command in the container for the pod.
  • kubectl rollout restart deployment <deployment_name> Will restart pods one by one without any downtime.
  • kubectl config set-cluster <cluster name> --server=<url> Will set or update a cluster entry in the kubeconfig, e.g. to point kubectl at a different API server.
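
Putting a few of these together, a typical debugging pass over a failing deployment might look like this (the pod and deployment names are hypothetical):

    kubectl get pods                                # find the failing pod
    kubectl describe pod myapp-7d4b9c5f6-abcde      # check events, image, volumes
    kubectl logs myapp-7d4b9c5f6-abcde              # read the container logs
    kubectl exec myapp-7d4b9c5f6-abcde -- env       # inspect its environment
    kubectl rollout restart deployment myapp        # restart with no downtime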
