Implementation of an EKS setup using Terraform. The Terraform module located in the terraform directory supports deployment to different AWS partitions; I have tested it with the commercial and China partitions. I am actively using this configuration to run EKS setups in Ireland (eu-west-1), London (eu-west-2), North Virginia (us-east-1) and Beijing (cn-north-1).
The module creates the following (a usage sketch follows the list):
- VPC
- VPC Endpoints - S3, ECR, STS, APS, GuardDuty
- EKS Cluster
- EKS Node Group to run cluster critical services
- EKS Addons - coredns, kube-proxy, guardduty, aws-ebs-csi-driver, adot (requires cert-manager to be installed), kubecost, CloudWatch Observability, snapshot-controller and pod identity agent
- IAM Roles for worker nodes and Karpenter nodes
- Additional IAM Roles for operators - load-balancer-controller, external-dns, cert-manager, adot-collector
- SQS queue configuration to be used with Karpenter when utilising Spot Instances (see the interruption-queue sketch after this list)
- CloudWatch log groups used by Container Insights
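The snippet below is a minimal sketch of how the module in the terraform directory could be consumed; the source path, input names and values are illustrative assumptions rather than the module's actual interface, so check variables.tf for the real inputs.

```hcl
# Minimal illustrative usage - module path and variable names are assumptions,
# not the module's verified interface.
module "eks" {
  source = "./terraform"

  cluster_name    = "eks-example"   # hypothetical name
  cluster_version = "1.31"
  region          = "eu-west-1"     # tested regions also include eu-west-2, us-east-1 and cn-north-1

  vpc_cidr = "10.0.0.0/16"

  tags = {
    Environment = "test"
  }
}
```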
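For the Karpenter/Spot integration specifically, the sketch below shows one common way to wire an SQS interruption queue to EC2 Spot interruption warnings via EventBridge. Resource names are hypothetical and the module's actual implementation may differ (for example, a queue policy allowing EventBridge to send messages is also required).

```hcl
# Hypothetical interruption queue that Karpenter can poll for Spot interruption notices.
resource "aws_sqs_queue" "karpenter_interruption" {
  name                      = "eks-example-karpenter"
  message_retention_seconds = 300
  sqs_managed_sse_enabled   = true
}

# Forward EC2 Spot Instance Interruption Warnings to the queue via EventBridge.
resource "aws_cloudwatch_event_rule" "spot_interruption" {
  name = "eks-example-spot-interruption"
  event_pattern = jsonencode({
    source      = ["aws.ec2"]
    detail-type = ["EC2 Spot Instance Interruption Warning"]
  })
}

resource "aws_cloudwatch_event_target" "spot_interruption" {
  rule = aws_cloudwatch_event_rule.spot_interruption.name
  arn  = aws_sqs_queue.karpenter_interruption.arn
}
```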
I am utilising Flux2 to deploy all additional configurations. You can find them at https://github.com/marcincuber/kubernetes-fluxv2. I have built this as a separate repository to show how to develop a working configuration for your own cluster using GitOps (Flux v2) and Helm.
You will find configurations for:
- AWS Load Balancer controller
- AWS node termination handler
- Cert Manager
- External-DNS
- External Secrets Operator
- Metrics server
- Reloader
- VPC CNI Plugin
- EBS CSI Driver
- and more :)
Check out my stories on Medium if you want to learn more about specific topics.
Amazon EKS upgrade journey from 1.30 to 1.31
Amazon EKS upgrade journey from 1.29 to 1.30
Amazon EKS upgrade journey from 1.28 to 1.29
Amazon EKS upgrade journey from 1.27 to 1.28
Amazon EKS upgrade journey from 1.26 to 1.27
Amazon EKS upgrade journey from 1.25 to 1.26
Recovering Images from EKS node back to ECR
Karpenter Upgrade guide from Beta to v1 API version
Migrate Karpenter resources from alpha to beta API version
Kube-bench implementation with EKS
ECR pull through cache for Docker Hub, GitHub, Quay, ECR Public and Kubernetes
More about my configuration can be found in my recent blog post -> EKS design
Amazon EKS - RBAC with IAM access
Using OIDC provider to allow service accounts to assume IAM role
Amazon EKS, setup external DNS with OIDC provider and kube2iam
Amazon EKS + managed node groups
The Terraform module I have written can be found at -> https://registry.terraform.io/modules/umotif-public/eks-node-group
Kubernetes GitLab Runners on Amazon EKS
EKS platforms information
Worker nodes upgrades
On the machine of a user who has been granted access to the EKS cluster, the .kube/config file can be configured using the following commands:
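# List the clusters visible to your AWS credentials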
$ aws eks list-clusters
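# Write/refresh the kubeconfig entry for the chosen cluster (add --region if it differs from your default)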
$ aws eks update-kubeconfig --name ${cluster_name}