Business Use Application/Relevance:
- This project shows how to create multi-path traffic routing for two different Kubernetes deployments (nginx vs. apache). In production workspaces, this could be used to provision different paths for reaching different web applications hosted by a company.
Challenges I Faced and How I Solved Them:
- Failed nginx and apache endpoints, caused by routing complexities. I had to do some debugging and documentation reading to figure this out.
Project Steps:
Creating Terraform Files for the EKS Cluster and EBS-Attached Volume:
– I will navigate to GitHub.com and create a new repository:
- I will create a dev branch for testing before merging with my main branch:
- I will then clone the repository onto my local machine (I used GitHub Desktop):
- I will then open the repo in Visual Studio Code so that I can create my manifest files there:
- I will navigate to the Terraform aws_eks_cluster documentation page, copy the example into a main.tf file in my repository, and make changes:
provider "aws" {
  region = "us-east-1"
}

resource "aws_eks_cluster" "example" {
  name = "dtmcluster"

  access_config {
    authentication_mode = "API"
  }

  role_arn = aws_iam_role.cluster.arn
  version  = "1.32"

  bootstrap_self_managed_addons = false

  compute_config {
    enabled       = true
    node_pools    = ["general-purpose"]
    node_role_arn = aws_iam_role.node.arn
  }

  kubernetes_network_config {
    elastic_load_balancing {
      enabled = true
    }
  }

  storage_config {
    block_storage {
      enabled = true
    }
  }

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = true

    subnet_ids = [
      var.subnet_1,
      var.subnet_2,
    ]
  }

  # Ensure that IAM Role permissions are created before and deleted
  # after EKS Cluster handling. Otherwise, EKS will not be able to
  # properly delete EKS managed EC2 infrastructure such as Security Groups.
  depends_on = [
    aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.cluster_AmazonEKSComputePolicy,
    aws_iam_role_policy_attachment.cluster_AmazonEKSBlockStoragePolicy,
    aws_iam_role_policy_attachment.cluster_AmazonEKSLoadBalancingPolicy,
    aws_iam_role_policy_attachment.cluster_AmazonEKSNetworkingPolicy,
  ]
}

resource "aws_iam_role" "node" {
  name = "eks-auto-node-example"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = ["sts:AssumeRole"]
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "node_AmazonEKSWorkerNodeMinimalPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy"
  role       = aws_iam_role.node.name
}

resource "aws_iam_role_policy_attachment" "node_AmazonEC2ContainerRegistryPullOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly"
  role       = aws_iam_role.node.name
}

resource "aws_iam_role" "cluster" {
  name = "eks-cluster-example"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "sts:AssumeRole",
          "sts:TagSession"
        ]
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.cluster.name
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSComputePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSComputePolicy"
  role       = aws_iam_role.cluster.name
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSBlockStoragePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy"
  role       = aws_iam_role.cluster.name
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSLoadBalancingPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy"
  role       = aws_iam_role.cluster.name
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSNetworkingPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy"
  role       = aws_iam_role.cluster.name
}
- For my terraform.tf file I will use the following:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "6.0.0-beta1"
    }
  }

  backend "s3" {
    bucket = "<your-bucket-name>"
    key    = "backend2.hcl"
    region = "us-east-1"
  }

  required_version = ">= 1.0.0"
}
- To create my cluster, I will run the following commands via CLI in my project directory:
export AWS_ACCESS_KEY_ID=<my-aws-access-key-id>
export AWS_SECRET_ACCESS_KEY=<my-aws-secret-access-key>
terraform init
terraform fmt
terraform validate
terraform plan
terraform apply -auto-approve
- I tried adding the EFS CSI and CoreDNS add-ons with Terraform, but they took forever, so I just added them via the console.
- I will configure IAM access entry to my EKS cluster:
- To connect to my cluster from my local machine I will use the commands:
aws configure
# Then fill out the required fields
# Then
aws eks update-kubeconfig --name=<cluster-name>
alias k=kubectl
- I will then add a managed node group so that EC2 instances can be provisioned for my cluster. I will start by creating an IAM role that allows EKS to run EC2 instances in the cluster:
- I will give myself permission to access the cluster (my AWS account number is not shown for security reasons):
- I will add these policies:
- On the final page, I will create my access entry to my cluster:
Service Account Creation with an IAM Role for Service Accounts (IRSA):
– I will create a namespace called luit by running:
k create ns luit
- I will enable my IAM OIDC provider since it’s not enabled by default:
eksctl utils associate-iam-oidc-provider --cluster=<clusterName> --approve
Nginx Webserver Deployment:
– I will create a manifest file (nginx-deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-web
  namespace: luit
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
        - name: nginx-web
          image: public.ecr.aws/docker/library/nginx:stable-bookworm
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "250m"
- I will then create the deployment with the command:
k create -f nginx-deployment.yaml
Exposing Nginx Deployment with ClusterIP Service:
– I will create a service manifest file that exposes my nginx deployment called service-nginx.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: luit
spec:
  selector:
    app: nginx-web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
- I will then create the service with the command:
k create -f service-nginx.yaml
Apache Deployment Manifest Creation:
– I will create a manifest file (apache-deployment.yaml) containing the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
  namespace: luit
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: public.ecr.aws/docker/library/httpd:latest
          ports:
            - containerPort: 80
          command: ["/bin/sh", "-c"]
          args:
            - |
              mkdir -p /usr/local/apache2/htdocs/apache &&
              echo "Welcome to Apache under /apache" > /usr/local/apache2/htdocs/apache/index.html &&
              httpd-foreground
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
- I will then create the deployment using the command:
k create -f apache-deployment.yaml
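The startup command in the manifest above is what makes the /apache path serve content: it writes an index file under htdocs/apache before starting httpd. That step can be reproduced locally to see exactly what it creates (here a temporary directory stands in for the container's /usr/local/apache2/htdocs):

```shell
# Reproduce what the container's startup args do, against a local stand-in docroot
DOCROOT="$(mktemp -d)"   # stand-in for /usr/local/apache2/htdocs
mkdir -p "$DOCROOT/apache" &&
echo "Welcome to Apache under /apache" > "$DOCROOT/apache/index.html"
cat "$DOCROOT/apache/index.html"   # prints: Welcome to Apache under /apache
```

Because the file lands at htdocs/apache/index.html, a request to /apache/ resolves to it without any path rewriting at the load balancer.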
Exposing Apache Deployment with ClusterIP Service:
– I will create a file called service-apache.yaml and enter the following within:
apiVersion: v1
kind: Service
metadata:
  name: apache-service
  namespace: luit
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
- I will then create the service with the command:
k create -f service-apache.yaml
AWS Load Balancer Ingress Controller Installation:
I will reference the AWS Load Balancer Controller documentation for how to install it via Helm:
- I will download the IAM policy document for my load balancer controller that allows it to make API calls on my behalf:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.13.0/docs/install/iam_policy.json
- I will create an IAM policy based on the policy document I just pulled:
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json
- Create a service account with the following annotation:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<ALB_CONTROLLER_ROLE_NAME>
- Apply it:
kubectl apply -f aws-load-balancer-service-account.yaml
- Next, I will create the IAM role and associate it with the service account using eksctl:
eksctl create iamserviceaccount \
  --cluster=<cluster-name> \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --region <aws-region-code> \
  --approve
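For reference, the role eksctl creates here trusts the cluster's OIDC provider (which is why the provider had to be associated earlier). Its trust policy looks roughly like the sketch below; the account ID, region, and <OIDC_ID> are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
        }
      }
    }
  ]
}
```

The condition restricts the role so that only the aws-load-balancer-controller service account in kube-system can assume it.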
- I will add the eks charts repo:
helm repo add eks https://aws.github.io/eks-charts
- I will update the repo:
helm repo update eks
- I will install the load balancer controller:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --version 1.13.0
Kubernetes Ingress Creation with Multi-Path Routing For The Deployments:
Based on the documentation, I will create an ingress resource manifest that routes traffic to my pods based on the request path. The manifest file (ingress.yaml) is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: luit-ingress
  namespace: luit
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/group.name: luit-group
    alb.ingress.kubernetes.io/backend-protocol: HTTP
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
          - path: /apache
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
- I will create the ingress resource using the command:
k create -f ingress.yaml
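The load balancer built from this ingress matches the more specific /apache prefix before falling through to the catch-all / rule, so /apache requests reach apache-service and everything else goes to nginx-service. A rough sketch of that selection logic (glob matching here is a simplification of the controller's actual prefix semantics):

```shell
# Rough sketch of the path-based backend selection the two ingress rules produce
route() {
  case "$1" in
    /apache*) echo "apache-service" ;;   # more specific prefix wins
    *)        echo "nginx-service" ;;    # catch-all / rule
  esac
}

route "/apache/index.html"   # prints: apache-service
route "/"                    # prints: nginx-service
```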
- To check the ingress resource and get the Application Load Balancer DNS name used to access my deployments, I will use the command:
k -n luit get ingress
- To access the deployments by path via CLI, I will use:
For nginx:
curl http://k8s-luitgroup-399b9c7f5e-968629939.us-east-1.elb.amazonaws.com/
and for apache:
curl http://k8s-luitgroup-399b9c7f5e-968629939.us-east-1.elb.amazonaws.com/apache
- To access the containers via a web browser, I will use:
For apache:
http://<ALB-DNS>/apache/
and for nginx path:
http://<ALB-DNS>/
CleanUp:
- I will use the following to destroy my created resources:
terraform destroy -auto-approve
- I will also delete any load balancers created, to avoid racking up costs.
Sources: