How to Add IAM User and IAM Role to AWS EKS Cluster?

  • To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
  • You can find the source code for this video in my GitHub Repo.

Create AWS VPC using Terraform

Personally, I don’t like to use a lot of variables when I try to teach something. In production code, you of course would refactor and parameterize most of the values in the code. But anyway, let’s define the most common parameters that we’ll use in Terraform as local variables.

0-locals.tf

locals {
  env         = "staging"
  region      = "us-east-2"
  zone1       = "us-east-2a"
  zone2       = "us-east-2b"
  eks_name    = "demo"
  eks_version = "1.29"
}

First of all, we need to define the environment variable. We frequently create multiple environments. For example, you may have a development environment where your developers can deploy their apps to test multiple times a day, and which is frequently broken. You may have a staging environment where you test your applications together and perhaps run some integration tests. And, of course, you would have a production environment. The number of environments differs between companies, but it's never just a single production environment, except perhaps for startups that do not really have any paying customers.

We usually host multiple environments in one AWS account and use an environment prefix to avoid clashes between different objects such as IAM policies and roles.
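
As a minimal sketch of that naming convention (the VPC resource and CIDR below are just placeholders for illustration, not this tutorial's actual code), the prefix is interpolated into resource names like this:

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    # "staging-main" instead of just "main", so objects from
    # different environments don't clash in the same AWS account
    Name = "${local.env}-main"
  }
}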

Also, I use numbers to prefix Terraform files. This is intentional, simply to clarify the order in which you would create the resources. In production, you would not use this convention.

Then we have the region where we want to host our infrastructure.
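
We pass this local to the AWS provider; a typical provider block would look like this:

provider "aws" {
  # All resources in this configuration are created in us-east-2
  region = local.region
}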

When you create an EKS cluster, it requires you to have subnets in at least two different availability zones. You can think of an availability zone as a separate data center; spreading subnets across zones improves availability, because if something happens in one zone, you still have an operational cluster.

Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane.
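
As a sketch of what those subnets look like in Terraform (the CIDR ranges and resource names are illustrative, and aws_vpc.main refers to the placeholder VPC shown earlier):

resource "aws_subnet" "private_zone1" {
  # One subnet per availability zone, as EKS requires
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.0.0/19"
  availability_zone = local.zone1
}

resource "aws_subnet" "private_zone2" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.32.0/19"
  availability_zone = local.zone2
}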

Then we have the EKS cluster name, which we will also use as a prefix for IAM roles and policies, just in case we create multiple EKS clusters in the same AWS account.
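
For example, the IAM role for the EKS control plane could carry both prefixes. This is a sketch (the resource label "eks" is hypothetical), but the trust policy shown is the standard one that lets the EKS service assume the role:

resource "aws_iam_role" "eks" {
  # Results in e.g. "staging-demo-eks-cluster", unique per
  # environment and per cluster within one AWS account
  name = "${local.env}-${local.eks_name}-eks-cluster"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        Service = "eks.amazonaws.com"
      }
    }]
  })
}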

And finally, we need to specify the EKS version; at the time of writing, 1.29 is the latest Kubernetes version supported by AWS. You can always check the Amazon EKS documentation to find the latest available version.
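
This version is then passed to the cluster resource itself. Here is a minimal sketch, reusing the hypothetical IAM role and subnets from above:

resource "aws_eks_cluster" "this" {
  name     = "${local.env}-${local.eks_name}"
  version  = local.eks_version
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    # Subnets in two availability zones, as required by EKS
    subnet_ids = [
      aws_subnet.private_zone1.id,
      aws_subnet.private_zone2.id,
    ]
  }
}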
