First of all, we need to create a user to run eksctl. If you are just getting started, you can use an existing user with admin privileges, but a better approach is to create a user with only the permissions needed to do its job. For example, the eksctl user should only have permissions to create and manage EKS clusters. Let me show you how to create one.
Go to AWS console and select IAM service.
In this example, we're going to create a user with the minimum IAM policies needed to run the main eksctl use cases; these are the same policies used to run eksctl's integration tests. We will use a mix of AWS managed policies and customer managed policies that we create ourselves.
Let's create the IAM policies first. Name the first one EksAllAccess, and don't forget to replace account_id with your AWS account ID.
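As a sketch, the EksAllAccess policy document can be written to a file like this. The statements below are based on the minimum IAM policies listed in the eksctl documentation at the time of writing; verify them against the current docs, and note that `<account_id>` and the file name `eks-all-access.json` are placeholders:

```shell
# Write the EksAllAccess policy document to a file.
# Content is based on eksctl's documented minimum IAM policies;
# <account_id> is a placeholder - replace it with your AWS account ID.
cat > eks-all-access.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter", "ssm:GetParameters"],
      "Resource": [
        "arn:aws:ssm:*:<account_id>:parameter/aws/*",
        "arn:aws:ssm:*::parameter/aws/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["kms:CreateGrant", "kms:DescribeKey"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["logs:PutRetentionPolicy"],
      "Resource": "*"
    }
  ]
}
EOF
```

You can then create the policy in the console, or with `aws iam create-policy --policy-name EksAllAccess --policy-document file://eks-all-access.json`. The IamLimitedAccess policy is created the same way; see the eksctl documentation for its contents.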
We can attach IAM policies directly to the IAM user, or follow best practice and create an IAM group first.
Create an EKS IAM group and attach the following policies:
EksAllAccess (Customer Managed Policy)
IamLimitedAccess (Customer Managed Policy)
AmazonEC2FullAccess (AWS Managed Policy)
AWSCloudFormationFullAccess (AWS Managed Policy)
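If you prefer the CLI over the console, the group setup above can be sketched like this. It assumes the EKS group name and the two customer managed policies created earlier; replace `<account_id>` with your AWS account ID:

```shell
# Create the IAM group and attach the two customer managed
# and two AWS managed policies (replace <account_id>).
aws iam create-group --group-name EKS

aws iam attach-group-policy --group-name EKS \
  --policy-arn arn:aws:iam::<account_id>:policy/EksAllAccess
aws iam attach-group-policy --group-name EKS \
  --policy-arn arn:aws:iam::<account_id>:policy/IamLimitedAccess
aws iam attach-group-policy --group-name EKS \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-group-policy --group-name EKS \
  --policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
```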
After that, we can create an IAM user. Name it eksctl and place it in the EKS group. Don't forget to download the credentials; we will use them to create an AWS profile.
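A minimal CLI equivalent, assuming the EKS group from the previous step:

```shell
# Create the eksctl user, add it to the EKS group,
# and generate an access key pair for programmatic access.
aws iam create-user --user-name eksctl
aws iam add-user-to-group --group-name EKS --user-name eksctl
aws iam create-access-key --user-name eksctl
```

Save the AccessKeyId and SecretAccessKey from the last command; the secret is shown only once.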
Before proceeding to the next step, make sure that you have the AWS CLI installed on your machine. Follow the official installation instructions for your operating system.
To run eksctl locally, you can export environment variables with your access key and secret, or create a profile, which is more convenient in my opinion. Example with environment variables:
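A minimal sketch with placeholder values:

```shell
# Placeholder credentials - replace with the access key pair
# downloaded for the eksctl user.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="us-east-1"
```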
If you just run aws configure, it will create a default profile. That's fine, but I don't want to use the eksctl user for anything other than creating and managing EKS clusters, so I add the --profile option to create a named profile instead. Also, select the region you want to use; in my case, us-east-1.
```shell
aws configure --profile eksctl
```
To verify that everything is configured correctly, run the following command:
```shell
aws sts get-caller-identity --profile eksctl
```
You should get back a JSON object describing your user.
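The output looks roughly like this (the account ID, user ID, and ARN below are placeholders):

```json
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/eksctl"
}
```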
In this section, we will create a simple public EKS cluster using eksctl. If you don't have eksctl installed, follow the official installation instructions. By public, I mean a Kubernetes cluster with nodes that have public IP addresses; you don't need a NAT gateway for that setup.
The simplest way is to run the create command; it will create a VPC as well as an EKS cluster for you. It may be the easiest way, but it's rarely what you want.
```shell
eksctl create cluster --profile eksctl
```
eksctl will create CloudFormation stacks for the EKS cluster and the node groups. In case of an error, you can always inspect the stack itself.
You can customize your cluster by passing flags to the eksctl tool. However, the best approach is to create a config file; that makes it much easier to reproduce the same infrastructure and track changes in git.
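As a sketch, a minimal config file for a public cluster might look like this. The cluster name, instance type, and node group settings are illustrative, not prescribed by this article:

```yaml
# cluster.yaml - illustrative minimal eksctl config
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster       # assumed name
  region: us-east-1

managedNodeGroups:
  - name: general
    instanceType: t3.medium
    desiredCapacity: 2
```

You would apply it with `eksctl create cluster -f cluster.yaml --profile eksctl`.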
When creation is complete, eksctl will automatically configure the kubectl context, and you can immediately access the Kubernetes cluster. Run kubectl get svc to get the Kubernetes API service from the default namespace, or kubectl get nodes to list all available worker nodes.
To delete the cluster, run the delete command and provide the name of the cluster.
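For example, assuming the cluster is named my-cluster (a placeholder):

```shell
# Delete the cluster and the CloudFormation stacks behind it.
eksctl delete cluster --name my-cluster --profile eksctl
```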
For this example, we will create an EKS cluster with only private nodes. You will still be able to expose services to the internet using a load balancer and an ingress; it's just that the Kubernetes nodes themselves will not be reachable from the internet, since exposing nodes directly is rarely allowed by companies.
Let's create an eksctl config to define the cluster properties. All the possible options are documented in the eksctl config file schema.
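A sketch of such a config, using the privateNetworking option to place nodes in private subnets; the cluster name and node group sizes are illustrative:

```yaml
# private-cluster.yaml - illustrative eksctl config with private nodes
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: private-cluster   # assumed name
  region: us-east-1

managedNodeGroups:
  - name: general
    privateNetworking: true   # nodes get no public IPs; traffic egresses via NAT
    desiredCapacity: 2
```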
Often you already have a VPC, and you want to create an EKS cluster in the same network. You can do it with eksctl; you just need to provide a few additional variables.
Create the main VPC with the IPv4 CIDR block 10.0.0.0/16.
Create an Internet Gateway named igw and attach it to the main VPC.
Create 4 subnets in 2 different availability zones:
private-us-east-1a, CIDR: 10.0.0.0/18
private-us-east-1b, CIDR: 10.0.64.0/18
public-us-east-1a, CIDR: 10.0.128.0/18
public-us-east-1b, CIDR: 10.0.192.0/18
Allocate a public IP address (Elastic IP) for the NAT Gateway and name it nat.
Create a NAT Gateway named nat and place it in one of the public subnets.
Create 2 route tables: one for the private subnets with a default route (0.0.0.0/0) to the NAT Gateway, and one for the public subnets with a default route (0.0.0.0/0) to the Internet Gateway.
Associate the subnets with the proper route tables.
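The steps above can be sketched with the AWS CLI like this. The names and CIDRs follow the list above; error handling and waiting for the NAT gateway to become available are omitted:

```shell
# Illustrative sketch of the VPC setup described above.
# Captured IDs are reused in later commands.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
aws ec2 create-tags --resources "$VPC_ID" --tags Key=Name,Value=main

IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID"

# 4 subnets in 2 availability zones
PRIV_A=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.0.0/18 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)
PRIV_B=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.64.0/18 \
  --availability-zone us-east-1b --query 'Subnet.SubnetId' --output text)
PUB_A=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.128.0/18 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)
PUB_B=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.192.0/18 \
  --availability-zone us-east-1b --query 'Subnet.SubnetId' --output text)

# Elastic IP + NAT gateway in one of the public subnets
EIP_ID=$(aws ec2 allocate-address --query 'AllocationId' --output text)
NAT_ID=$(aws ec2 create-nat-gateway --subnet-id "$PUB_A" \
  --allocation-id "$EIP_ID" --query 'NatGateway.NatGatewayId' --output text)

# Route tables: private -> NAT gateway, public -> internet gateway
PRIV_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$PRIV_RT" \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"

PUB_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$PUB_RT" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"

# Associate subnets with the proper route tables
aws ec2 associate-route-table --route-table-id "$PRIV_RT" --subnet-id "$PRIV_A"
aws ec2 associate-route-table --route-table-id "$PRIV_RT" --subnet-id "$PRIV_B"
aws ec2 associate-route-table --route-table-id "$PUB_RT" --subnet-id "$PUB_A"
aws ec2 associate-route-table --route-table-id "$PUB_RT" --subnet-id "$PUB_B"
```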
Create a new eksctl config with existing VPC and subnets.
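A sketch of a config that reuses an existing VPC; the vpc-* and subnet-* IDs are placeholders for the resources created above:

```yaml
# existing-vpc-cluster.yaml - illustrative config; replace the placeholder IDs
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: existing-vpc-cluster   # assumed name
  region: us-east-1

vpc:
  id: vpc-xxxxxxxxxxxxxxxxx
  subnets:
    private:
      us-east-1a: { id: subnet-aaaaaaaaaaaaaaaaa }
      us-east-1b: { id: subnet-bbbbbbbbbbbbbbbbb }
    public:
      us-east-1a: { id: subnet-ccccccccccccccccc }
      us-east-1b: { id: subnet-ddddddddddddddddd }

managedNodeGroups:
  - name: general
    privateNetworking: true
    desiredCapacity: 2
```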
Amazon EKS supports IAM Roles for Service Accounts (IRSA), which allows cluster operators to map AWS IAM roles to Kubernetes service accounts.
This provides fine-grained permission management for apps that run on EKS and use other AWS services. These could be apps that use S3, any other data services (RDS, MQ, STS, DynamoDB), or Kubernetes components like AWS Load Balancer controller or ExternalDNS.
Let's create an AllowListAllMyBuckets IAM policy that allows listing all the buckets in this AWS account.
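A sketch of the policy document; the file name is arbitrary:

```shell
# Write the AllowListAllMyBuckets policy document to a file.
cat > allow-list-all-my-buckets.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
EOF
```

Create it with `aws iam create-policy --policy-name AllowListAllMyBuckets --policy-document file://allow-list-all-my-buckets.json`, then reference its ARN in the eksctl config.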
```yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-irsa
  region: us-east-1

availabilityZones:
  - us-east-1a
  - us-east-1b

iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: foo
        namespace: staging
      attachPolicyARNs:
        - arn:aws:iam::<account_id>:policy/AllowListAllMyBuckets
      # role has to start with eksctl-*
      roleName: eksctl-list-s3-buckets
      roleOnly: true
    - metadata:
        name: cluster-autoscaler
        namespace: kube-system
      wellKnownPolicies:
        autoScaler: true
      roleName: eksctl-cluster-autoscaler
      roleOnly: true

managedNodeGroups:
  - name: general
    tags:
      # EC2 tags required for cluster-autoscaler auto-discovery
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/<cluster-name>: "owned"
    desiredCapacity: 1
    minSize: 1
    maxSize: 10
```
Create a new EKS cluster with a single autoscaling node group.
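Assuming the config above is saved as cluster-irsa.yaml (an assumed file name):

```shell
# Create the cluster, the IRSA roles, and the node group from the config file.
eksctl create cluster -f cluster-irsa.yaml --profile eksctl
```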