First of all, we need to create a user to run eksctl. If you are just getting started, you can create or use an existing user with admin privileges, but a better approach is to create a dedicated user with only the permissions it needs to do its job: the eksctl user should only be able to create and manage EKS clusters. Let me show you how to create one.
Go to the AWS console and select the IAM service.
In this example, we're going to create a user with the minimum IAM policies needed to run the main eksctl use cases (these are the same policies used to run eksctl's integration tests). We will use a combination of AWS-managed IAM policies and custom policies that we create ourselves.
Let's create the custom IAM policies first. Name the first one EksAllAccess, and don't forget to replace account_id with your own AWS account ID.
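A sketch of what the EksAllAccess policy document can look like, loosely following the minimum IAM policies described in the eksctl documentation (the exact statements may differ in your version; treat this as an illustration, not the authoritative policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter", "ssm:GetParameters"],
      "Resource": [
        "arn:aws:ssm:*:<account_id>:parameter/aws/*",
        "arn:aws:ssm:*::parameter/aws/*"
      ]
    }
  ]
}
```

Remember to replace <account_id> with your AWS account ID before creating the policy.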
If you just run aws configure, it will create a default profile. That's fine, but I don't want to use the eksctl user for anything other than creating and managing EKS clusters, so I add the --profile option to create a named profile instead. Also, select the region that you want to use; in my case, us-east-1.
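The step above looks like this (the profile name eksctl is my choice; pick any name you like):

```shell
# Create a named profile for the eksctl user instead of the default one.
aws configure --profile eksctl
# AWS Access Key ID [None]: <access key id of the eksctl user>
# AWS Secret Access Key [None]: <secret access key of the eksctl user>
# Default region name [None]: us-east-1
# Default output format [None]: json
```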
To verify that everything is configured correctly, run the following command:
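One common way to verify the credentials (an assumption on my part, since the original command was not preserved here) is aws sts get-caller-identity:

```shell
# Returns the identity behind the configured credentials as JSON;
# the UserId, Account, and Arn fields should match the eksctl user.
aws sts get-caller-identity --profile eksctl
```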
You should get back a JSON object describing your user (example).
In this section, we will create a simple public EKS cluster using eksctl. If you don't have eksctl installed, follow one of the installation instructions. By public, I mean a Kubernetes cluster whose nodes have public IP addresses; you don't need a NAT gateway for that setup.
The simplest way is to run the create command; it will create a VPC as well as an EKS cluster for you. It may be the easiest way, but it is rarely what you want.
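A minimal sketch of that command (the cluster name and region here are my assumptions for illustration):

```shell
# Creates a dedicated VPC plus an EKS cluster with default settings.
eksctl create cluster --name my-cluster --region us-east-1 --profile eksctl
```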
eksctl creates CloudFormation stacks for the EKS cluster and its node groups. In case of an error, you can always inspect the stacks themselves.
You can customize your cluster by passing flags to the eksctl tool. However, the better approach is to create a config file: it makes it much easier to reproduce the same infrastructure and to track changes in git.
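For example, a minimal ClusterConfig might look like this (the name, region, and node group sizing are assumptions for illustration):

```yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster        # assumed cluster name
  region: us-east-1

managedNodeGroups:
- name: general
  instanceType: t3.large  # assumed instance type
  desiredCapacity: 2
```

You then create the cluster with eksctl create cluster -f <file> and keep the file under version control.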
When the creation completes, eksctl automatically configures the kubectl context, so you can access the Kubernetes cluster immediately. Run kubectl get svc to see the Kubernetes API service in the default namespace, or kubectl get nodes to list all available worker nodes.
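The two checks above, side by side:

```shell
kubectl get svc    # the built-in "kubernetes" service in the default namespace
kubectl get nodes  # all registered worker nodes and their status
```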
To delete the cluster, run the delete command and provide the name of the cluster.
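A sketch of the delete command, assuming the cluster was named my-cluster:

```shell
# Deletes the cluster and the CloudFormation stacks eksctl created for it.
eksctl delete cluster --name my-cluster --region us-east-1
```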
For this example, we will create an EKS cluster with only private nodes. You will still be able to expose services to the internet using a load balancer and an ingress; it's just that the Kubernetes nodes themselves will not be reachable from the internet, since exposing nodes directly is rarely allowed by company security policies.
Let's create an eksctl config to define the cluster properties. You can find all the possible options here.
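A sketch of such a config (the names and sizing are assumptions; the key setting is privateNetworking: true, which places the nodes in private subnets only):

```yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: private-cluster    # assumed cluster name
  region: us-east-1

managedNodeGroups:
- name: private-nodes
  privateNetworking: true  # nodes get no public IPs; traffic egresses via NAT
  desiredCapacity: 2
```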
Amazon EKS supports IAM Roles for Service Accounts (IRSA), which allows cluster operators to map AWS IAM roles to Kubernetes service accounts.
This provides fine-grained permission management for apps that run on EKS and use other AWS services. These could be apps that use S3 or other data services (RDS, MQ, STS, DynamoDB), or Kubernetes components like the AWS Load Balancer Controller or ExternalDNS.
Let's create an AllowListAllMyBuckets IAM policy that allows listing all the buckets in this AWS account.
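The policy document can be as small as this (s3:ListAllMyBuckets is the S3 action that grants exactly that permission):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
```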
```yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-irsa
  region: us-east-1

availabilityZones:
- us-east-1a
- us-east-1b

iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: foo
      namespace: staging
    attachPolicyARNs:
    - arn:aws:iam::<account_id>:policy/AllowListAllMyBuckets
    # role has to start with eksctl-*
    roleName: eksctl-list-s3-buckets
    roleOnly: true
  - metadata:
      name: cluster-autoscaler
      namespace: kube-system
    wellKnownPolicies:
      autoScaler: true
    roleName: eksctl-cluster-autoscaler
    roleOnly: true

managedNodeGroups:
- name: general
  tags:
    # EC2 tags required for cluster-autoscaler auto-discovery
    k8s.io/cluster-autoscaler/enabled: "true"
    k8s.io/cluster-autoscaler/<cluster-name>: "owned"
  desiredCapacity: 1
  minSize: 1
  maxSize: 10
```
Create a new EKS cluster with a single autoscaling node group.
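Assuming the config above is saved as cluster-irsa.yaml (the filename is my choice):

```shell
# Creates the cluster, the IRSA roles, and the managed node group
# defined in the config file.
eksctl create cluster -f cluster-irsa.yaml
```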