First of all, we need to create a VPC using Terraform. In this video, I'm not going to go over every configuration parameter of each Terraform resource as I did in the previous videos.
Then comes the provider, along with a few variables such as the EKS cluster name and the region.
Now we need to create another IAM role for Kubernetes nodes. It's going to be used by the regular node pool and not Karpenter.
You have two options: either reuse this same IAM role and create an instance profile for Karpenter, or create a dedicated IAM role.
If you go with a dedicated role, you would need to manually update the aws-auth ConfigMap to authorize nodes created by Karpenter with the new IAM role so they can join the cluster.
resource"aws_iam_role""nodes"{name="eks-node-group"assume_role_policy=jsonencode({Statement=[{Action="sts:AssumeRole"Effect="Allow"Principal={Service="ec2.amazonaws.com"}}]Version="2012-10-17"})}resource"aws_iam_role_policy_attachment""amazon-eks-worker-node-policy"{policy_arn="arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"role=aws_iam_role.nodes.name}resource"aws_iam_role_policy_attachment""amazon-eks-cni-policy"{policy_arn="arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"role=aws_iam_role.nodes.name}resource"aws_iam_role_policy_attachment""amazon-ec2-container-registry-read-only"{policy_arn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"role=aws_iam_role.nodes.name}resource"aws_eks_node_group""private-nodes"{cluster_name=aws_eks_cluster.cluster.nameversion="1.22"node_group_name="private-nodes"node_role_arn=aws_iam_role.nodes.arnsubnet_ids=[aws_subnet.private-us-east-1a.id,aws_subnet.private-us-east-1b.id]capacity_type="ON_DEMAND"instance_types=["t3.small"]scaling_config{desired_size=1max_size=10min_size=0}update_config{max_unavailable=1}labels={role="general"}depends_on=[aws_iam_role_policy_attachment.amazon-eks-worker-node-policy,aws_iam_role_policy_attachment.amazon-eks-cni-policy,aws_iam_role_policy_attachment.amazon-ec2-container-registry-read-only,] # Allow external changes without Terraform plan differencelifecycle{ignore_changes=[scaling_config[0].desired_size]}}
Now let's run terraform apply again to create the EKS cluster.
terraform apply
To connect to the cluster, you need to update the Kubernetes context with this command.
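Assuming the cluster is called demo and runs in us-east-1, as in the rest of this tutorial, the command looks like this:

aws eks update-kubeconfig --name demo --region us-east-1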
Then a quick check to see if we can reach Kubernetes; it should return the default kubernetes service.
kubectl get svc
As I mentioned before, if you decide to create a separate IAM role and instance profile, you would need to edit the aws-auth ConfigMap to add the ARN of the new role.
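For reference, a rough sketch of what that aws-auth entry would look like; the account ID and the KarpenterNodeRole name are placeholders, not values from this tutorial:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # existing mapping for the managed node group role stays as-is
    - rolearn: arn:aws:iam::111122223333:role/eks-node-group
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # additional mapping for the dedicated Karpenter node role (placeholder ARN)
    - rolearn: arn:aws:iam::111122223333:role/KarpenterNodeRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes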
Karpenter needs permissions to create EC2 instances in AWS. If you use a self-hosted Kubernetes cluster, for example one created with kOps, you can simply add additional IAM policies to the existing IAM role attached to the Kubernetes nodes.
Since we use EKS, the best way to grant access to an in-cluster service is with IAM Roles for Service Accounts (IRSA).
First, we need to create an OpenID Connect provider.
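A minimal sketch of that provider in Terraform, assuming the EKS cluster resource is named aws_eks_cluster.cluster as in the node group above (it also relies on the tls provider to fetch the certificate thumbprint):

data "tls_certificate" "eks" {
  url = aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}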
Next is a trust policy to allow the Kubernetes service account to assume the IAM role.
Make sure that you deploy Karpenter to the karpenter namespace with the same service account name.
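Roughly, the trust policy and the controller role could look like the sketch below. The role name karpenter-controller is an assumption, and the subject in the condition must match the karpenter namespace and service account name mentioned above. The EC2 permissions themselves still need to be attached to this role; take the exact policy from the Karpenter documentation.

data "aws_iam_policy_document" "karpenter_controller_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      # must match <namespace>:<service account> used by the Helm release
      values   = ["system:serviceaccount:karpenter:karpenter"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "karpenter_controller" {
  name               = "karpenter-controller" # assumed name
  assume_role_policy = data.aws_iam_policy_document.karpenter_controller_assume_role_policy.json
}

# Attach a policy with the EC2/IAM/SSM permissions Karpenter needs to this role,
# for example with aws_iam_role_policy; the full action list is in the Karpenter docs.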
To deploy Karpenter to our cluster, we're going to use Helm. First of all, you need to authenticate with EKS using the helm provider. Then the helm release.
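A sketch of both pieces follows. The chart location and value names vary between Karpenter versions, so treat this as an illustration for the older karpenter.sh/v1alpha5 releases installed from https://charts.karpenter.sh, not a pinned configuration; the instance profile and controller role reference the resources sketched earlier.

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.cluster.name]
    }
  }
}

resource "helm_release" "karpenter" {
  name             = "karpenter"
  namespace        = "karpenter"
  create_namespace = true

  repository = "https://charts.karpenter.sh"
  chart      = "karpenter"

  # bind the Karpenter service account to the IAM role created for IRSA
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.karpenter_controller.arn
  }

  set {
    name  = "clusterName"
    value = aws_eks_cluster.cluster.name
  }

  set {
    name  = "clusterEndpoint"
    value = aws_eks_cluster.cluster.endpoint
  }

  set {
    name  = "aws.defaultInstanceProfile"
    value = aws_iam_instance_profile.karpenter.name
  }
}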
Before we can test Karpenter, we need to create a Provisioner. Karpenter defines a Custom Resource called a Provisioner to specify provisioning configuration.
Each provisioner manages a distinct set of nodes. You need to replace demo with your EKS cluster name.
---
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  ttlSecondsAfterEmpty: 60 # scale down nodes after 60 seconds without workloads (excluding daemons)
  ttlSecondsUntilExpired: 604800 # expire nodes after 7 days (in seconds) = 7 * 60 * 60 * 24
  limits:
    resources:
      cpu: 100 # limit to 100 CPU cores
  requirements:
    # Include general purpose instance families
    - key: karpenter.k8s.aws/instance-family
      operator: In
      values: [c5, m5, r5]
    # Exclude small instance sizes
    - key: karpenter.k8s.aws/instance-size
      operator: NotIn
      values: [nano, micro, small, large]
  providerRef:
    name: my-provider
---
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: my-provider
spec:
  subnetSelector:
    kubernetes.io/cluster/demo: owned
  securityGroupSelector:
    kubernetes.io/cluster/demo: owned
Finally, use kubectl to create those resources in the cluster.
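Assuming the manifests above are saved in a file called karpenter-provisioner.yaml (the filename is arbitrary):

kubectl apply -f karpenter-provisioner.yaml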