
Releases: cloudposse/terraform-aws-eks-cluster

0.22.0 Update example/test to use managed Node Group. Fix race conditions when applying the Kubernetes `aws-auth` ConfigMap

27 Mar 03:17
79d7bf7

what

  • Update example/test to use managed Node Group instead of unmanaged worker nodes
  • Fix race conditions when applying the Kubernetes aws-auth ConfigMap

why

  • Managed Node Groups are an easier way to provision worker nodes for an EKS cluster, so the tests should use them

  • Ensure ordering of resource creation to eliminate the race conditions when applying the Kubernetes Auth ConfigMap

  • Do not create the Node Group before the EKS cluster is created and the aws-auth Kubernetes ConfigMap is applied. Otherwise, EKS creates the ConfigMap first and adds the managed node role ARNs to it, and the kubernetes provider throws an error that the ConfigMap already exists (it can only create the map, not update it)

  • If we create the ConfigMap first (to add additional roles/users/accounts), EKS will simply update it by adding the managed node role ARNs (see the sketch below)
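
A minimal sketch of that ordering, assuming illustrative resource names (aws_eks_cluster.this, aws_iam_role.node, var.subnet_ids) rather than the module's actual internals; the ConfigMap depends on the cluster, and the node group depends on the ConfigMap:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # JSON is valid YAML, so jsonencode keeps the sketch simple
    mapRoles = jsonencode([
      {
        rolearn  = aws_iam_role.node.arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }

  # Apply the map only after the cluster exists
  depends_on = [aws_eks_cluster.this]
}

resource "aws_eks_node_group" "default" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "default"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  # Do not create the nodes until the ConfigMap is applied, so EKS
  # updates the existing map instead of racing to create it
  depends_on = [kubernetes_config_map.aws_auth]
}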

0.21.0 Use `kubernetes` provider to apply Auth ConfigMap

24 Mar 04:29
162d71e

what

  • Use kubernetes provider to apply Auth ConfigMap

why

  • Don't rely on local_file to generate the Auth ConfigMap: the file gets recreated every time terraform plan/apply runs in a new container, so Terraform always tries to recreate the Auth ConfigMap. Configuring the kubernetes provider directly from the cluster's attributes (sketched below) avoids this
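
A sketch of that wiring, assuming an aws_eks_cluster resource named this (the module's actual plumbing goes through variables); the provider authenticates straight from the cluster's attributes, so nothing is written to disk:

data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token

  # Don't read a local kubeconfig; everything comes from the attributes above
  load_config_file = false
}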

test

TestExamplesComplete 2020-03-23T16:59:01Z command.go:121: module.eks_cluster.null_resource.wait_for_cluster[0] (local-exec): EKS cluster available
TestExamplesComplete 2020-03-23T16:59:01Z command.go:121: module.eks_cluster.null_resource.wait_for_cluster[0]: Creation complete after 1s [id=377251896115112725]
TestExamplesComplete 2020-03-23T16:59:01Z command.go:121: module.eks_cluster.kubernetes_config_map.aws_auth[0]: Creating...
TestExamplesComplete 2020-03-23T16:59:01Z command.go:121: module.eks_cluster.kubernetes_config_map.aws_auth[0]: Creation complete after 0s [id=kube-system/aws-auth]
Waiting for worker nodes to join the EKS cluster
Worker Node ip-172-16-120-188.us-east-2.compute.internal has joined the EKS cluster at 2020-03-23 16:59:52 +0000 UTC
Worker Node ip-172-16-152-158.us-east-2.compute.internal has joined the EKS cluster at 2020-03-23 16:59:53 +0000 UTC
All worker nodes have joined the EKS cluster

0.20.0 Add `slash-command-dispatch` GitHub Actions workflow

19 Feb 06:32
18a0bf0

what

  • Add slash-command-dispatch GitHub Actions workflow
  • Fix unused associate_public_ip_address variable

why

  • In a repo with this GitHub Actions workflow present, when a PR is opened we can comment on it with the commands /build-readme and /terraform-fmt to rebuild the README, format the Terraform code, and push the changes back to the PR branch
  • Closes #53

0.19.0 Add `eks_cluster_managed_security_group_id` output

14 Feb 19:52
b063620

what

  • Add eks_cluster_managed_security_group_id output

why

  • EKS managed Node Groups neither expose nor accept any Security Groups
  • Instead, EKS creates a Security Group and applies it to the ENIs attached to the EKS Control Plane master nodes and to any managed workloads
  • Since that Security Group is applied to the EKS worker nodes, it can be used as the source Security Group for other resources, e.g. EFS or RDS, to allow ingress traffic to those resources (see the sketch below)
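
For example, a sketch of opening an RDS Postgres port to the worker nodes; aws_security_group.rds is a hypothetical security group, while the output name comes from this release:

resource "aws_security_group_rule" "rds_ingress_from_eks" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  description              = "Allow Postgres from EKS worker nodes"
  security_group_id        = aws_security_group.rds.id
  source_security_group_id = module.eks_cluster.eks_cluster_managed_security_group_id
}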

0.18.0 Add `public_access_cidrs`

25 Jan 20:00

what

  • Add public_access_cidrs

why

  • Allow restricting access to the EKS public API server endpoint
  • public_access_cidrs indicates which CIDR blocks can access the Amazon EKS public API server endpoint when it is enabled; EKS defaults this to a list containing 0.0.0.0/0 (see the sketch below)
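
A minimal usage sketch, assuming the module's registry source and an illustrative CIDR:

module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws"

  # Only this CIDR block can reach the public API server endpoint
  public_access_cidrs = ["203.0.113.0/24"]

  # ... other required inputs omitted ...
}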


0.17.0 Add configurable retention period for cluster logs

24 Jan 15:26

what

  • Add configurable retention period for cluster logs

why

  • If you add audit or other log types to the module, AWS creates the log group for you on cluster creation,
    with an unlimited retention period by default.
    If you then want to change the retention period, you have to terraform import the log group and adjust it.
    This release makes it possible to create the log group with configurable log types and a configurable retention period (see the sketch below).
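
A usage sketch; the input names here are assumed from this release's description, so check the module's variables.tf for the exact ones:

module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws"

  # Log types to enable and how long CloudWatch keeps them (days)
  enabled_cluster_log_types    = ["audit", "api", "authenticator"]
  cluster_log_retention_period = 90

  # ... other required inputs omitted ...
}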


0.16.0 Fix variables. Add waiting for the cluster to be ready

17 Jan 15:37
6fc1f84

what

  • Fix variables
  • Add waiting for the cluster to be ready before applying k8s auth map

why

  • Update variable descriptions and fix a variable type
  • In some cases, after the cluster gets provisioned with Terraform, it is still not ready and kubectl fails; waiting for the cluster to respond (sketched below) avoids this
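
A sketch of the wait pattern (the module's actual command may differ): a null_resource polls the cluster's public /healthz endpoint until the API server answers, and the auth map depends on it:

resource "null_resource" "wait_for_cluster" {
  depends_on = [aws_eks_cluster.this]

  provisioner "local-exec" {
    # Loop until the API server responds, then report success
    command = "until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done; echo 'EKS cluster available'"

    environment = {
      ENDPOINT = aws_eks_cluster.this.endpoint
    }
  }
}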

0.15.0 Use the latest label module to support the `environment` attribute

06 Jan 05:21

what

  • Use the latest label module to support the environment attribute

why

  • Allow the environment attribute to be passed to the included modules
  • Useful for naming resources (see the sketch below)
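
A usage sketch with illustrative values following Cloud Posse's naming conventions:

module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws"

  namespace   = "eg"
  environment = "ue2" # now passed through to the label module
  stage       = "prod"
  name        = "app"

  # ... other required inputs omitted ...
}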

0.14.0 Add tags to `aws_eks_cluster` resource

13 Dec 17:29

what

  • Add tags to aws_eks_cluster resource

why

  • The tags were previously missing from the resource (see the sketch below)
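
A sketch of the change, with illustrative names; the module generates its tags via the label module:

resource "aws_eks_cluster" "this" {
  name     = module.label.id
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }

  # Tags from the label module are now applied to the cluster itself
  tags = module.label.tags
}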

0.13.0 Add `oidc_provider_enabled` variable and `aws_iam_openid_connect_provider` resource

20 Nov 01:58

what

  • Add oidc_provider_enabled variable and aws_iam_openid_connect_provider resource

why

  • aws_iam_openid_connect_provider provisions an IAM OIDC identity provider for the cluster,
    allowing you to create IAM roles to associate with service accounts in the cluster, instead of using kiam or kube2iam (see the sketch below)
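
A sketch of what gets provisioned, with illustrative names; the thumbprint shown is the root CA thumbprint AWS documented for EKS OIDC at the time, but real code should derive it rather than hard-code it:

resource "aws_iam_openid_connect_provider" "this" {
  url            = aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list = ["sts.amazonaws.com"]

  # Illustrative value; derive this instead of hard-coding it
  thumbprint_list = ["9e99a48a9960b14926bb7f3b02e22da2b0ab7280"]
}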
