Setting up ExternalDNS for an EKS cluster with Terraform

This blog post shows how to set up ExternalDNS on an EKS cluster so that DNS records are added to a Route 53 hosted zone dynamically whenever ingresses and services are installed.

Contents

What is ExternalDNS?

IAM OIDC Identity Providers

Installing ExternalDNS on EKS with Terraform

Testing ExternalDNS on Your Kubernetes Cluster

What is ExternalDNS?

ExternalDNS synchronises exposed Kubernetes Services and Ingresses with DNS providers such as AWS’s Route 53, allowing DNS records to be created on the fly as those Services and Ingresses are created.

To install ExternalDNS on our cluster we will first need to do two things:

  1. Create a Route 53 hosted zone, which will contain the DNS records managed by ExternalDNS. To do this, please refer to the “Set up a hosted zone” section here. Alternatively you can create a hosted zone by navigating to the Route 53 Dashboard in the AWS Management Console. (A Terraform sketch for this follows this list.)

  2. Create an IAM OIDC Identity Provider (IdP) in AWS in order to establish a trust relationship between it and our EKS cluster.
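
If you would rather manage the hosted zone in Terraform too, here is a minimal sketch (the domain name is a placeholder for one you own):

resource "aws_route53_zone" "external_dns" {
  # Hypothetical domain; replace with your own registered domain
  name = "yourdomain.com"
}

The zone’s zone_id attribute (aws_route53_zone.external_dns.zone_id) is what you will later pass to ExternalDNS as var.route_53_zone_id.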

IAM OIDC Identity Providers

Creating an IAM OIDC identity provider allows identities outside of AWS, such as Kubernetes service accounts, to assume roles with the permissions needed to use resources inside an AWS account. To create an IAM OIDC identity provider you need three things:

  1. OIDC issuer URL

    EKS clusters come with their own OIDC issuer out of the box.
    You can get the OIDC issuer URL for your cluster by using the instructions here. If your EKS cluster was created through Terraform, the URL can be obtained from the outputs of your Terraform EKS module or resource (see the output sketch after this list).

  2. Client ID

    The client ID for your identity provider will be sts.amazonaws.com.

  3. Thumbprint

    The thumbprint is “a signature for the CA's certificate that was used to issue the certificate for the OIDC-compatible Identity Provider” (AWS Docs). In this case the IdP is your EKS cluster’s own OIDC provider. Obtaining the thumbprint generally requires an external tool; in our case we will use a tool called kubergrunt. If running Terraform locally, you will need to install kubergrunt by downloading the binary from its releases page.
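
For reference, if your cluster was created with a Terraform EKS module, the issuer URL mentioned in point 1 can be surfaced as a root-level output; a sketch, assuming the module is named eks and the output name is arbitrary:

output "cluster_oidc_issuer_url" {
  # Issuer URL of the cluster's built-in OIDC provider
  value = module.eks.cluster_oidc_issuer_url
}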

The Terraform file below details how to create an IAM OIDC identity provider on AWS. It assumes that your EKS cluster was created using a Terraform module.

data "external" "thumb" {
  program = ["kubergrunt", "eks", "oidc-thumbprint", "--issuer-url", module.eks.cluster_oidc_issuer_url]
}

resource "aws_iam_openid_connect_provider" "default" {
  url = module.eks.cluster_oidc_issuer_url 
  client_id_list = ["sts.amazonaws.com"]
  thumbprint_list = [data.external.thumb.result.thumbprint]
}
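
As an aside, if you would rather avoid depending on the kubergrunt binary, the thumbprint can also be fetched with the hashicorp/tls provider. A sketch of the same provider resource under that assumption:

data "tls_certificate" "eks" {
  # Certificate chain served by the cluster's OIDC issuer (root first)
  url = module.eks.cluster_oidc_issuer_url
}

resource "aws_iam_openid_connect_provider" "default" {
  url             = module.eks.cluster_oidc_issuer_url
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
}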

Installing ExternalDNS on EKS with Terraform

Once we’ve created an OIDC identity provider and a Route 53 hosted zone, we can install ExternalDNS on our cluster with the following Terraform file.

module "eks-external-dns" {
    source  = "lablabs/eks-external-dns/aws"
    version = "0.9.0"
    cluster_identity_oidc_issuer =  module.eks.cluster_oidc_issuer_url  
    cluster_identity_oidc_issuer_arn = aws_iam_openid_connect_provider.default.arn
    policy_allowed_zone_ids = [
        var.route_53_zone_id  # zone id of your hosted zone 
    ]
    settings = {
    "policy" = "sync" # syncs DNS records with ingress and services currently on the cluster.
  }
  depends_on = [module.eks]
}
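
The var.route_53_zone_id referenced above needs a matching variable declaration, for example:

variable "route_53_zone_id" {
  description = "ID of the Route 53 hosted zone whose records ExternalDNS will manage"
  type        = string
}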

If you want to know what the eks-external-dns module installs, please take a look at the “Deploy ExternalDNS” section here. This page describes how one can manually install ExternalDNS on an EKS cluster.

Testing ExternalDNS on Your Kubernetes Cluster

To test ExternalDNS, you can install an ingress or service with an external-dns annotation on your cluster. Under the annotations section of your ingress/service, add the following:

annotations:
  external-dns.alpha.kubernetes.io/hostname: host.yourdomain    

Here yourdomain is your hosted zone's domain name. The annotation lets you specify the name of the DNS record that you want to add to your hosted zone.
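
A complete ingress might then look something like this (the echo service name and host are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: echo.yourdomain
spec:
  ingressClassName: nginx
  rules:
    - host: echo.yourdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80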

If using an nginx ingress controller, please ensure the argument --publish-service=default/nginx-ingress-controller has been set on the nginx-ingress-controller container. If using the nginx-ingress Helm chart, this flag can be set with the controller.publishService.enabled configuration option. Without this argument, the DNS record will point to an internal IP address, making your hostname inaccessible from the public internet. With it set, the newly created DNS record will be an alias record pointing to the ingress controller's load balancer.
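
In Helm values form, that flag looks like this (a sketch, assuming the chart option named above):

controller:
  publishService:
    # Publish the controller Service's load balancer address so ExternalDNS
    # sees the external address rather than an internal IP
    enabled: true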

After installing an ingress/service with the above annotation on an EKS cluster, it takes around two minutes for the DNS record to appear in your Route 53 hosted zone.

 
