
The External DNS Operator deploys and manages ExternalDNS to provide name resolution for services and routes from an external DNS provider, such as Amazon Route 53, to Red Hat OpenShift Service on AWS (ROSA) clusters. In this tutorial, we deploy and configure the External DNS Operator with a secondary ingress controller to manage DNS records in Amazon Route 53.

The External DNS Operator does not support STS using IAM Roles for Service Accounts (IRSA) and uses long-lived Identity and Access Management (IAM) credentials instead. This tutorial will be updated when the Operator supports STS.

Prerequisites

  • A ROSA Classic cluster

    ROSA with HCP is not supported at this time.

  • A user account with cluster-admin privileges

  • The OpenShift CLI (oc)

  • The Amazon Web Services (AWS) CLI (aws)

  • A unique domain, such as apps.example.com

  • An Amazon Route 53 public hosted zone for the above domain

Setting up your environment

  1. Configure the following environment variables:

    $ export DOMAIN=<apps.example.com> (1)
    $ export AWS_PAGER=""
    $ export CLUSTER=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}"  | sed 's/-[a-z0-9]\{5\}$//')
    $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    $ export SCRATCH="/tmp/${CLUSTER}/external-dns"
    $ mkdir -p ${SCRATCH}
    1 Replace with the custom domain you want to use for the IngressController.
  2. Ensure all fields output correctly before moving to the next section:

    $ echo "Cluster: ${CLUSTER}, Region: ${REGION}, AWS Account ID: ${AWS_ACCOUNT_ID}"

    The "Cluster" output from the previous command may be the name of your cluster, the internal ID of your cluster, or the cluster’s domain prefix. If you prefer to use another identifier, you can manually set this value by running the following command:

    $ export CLUSTER=my-custom-value

Secondary ingress controller setup

Use the following procedure to deploy a secondary ingress controller using a custom domain.

Prerequisites
  • A unique domain, such as apps.example.com

  • A wildcard or SAN TLS certificate that covers the custom domain selected above (for example, CN=*.apps.example.com)

Procedure
  1. Create a new TLS secret from a private key and a public certificate, where fullchain.pem is your full wildcard certificate chain (including any intermediate certificates) and privkey.pem is your wildcard certificate’s private key:

    $ oc -n openshift-ingress create secret tls external-dns-tls --cert=fullchain.pem --key=privkey.pem
  2. Create a new IngressController resource:

    $ cat << EOF | oc apply -f -
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: external-dns-ingress
      namespace: openshift-ingress-operator
    spec:
      domain: ${DOMAIN}
      defaultCertificate:
        name: external-dns-tls
      endpointPublishingStrategy:
        loadBalancer:
          dnsManagementPolicy: Unmanaged
          providerParameters:
            aws:
              type: NLB
            type: AWS
          scope: External
        type: LoadBalancerService
    EOF

    This IngressController example will create an internet accessible Network Load Balancer (NLB) in your AWS account. To provision an internal NLB instead, set the .spec.endpointPublishingStrategy.loadBalancer.scope parameter to Internal before creating the IngressController resource.
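
    If you do provision an internal NLB, only the scope value changes; a sketch of the same endpointPublishingStrategy stanza with the rest of the settings unchanged:

      endpointPublishingStrategy:
        loadBalancer:
          dnsManagementPolicy: Unmanaged
          providerParameters:
            aws:
              type: NLB
            type: AWS
          scope: Internal
        type: LoadBalancerService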

  3. Verify that your custom domain IngressController has successfully created an external load balancer:

    $ oc -n openshift-ingress get service/router-external-dns-ingress
    Example output
    NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                      AGE
    router-external-dns-ingress   LoadBalancer   172.30.71.250   a4838bb991c6748439134ab89f132a43-aeae124077b50c01.elb.us-east-1.amazonaws.com   80:32227/TCP,443:30310/TCP   43s
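
    Optionally, you can also confirm that the IngressController itself reports as Available (one way to check its status conditions):

    $ oc -n openshift-ingress-operator get ingresscontroller external-dns-ingress -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'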

Preparing your AWS account

  1. Retrieve the Amazon Route 53 public hosted zone ID:

    $ export ZONE_ID=$(aws route53 list-hosted-zones-by-name --output text \
      --dns-name "${DOMAIN}." --query 'HostedZones[0].Id' | sed 's/\/hostedzone\///')
  2. Prepare a document with the necessary DNS changes to enable DNS resolution for the canonical domain of the Ingress Controller:

    $ NLB_HOST=$(oc -n openshift-ingress get service/router-external-dns-ingress -ojsonpath="{.status.loadBalancer.ingress[0].hostname}")
    $ cat << EOF > "${SCRATCH}/create-cname.json"
    {
      "Comment": "Add CNAME to ingress controller canonical domain",
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "router-external-dns-ingress.${DOMAIN}",
          "Type": "CNAME",
          "TTL": 30,
          "ResourceRecords": [{
            "Value": "${NLB_HOST}"
          }]
        }
      }]
    }
    EOF

    The External DNS Operator uses this canonical domain as the target for CNAME records.

  3. Submit your changes to Amazon Route 53 for propagation:

    $ aws route53 change-resource-record-sets \
      --hosted-zone-id ${ZONE_ID} \
      --change-batch file://${SCRATCH}/create-cname.json
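
    If you prefer the command to block until Route 53 reports the change as propagated, you can instead capture the change ID when submitting and use the Route 53 waiter. This is an optional variant of the command above; CHANGE_ID is an illustrative variable name:

    $ CHANGE_ID=$(aws route53 change-resource-record-sets \
      --hosted-zone-id ${ZONE_ID} \
      --change-batch file://${SCRATCH}/create-cname.json \
      --query 'ChangeInfo.Id' --output text)
    $ aws route53 wait resource-record-sets-changed --id ${CHANGE_ID}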
  4. Create an AWS IAM Policy document that allows the External DNS Operator to update only the custom domain public hosted zone:

    $ cat << EOF > "${SCRATCH}/external-dns-policy.json"
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "route53:ChangeResourceRecordSets"
          ],
          "Resource": [
            "arn:aws:route53:::hostedzone/${ZONE_ID}"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "route53:ListHostedZones",
            "route53:ListResourceRecordSets"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
    }
    EOF
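
    Create the IAM policy from this document and capture its ARN, which the attach step below references as $POLICY_ARN. The policy name used here is only an example:

    $ export POLICY_ARN=$(aws iam create-policy \
      --policy-name "${CLUSTER}-AllowExternalDNSUpdates" \
      --policy-document file://${SCRATCH}/external-dns-policy.json \
      --query 'Policy.Arn' --output text)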
  5. Create an AWS IAM user:

    $ aws iam create-user --user-name "${CLUSTER}-external-dns-operator"
  6. Attach the policy:

    $ aws iam attach-user-policy --user-name "${CLUSTER}-external-dns-operator" --policy-arn $POLICY_ARN

    This will be changed to STS using IRSA in the future.

  7. Create AWS keys for the IAM user:

    $ SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "${CLUSTER}-external-dns-operator" --output json)
  8. Create static credentials:

    $ cat << EOF > "${SCRATCH}/credentials"
    [default]
    aws_access_key_id = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')
    aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')
    EOF
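
    Optionally, you can verify that the new keys work before handing them to the Operator. Newly created access keys can take a few seconds to become active, and any AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables in your shell take precedence over this file:

    $ AWS_SHARED_CREDENTIALS_FILE=${SCRATCH}/credentials aws sts get-caller-identity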

Installing the External DNS Operator

  1. Create a new project:

    $ oc new-project external-dns-operator
  2. Install the External DNS Operator from OperatorHub:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: external-dns-group
      namespace: external-dns-operator
    spec:
      targetNamespaces:
      - external-dns-operator
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: external-dns-operator
      namespace: external-dns-operator
    spec:
      channel: stable-v1.1
      installPlanApproval: Automatic
      name: external-dns-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
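
    It can take a minute or two for Operator Lifecycle Manager to create the Operator deployment. Optionally, you can watch the install progress with:

    $ oc -n external-dns-operator get subscription,csv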
  3. Wait until the External DNS Operator is running:

    $ oc rollout status deploy external-dns-operator --timeout=300s
  4. Create a secret from the AWS IAM user credentials:

    $ oc -n external-dns-operator create secret generic external-dns \
      --from-file "${SCRATCH}/credentials"
  5. Deploy the ExternalDNS controller:

    $ cat << EOF | oc apply -f -
    apiVersion: externaldns.olm.openshift.io/v1beta1
    kind: ExternalDNS
    metadata:
      name: ${DOMAIN}
    spec:
      domains:
        - filterType: Include
          matchType: Exact
          name: ${DOMAIN}
      provider:
        aws:
          credentials:
            name: external-dns
        type: AWS
      source:
        openshiftRouteOptions:
          routerName: external-dns-ingress
        type: OpenShiftRoute
      zones:
        - ${ZONE_ID}
    EOF
  6. Wait until the controller is running:

    $ oc rollout status deploy external-dns-${DOMAIN} --timeout=300s
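
    If the rollout does not complete, the ExternalDNS controller logs are the first place to look, for example:

    $ oc -n external-dns-operator logs deploy/external-dns-${DOMAIN}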

Deploying a sample application

Now that the ExternalDNS controller is running, you can deploy a sample application to confirm that the custom domain is configured and trusted when you expose a new route.

  1. Create a new project for your sample application:

    $ oc new-project hello-world
  2. Deploy a hello world application:

    $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
  3. Create a route for the application specifying your custom domain name:

    $ oc -n hello-world create route edge --service=hello-openshift hello-openshift-tls \
    --hostname hello-openshift.${DOMAIN}
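
    Optionally, before checking DNS you can confirm that the route was admitted by the external-dns-ingress router (one way to inspect the route status):

    $ oc -n hello-world get route hello-openshift-tls -o jsonpath='{.status.ingress[*].routerName}'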
  4. Check if the DNS record was created automatically by ExternalDNS:

    It can take a few minutes for the record to appear in Amazon Route 53.

    $ aws route53 list-resource-record-sets --hosted-zone-id ${ZONE_ID} \
       --query "ResourceRecordSets[?Type == 'CNAME']" | grep hello-openshift
  5. Optional: You can also view the TXT records that ExternalDNS creates to track ownership of the DNS records it manages:

    $ aws route53 list-resource-record-sets --hosted-zone-id ${ZONE_ID} \
       --query "ResourceRecordSets[?Type == 'TXT']" | grep ${DOMAIN}
  6. Use curl to access your sample application through the newly created DNS record and verify that the hello world application is accessible:

    $ curl https://hello-openshift.${DOMAIN}
    Example output
    Hello OpenShift!