By default, Kublr includes the standard Kubernetes AWS integration via the Kubernetes AWS cloud provider and the NGINX ingress controller, which allow exposing Kubernetes Ingress and Service objects via AWS Elastic and Network Load Balancers (ELB and NLB).

Additionally deploying the AWS LoadBalancer Controller enables users to manage AWS Application Load Balancers (ALB) via Kubernetes Ingress rules and adds new options for managing AWS Network Load Balancers (NLB) via Kubernetes Service objects.

The general official AWS LoadBalancer Controller documentation may be found here.

This article describes specific procedures to deploy and use AWS LoadBalancer Controller on an AWS Kubernetes cluster provisioned by Kublr.

Prepare AWS Account

The AWS LoadBalancer Controller needs additional permissions to manage ALBs in the AWS account where the Kublr Kubernetes cluster is provisioned.

Create a managed IAM policy with the required permissions in the AWS account according to steps 2 and 3 in the controller documentation.

# download the policy from the controller github

curl -o iam-policy.json \

# create a new managed policy in your AWS account

aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam-policy.json
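To confirm that the policy was created, and to capture its ARN for later use, a check along the following lines may help. This is a sketch: the `lbc_policy_arn` helper name is ours, while the `aws iam list-policies --query` call is standard AWS CLI, and it assumes AWS CLI credentials for the target account are configured.

```shell
# Sketch: look up the ARN of the managed policy created above.
lbc_policy_arn() {
  aws iam list-policies \
    --query "Policies[?PolicyName=='AWSLoadBalancerControllerIAMPolicy'].Arn" \
    --output text
}

# Usage: lbc_policy_arn
```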

Prepare Kublr Kubernetes Cluster

1. Make sure that the cluster uses at least two availability zones for the worker nodes; AWS ALB requires this.

2. Adjust the Kublr cluster specification as follows to enable the IAM policy on the master and worker nodes. The snippet below is a sketch: the policy ARN is added under the AWS location section of both the master and the worker node groups, and the exact attribute name (shown here as iamRolePolicies) should be verified against the Kublr cluster specification reference for your Kublr version:

    spec:
      master:
        locations:
          - aws:
              iamRolePolicies:
                - { 'Fn::Sub' : 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/AWSLoadBalancerControllerIAMPolicy' }
      nodes:
        - locations:
            - aws:
                iamRolePolicies:
                  - { 'Fn::Sub' : 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/AWSLoadBalancerControllerIAMPolicy' }
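After the cluster is updated, it is possible to verify that the policy is attached to the node IAM roles. The sketch below uses the standard `aws iam list-attached-role-policies` call; the `role_has_lbc_policy` helper name is ours, and the role names themselves are generated by Kublr, so look them up in the AWS console or via `aws iam list-roles` and pass one as the argument.

```shell
# Sketch: verify that the policy is attached to a given node IAM role.
role_has_lbc_policy() {
  # $1 = IAM role name of a master or worker node group (generated by Kublr)
  aws iam list-attached-role-policies \
    --role-name "$1" \
    --query 'AttachedPolicies[].PolicyName' \
    --output text | grep -q AWSLoadBalancerControllerIAMPolicy
}

# Usage: role_has_lbc_policy <node-role-name>
```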

Deploy AWS LoadBalancer Controller

The deployment instructions are based on the Helm-based AWS LoadBalancer Controller deployment as described in the docs here and here.

1. Install the TargetGroupBinding CRDs

kubectl apply -k ""
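Once applied, the CRD registration can be verified with the standard `kubectl wait --for=condition=established` check; the `verify_lbc_crds` helper name is ours, while the `targetgroupbindings.elbv2.k8s.aws` CRD name comes from the controller project.

```shell
# Sketch: wait until the TargetGroupBinding CRD is registered and established.
verify_lbc_crds() {
  kubectl wait --for=condition=established \
    crd/targetgroupbindings.elbv2.k8s.aws --timeout=60s
}

# Usage: verify_lbc_crds
```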

2. Install the controller Helm chart:

# NOTE: The clusterName value must be set either via values.yaml or on the Helm command line.
# Replace <k8s-cluster-name> in the command below with the name of your Kubernetes cluster before running it.

helm upgrade -i \
  -n kube-system \
  aws-load-balancer-controller \
  --set clusterName=<k8s-cluster-name>
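After the chart installs, the controller rollout can be checked with the standard `kubectl rollout status` command. This is a sketch: the deployment name `aws-load-balancer-controller` follows the chart's default naming, so verify it with `kubectl -n kube-system get deploy` if your release name differs.

```shell
# Sketch: wait for the controller deployment to become available.
verify_lbc_rollout() {
  kubectl -n kube-system rollout status \
    deployment/aws-load-balancer-controller --timeout=120s
}

# Usage: verify_lbc_rollout
```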

Test AWS LoadBalancer Controller

You can test that AWS LoadBalancer Controller works as expected by following an echoserver example.

1. Deploy echoserver resources

kubectl apply -f

kubectl apply -f

kubectl apply -f

2. List echoserver resources to make sure they are created

kubectl get -n echoserver deploy,svc

Output similar to the following is expected:

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/echoserver   1/1     1            1           134m

NAME                 TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/echoserver   NodePort   <cluster-ip>   <none>        80:32057/TCP   135m

3. Deploy a test Ingress resource for echoserver

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  namespace: echoserver
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: echoserver
                port:
                  number: 80
EOF

4. Review the AWS LoadBalancer Controller logs and verify that the ALB is allocated without issues:

kubectl -n kube-system logs --tail 30 \
  $(kubectl -n kube-system get pods \
      -l app.kubernetes.io/name=aws-load-balancer-controller \
      -o name | tail -n 1)

5. Check the ingress rule status (provisioning an ALB may take a couple of minutes, so the address may not be available immediately):

kubectl -n echoserver get ingress echoserver
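Rather than re-running the command manually, a small polling helper (ours, not part of the controller tooling) can wait until the ALB hostname is published in the Ingress status:

```shell
# Sketch: poll the Ingress until the ALB hostname appears in its status.
wait_for_ingress_address() {
  # $1 = namespace, $2 = ingress name, $3 = max attempts (default 30)
  local i addr
  for i in $(seq 1 "${3:-30}"); do
    addr=$(kubectl -n "$1" get ingress "$2" \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null)
    if [ -n "$addr" ]; then
      echo "$addr"
      return 0
    fi
    sleep 10
  done
  return 1
}

# Usage: wait_for_ingress_address echoserver echoserver
```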

6. Try accessing the test endpoint:

curl -v \
  $(kubectl -n echoserver get ingress echoserver -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/echo
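When done testing, the echoserver resources can be removed. The sketch below (the `cleanup_echoserver` helper name is ours) deletes the Ingress first so that the controller can deprovision the ALB before the namespace goes away:

```shell
# Sketch: clean up the echoserver test resources.
cleanup_echoserver() {
  # delete the Ingress first so the controller deprovisions the ALB
  kubectl -n echoserver delete ingress echoserver
  # then remove the namespace with the remaining resources
  kubectl delete namespace echoserver
}

# Usage: cleanup_echoserver
```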