Tags: security, aws


1. Setup K8S Cluster and AWS


NB! This is only supported for Kublr agent versions released with Kublr 1.27.0 and later.

This guide can be used with an earlier Kublr Control Plane version, but only for clusters registered with the newer agent versions.


1.1. Kublr K8S Cluster Configuration


AWS IAM integration imposes the following requirements on the K8S cluster:

  1. The K8S API must be exposed on port 443.
    As Kublr uses port 6443 for the K8S API by default, the API port needs to be set to 443 in the cluster specification.
  2. The K8S API must publish OIDC issuer data on an unauthenticated endpoint.
    Kublr disables unauthenticated access to the K8S API by default, so it needs to be enabled in the cluster spec.
  3. The K8S API OIDC issuer configuration must include an OIDC audience that corresponds to a public cluster endpoint.
  4. An EKS IAM webhook instance must run in the cluster.
  5. A certmanager instance must run in the cluster (the EKS IAM webhook requires certificate management).


Certmanager is included in the Kublr ingress controller package, so make sure that you either enable the ingress controller on the features screen when deploying a cluster, or deploy certmanager separately.


All other requirements can be configured in the Kublr cluster specification as follows.

Note that the following snippet only shows the changes that need to be included in the cluster specification, not a full cluster spec.


spec:

  # enable certmanager
  features:
    ingress:
      ingressControllers:
        - nginx:
            enabled: true

  # K8S API port must be 443
  network:
    apiServerSecurePort: 443

  # Configure K8S API server OIDC IDP for AWS IAM integration
  kublrAgentConfig:
    kublr:
      kube_api_server_flag:
        # anonymous authentication must be enabled
        anonymous_auth: '--anonymous-auth=true'
        api_audiences:
          values:
            public_endpoint:
              value: 'https://${EIPmaster0}'
              order: '005'
        _service_account_issuer: '--service-account-issuer=https://${EIPmaster0}'
        external_hostname: '--external-hostname=${EIPmaster0}'

  packages:

    # enable unauthorized access to public IdP data
    unauthorized-idp-reviewer:
      chart:
        url: https://github.com/dysnix/charts/releases/download/raw-v0.3.2/raw-v0.3.2.tgz
      helmVersion: v3.7.2
      namespace: kube-system
      releaseName: unauthorized-idp-reviewer
      values:
        resources:
          - apiVersion: rbac.authorization.k8s.io/v1
            kind: ClusterRoleBinding
            metadata:
              name: oidc-reviewer
            roleRef:
              apiGroup: rbac.authorization.k8s.io
              kind: ClusterRole
              name: system:service-account-issuer-discovery
            subjects:
              - apiGroup: rbac.authorization.k8s.io
                kind: Group
                name: system:unauthenticated

    # deploy AWS IAM identity webhook
    aws-iam-identity-webhook:
      chart:
        url: https://github.com/jkroepke/helm-charts/releases/download/amazon-eks-pod-identity-webhook-2.1.1/amazon-eks-pod-identity-webhook-2.1.1.tgz
      helmVersion: v3.7.2
      namespace: aws-iam-identity-webhook
      releaseName: aws-iam-identity-webhook
      values:
        config:
          defaultAwsRegion: us-east-1 # set to the cluster region here


Note: Depending on the cluster configuration (single- or multi-master; ELB, NLB, and EIP allocation policies; AWS or another cloud) it may be necessary to use ${KublrNLBPublic}, ${KublrNLBPrivate}, ${KublrELBPublic}, ${KublrELBPrivate}, or another value instead of ${EIPmaster0}.

See the Kublr documentation on public Kubernetes API endpoint configuration for more details.


The default settings for a single-master AWS cluster result in an ${EIPmaster0} endpoint, while a multi-master AWS cluster is set up with a ${KublrNLBPublic} endpoint by default.
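
Once the cluster is up, a quick sanity check of the deployed packages can be done with kubectl. The ClusterRoleBinding and namespace names below follow the spec above; the exact pod names depend on the charts.

# kubeconfig for the new cluster
export KUBECONFIG=...

# ClusterRoleBinding created by the unauthorized-idp-reviewer package
kubectl get clusterrolebinding oidc-reviewer

# EKS IAM webhook pods
kubectl get pods -n aws-iam-identity-webhook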


1.2. Register the cluster API in AWS IAM as an OIDC IdP


After the cluster is up and running, it needs to be registered in AWS IAM as an OIDC IdP.


This can be done with the following script:


# use the cluster K8S API endpoint address from Kublr UI
# for example: export K8S_API_ADDRESS=52.216.24.22
export K8S_API_ADDRESS=...

# check K8S API cert is available
echo | openssl s_client -servername $K8S_API_ADDRESS -showcerts -connect $K8S_API_ADDRESS:443

# get K8S API cert fingerprint
export K8S_API_FINGERPRINT="$(echo |
  openssl s_client -servername $K8S_API_ADDRESS -showcerts -connect $K8S_API_ADDRESS:443 2>/dev/null |
  openssl x509 -inform pem -noout -fingerprint -sha1 2>/dev/null |
  grep Fingerprint= | grep -o '\([0-9a-fA-F]\{2\}:\)\{19\}[0-9a-fA-F]\{2\}' | tr -d ':' )"

# check the fingerprint; it should print something like "2879F24618769DB2095E556C1C84DCE07F112221"
echo $K8S_API_FINGERPRINT

# register K8S API in AWS IAM
aws iam create-open-id-connect-provider \
  --url https://$K8S_API_ADDRESS \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list $K8S_API_FINGERPRINT

# verify that K8S API is registered in AWS IAM
aws iam list-open-id-connect-providers
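
Optionally, the newly registered provider can be inspected directly. Its ARN has the form arn:aws:iam::<account-id>:oidc-provider/<K8S_API_ADDRESS>; the sketch below reuses the variables defined above.

export account_id=$(aws sts get-caller-identity --query "Account" --output text)

aws iam get-open-id-connect-provider \
  --open-id-connect-provider-arn arn:aws:iam::$account_id:oidc-provider/$K8S_API_ADDRESS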


1.3. (Optional) Integrate the cluster with AWS Secrets Manager


K8S can be integrated with AWS Secrets Manager via the Secrets Store CSI driver.

To enable this feature, include the two additional packages below in the cluster spec packages section.


spec:
  packages:

    csi-secrets-store:
      chart:
        url: https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts/secrets-store-csi-driver-1.3.4.tgz
      helmVersion: v3.7.2
      namespace: kube-system
      releaseName: csi-secrets-store

    secrets-provider-aws:
      chart:
        url: https://github.com/aws/secrets-store-csi-driver-provider-aws/releases/download/secrets-store-csi-driver-provider-aws-0.3.4/secrets-store-csi-driver-provider-aws-0.3.4.tgz
      helmVersion: v3.7.2
      namespace: kube-system
      releaseName: secrets-provider-aws
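
Once the packages are deployed, a quick check that the driver is in place: the Secrets Store CSI driver registers a CSIDriver object named secrets-store.csi.k8s.io, and the driver and provider pods run in kube-system (the exact pod names depend on the charts).

kubectl get csidrivers

kubectl get pods -n kube-system | grep -E 'csi-secrets-store|secrets-provider-aws'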


2. Use K8S AWS IAM integration


2.1. Simple SA and role test


1. Create a service account in the K8S cluster:


export namespace=default
export service_account=my-service-account

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: $namespace
  name: $service_account
EOF


2. Create an AWS IAM role that the service account will assume and associate it with the required AWS IAM policy.

Note! In this example the role is associated with the built-in AdministratorAccess policy.

Use a more restrictive policy in production; a scoped example is sketched after the commands below.


export account_id=$(aws sts get-caller-identity --query "Account" --output text)
export oidc_provider=$K8S_API_ADDRESS

cat >trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$oidc_provider:aud": "sts.amazonaws.com",
          "$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
        }
      }
    }
  ]
}
EOF

aws iam create-role \
  --role-name role-for-my-service-account-in-cluster-aws-integration \
  --assume-role-policy-document file://trust-relationship.json \
  --description "test role for default/my-service-account SA in Kublr K8S cluster cluster-aws-integration"

aws iam attach-role-policy \
  --role-name role-for-my-service-account-in-cluster-aws-integration \
  --policy-arn=arn:aws:iam::aws:policy/AdministratorAccess
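
For production, a scoped customer-managed policy can be attached instead of AdministratorAccess. Below is a minimal sketch sufficient for the Secrets Manager test in section 2.2; the policy name is illustrative, and the wildcard accounts for the random suffix AWS appends to secret ARNs.

cat >secrets-read-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:$account_id:secret:tst/tst-*"
    }
  ]
}
EOF

aws iam create-policy \
  --policy-name my-service-account-secrets-read \
  --policy-document file://secrets-read-policy.json

aws iam attach-role-policy \
  --role-name role-for-my-service-account-in-cluster-aws-integration \
  --policy-arn arn:aws:iam::$account_id:policy/my-service-account-secrets-read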

3. Set up the service account to assume the AWS IAM role


kubectl annotate serviceaccount -n $namespace $service_account \
  eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/role-for-my-service-account-in-cluster-aws-integration
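
The annotation can be verified on the service account object; eks.amazonaws.com/role-arn is the key read by the pod identity webhook.

kubectl get serviceaccount -n $namespace $service_account -o yaml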


4. Run a pod associated with the service account


kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-service-account
      containers:
      - name: my-app
        image: amazon/aws-cli
        command: [sleep, '36000']
EOF
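
Before exec-ing into the pod, wait for the deployment to become available:

kubectl rollout status deployment/my-app

kubectl get pods -l app=my-app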


5. Test AWS IAM access from the pod


# exec into the pod
kubectl exec -it $(kubectl get pods -l app=my-app -o name) -- bash

# Check AWS identity associated with the pod.
# The command should print an ARN referring to the assumed AWS IAM role created above
aws sts get-caller-identity
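
# If the printed identity is not the expected role, check (still inside the pod)
# that the webhook injected the web identity configuration; the
# amazon-eks-pod-identity-webhook should set AWS_ROLE_ARN and
# AWS_WEB_IDENTITY_TOKEN_FILE and mount the projected token (default paths shown)
env | grep AWS_
ls -la /var/run/secrets/eks.amazonaws.com/serviceaccount/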


2.2. AWS Secrets Manager integration test


Make sure that the K8S integration with AWS Secrets Manager is enabled in the cluster as described in section 1.3 above.


1. Create a test secret in AWS Secrets Manager


aws secretsmanager create-secret --name tst/tst \
  --secret-string '{"username":"tst-user","password":"tst-password"}'
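
The secret can be verified with the AWS CLI; describe-secret shows the metadata only, while get-secret-value prints the secret string.

aws secretsmanager describe-secret --secret-id tst/tst

aws secretsmanager get-secret-value --secret-id tst/tst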


2. Create a K8S SecretProviderClass associated with the secret


export account_id=$(aws sts get-caller-identity --query "Account" --output text)

kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:us-east-1:$account_id:secret:tst/tst"
        jmesPath:
          - path: username
            objectAlias: username
          - path: password
            objectAlias: password
EOF
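
SecretProviderClass is a namespaced object and must exist in the same namespace as the pods that mount it (the default namespace in this example). It can be checked with:

kubectl get secretproviderclass aws-secrets -o yaml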


3. Run a pod with the secret mounted


kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-secret
  labels:
    app: my-app-secret
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-secret
  template:
    metadata:
      labels:
        app: my-app-secret
    spec:
      serviceAccountName: my-service-account
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "aws-secrets"
      containers:
      - name: my-app-secret
        image: amazon/aws-cli
        command: [sleep, '36000']
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
EOF
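
Wait for the deployment to become available; if the pod is stuck in ContainerCreating, the secret volume mount most likely failed, and the pod events usually show the provider error.

kubectl rollout status deployment/my-app-secret

kubectl describe pods -l app=my-app-secret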


4. Verify that secret data is available in the pod


# exec into the pod
kubectl exec -it $(kubectl get pods -l app=my-app-secret -o name) -- bash

# Check secret data
ls -la /mnt/secrets-store
echo "$(cat /mnt/secrets-store/username)"
echo "$(cat /mnt/secrets-store/password)"
echo "$(cat /mnt/secrets-store/arn*)"


3. Useful commands for troubleshooting


1. Check public access to the cluster IdP OIDC endpoint


export KUBECONFIG=...
export K8S_API_ADDRESS=...

kubectl get --raw=/.well-known/openid-configuration

kubectl get --raw=/openid/v1/jwks

curl -k https://$K8S_API_ADDRESS/.well-known/openid-configuration

curl -k https://$K8S_API_ADDRESS/openid/v1/jwks
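

2. Check the EKS IAM webhook registration and logs


The namespace below follows the spec in section 1.1; the exact pod name depends on the chart, so list the pods first and substitute the placeholder.


kubectl get mutatingwebhookconfigurations

kubectl get pods -n aws-iam-identity-webhook

# substitute the pod name printed by the previous command
kubectl logs -n aws-iam-identity-webhook <webhook-pod-name>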


4. References