TABLE OF CONTENTS
- Overview
- Compatibility
- Prerequisites
- How to deploy Kublr with Cilium
- Cluster Specification customization
- Confirm and verify Cilium installation
- Relevant documentation
Overview
What is Cilium?
Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes.
At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration.
Kublr and Cilium
Kublr supports several custom Container Network Interface (CNI) plugins out of the box, such as Calico, Flannel, Canal, and Weave, and the flexibility of the Kublr platform also makes it easy to integrate other networking solutions.
This article explains how to deploy the Kublr Platform or a Kublr Cluster with Cilium as a custom CNI plugin on the common cloud providers Amazon Web Services, Google Cloud Platform, and Microsoft Azure, in order to utilize all features provided by Cilium.
Compatibility
Kublr supports Cilium as a custom CNI starting with version 1.27.0-alpha.0.
Prerequisites
In order to deploy the Kublr Platform or a Kublr Cluster with Cilium as a custom CNI, ensure that the following conditions are met before the actual deployment:
- You have sufficient access rights to create platforms and clusters in Kublr Bootstrapper.
- You have configured credentials for the required cloud provider in Kublr Bootstrapper.
How to deploy Kublr with Cilium
1. In the Kublr Bootstrapper UI, navigate to the "Clusters" section and select "+" to start adding a new Kublr Platform or Kublr cluster.
2. Under the "Advanced options" section, make sure "cni" is selected as the CNI provider. With this value as the network provider, Kublr deploys a Kubernetes cluster that is ready for installation of a CNI network provider but does not install one itself (see the sketch after this list).
3. Use "Customize specification" to add the required Cilium-specific configuration as described below.
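For reference, the CNI provider selected in the UI corresponds to the network provider value in the generated cluster specification. A minimal sketch, assuming the standard Kublr specification layout (the exact field path may differ between Kublr versions, so treat this as an assumption and verify against the specification generated by your Bootstrapper):

spec:
  network:
    provider: cni   # assumption: "cni" leaves CNI installation to the user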
Cluster Specification customization
Kublr needs to tolerate several taints in order to run in a Kublr Platform or Kublr Cluster and set up the Cilium CNI.
AWS specific steps
In order to use Cilium with Kublr on Amazon Web Services, managed node groups should be tainted with node.cilium.io/agent-not-ready=true:NoExecute to ensure that application pods are only scheduled once Cilium is ready to manage them.
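The kublrAgentConfig section in the specification below applies this taint automatically at node registration; for reference, applying the same taint manually to an existing node would look like this (node name is a placeholder):

kubectl taint nodes <node-name> node.cilium.io/agent-not-ready=true:NoExecute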
In addition, AWS-specific settings should be used for the Cilium Helm installation.
The following customization can be used in the cluster specification to deploy Cilium with inter-node encryption provided by WireGuard and the Hubble UI as an observability feature:
** You need to replace <REPLACE_WITH_ACTUAL_CLUSTER_NAME> with the real platform/cluster name selected at the platform/cluster creation step **
spec:
  features:
    kublrOperator:
      chart:
        version: 1.2XXX
      enabled: true
      values:
        tolerations:
          - key: "node.kubernetes.io/not-ready"
            operator: "Exists"
            effect: "NoSchedule"
          - effect: "NoSchedule"
            key: "node.cloudprovider.kubernetes.io/uninitialized"
            operator: "Equal"
            value: "true"
          - effect: "NoExecute"
            key: "node.cilium.io/agent-not-ready"
            operator: "Equal"
            value: "true"
  kublrAgentConfig:
    taints:
      node_cilium_agent_not_ready_taint1: 'node.cilium.io/agent-not-ready=true:NoExecute'
  packages:
    cilium:
      chart:
        name: cilium
        repoUrl: https://helm.cilium.io/
        version: 1.14.2
      helmVersion: v3.12.3
      releaseName: cilium
      namespace: kube-system
      values:
        kubeProxyReplacement: "true"
        encryption:
          enabled: true
          type: wireguard
          nodeEncryption: true
        cluster:
          id: 0
          name: <REPLACE_WITH_ACTUAL_CLUSTER_NAME>
        hubble:
          dashboards:
            enabled: true
            label: grafana_dashboard
            labelValue: '1'
            namespace: kublr
          enabled: true
          ui:
            enabled: true
          relay:
            enabled: true
        nodeinit:
          enabled: true
        operator:
          replicas: 1
        tunnel: vxlan
Please refer to the Cilium installation documentation [2] for all available options.
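For comparison, the packages section above is roughly equivalent to the following direct Helm installation of the upstream chart (shown for illustration only; within Kublr the release is managed by the platform itself):

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.14.2 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set encryption.enabled=true \
  --set encryption.type=wireguard \
  --set encryption.nodeEncryption=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true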
GCP specific steps
On Google Cloud Platform, nodes should likewise be tainted with node.cilium.io/agent-not-ready=true:NoExecute to ensure that application pods are only scheduled once Cilium is ready to manage them.
As with AWS, Helm values should be specified in the cluster specification to deploy Cilium with inter-node encryption provided by WireGuard and the Hubble UI as an observability feature, as shown below:
** You need to replace <REPLACE_WITH_ACTUAL_CLUSTER_NAME> with the real platform/cluster name selected at the platform/cluster creation step **
spec:
  features:
    kublrOperator:
      chart:
        version: 1.2XXX
      enabled: true
      values:
        tolerations:
          - key: "node.kubernetes.io/not-ready"
            operator: "Exists"
            effect: "NoSchedule"
          - key: "node.kubernetes.io/network-unavailable"
            operator: "Exists"
            effect: "NoSchedule"
          - effect: "NoExecute"
            key: "node.cilium.io/agent-not-ready"
            operator: "Equal"
            value: "true"
  kublrAgentConfig:
    taints:
      node_cilium_agent_not_ready_taint1: 'node.cilium.io/agent-not-ready=true:NoExecute'
  packages:
    cilium:
      chart:
        name: cilium
        repoUrl: https://helm.cilium.io/
        version: 1.14.2
      helmVersion: v3.12.3
      releaseName: cilium
      namespace: kube-system
      values:
        kubeProxyReplacement: "true"
        encryption:
          enabled: true
          type: wireguard
          nodeEncryption: true
        cluster:
          id: 0
          name: <REPLACE_WITH_ACTUAL_CLUSTER_NAME>
        hubble:
          dashboards:
            enabled: true
            label: grafana_dashboard
            labelValue: '1'
            namespace: kublr
          enabled: true
          ui:
            enabled: true
          relay:
            enabled: true
        nodeinit:
          enabled: true
        operator:
          replicas: 1
        tunnel: vxlan
Please refer to the Cilium installation documentation [2] for all available options.
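The hubble.dashboards settings in the specifications above publish Hubble dashboards as ConfigMaps labeled for discovery by a Grafana dashboard sidecar. Assuming the specification was applied as-is (label grafana_dashboard=1, namespace kublr), the dashboards can be listed with:

kubectl -n kublr get configmaps -l grafana_dashboard=1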
Microsoft Azure specific steps
Microsoft Azure does not require any specific node configuration, so the only required customization in the cluster specification is the Cilium Helm installation (note the aksbyocni option, which configures the chart for bring-your-own-CNI AKS clusters):
** You need to replace <REPLACE_WITH_ACTUAL_CLUSTER_NAME> with the real platform/cluster name selected at the platform/cluster creation step **
...
spec:
  features:
    kublrOperator:
      chart:
        version: 1.2XXX
      enabled: true
      values:
        tolerations:
          - key: "node.kubernetes.io/not-ready"
            operator: "Exists"
            effect: "NoSchedule"
  packages:
    cilium:
      chart:
        name: cilium
        repoUrl: https://helm.cilium.io/
        version: 1.14.2
      helmVersion: v3.12.3
      releaseName: cilium
      namespace: kube-system
      values:
        aksbyocni:
          enabled: true
        kubeProxyReplacement: "true"
        encryption:
          enabled: true
          type: wireguard
          nodeEncryption: true
        cluster:
          id: 0
          name: <REPLACE_WITH_ACTUAL_CLUSTER_NAME>
        hubble:
          dashboards:
            enabled: true
            label: grafana_dashboard
            labelValue: '1'
            namespace: kublr
          enabled: true
          ui:
            enabled: true
          relay:
            enabled: true
        nodeinit:
          enabled: true
        operator:
          replicas: 1
        tunnel: vxlan
...
Please refer to the Cilium installation documentation [2] for all available options.
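With hubble.ui.enabled set to true as in the specifications above, the Hubble UI can be reached by port-forwarding its service (the service name and port below are the chart defaults, stated here as an assumption):

kubectl -n kube-system port-forward svc/hubble-ui 12000:80

Then open http://localhost:12000 in a browser.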
Confirm and verify Cilium installation
In order to verify the Cilium installation, perform the following actions:
1. Connect to the Kublr Platform or Kublr Cluster with kubectl and check the status of the Cilium pods. All pods should be in the "Running" status:
kubectl -n kube-system get pods -l k8s-app=cilium
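The remaining checks use the cilium command-line tool. If it is not installed yet, it can be fetched from the official releases, for example (Linux amd64 shown, following the upstream installation instructions; adjust for your OS and architecture):

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin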
2. Using the cilium command-line tool, perform the following checks:
> cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 7, Ready: 7/7, Available: 7/7
Containers:            cilium             Running: 7
                       cilium-operator    Running: 1
Cluster Pods:          43/43 managed by Cilium
Image versions         cilium             quay.io/cilium/cilium:v1.13.4@sha256:bde8800d61aaad8b8451b10e247ac7bdeb7af187bb698f83d40ad75a38c1ee6b: 7
                       cilium-operator    quay.io/cilium/operator-generic:v1.13.4@sha256:09ab77d324ef4d31f7d341f97ec5a2a4860910076046d57a2d61494d426c6301: 1
3. It is also possible to execute a full connectivity test using the cilium command-line tool:
> cilium connectivity test
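The connectivity test deploys client and echo workloads into a dedicated cilium-test namespace; once the test completes, these resources can be removed with:

kubectl delete namespace cilium-test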
4. Use the following command to get extended information about the Cilium status:
> kubectl -n kube-system exec ds/cilium -- cilium status --verbose
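Since the specifications above enable WireGuard, the agent status can also be used to confirm that inter-node encryption is active. The following filter should report a line starting with "Encryption: Wireguard" (the exact output varies by environment):

> kubectl -n kube-system exec ds/cilium -- cilium status | grep Encryption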
Relevant documentation
The following resources can be used to get additional information about the Cilium CNI:
[1] Kubernetes Network Plugins
[2] Cilium Installation using Helm
[3] Cilium - Considerations on Node Pool Taints and Unmanaged Pods