TABLE OF CONTENTS


Overview


Creating a Kubernetes cluster without worker nodes in Azure requires additional settings due to certain rules hard-coded in the Kubernetes Azure Cloud Provider.


The Kubernetes Azure Cloud Provider will not add nodes labeled with the master role label to the backend pool of LoadBalancer Services, and as a result such Services will not be able to serve external traffic.


There are two ways to avoid this issue:


1) use the LegacyNodeRoleBehavior=false Kubernetes feature flag to enable adding masters to the LB backend pool


2) remove the master role label from the master nodes


Both methods are described below.


Please also review the port use considerations section before setting up the cluster.


Port use considerations


In a standard cluster with separate master and worker nodes there are no port collisions between system ports (such as the Kubernetes API port) and ports used by Kubernetes Services.

When there are no worker nodes and masters have to be used as workers, however, due to the way Azure networking and load balancing are set up, it is impossible for Services to use the same port as the Kubernetes API.


By default the Kubernetes API is configured to use port 443 (older versions) or 6443 (newer versions), so if you are planning to use Services with the same port number, change the Kubernetes API port to some other value that is unlikely to be used by Kubernetes Services of the type LoadBalancer.
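

For example, a LoadBalancer Service exposed on port 443 would collide with the default Kubernetes API port on a master-only cluster. The manifest below is purely illustrative (the Service name and selector are hypothetical) and shows the kind of configuration that makes moving the API port necessary:

apiVersion: v1
kind: Service
metadata:
  name: my-app              # hypothetical Service used for illustration only
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: https
      port: 443             # conflicts with the default Kubernetes API port on a master-only cluster
      targetPort: 8443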


It is also recommended to avoid using ports in the Kubernetes NodePort range (30000-32767 by default) for master-only clusters.


Follow this guide to update the Kubernetes API port number if necessary.
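

Assuming the Kublr cluster specification supports the same flag-override pattern used for the feature gates in Method 1 below, changing the API server port could look similar to the following sketch. It uses the standard kube-apiserver --secure-port flag, but the exact Kublr spec structure is an assumption, so verify it against the guide above before applying:

spec:
  master:
    kublrAgentConfig:
      kublr:
        kube_api_server_flag:
          # assumed to follow the same flag-override pattern as the feature
          # gate overrides in Method 1 below; verify against the guide above
          secure_port:
            flag: '--secure-port='
            values:
              secure_port:
                value: '6443'   # choose a port unlikely to be used by LoadBalancer Services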


An alternative solution to port conflicts on the master nodes is to use a Floating IP for the Kubernetes API load balancer rules. This allows reusing ports without limitations. The solution is described in the support article "Azure: Using Floating IP for Kubernetes API LoadBalancers".


Removing taints


By default, master nodes in Kublr Kubernetes clusters are tainted with the node-role.kubernetes.io/master taint so that only system components are allowed to run on them.


For master-only clusters these taints need to be removed to allow running user applications on the master nodes. Both methods described below include taint removal in the cluster spec snippets, so if new clusters are created using these snippets, they will be created without the taints.


If you are updating an existing cluster, the taints will not be removed automatically when the cluster spec is updated. Use the following command to remove them:


kubectl taint nodes --all node-role.kubernetes.io/master-
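

To confirm the taints are gone, you can list the taints currently set on each node; the standard kubectl command below should show <none> for every node once the removal succeeds:

kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'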


Method 1: use LegacyNodeRoleBehavior=false feature flag

Set the feature flag on the Kubernetes components via the Kublr cluster specification as follows:

spec:
  kublrAgentConfig:
    kublr:
      kubelet_config:
        featureGates:
          LegacyNodeRoleBehavior: false
      kubelet_flag:
        feature_gates:
          flag: '--feature-gates='
          values:
            legacynoderolebehavior:
              value: 'LegacyNodeRoleBehavior=false'
      kube_proxy_flag:
        feature_gates:
          flag: '--feature-gates='
          values:
            legacynoderolebehavior:
              value: 'LegacyNodeRoleBehavior=false'
  master:
    kublrAgentConfig:
      taints:
        node_role_kubernetes_io_master: ''
      kublr:
        kube_api_server_flag:
          feature_gates:
            flag: '--feature-gates='
            values:
              legacynoderolebehavior:
                value: 'LegacyNodeRoleBehavior=false'
        kube_scheduler_flag:
          feature_gates:
            flag: '--feature-gates='
            values:
              legacynoderolebehavior:
                value: 'LegacyNodeRoleBehavior=false'
        kube_controller_manager_flag:
          feature_gates:
            flag: '--feature-gates='
            values:
              legacynoderolebehavior:
                value: 'LegacyNodeRoleBehavior=false'
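

Once the cluster is created or updated with this spec, a quick way to verify that master nodes are added to the load balancer backend pool is to expose a test workload via a LoadBalancer Service and check that it receives an external IP and serves traffic. The nginx deployment below is just an example workload, not part of the required setup:

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80
kubectl get service nginx-test --watch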


Method 2: remove master role and service node exclusion labels from the master nodes

Kublr deploys a static pod that refreshes the master role label regularly, so this pod needs to be disabled in the cluster for the label to be removed. Modify the cluster spec as follows:

spec:
  kublrAgentConfig:
    taints:
      node_role_kubernetes_io_master: ''
    labels:
      node_kubernetes_io_exclude_from_external_load_balancers: ''
    extensions:
      templates_label_master_manifest:
        content: '# empty'
        path: templates/manifests-master/label-master.manifest


After the re-labeler is disabled, the labels can be removed from the nodes with the following kubectl commands:

kubectl label nodes -l node-role.kubernetes.io/master node-role.kubernetes.io/master-
kubectl label nodes -l node.kubernetes.io/exclude-from-external-load-balancers node.kubernetes.io/exclude-from-external-load-balancers-
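

To verify that the labels are removed, list the corresponding label columns for all nodes; both columns should be empty after the commands above complete:

kubectl get nodes -L node-role.kubernetes.io/master -L node.kubernetes.io/exclude-from-external-load-balancers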