Overview


Creating a Kubernetes cluster without worker nodes in Azure requires additional settings due to certain rules hard-coded in the Kubernetes Azure Cloud Provider.


The Kubernetes Azure Cloud Provider will not add nodes labeled with the master role label to the backend pool of LoadBalancer Services, and as a result such Services will not be able to serve external traffic.
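

You can check which nodes currently carry this label (and are therefore excluded from the backend pool) with a standard kubectl query:

kubectl get nodes -L node-role.kubernetes.io/master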


There are two ways to avoid this issue:


1) use the LegacyNodeRoleBehavior=false Kubernetes feature flag to enable adding masters to the LB backend


2) remove the master role label from the master nodes


Both methods are described below.


Please also review the port use considerations section before setting the cluster up.


Port use considerations


In a standard cluster with separate master and worker nodes there are no port collisions between system ports (such as the Kubernetes API port) and ports used by Kubernetes Services.

When there are no worker nodes and masters have to be used as workers, however, the way Azure networking and load balancing are set up makes it impossible for Services to use the same port as the Kubernetes API.


By default the Kubernetes API is configured to use port 443, so if you are planning to use Services with the same port number, change the Kubernetes API port to some other value that is unlikely to be used by Kubernetes Services of the LoadBalancer type.
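

For example, a Service like the following (the name and selector here are hypothetical) would ask Azure to forward port 443 to the cluster nodes, that is, to the same port the API server listens on by default:

apiVersion: v1
kind: Service
metadata:
  name: ingress-https
spec:
  type: LoadBalancer
  selector:
    app: ingress
  ports:
    - port: 443          # collides with the default Kubernetes API port
      targetPort: 8443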


It is also recommended to avoid using ports in the Kubernetes NodePort range (30000-32767 by default) for master-only clusters.
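

If you are not sure which NodePort range the cluster uses, you can inspect the API server's --service-node-port-range flag; a minimal sketch, assuming the API server runs as a static pod with its manifest in /etc/kubernetes/manifests (the path may differ in a Kublr installation):

# run on a master node; if the flag is absent,
# the Kubernetes default range 30000-32767 applies
grep -- '--service-node-port-range' /etc/kubernetes/manifests/kube-apiserver.yaml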


Follow this guide to update the Kubernetes API port number if necessary.
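

For orientation, the listening port of the API server is controlled by its --secure-port flag. Below is a hypothetical sketch of overriding it through the same flag-override pattern used in the specifications further down; the key name secure_port and the port value 6443 are assumptions, and changing the port also affects load balancer rules and kubeconfig endpoints, so follow the guide for the supported procedure:

spec:
  master:
    kublrAgentConfig:
      kublr:
        kube_api_server_flag:
          secure_port:            # hypothetical key name
            flag: '--secure-port='
            values:
              port:
                value: '6443'     # assumed alternative port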


Method 1: use LegacyNodeRoleBehavior=false feature flag

Set the feature flag on the Kubernetes components via the Kublr cluster specification as follows:

spec:
  kublrAgentConfig:
    kublr:
      # set LegacyNodeRoleBehavior=false for the kubelet, both in its
      # configuration file and on its command line
      kubelet_config:
        featureGates:
          LegacyNodeRoleBehavior: false
      kubelet_flag:
        feature_gates:
          flag: '--feature-gates='
          values:
            legacynoderolebehavior:
              value: 'LegacyNodeRoleBehavior=false'
      # ... and for kube-proxy
      kube_proxy_flag:
        feature_gates:
          flag: '--feature-gates='
          values:
            legacynoderolebehavior:
              value: 'LegacyNodeRoleBehavior=false'
  master:
    kublrAgentConfig:
      # clear the default master taint so that regular workloads
      # can be scheduled on the master nodes
      taints:
        node_role_kubernetes_io_master: ''
      kublr:
        # ... and set the feature gate for the control plane components
        kube_api_server_flag:
          feature_gates:
            flag: '--feature-gates='
            values:
              legacynoderolebehavior:
                value: 'LegacyNodeRoleBehavior=false'
        kube_scheduler_flag:
          feature_gates:
            flag: '--feature-gates='
            values:
              legacynoderolebehavior:
                value: 'LegacyNodeRoleBehavior=false'
        kube_controller_manager_flag:
          feature_gates:
            flag: '--feature-gates='
            values:
              legacynoderolebehavior:
                value: 'LegacyNodeRoleBehavior=false'
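

After the cluster is created or updated with this specification, the effect can be checked with a throwaway Service of type LoadBalancer (the deployment and Service names below are arbitrary); with the feature gate set to false, the Service should receive an external IP backed by the master nodes:

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get service lb-test -w

Port 80 is used in this example to avoid the API port collision described in the port use considerations section.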


Method 2: remove master role label from the master nodes

Kublr deploys a static pod that regularly refreshes the master role label, so this pod needs to be disabled in the cluster for the label removal to persist. Modify the cluster spec as follows:

spec:
  kublrAgentConfig:
    extensions:
      # override the label-refresher manifest with an empty file
      # so that the static pod is no longer deployed
      templates_label_master_manifest:
        content: '# empty'
        path: templates/manifests-master/label-master.manifest


After the re-labeler is disabled, the label can be removed from the nodes by the following kubectl command:

kubectl label nodes -l node-role.kubernetes.io/master node-role.kubernetes.io/master-
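

The trailing '-' in the command removes the label. To verify, re-run the label query from the overview; the label column should now be empty, and the Azure Cloud Provider will start adding the nodes to the backend pool of LoadBalancer Services:

kubectl get nodes -L node-role.kubernetes.io/master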