Kublr 1.20 introduced a number of improvements in the Azure cluster infrastructure architecture that are not directly compatible with the previously used architecture. To maintain compatibility and enable smooth migration from Kublr 1.19 to Kublr 1.20, the pre-1.20 architecture is supported through a specific combination of parameters in the cluster specification, as described in the legacy pre-Kublr-1.20 clusters documentation.


Migration of existing pre-1.20 cluster specifications to the Kublr 1.20 format is performed automatically on the first cluster update in Kublr 1.20. It is still recommended to migrate a pre-1.20 Kublr Azure Kubernetes cluster to the new Azure architecture as soon as convenient, using the procedure described below.


The upgrade procedure requires cluster downtime, so plan accordingly.


1. Verify that the cluster is up, running, and healthy, and that the cluster features are upgraded to the current version.
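
A basic health check can be run from a workstation with kubectl configured for the cluster, for example:

# verify that all nodes are Ready and that system pods are running
kubectl get nodes
kubectl get pods --all-namespaces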


2. Prepare applications running in the cluster for downtime (backup, shutdown, etc.) as necessary.


3. Delete the following Azure resources (whichever are present) in the Azure portal or using the Azure CLI tool az (a sample az script is shown after the list):

  • ${cluster-name}-LoadBalancer-public - Load balancer
  • ${cluster-name}-LoadBalancer-private - Load balancer
  • ${cluster-name} - Load balancer
  • ${cluster-name}-internal - Load balancer
  • ${cluster-name}-agent-availabilitySet - Availability set
  • ${cluster-name}-agent-* - Virtual machine
  • ${cluster-name}-agent-*-osDisk - Disk
  • ${cluster-name}-agentNic-* - Network interface
  • ${cluster-name}-master-availabilitySet - Availability set
  • ${cluster-name}-master* - Virtual machine
  • ${cluster-name}-master*-osDisk - Disk
  • ${cluster-name}-masterNic* - Network interface
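
The deletion can be scripted with the az CLI. The sketch below is illustrative only: the RG and CLUSTER values are placeholders, the master0 / masterNic0 names are examples of the wildcard patterns above, and each name should be verified against the actual resources in your resource group before deleting.

# placeholders - replace with your resource group and cluster name
RG=my-resource-group
CLUSTER=my-cluster

# load balancers (delete whichever exist)
az network lb delete --resource-group "$RG" --name "${CLUSTER}-LoadBalancer-public"
az network lb delete --resource-group "$RG" --name "${CLUSTER}-LoadBalancer-private"
az network lb delete --resource-group "$RG" --name "${CLUSTER}"
az network lb delete --resource-group "$RG" --name "${CLUSTER}-internal"

# virtual machines, OS disks, and NICs (repeat for every master and agent VM;
# master0 / masterNic0 are illustrative names matching the patterns above)
az vm delete --resource-group "$RG" --name "${CLUSTER}-master0" --yes
az disk delete --resource-group "$RG" --name "${CLUSTER}-master0-osDisk" --yes
az network nic delete --resource-group "$RG" --name "${CLUSTER}-masterNic0"

# availability sets
az vm availability-set delete --resource-group "$RG" --name "${CLUSTER}-master-availabilitySet"
az vm availability-set delete --resource-group "$RG" --name "${CLUSTER}-agent-availabilitySet"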


4. Be careful and make sure that you DO NOT REMOVE the following resources (the listing commands after this list can be used to double-check what remains):

  • Any other managed disks, especially disks with the names pvc-* and ${cluster-name}-master*-dataDisk
  • Any public IP addresses
  • Any storage accounts
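
To double-check what remains in the resource group before and after the cleanup, the resources can be listed with the az CLI (same RG placeholder as above):

# list managed disks - pvc-* and ${cluster-name}-master*-dataDisk disks must remain
az disk list --resource-group "$RG" --query "[].name" -o table

# list public IP addresses and storage accounts - these must remain as well
az network public-ip list --resource-group "$RG" -o table
az storage account list --resource-group "$RG" -o table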


5. Update the cluster spec as follows:


Change the properties related to the legacy Azure architecture:

spec:

  locations:
    - azure:
        loadBalancerSKU: Standard

      # kublrAgentConfig:
      #   kublr_cloud_provider:
      #     azure:
      #       vm_type: '' # make sure that this property is not set

      enableMasterSSH: false # you may keep it true for troubleshooting, or set to false to disable

  master:
    locations:
      - azure:

          groupType: AvailabilitySet

          masterLBSeparate: false

  nodes:
    - locations:
        - azure:

            groupType: AvailabilitySet

Update the Kublr seeder and agent versions to the latest patch releases available in the KCP. For example, if the migrated cluster uses Kubernetes 1.17 and the Kublr agent 1.17, and the latest 1.17 agent version in the new KCP is 1.17.17-5, specify the following seeder and agent URLs:

spec:
  kublrVersion: 1.17.17-5
  kublrSeederTgzUrl: 'https://repo.kublr.com/repository/gobinaries/kublr/1.17.17-5/kublr-1.17.17-5-linux.tar.gz'
  kublrAgentTgzUrl: 'https://repo.kublr.com/repository/gobinaries/kublr/1.17.17-5/kublr-1.17.17-5-linux.tar.gz'

Make sure that the sshKey property is removed from the master and worker node group specs, and that only sshKeySecretRef is present:

spec:
  master:
    locations:
      - azure:
          sshKeySecretRef: my-ssh-key
          # sshKey: ...
  nodes:
    - locations:
        - azure:
            sshKeySecretRef: my-ssh-key
            # sshKey: ...


6. Validate the spec before submitting, and if there are no warnings, submit it for the update.


7. Wait for the cluster update to complete in Kublr.


8. Upgrade the Azure public IP addresses to the Standard SKU in the Azure portal or using the Azure CLI tool az, for example as shown below.
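
A minimal sketch with the az CLI (the RG value and the public IP name are placeholders; the actual names can be taken from the listing command). Note that Azure may impose restrictions on upgrading a Basic public IP in place, for example requiring it to be dissociated from resources first.

# list public IP addresses and their current SKUs
az network public-ip list --resource-group "$RG" --query "[].{name:name, sku:sku.name}" -o table

# upgrade a Basic public IP address to the Standard SKU
az network public-ip update --resource-group "$RG" --name my-public-ip --sku Standard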