TABLE OF CONTENTS
- Overview and preparations
- Step 1: Prerequisites
- Step 2: Create AWS S3 bucket
- Step 3: Change configuration for Master Nodes in Cluster specification
- Step 4: Add the "velero" package with values from the Velero Helm chart
- Step 5: Install the Velero client
- Step 6: Check Velero functionality
Overview and preparations
Velero provides a wide range of features, from simple backup and restore to disaster recovery and cluster migration. It is a flexible and fairly universal tool for backing up and restoring whole Kubernetes clusters, including persistent storage.
The standard Velero installation instructions require an additional IAM user with broad permissions. This guide shows how to perform all the necessary steps to install Velero using the cluster's existing permissions and policies, configured through the Kublr cluster specification.
There is no need to create a dedicated IAM user: this method lets Velero reuse the cluster's AWS permissions and policies.
Step 1: Prerequisites
Kublr 1.24+
Access to AWS through the CLI or the console to create an S3 bucket
Permissions to edit and apply the cluster specification in the Kublr Control Plane (KCP)
Ability to run kubectl and velero commands in a terminal
Step 2: Create AWS S3 bucket
Create an S3 bucket in AWS via the CLI or the console. Velero requires an object storage bucket to store backups in, preferably one dedicated to a single Kubernetes cluster. Create the bucket, replacing the placeholders appropriately:
aws s3api create-bucket \
--bucket $AWS_BUCKET_NAME \
--region $AWS_REGION_NAME
Step 3: Change configuration for Master Nodes in the Cluster specification
In this setup Velero will be installed on the master nodes, so ensure that the master nodes have enough resources and that you monitor them. This allows managing and scaling worker nodes seamlessly:
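Before applying any cluster specification changes, it may be worth confirming that the bucket from Step 2 exists and is reachable with your current credentials. A minimal check, assuming the same AWS_BUCKET_NAME variable as above:

```shell
# head-bucket returns a non-zero exit code if the bucket does not exist
# or is not accessible with the current credentials.
aws s3api head-bucket --bucket "$AWS_BUCKET_NAME" \
  && echo "Bucket $AWS_BUCKET_NAME is accessible"
```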
locations:
  - aws:
      ...
      cloudFormationExtras:
        iamRoleMaster:
          Properties:
            Policies:
              - PolicyDocument:
                  Statement:
                    - Action:
                        - s3:GetObject
                        - s3:DeleteObject
                        - s3:PutObject
                        - s3:AbortMultipartUpload
                        - s3:ListMultipartUploadParts
                      Effect: Allow
                      Resource: 'arn:aws:s3:::<AWS_BUCKET_NAME>/*'
                    - Action:
                        - s3:ListBucket
                      Effect: Allow
                      Resource: 'arn:aws:s3:::<AWS_BUCKET_NAME>'
                  Version: '2012-10-17'
                PolicyName: <AWS_POLICY_NAME>
      ...
    name: aws1
Step 4: Add "Velero" to the "packages" section with values according to the Velero Helm chart and the Velero AWS plugin:
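Once the updated specification has been applied, you can confirm that the inline policy actually reached the master IAM role. The role name below is illustrative; Kublr derives it from your cluster name, so look it up first:

```shell
# Find the master role for your cluster (the exact name is cluster-specific),
# then list its inline policies -- the <AWS_POLICY_NAME> policy should appear.
aws iam list-roles --query "Roles[?contains(RoleName, 'master')].RoleName"
aws iam list-role-policies --role-name <MASTER_ROLE_NAME>
```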
packages:
  velero:
    chart:
      name: velero
      url: https://github.com/vmware-tanzu/helm-charts/releases/download/velero-4.0.2/velero-4.0.2.tgz
    helmVersion: v3.11.1
    namespace: velero
    releaseName: velero
    values:
      backupsEnabled: true
      configuration:
        backupStorageLocation:
          - accessMode: ReadWrite
            bucket: <AWS_BUCKET_NAME>
            config:
              region: <AWS_REGION>
            name: default
            provider: aws
        logFormat: json
        logLevel: debug
        namespace: velero
        volumeSnapshotLocation:
          - config:
              region: <AWS_REGION>
            name: default
            provider: aws
      credentials:
        useSecret: false
      dnsPolicy: ClusterFirst
      initContainers:
        - image: velero/velero-plugin-for-aws:v1.7.0
          imagePullPolicy: IfNotPresent
          name: velero-plugin-for-aws
          volumeMounts:
            - mountPath: /target
              name: plugins
      metrics:
        enabled: true
        scrapeInterval: 300s
        scrapeTimeout: 60s
      nodeSelector:
        kublr.io/node-group: master
      resources:
        limits:
          cpu: 1000m
          memory: 512Mi
        requests:
          cpu: 500m
          memory: 128Mi
      snapshotsEnabled: true
      tolerations:
        - effect: NoSchedule
          operator: Exists
        - effect: NoExecute
          operator: Exists
      upgradeCRDs: true
Validate and update the cluster specification with all these changes.
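After the cluster specification has been updated, a quick way to confirm the deployment succeeded is to check the Velero pod and its backup storage location. A sketch, assuming the chart was installed into the velero namespace as configured above:

```shell
# The velero pod should be Running on a master node, and the
# backup storage location should report phase "Available".
kubectl -n velero get pods
velero backup-location get -n velero
```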
Step 5: Install the Velero client
Install the Velero CLI according to the official installation instructions for your platform.
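As one possible route, the CLI can be downloaded directly from the GitHub releases page. The example below assumes Linux on amd64 and Velero v1.11.0 (chosen to pair with chart 4.0.2; check the chart's appVersion for the exact client version to use):

```shell
# Download, unpack, and install the velero binary; adjust OS/arch as needed.
VELERO_VERSION=v1.11.0
curl -fsSL -o velero.tar.gz \
  "https://github.com/vmware-tanzu/velero/releases/download/${VELERO_VERSION}/velero-${VELERO_VERSION}-linux-amd64.tar.gz"
tar -xzf velero.tar.gz
sudo mv "velero-${VELERO_VERSION}-linux-amd64/velero" /usr/local/bin/

# Confirm the client works without contacting the cluster.
velero version --client-only
```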
Step 6: Check Velero functionality
As an example, create a Velero backup from the terminal:
velero backup create whole-cluster-backup -n velero
A sample backup of the cluster infrastructure, excluding selected namespaces and resources:
velero backup create k8s-infrastructure --exclude-namespaces default,kublr,velero,kubernetes-dashboard --exclude-resources certificates.cert-manager.io,certificaterequests.cert-manager.io,orders.acme.cert-manager.io,clusterissuers.cert-manager.io,ippools.crd.projectcalico.org -n velero
Inspect the created backup with:
velero backup describe whole-cluster-backup -n velero
As an example, delete a namespace via kubectl and restore it with a velero restore command:
kubectl delete namespaces kubernetes-dashboard
velero restore create --from-backup whole-cluster-backup -n velero
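To verify that the restore succeeded, list the restores and check that the deleted namespace came back. The restore name is auto-generated from the backup name plus a timestamp, so it is a placeholder here:

```shell
# List restores and inspect the one just created; its phase should
# eventually become "Completed".
velero restore get -n velero
velero restore describe <RESTORE_NAME> -n velero

# The restored namespace and its workloads should reappear.
kubectl get all -n kubernetes-dashboard
```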