Tags: aws, persistence, storage, efs, csi driver, aws-csi-driver
Kublr Cluster Configuration
The following cluster specification excerpt deploys a cluster on AWS, additionally creates an EFS file system, and integrates it with the cluster using the Amazon EFS CSI Driver.
Important notes:
- The following spec is not a full cluster specification; it only includes the excerpts that have to be added to a full cluster specification in order to set up EFS integration.
- The FileSystemPolicy is provided as a sample of more granular permissions for AWS EFS.
- The configuration depends on the AZs used by the cluster and the subnet configuration: EFS mount targets must be created in each AZ in one of the cluster subnets available in that AZ. As a result, the list of mount targets specified in the cluster spec must be updated whenever the cluster node group configuration changes.
- Note that the .lcl TLD is used for the cluster-local hosted zone. DO NOT use .local, as it may be used by AWS.
AWS IAM Configuration for EFS and Route53:
For Kublr to be able to provision and manage clusters with additional AWS resources, the Kublr IAM user must have the corresponding permissions.
In the case of EFS, the following permissions need to be added to the Kublr IAM account in addition to the standard policies described in the Kublr documentation:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1PermissionForAllEFSActions",
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:*",
        "route53:*"
      ],
      "Resource": "*"
    }
  ]
}
EFS Creation with IAM Policies:
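As a sketch, the policy above can be attached to the Kublr IAM user as an inline policy via the AWS CLI; the user name kublr and the policy name KublrEFSRoute53 below are assumptions, substitute your own:

```shell
# Write the policy document from the section above to a local file.
cat > kublr-efs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1PermissionForAllEFSActions",
      "Effect": "Allow",
      "Action": ["elasticfilesystem:*", "route53:*"],
      "Resource": "*"
    }
  ]
}
EOF

# Sanity-check that the file is valid JSON before uploading it.
python3 -m json.tool kublr-efs-policy.json > /dev/null && echo "policy JSON is valid"

# Attach it as an inline policy to the Kublr IAM user (run against your account;
# "kublr" and "KublrEFSRoute53" are assumed names):
# aws iam put-user-policy --user-name kublr \
#   --policy-name KublrEFSRoute53 \
#   --policy-document file://kublr-efs-policy.json
```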
spec:
  locations:
    - aws:
        ...
        cidrBlocks:
          masterPublic:
            - ''
            - 172.16.4.0/23
          nodePublic:
            - ''
            - 172.16.32.0/20
            - 172.16.48.0/20
        cloudFormationExtras:
          resources:
            CustomEFS:
              DependsOn:
                - RoleMaster
                - RoleNode
              Properties:
                FileSystemPolicy:
                  Statement:
                    - Action:
                        - elasticfilesystem:*
                      Effect: Allow
                      Principal:
                        AWS:
                          - Fn::GetAtt:
                              - RoleMaster
                              - Arn
                          - Fn::GetAtt:
                              - RoleNode
                              - Arn
                    - Action:
                        - elasticfilesystem:ClientRootAccess
                        - elasticfilesystem:ClientMount
                      Effect: Allow
                      Principal:
                        AWS: '*'
                  Version: '2012-10-17'
                FileSystemTags:
                  - Key: Name
                    Value: test-efs
                PerformanceMode: maxIO
              Type: AWS::EFS::FileSystem
            # One MountTarget resource must be created for each AZ used by the cluster.
            # Kublr uses an AZ enumeration convention: AZ "a" takes number 0,
            # AZ "b" - 1, AZ "c" - 2, etc.
            # MountTarget for AZ "b"
            CustomEFSMT1:
              Type: AWS::EFS::MountTarget
              Properties:
                FileSystemId: { Ref: CustomEFS }
                SecurityGroups: [ { "Fn::GetAtt": [ NewVpc, DefaultSecurityGroup ] } ]
                # for a cluster with private nodes only "SubnetNodePrivate..."
                # may have to be used
                SubnetId: { Ref: SubnetNodePublic1 }
            # MountTarget for AZ "c"
            CustomEFSMT2:
              Type: AWS::EFS::MountTarget
              Properties:
                FileSystemId: { Ref: CustomEFS }
                SecurityGroups: [ { "Fn::GetAtt": [ NewVpc, DefaultSecurityGroup ] } ]
                # for a cluster with private nodes only "SubnetNodePrivate..."
                # may have to be used
                SubnetId: { Ref: SubnetNodePublic2 }
            CustomPrivateHostedZone:
              Type: AWS::Route53::HostedZone
              Properties:
                Name: csi.kublr.lcl
                VPCs:
                  - VPCId: { Ref: NewVpc }
                    VPCRegion: { Ref: 'AWS::Region' }
            CustomPrivateHostedZoneRecordSetEFS:
              Type: AWS::Route53::RecordSet
              Properties:
                HostedZoneId: { Ref: CustomPrivateHostedZone }
                Name: efs.csi.kublr.lcl
                ResourceRecords:
                  - { Fn::Sub: [ '${EFS}.efs.${AWS::Region}.amazonaws.com', { EFS: { Ref: CustomEFS } } ] }
                TTL: '60'
                Type: CNAME
Validate and update the cluster specification; as a result, you will have an EFS file system with the predefined IAM policies.
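Once the update completes, the new file system's ID can be retrieved by its Name tag, for example with the AWS CLI. This is a sketch: the tag value test-efs matches the FileSystemTags above, and the snippet falls back to the placeholder from the package configuration below when the CLI is unavailable:

```shell
# Look up the EFS file system created by the cluster spec by its Name tag
# and save the ID for the CSI driver package configuration.
if command -v aws >/dev/null 2>&1; then
  aws efs describe-file-systems \
    --query "FileSystems[?Tags[?Key=='Name' && Value=='test-efs']].FileSystemId" \
    --output text > efs-id.txt
else
  echo "PUT-HERE-EFS-ID" > efs-id.txt  # aws CLI unavailable; fill in manually
fi
cat efs-id.txt
```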
EFS CSI Driver package installation
Following the documentation, edit the cluster specification once more and add the Amazon EFS CSI Driver package to it.
# the packages section allows specifying additional helm packages to be
# deployed to the cluster
packages:
  aws-efs-csi-driver:
    chart:
      name: aws-efs-csi-driver
      url: https://github.com/kubernetes-sigs/aws-efs-csi-driver/releases/download/helm-chart-aws-efs-csi-driver-2.5.6/aws-efs-csi-driver-2.5.6.tgz
    helmVersion: v3.10.2
    namespace: kube-system
    releaseName: aws-efs-csi-driver
    values:
      storageClasses:
        - name: efs-sc
          parameters:
            directoryPerms: '750'
            fileSystemId: PUT-HERE-EFS-ID
            provisioningMode: efs-ap
          volumeBindingMode: Immediate
How to check deployment
Check the current status of the aws-efs-csi-driver pods:
kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver,app.kubernetes.io/instance=aws-efs-csi-driver"
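Beyond the pod listing, you can confirm that the driver registered with the cluster and that the storage class from the package values exists; this is a sketch assuming the chart's default controller name efs-csi-controller:

```shell
# Helper that checks the CSI driver registration, the storage class from the
# package values, and the controller rollout.
check_efs_driver() {
  kubectl get csidriver efs.csi.aws.com
  kubectl get storageclass efs-sc
  kubectl rollout status deployment/efs-csi-controller -n kube-system --timeout=120s
}

# Run only when a cluster is reachable.
if kubectl get ns kube-system >/dev/null 2>&1; then
  check_efs_driver
fi
```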
Create a PVC and a Pod for testing:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim
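To verify end to end, save the manifests above to a file (efs-test.yaml is an assumed name), apply them, and check that the pod is appending timestamps to the file on the EFS-backed volume:

```shell
# End-to-end check: create the PVC and pod, wait for readiness, read the
# output file written on the EFS volume, then clean up.
verify_efs() {
  kubectl apply -f efs-test.yaml
  kubectl wait --for=condition=Ready pod/efs-app --timeout=180s
  kubectl exec efs-app -- tail -n 3 /data/out
  kubectl delete -f efs-test.yaml
}

# Run only when a cluster is reachable.
if kubectl get ns default >/dev/null 2>&1; then
  verify_efs
fi
```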