vSphere CSI Driver - Deployment with Topology
- Set Up Zones in the vSphere CNS Environment
- Enable Zones for the vSphere CPI and CSI Driver
- Deploy Workloads Using Zones
When you deploy CPI and CSI in a vSphere environment that includes multiple data centers or host clusters, you can use zoning.
Zoning enables orchestration systems, like Kubernetes, to integrate with vSphere storage resources that are not equally available to all nodes. As a result, the orchestration system can make intelligent decisions when dynamically provisioning volumes, and avoid situations such as those where a pod cannot start because the storage resource it needs is not accessible.
Set Up Zones in the vSphere CNS Environment
Depending on your vSphere storage environment, you can use different deployment scenarios for zones. For example, you can have zones per host cluster, per data center, or have a combination of both.
In the following example, the vCenter Server environment includes three clusters with node VMs located on all three clusters.

The sample workflow creates zones per cluster and per data center.
Procedure
- Create Zones Using vSphere Tags
- You can use vSphere tags to label zones in your vSphere environment.
- Enable Zones for the CCM and CSI Driver
- Install the CCM and the CSI driver using the zone and region entries.
Create Zones Using vSphere Tags
You can use vSphere tags to label zones in your vSphere environment.
The task assumes that your vCenter Server environment includes three clusters, cluster1, cluster2, and cluster3, with the node VMs on all three clusters. In the task, you create two tag categories, k8s-zone and k8s-region. You tag the clusters as three zones, zone-a, zone-b, and zone-c, and mark the data center as a region, region-1.
Prerequisites
Make sure that you have appropriate tagging privileges that control your ability to work with tags. See vSphere Tagging Privileges in the vSphere Security documentation.
Note: Ancestors of node VMs, such as the host, cluster, and data center, must have the ReadOnly role set for the vSphere user configured for the CSI driver and CCM. This access is required so that the drivers can read tags and categories when preparing each node's topology.
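As an alternative to the vSphere Client, the ReadOnly role can be granted from the command line with the govc CLI. This is a sketch only; the principal name and inventory path below are placeholders for your environment:

```shell
# Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point at your vCenter Server.
# Grant ReadOnly on the data center, propagating to hosts and clusters,
# for the vSphere user used by the CPI and CSI driver.
govc permissions.set -principal 'k8s-user@vsphere.local' -role ReadOnly -propagate=true /my-datacenter
```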
Procedure
In the vSphere Client, create two tag categories, k8s-zone and k8s-region.
For information, see Create, Edit, or Delete a Tag Category in the vCenter Server and Host Management documentation.
In each category, create appropriate zone tags.
For information on creating tags, see Create, Edit, or Delete a Tag in the vCenter Server and Host Management documentation.
| Categories | Tags |
| --- | --- |
| k8s-zone | zone-a, zone-b, zone-c |
| k8s-region | region-1 |

Apply corresponding tags to the data center and clusters as indicated in the table.
For information, see Assign or Remove a Tag in the vCenter Server and Host Management documentation.
| vSphere Objects | Tags |
| --- | --- |
| datacenter | region-1 |
| cluster1 | zone-a |
| cluster2 | zone-b |
| cluster3 | zone-c |
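The tagging steps above can also be scripted with the govc CLI. The category, tag, and inventory path names below mirror this example environment and will differ in yours:

```shell
# Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD point at your vCenter Server.
# Create the two tag categories.
govc tags.category.create k8s-zone
govc tags.category.create k8s-region

# Create the zone and region tags in their categories.
govc tags.create -c k8s-zone zone-a
govc tags.create -c k8s-zone zone-b
govc tags.create -c k8s-zone zone-c
govc tags.create -c k8s-region region-1

# Attach the region tag to the data center and one zone tag per cluster.
govc tags.attach region-1 /datacenter
govc tags.attach zone-a /datacenter/host/cluster1
govc tags.attach zone-b /datacenter/host/cluster2
govc tags.attach zone-c /datacenter/host/cluster3
```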
Enable Zones for the vSphere CPI and CSI Driver
Install the vSphere CPI and the CSI driver using the zone and region entries.
Procedure
Install the vSphere CPI.
In the Labels section of the vsphere.conf file used for the cloud-config configmap, specify region and zone. Make sure to use the names of the categories you defined in vSphere, such as k8s-region and k8s-zone.
```ini
[Global]
insecure-flag = "true"

[VirtualCenter "vCenter Server IP address"]
user = "user"
password = "password"
port = "443"
datacenters = "datacenter"

[Network]
public-network = "VM Network"

[Labels]
region = k8s-region
zone = k8s-zone
```

Create the configmap and deploy the CPI manifests:

```shell
cd /etc/kubernetes
kubectl create configmap cloud-config --from-file=vsphere.conf --namespace=kube-system
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-vsphere/raw/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml
```

Verify that your CCM installation is successful.
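As a quick check before looking at node labels, you can confirm that the controller pods came up. The pod name prefix below is taken from the upstream daemonset manifest and may differ in other versions:

```shell
# List CCM pods in kube-system; names are assumed to start with
# "vsphere-cloud-controller-manager" per the upstream daemonset manifest.
kubectl get pods --namespace=kube-system | grep vsphere-cloud-controller-manager
```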
After installation, the labels `failure-domain.beta.kubernetes.io/region` and `failure-domain.beta.kubernetes.io/zone` are applied to all nodes.

```shell
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/region
```

```
NAME         STATUS   ROLES    AGE   VERSION   ZONE     REGION
k8s-master   Ready    master   32m   v1.14.2   zone-a   region-1
k8s-node1    Ready    <none>   18m   v1.14.2   zone-a   region-1
k8s-node2    Ready    <none>   18m   v1.14.2   zone-b   region-1
k8s-node3    Ready    <none>   18m   v1.14.2   zone-b   region-1
k8s-node4    Ready    <none>   18m   v1.14.2   zone-c   region-1
k8s-node5    Ready    <none>   18m   v1.14.2   zone-c   region-1
```

Install the CSI driver.
Make sure that the external-provisioner sidecar is deployed with the argument `--feature-gates=Topology=true`.

In the credential secret file, add entries for region and zone:

```ini
[Labels]
region = k8s-region
zone = k8s-zone
```

Verify that your CSI driver installation is successful.
```shell
kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec}{"\n"}{end}'
```

```
k8s-node1 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node1 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node2 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node2 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node3 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node3 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node4 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node4 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node5 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node5 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
```
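Optionally, you can cross-check which nodes the CCM placed in a particular zone by filtering on the zone label:

```shell
# List only the nodes labeled with zone-a.
kubectl get nodes -l failure-domain.beta.kubernetes.io/zone=zone-a
```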
Deploy Workloads Using Zones
With zones, you can deploy a Kubernetes workload to a specific region or zone.
Use the sample workflow to provision and verify your workloads.
Procedure
Create a StorageClass that defines zone and region mapping.
To the StorageClass YAML file, add zone-a and region-1 in the allowedTopologies field.
```shell
tee example-zone-sc.yaml >/dev/null <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-vanilla-block-zone-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - zone-a
      - key: failure-domain.beta.kubernetes.io/region
        values:
          - region-1
EOF
kubectl create -f example-zone-sc.yaml
```

```
storageclass.storage.k8s.io/example-vanilla-block-zone-sc created
```

Create a PersistentVolumeClaim.
```shell
tee example-zone-pvc.yaml >/dev/null <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-block-zone-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-vanilla-block-zone-sc
EOF
kubectl create -f example-zone-pvc.yaml
```

```
persistentvolumeclaim/example-vanilla-block-zone-pvc created
```

Verify that a volume is created for the PersistentVolumeClaim.
```shell
kubectl get pvc example-vanilla-block-zone-pvc
```

```
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
example-vanilla-block-zone-pvc   Bound    pvc-5b340a9b-a990-11e9-b26e-005056a04307   5Gi        RWO            example-vanilla-block-zone-sc   58s
```

Verify that the persistent volume is provisioned with the Node Affinity rules containing the zone and region specified in the StorageClass.
```shell
kubectl describe pv pvc-5b340a9b-a990-11e9-b26e-005056a04307
```

```
Name:            pvc-5b340a9b-a990-11e9-b26e-005056a04307
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    example-vanilla-block-zone-sc
Status:          Bound
Claim:           default/example-vanilla-block-zone-pvc
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/zone in [zone-a]
             failure-domain.beta.kubernetes.io/region in [region-1]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            csi.vsphere.vmware.com
    VolumeHandle:      8f1f5e44-fafa-4404-91f7-d7a9bfd30e16
    ReadOnly:          false
    VolumeAttributes:  fstype=
                       storage.kubernetes.io/csiProvisionerIdentity=1563472725085-8081-csi.vsphere.vmware.com
                       type=vSphere CNS Block Volume
Events:            <none>
```

Create a pod.
```shell
tee example-zone-pod.yaml >/dev/null <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-vanilla-block-zone-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: example-vanilla-block-zone-pvc
EOF
kubectl create -f example-zone-pod.yaml
```

```
pod/example-vanilla-block-zone-pod created
```

The pod is scheduled on the node k8s-node1, which belongs to zone zone-a and region region-1.
```shell
kubectl describe pod example-vanilla-block-zone-pod | egrep "Node:"
```

```
Node:               k8s-node1/10.160.78.255
```
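The StorageClass in this workflow pins provisioning to zone-a through allowedTopologies. Kubernetes also supports `volumeBindingMode: WaitForFirstConsumer`, which delays provisioning until a pod is scheduled so the volume is created in the zone of the chosen node; whether the driver honors it depends on the driver version. A minimal sketch of such a manifest (the name example-wffc-zone-sc is hypothetical):

```shell
# Write a topology-aware StorageClass that defers volume binding and
# provisioning until a consuming pod is scheduled.
tee example-wffc-zone-sc.yaml >/dev/null <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-wffc-zone-sc
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer
EOF
# Confirm the manifest was written with the deferred binding mode.
grep -c 'WaitForFirstConsumer' example-wffc-zone-sc.yaml
```

With this class, no allowedTopologies pinning is needed; the zone follows the pod's placement rather than being fixed at claim creation time.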