Container storage interface (CSI) plugin
This section describes the prerequisites, capabilities, deployment, and usage of the Content Software for File CSI Plugin.
CSI plugin overview
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes.
The Content Software for File CSI Plugin enables the creation and configuration of persistent storage external to Kubernetes. CSI replaces plugins developed earlier in the Kubernetes evolution, and it replaces the hostPath method for exposing WekaFS mounts as Kubernetes volumes.
Interoperability
- CSI protocol: 1.0-1.2
- Kubernetes: 1.18-1.2
- WekaFS: 3.8 and up
- AppArmor is not supported yet
Prerequisites
The prerequisites include:
- The privileged mode must be allowed on the Kubernetes cluster
- The following Kubernetes feature gates must be enabled: DevicePlugins, CSINodeInfo, CSIDriverRegistry, ExpandCSIVolumes (if not changed, they should be enabled by default)
- A Content Software for File cluster is installed and accessible from the Kubernetes worker nodes.
- The Content Software for File client is installed on the Kubernetes worker nodes.
- It is recommended to use a client that is part of the cluster rather than a stateless client.
- If the Kubernetes nodes are part of the Content Software for File cluster (converged mode on the servers), make sure the Content Software for File processes come up before kubelet.
- Filesystems are pre-configured on the Content Software for File system.
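If the required feature gates are not enabled by default on your cluster, they can be enabled with the standard Kubernetes `--feature-gates` flag; a sketch (the exact mechanism for passing flags depends on your deployment method):

```
# Example kubelet/apiserver flag enabling the required feature gates
# (on supported Kubernetes versions these are typically enabled by default).
--feature-gates=DevicePlugins=true,CSINodeInfo=true,CSIDriverRegistry=true,ExpandCSIVolumes=true
```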
Capabilities
The capabilities listed in this section are categorized as supported and unsupported.
Supported capabilities
- Static and dynamic volume provisioning
- Mounting a volume as a WekaFS filesystem directory
- All volume access modes are supported: ReadWriteMany, ReadWriteOnce, and ReadOnlyMany
- Volume expansion
Unsupported capabilities
- Snapshots
Deployment
The Content Software for File CSI Plugin deployment is performed using a daemon set.
Download
To obtain the CSI Plugin package, contact your Hitachi representative.
Installation
From the download location on the Kubernetes master node, run the following command to deploy the Content Software for File CSI Plugin as a DaemonSet:
```
$ ./deploy/kubernetes-latest/deploy.sh
```

On successful deployment, you will see the following output:

```
creating wekafsplugin namespace
namespace/csi-wekafsplugin created
deploying wekafs components
./deploy/kubernetes-latest/wekafs/csi-wekafs-plugin.yaml
using image: quay.io/k8scsi/csi-node-driver-registrar
using image: quay.io/weka.io/csi-wekafs:v0.0.2-25-g7d
using image: quay.io/k8scsi/livenessprobe:v1.1.0
using image: quay.io/k8scsi/csi-provisioner:v1.6.0
using image: quay.io/k8scsi/csi-attacher:v3.0.0-rc1
using image: quay.io/k8scsi/csi-resizer:v0.5.0
namespace/csi-wekafsplugin configured
csidriver.storage.k8s.io/wekafs.csi.k8s.io created
serviceaccount/csi-wekafsplugin created
clusterrole.rbac.authorization.k8s.io/csi-wekafsplugin-cluster-role cre
clusterrolebinding.rbac.authorization.k8s.io/csi-wekafsplugin-cluster-r
role.rbac.authorization.k8s.io/csi-wekafsplugin-role created
rolebinding.rbac.authorization.k8s.io/csi-wekafsplugin-role-binding cre
daemonset.apps/csi-wekafsplugin created
12:04:54 deployment completed successfully
12:04:54 2 plugin pods are running:
csi-wekafsplugin-dvdh2   6/6   Running   0   3h1m
csi-wekafsplugin-xh182   6/6   Running   0   3h1m
```
The number of running pods should be the same as the number of Kubernetes worker nodes. This can be inspected by running:
```
$ kubectl get pods -n csi-wekafsplugin
NAME                     READY   STATUS    RESTARTS   AGE
csi-wekafsplugin-dvdh2   6/6     Running   0          3h2m
csi-wekafsplugin-xh182   6/6     Running   0          3h2m
```
Provision usage
The Content Software for File CSI Plugin supports both dynamic (persistent volume claim) and static (persistent volume) volume provisioning.
To use the CSI Plugin, first define a storage class.
Storage class example
csi-wekafs/examples/dynamic/storageclass-wekafs-dir.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-wekafs-dir
provisioner: csi.weka.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  volumeType: dir/v1
  filesystemName: podsFilesystem
```
Storage class parameters
| Parameter | Description | Limitation |
|---|---|---|
| filesystemName | The name of the Content Software for File filesystem in which directories are created as Kubernetes volumes | The filesystem must exist in the cluster |
Apply the StorageClass and check it has been created successfully:
```
# apply the storageclass .yaml file
$ kubectl apply -f storageclass-wekafs-dir.yaml
storageclass.storage.k8s.io/storageclass-wekafs-dir created

# check the storageclass resource has been created
$ kubectl get sc
NAME                      PROVISIONER   RECLAIMPOLICY   VOLU
storageclass-wekafs-dir   csi.weka.io   Delete          Imme
```
It is possible to define multiple storage classes with different filesystems.
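For example, a second storage class could target a different filesystem. A sketch, in which the class name and the `projectsFilesystem` filesystem name are illustrative (any pre-configured filesystem can be used):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-wekafs-dir-projects   # illustrative name
provisioner: csi.weka.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  volumeType: dir/v1
  filesystemName: projectsFilesystem       # a different pre-configured filesystem
```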
Dynamic provisioning
Using a similar storage class to the above, it is possible to define a persistent volume claim (PVC) for the pods.
Persistent volume claim example
csi-wekafs/examples/dynamic/pvc-wekafs-dir.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-wekafs-dir
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: storageclass-wekafs-dir
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
```
Persistent volume claim parameters
| Parameter | Description | Limitation |
|---|---|---|
| spec.accessModes | The volume access mode | ReadWriteMany, ReadWriteOnce, or ReadOnlyMany |
| spec.storageClassName | The storage class to use to create the PVC | Must be an existing storage class |
| spec.resources.requests.storage | The desired capacity for the volume | The capacity quota is not enforced but is stored in the filesystem directory's extended attributes for future use |
Apply the PersistentVolumeClaim and check it has been created successfully:
```
# apply the pvc .yaml file
$ kubectl apply -f pvc-wekafs-dir.yaml
persistentvolumeclaim/pvc-wekafs-dir created

# check the pvc resource has been created
$ kubectl get pvc
NAME             STATUS   VOLUME
pvc-wekafs-dir   Bound    pvc-d00ba0fe-04a0-4916-8fea-ddbbc8f43380
```
Static provisioning
The Kubernetes admin can prepare persistent volumes in advance for use by pods. Each persistent volume must point to an existing directory and can contain pre-populated data for the pods.
The directory can be one previously provisioned by the CSI Plugin or a pre-existing directory in WekaFS. To expose an existing WekaFS directory using CSI, define a persistent volume, and then link a persistent volume claim to that persistent volume.
Persistent volume example
csi-wekafs/examples/static/pv-wekafs-dir-static.yaml

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-wekafs-dir-static
spec:
  storageClassName: storageclass-wekafs-dir
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  capacity:
    storage: 1Gi
  csi:
    driver: csi.weka.io
    # volumeHandle must be formatted as follows:
    # dir/v1/<FILE_SYSTEM_NAME>/<INNER_PATH_IN_FILESYSTEM>
    # The path must exist, otherwise the publish request will fail
    volumeHandle: dir/v1/podsFilesystem/my-dir
```
Persistent volume parameters
| Parameter | Description | Limitation |
|---|---|---|
| spec.accessModes | The volume access mode | ReadWriteMany, ReadWriteOnce, or ReadOnlyMany |
| spec.storageClassName | The storage class to use to create the PV | Must be an existing storage class |
| spec.capacity.storage | The desired capacity for the volume | The capacity quota is not enforced but is stored in the filesystem directory's extended attributes for future use |
| spec.csi.volumeHandle | A string specifying a previously created path | Must contain the volumeType (dir/v1), the filesystem name, and the directory path, for example, dir/v1/podsFilesystem/my-dir. The filesystem and path must exist |
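As a sketch, the volumeHandle string can be composed from its three parts in the shell; the filesystem name and inner path below are the example values used in this section:

```shell
# Compose a volumeHandle of the form dir/v1/<FILE_SYSTEM_NAME>/<INNER_PATH_IN_FILESYSTEM>.
# podsFilesystem and my-dir are the example values from this section.
VOLUME_TYPE="dir/v1"
FILESYSTEM_NAME="podsFilesystem"
INNER_PATH="my-dir"
VOLUME_HANDLE="${VOLUME_TYPE}/${FILESYSTEM_NAME}/${INNER_PATH}"
echo "${VOLUME_HANDLE}"
```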
Apply the PersistentVolume and check it has been created successfully:
```
# apply the pv .yaml file
$ kubectl apply -f pv-wekafs-dir-static.yaml
persistentvolume/pv-wekafs-dir-static created

# check the pv resource has been created
$ kubectl get pv
NAME                   CAPACITY   ACCESS MODES   RE
pv-wekafs-dir-static   1Gi        RWX            Re
```
To bind a PVC to this specific PV, use the volumeName parameter under the PVC spec and provide it with the specific PV name.
Persistent volume claim for static provisioning example
csi-wekafs/examples/static/pvc-wekafs-dir-static.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-wekafs-dir-static
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: storageclass-wekafs-dir
  volumeName: pv-wekafs-dir-static
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
```
| Parameter | Description | Limitation |
|---|---|---|
| spec.accessModes | The volume access mode | ReadWriteMany, ReadWriteOnce, or ReadOnlyMany |
| spec.storageClassName | The storage class to use to create the PVC | Must be the same storage class as the PV requested to bind in spec.volumeName |
| spec.resources.requests.storage | The desired capacity for the volume | The capacity quota is not enforced but is stored in the filesystem directory's extended attributes for future use |
| spec.volumeName | The name of a preconfigured persistent volume | Must be an existing PV name |
Apply the PersistentVolumeClaim and check it has been created successfully:
```
# apply the pvc .yaml file
$ kubectl apply -f pvc-wekafs-dir-static.yaml
persistentvolumeclaim/pvc-wekafs-dir-static created

# check the pvc resource has been created
$ kubectl get pvc
NAME                    STATUS   VOLUME                 CAPACITY   ACCES
pvc-wekafs-dir-static   Bound    pv-wekafs-dir-static   1Gi        RWX
```
The PV status changes to Bound, and it shows the relevant claim it is bound to:
```
# check the pv resource has been created
$ kubectl get pv
NAME                   CAPACITY   ACCESS MODES   RE
pv-wekafs-dir-static   1Gi        RWX            Re
```
Launching an application using Content Software for File as the pod's storage
Now that we have a storage class and a PVC in place, we can configure the Kubernetes pods to provision volumes using the Content Software for File system.
We'll take an example application that echoes the current timestamp every 10 seconds and provide it with the previously created pvc-wekafs-dir PVC.
Multiple pods can share a volume produced by the same PVC as long as the accessModes parameter is set to ReadWriteMany.
csi-wekafs/examples/dynamic/csi-app-on-dir.yaml

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-csi-volume
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo `date` >> /data/temp.txt; sleep 10; done"]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: pvc-wekafs-dir # defined in pvc-wekafs-dir.yaml
```
Now we will apply that pod:
```
$ kubectl apply -f csi-app-on-dir.yaml
pod/my-csi-app created
```
Kubernetes allocates a persistent volume and attaches it to the pod. The volume uses a directory within the WekaFS filesystem, as defined in the storage class referenced by the persistent volume claim. The pod reaches Running status, and the temp.txt file is updated with the date every 10 seconds.
```
$ kubectl get pod my-csi-app
NAME         READY   STATUS    RESTARTS   AGE
my-csi-app   1/1     Running   0          85s

# if we go to a wekafs mount of this filesystem we can see a directory
$ ls -l /mnt/weka/podsFilesystem/csi-volumes
drwxr-x--- 1 root root 0 Jul 19 12:18 pvc-d00ba0fe-04a0-4916-8fea-ddbbc

# inside that directory, the temp.txt file from the running pod can be
$ cat /mnt/weka/podsFilesystem/csi-volumes/pvc-d00ba0fe-04a0-4916-8fea
Sun Jul 19 12:50:25 IDT 2020
Sun Jul 19 12:50:35 IDT 2020
Sun Jul 19 12:50:45 IDT 2020
```
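Because the PVC above uses the ReadWriteMany access mode, a second pod can mount the same claim concurrently. A sketch of such a pod, in which the pod name my-csi-app-reader and its command are illustrative and not part of the examples package:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app-reader   # illustrative name
spec:
  containers:
    - name: my-reader
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-csi-volume
      command: ["/bin/sh"]
      # continuously print the file written by the first pod
      args: ["-c", "tail -f /data/temp.txt"]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: pvc-wekafs-dir # same claim as my-csi-app
```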
Troubleshooting
Here are some useful basic commands to check the status and debug the service:
```
# get all resources
kubectl get all --all-namespaces

# get all pods
kubectl get pods --all-namespaces -o wide

# get all k8s nodes
kubectl get nodes

# get storage classes
kubectl get sc

# get persistent volume claims
kubectl get pvc

# get persistent volumes
kubectl get pv

# kubectl describe pod/<pod-name> -n <namespace>
kubectl describe pod/csi-wekafsplugin-dvdh2 -n csi-wekafsplugin

# get logs from a pod: kubectl logs <pod name> <container name>

# get logs from the weka csi plugin
# container (-c) can be one of: [node-driver-registrar wekafs liveness
kubectl logs pods/csi-wekafsplugin-<ID> --namespace csi-wekafsplugin -c <container name>
```
Known issues
Due to a Kubernetes v1.18 issue with allocating mixed hugepage sizes (https://github.com/kubernetes/kubernetes/pull/80831), the Content Software for File system must not try to allocate mixed sizes of hugepages on the Kubernetes nodes.
To work around the Kubernetes issue (required only if the default memory for the client has been increased):
- If the client is installed on the K8s nodes using a manual stateless client mount, set the reserve_1g_hugepages mount option to false in the mount command.
- If this is a server or a client, which is part of the Content Software for File cluster, contact customer support.
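For a manual stateless mount, the option can be passed on the mount command line. A sketch, assuming a backend server named backend-1 and the podsFilesystem filesystem from the examples above:

```
# Stateless client mount with 1 GiB hugepage reservation disabled
mount -t wekafs -o reserve_1g_hugepages=false backend-1/podsFilesystem /mnt/weka
```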