Usage
This chapter describes the settings and command examples for each component used in Storage Plug-in for Containers.
Secret settings
The Secret file contains the storage URL, user name, and password settings that are necessary for Storage Plug-in for Containers to work with your environment. The following sample provides information about the required parameters.
Parameter references for secret-sample.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-sample #(1)
type: Opaque
data:
  url: aHR0cDovLzE3Mi4xNi4xLjE= #(2)
  user: VXNlcjAx #(3)
  password: UGFzc3dvcmQwMQ== #(4)
Legend:
(1) Secret name
(2) base64-encoded storage URL.
Use the IP address of the SVP for the following: VSP 5000 series, VSP F400, F600, F800, VSP F1500, VSP G200, G400, G600, G800, VSP G1000, VSP G1500, and VSP N400, N600, N800. Use the IP address of the storage controller for the following: VSP E series, VSP F350, F370, F700, F900, and VSP G350, G370, G700, G900.
Example:
echo -n "http://172.16.1.1" | base64
(3) base64-encoded storage user name.
Example:
echo -n "User01" | base64
(4) base64-encoded storage password.
Example:
echo -n "Password01" | base64
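The three encode commands above can be combined into a short shell sketch that also round-trips one value through base64 -d to confirm the encoding (the sample values are the ones from secret-sample.yaml):

```shell
# Encode the sample values used in secret-sample.yaml.
url=$(echo -n "http://172.16.1.1" | base64)
user=$(echo -n "User01" | base64)
password=$(echo -n "Password01" | base64)

# Print the lines to paste into the Secret's data section.
printf 'url: %s\nuser: %s\npassword: %s\n' "$url" "$user" "$password"

# Round-trip check: decoding must return the original string.
echo -n "$url" | base64 -d    # prints http://172.16.1.1
```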
StorageClass settings
The StorageClass file contains storage settings that are necessary for Storage Plug-in for Containers to work with your environment. The following sample provides information about the required parameters.
Parameter references for sc-sample.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sample #(1)
  annotations:
    kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  serialNumber: "54321" #(2)
  poolID: "1" #(3)
  portID: CL1-A,CL2-A #(4)
  connectionType: fc #(5)
  storageEfficiency: "CompressionDeduplication" #(6)
  storageEfficiencyMode: "PostProcess" #(7)
  csi.storage.k8s.io/fstype: ext4 #(8)
  csi.storage.k8s.io/node-publish-secret-name: "secret-sample" #(9)
  csi.storage.k8s.io/node-publish-secret-namespace: "default" #(10)
  csi.storage.k8s.io/provisioner-secret-name: "secret-sample" #(9)
  csi.storage.k8s.io/provisioner-secret-namespace: "default" #(10)
  csi.storage.k8s.io/controller-publish-secret-name: "secret-sample" #(9)
  csi.storage.k8s.io/controller-publish-secret-namespace: "default" #(10)
  csi.storage.k8s.io/node-stage-secret-name: "secret-sample" #(9)
  csi.storage.k8s.io/node-stage-secret-namespace: "default" #(10)
  csi.storage.k8s.io/controller-expand-secret-name: "secret-sample" #(9)
  csi.storage.k8s.io/controller-expand-secret-namespace: "default" #(10)
Legend:
(1) StorageClass name
(2) Storage serial number
(3) HDP pool ID
(4) Port ID. Use a comma separator for multipath.
(5) Connection type between storage and nodes. fc and iscsi are supported. If blank, fc is set.
(6) Activation of adaptive data reduction. "Compression", "CompressionDeduplication", and "Disabled" are supported. If blank, Disabled is set. For a storage system where the compression accelerator module is installed, if you specify "Compression" or "CompressionDeduplication" for storageEfficiency, the compression function using the compression accelerator module is automatically activated.
(7) Execution mode of adaptive data reduction. You can specify this parameter when storageEfficiency is "Compression" or "CompressionDeduplication"; "Inline" and "PostProcess" are supported for the parameter. If blank, the default value is set. The default value depends on the storage system. For details on the parameter, see the description of adaptive data reduction in the Provisioning Guide for Open Systems or Provisioning Guide.
- If the LDEV was created with Storage Plug-in for Containers, do not change the parameters related to adaptive data reduction.
- Adaptive data reduction cannot be used together with the Stretched PVC function.
(8) Filesystem type. ext4 and xfs are supported. If blank, ext4 is set.
(9) Secret name
(10) Secret namespace
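For contrast with the FC sample above, the following is a sketch of how the same StorageClass might look for an iSCSI connection. The name, serial number, pool ID, and port IDs are placeholder values, so substitute the values for your storage system:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sample-iscsi          # example name, not from the sample above
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  serialNumber: "54321"          # placeholder serial number
  poolID: "1"                    # placeholder HDP pool ID
  portID: CL1-D,CL2-D            # placeholder ports; comma-separated for multipath
  connectionType: iscsi          # see legend item (5)
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: "secret-sample"
  csi.storage.k8s.io/provisioner-secret-namespace: "default"
  # Repeat the remaining secret-name/namespace pairs as in the sample above.
```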
Parameter references for sc-sample-vssb.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sample-vssb #(1)
  annotations:
    kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  storageType: vssb #(2)
  connectionType: fc #(3)
  csi.storage.k8s.io/fstype: ext4 #(4)
  csi.storage.k8s.io/node-publish-secret-name: "secret-sample" #(5)
  csi.storage.k8s.io/node-publish-secret-namespace: "default" #(6)
  csi.storage.k8s.io/provisioner-secret-name: "secret-sample" #(5)
  csi.storage.k8s.io/provisioner-secret-namespace: "default" #(6)
  csi.storage.k8s.io/controller-publish-secret-name: "secret-sample" #(5)
  csi.storage.k8s.io/controller-publish-secret-namespace: "default" #(6)
  csi.storage.k8s.io/node-stage-secret-name: "secret-sample" #(5)
  csi.storage.k8s.io/node-stage-secret-namespace: "default" #(6)
  csi.storage.k8s.io/controller-expand-secret-name: "secret-sample" #(5)
  csi.storage.k8s.io/controller-expand-secret-namespace: "default" #(6)
Legend:
(1) StorageClass name
(2) Storage type. This field must be set to "vssb" when using VSSB.
(3) Connection type between storage and nodes. fc and iscsi are supported. If blank, fc is set.
(4) Filesystem type. ext4 and xfs are supported. If blank, ext4 is set.
(5) Secret name
(6) Secret namespace
PersistentVolumeClaim settings
The PersistentVolumeClaim file contains volume information that is used by Storage Plug-in for Containers to create PersistentVolumes. The following sample provides information about the required parameters.
Parameter references for pvc-sample.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sample #(1)
spec:
  accessModes:
    - ReadWriteOnce #(2)
  resources:
    requests:
      storage: 1Gi #(3)
  storageClassName: sc-sample #(4)
Legend:
(1) PersistentVolumeClaim name
(2) Specify ReadWriteOnce or ReadOnlyMany. To use ReadOnlyMany, see ReadOnlyMany.
(3) Volume size
(4) StorageClass name
Usage restrictions for a PersistentVolumeClaim
- If a failure occurs when creating a PersistentVolumeClaim, a PersistentVolumeClaim object will be created without the PersistentVolume. In this case, delete the PersistentVolumeClaim object using the command kubectl delete pvc <PVC_NAME>.
- If a failure occurs when deleting a PersistentVolumeClaim, the PersistentVolumeClaim object will be deleted but the PersistentVolume object will remain, and any storage asset associated with the PersistentVolume object may also remain. In this case, see Viewing the volume properties of PersistentVolume and obtain the volume ID of the storage. Delete the PersistentVolume using the command kubectl delete pv <PV_NAME>. Also, delete the storage asset (LDEV). For details, see the user guide for the storage system in your environment.
Pod settings
The Pod file contains volume information. Storage Plug-in for Containers mounts volumes based on this information.
Parameter references for pod-sample.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sample #(1)
spec:
  containers:
    - name: my-busybox
      image: busybox
      volumeMounts:
        - mountPath: "/data" #(2)
          name: sample-volume
      command: ["sleep", "1000000"]
      imagePullPolicy: IfNotPresent
  volumes:
    - name: sample-volume
      persistentVolumeClaim:
        claimName: pvc-sample #(3)
Legend:
(1) Pod name
(2) Path (path where the volume is mounted inside a container)
(3) PersistentVolumeClaim name
Command examples
# kubectl create -f secret-sample.yaml
secret/secret-sample created
# kubectl get secret
NAME            TYPE     DATA   AGE
secret-sample   Opaque   3      34s
# kubectl create -f sc-sample.yaml
storageclass.storage.k8s.io/sc-sample created
# kubectl get sc
NAME        PROVISIONER            AGE
sc-sample   hspc.csi.hitachi.com   21s
# kubectl create -f pvc-sample.yaml
persistentvolumeclaim/pvc-sample created
# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-sample   Bound    pvc-cf8c6089-0386-4c39-8037-e1520a986a7d   1Gi        RWO            sc-sample      28s
# kubectl create -f pod-sample.yaml
pod/pod-sample created
# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
pod-sample   1/1     Running   0          20s
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-cf8c6089-0386-4c39-8037-e1520a986a7d   1Gi        RWO            Delete           Bound    default/pvc-sample   sc-sample               35s
# kubectl describe pv pvc-cf8c6089-0386-4c39-8037-e1520a986a7d
Name:            pvc-cf8c6089-0386-4c39-8037-e1520a986a7d
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: hspc.csi.hitachi.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    sc-sample
Status:          Bound
Claim:           default/pvc-sample
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            hspc.csi.hitachi.com
    VolumeHandle:      60060e8018117a000051117a0000053f--spc-c44a7dc0f5
    ReadOnly:          false
    VolumeAttributes:  autoHG=true
                       connectionType=fc
                       hostModeOption=91
                       ldevIDDec=1343
                       ldevIDHex=05:3F
                       nickname=spc-c44a7dc0f5
                       ports=CL1-A,CL2-A
                       size=1Gi
                       storage.kubernetes.io/csiProvisionerIdentity=1585728584906-8081-hspc.csi.hitachi.com
Events:          <none>
# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
pod-sample   1/1     Running   0          30s
# kubectl delete pod pod-sample
pod "pod-sample" deleted
# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-sample   Bound    pvc-cf8c6089-0386-4c39-8037-e1520a986a7d   1Gi        RWO            sc-sample      46s
# kubectl delete pvc pvc-sample
persistentvolumeclaim "pvc-sample" deleted
# kubectl get sc
NAME        PROVISIONER            AGE
sc-sample   hspc.csi.hitachi.com   53s
# kubectl delete sc sc-sample
storageclass.storage.k8s.io "sc-sample" deleted
# kubectl get secret
NAME            TYPE     DATA   AGE
secret-sample   Opaque   3      74s
# kubectl delete secret secret-sample
secret "secret-sample" deleted
Volume snapshot
- If the volume has been expanded, confirm that the expansion is complete before using this feature. See Volume expansion for more details.
- Flush the data before creating a snapshot for data consistency. For example, temporarily remove the pod.
- This feature is not supported in VSSB.
Before you begin
This feature requires the following resources:
- StorageClass
- PersistentVolumeClaim
If your environment is Kubernetes, install Snapshot CRDs and Snapshot Controller per cluster (see https://github.com/kubernetes-csi/external-snapshotter). For Snapshot CRDs, use v1. For Snapshot Controller, use 4.x.x.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: snapshotclass-sample #(1)
driver: hspc.csi.hitachi.com
deletionPolicy: Delete
parameters:
  poolID: "1" #(2)
  csi.storage.k8s.io/snapshotter-secret-name: "secret-sample" #(3)
  csi.storage.k8s.io/snapshotter-secret-namespace: "default" #(4)
Legend:
(1) VolumeSnapshotClass name
(2) Same poolID as the StorageClass
(3) Same Secret name as the StorageClass
(4) Same Secret namespace as the StorageClass
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-sample #(1)
spec:
  volumeSnapshotClassName: snapshotclass-sample #(2)
  source:
    persistentVolumeClaimName: pvc-sample #(3)
Legend:
(1) VolumeSnapshot name
(2) VolumeSnapshotClass name
(3) PersistentVolumeClaim name from which the snapshot is obtained
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snapshot-sample #(1)
spec:
  dataSource:
    name: snapshot-sample #(2)
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi #(3)
  storageClassName: sc-sample #(4)
Legend:
(1) PersistentVolumeClaim name
(2) VolumeSnapshot name
(3) Specify the size of the source volume. Obtain the size by using the command kubectl get pv <PV_NAME> -o yaml, which displays it in the parameter size. If the volume has been expanded, obtain the size by using the command kubectl get pv <PV_NAME>, which displays it in the parameter CAPACITY.
(4) Specify the same StorageClass name as the one used for dataSource.
- Create a VolumeSnapshotClass:
# kubectl create -f volumesnapshotclass-sample.yaml
- Create a VolumeSnapshot:
# kubectl create -f volumesnapshot-sample.yaml
- Confirm the completion of creating the VolumeSnapshot by verifying that the parameter readyToUse is true:
# kubectl get volumesnapshot -o yaml
Note: If the parameter readyToUse is false, confirm the cause and solution by following these steps:
- Obtain the boundVolumeSnapshotContentName by using the command: kubectl get volumesnapshot -o yaml
- Confirm the error message by using the command: kubectl describe volumesnapshotcontent <VolumeSnapshotContentName>
- Create a PersistentVolumeClaim from a snapshot:
# kubectl create -f pvc-from-snapshot-sample.yaml
Volume cloning
- If the volume has been expanded, confirm that the expansion is complete before using this feature. Refer to Volume expansion for details.
- Flush the data before cloning for data consistency. For example, temporarily remove the pod.
- This feature is not supported in VSSB.
Before you begin
This feature requires the following resources:
- StorageClass
- PersistentVolumeClaim
This YAML file is a manifest file for creating a clone from an existing volume "pvc-sample".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-pvc-sample #(1)
spec:
  dataSource:
    name: pvc-sample #(2)
    kind: PersistentVolumeClaim
    apiGroup: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi #(3)
  storageClassName: sc-sample #(4)
Legend:
(1) PersistentVolumeClaim name of clone
(2) PersistentVolumeClaim name of source
(3) Specify the size of the source volume. Obtain the size by using the command kubectl get pv <PV_NAME> -o yaml, which displays it in the parameter size. If the volume has been expanded, obtain the size by using the command kubectl get pv <PV_NAME>, which displays it in the parameter CAPACITY.
(4) Specify the same StorageClass name as the one used for dataSource.
- Create a PersistentVolumeClaim for a clone:
# kubectl create -f pvc-from-pvc-sample.yaml
Volume expansion
You can confirm the completion of volume expansion by using the command kubectl get pvc, which displays the expanded size in the parameter CAPACITY. Do not shut down the OS or drain the node before volume expansion completes.
Before you begin
This feature requires the following resources:
- StorageClass
- PersistentVolumeClaim
- The minimum additional size for volume expansion is 1 GiB.
- The maximum additional size for volume expansion is 7 TiB or a value that does not exceed the warning threshold of pool capacity. If you add more than 7 TiB, execute the command again.
- Volume capacity cannot be reduced.
- The PersistentVolume created by the StorageClass without parameters for volume expansion cannot be expanded.
- The size obtained by the command kubectl get pv <PV_NAME> -o yaml is not updated after the volume is expanded. If the volume is expanded, obtain the size by using the command kubectl get pv <PV_NAME>, which is displayed in the parameter CAPACITY.
- Expand the capacity of an existing volume pvc-sample to 5 GiB:
# kubectl patch pvc pvc-sample --patch \
  '{"spec":{"resources":{"requests":{"storage": "5Gi"}}}}'
- Confirm the completion of volume expansion with the parameter CAPACITY:
# kubectl get pv <PV_NAME>
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
<PV_NAME>   5Gi        RWO            Delete           Bound    default/pvc-sample   sc-sample               35s
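The patch string passed in step 1 is plain JSON, so it can be built and inspected before it is applied. A minimal sketch follows; the pvc-sample name and 5Gi size are taken from the steps above, and the final kubectl line is left commented because it requires a live cluster:

```shell
PVC="pvc-sample"   # PVC to expand, from the example above
SIZE="5Gi"         # requested size

# Build the patch document used in step 1.
PATCH=$(printf '{"spec":{"resources":{"requests":{"storage":"%s"}}}}' "$SIZE")
echo "$PATCH"      # prints {"spec":{"resources":{"requests":{"storage":"5Gi"}}}}

# Apply it against a live cluster:
# kubectl patch pvc "$PVC" --patch "$PATCH"
```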
Raw block volume
Before you begin
This feature requires the StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sample-block #(1)
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi #(2)
  storageClassName: sc-sample #(3)
Legend:
(1) PersistentVolumeClaim name
(2) Volume size
(3) StorageClass name
apiVersion: v1
kind: Pod
metadata:
  name: pod-sample-block #(1)
spec:
  containers:
    - name: my-busybox
      image: busybox
      volumeDevices:
        - devicePath: "/block" #(2)
          name: sample-volume
      command: ["sleep", "1000000"]
      imagePullPolicy: IfNotPresent
  volumes:
    - name: sample-volume
      persistentVolumeClaim:
        claimName: pvc-sample-block #(3)
Legend:
(1) Pod name
(2) Path (path where the volume is mounted in the container)
(3) PersistentVolumeClaim name
- Create a PersistentVolumeClaim for a raw block volume:
# kubectl create -f pvc-sample-block.yaml
- Create a Pod for a raw block volume:
# kubectl create -f pod-sample-block.yaml
ReadOnlyMany
To create a PersistentVolumeClaim with ReadOnlyMany, you must create the PersistentVolumeClaim from an existing PVC.
Use the PersistentVolumeClaim manifest file used in the Volume cloning section and specify ReadOnlyMany, as shown in the following example.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rox-sample
spec:
  dataSource:
    name: pvc-sample
    kind: PersistentVolumeClaim
    apiGroup: ""
  accessModes:
    - ReadOnlyMany # Specify "ReadOnlyMany" here.
  resources:
    requests:
      storage: 1Gi
  storageClassName: sc-sample
Resource partitioning
The following are examples of resource partitioning:
- You can restrict the range of LDEV IDs added to a resource group for a specific Kubernetes cluster.
- You can isolate the impacts between Kubernetes clusters.
Before you use resource partitioning, storage system settings as well as Secret and StorageClass settings are required.
The following are examples of configurations in which storage system resources can be partitioned.

The following are examples of configurations that are not supported.
Example 1
You cannot include both of the following configurations in the same Kubernetes cluster:
- StorageClass and Secret are configured for a resource group.
- StorageClass and Secret are temporarily configured for use with the meta resource.
Example 2
If multiple resource groups are configured for a single storage system, those resource groups cannot all be used from the same Kubernetes cluster. Only one resource group (with its StorageClass and Secret) per storage system can be configured for a Kubernetes cluster.
Set your storage system to meet the following requirements:
Storage system resources | Descriptions
Resource group | You cannot use multiple resource groups for a single Kubernetes cluster. Virtual storage machines are not supported.
Storage system user group and storage system user | Storage system users must have access only to the resource group that you created. The storage system user must not have access to other resource groups.
Pool | Create a pool from pool volumes with the resource group that you have created.
LDEV | Allocate the necessary number of undefined LDEV IDs to the resource group.
Host group | Allocate the necessary number of undefined host group IDs to the resource group for each storage system port defined in StorageClass. The number of host group IDs must be equal to the number of hosts for all ports.
Specify the resource group ID of the storage system.
Example of Secret settings:
apiVersion: v1
kind: Secret
metadata:
  name: secret-sample
type: Opaque
data:
  url: aHR0cDovLzE3Mi4xNi4xLjE=
  user: VXNlcjAx
  password: UGFzc3dvcmQwMQ==
stringData:
  resourceGroupID: "1" # Specify resource group ID
If you use iSCSI as a storage system connection, specify the port IP address in number order. If you use FC as a storage system connection, no additional setting is required for StorageClass.
Examples of StorageClass settings:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sample
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  serialNumber: "54321"
  poolID: "1"
  portID: CL1-A,CL2-A
  connectionType: iscsi
  portIP: "192.168.10.10, 192.168.10.11" # Specify iSCSI Port IP Addresses.
  <...>