Usage

This chapter describes the settings and command examples for each component used in Storage Plug-in for Containers.

Secret settings

The Secret file contains the storage URL, user name, and password settings that are necessary for Storage Plug-in for Containers to work with your environment. The following sample provides information about the required parameters.

Parameter references for secret-sample.yaml

apiVersion: v1
kind: Secret
metadata:
  name: secret-sample	          #(1)
type: Opaque
data:
  url: aHR0cDovLzE3Mi4xNi4xLjE=	#(2)
  user: VXNlcjAx		           #(3)
  password: UGFzc3dvcmQwMQ==	   #(4)

Legend:

(1) Secret name

(2) base64-encoded storage URL.

Use the IP address of the SVP for the following: VSP 5000 series, VSP F400, F600, F800, VSP F1500, VSP G200, G400, G600, G800, VSP G1000, VSP G1500, and VSP N400, N600, N800. Use the IP address of the storage controller for the following: VSP E series, VSP F350, F370, F700, F900, and VSP G350, G370, G700, G900.

Example:

echo -n "http://172.16.1.1" | base64

(3) base64-encoded storage user name.

Example:

echo -n "User01" | base64

(4) base64-encoded storage password.

Example:

echo -n "Password01" | base64

StorageClass settings

The StorageClass file contains storage settings that are necessary for Storage Plug-in for Containers to work with your environment. The following sample provides information about the required parameters.

Note: After you create a StorageClass and a PVC, re-creating the StorageClass does not affect the existing PVCs.

StorageClass for VSP family

Parameter references for sc-sample.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sample                       #(1)
  annotations:
    kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  serialNumber: "54321"                 #(2)
  poolID: "1"                           #(3)
  portID : CL1-A,CL2-A                  #(4)
  connectionType: fc                    #(5)
  csi.storage.k8s.io/fstype: ext4       #(6)
  csi.storage.k8s.io/node-publish-secret-name: "secret-sample"        #(7)
  csi.storage.k8s.io/node-publish-secret-namespace: "default"         #(8)
  csi.storage.k8s.io/provisioner-secret-name: "secret-sample"         #(7)
  csi.storage.k8s.io/provisioner-secret-namespace: "default"          #(8)
  csi.storage.k8s.io/controller-publish-secret-name: "secret-sample"  #(7)
  csi.storage.k8s.io/controller-publish-secret-namespace: "default"   #(8)
  csi.storage.k8s.io/node-stage-secret-name: "secret-sample"          #(7)
  csi.storage.k8s.io/node-stage-secret-namespace: "default"           #(8)
  csi.storage.k8s.io/controller-expand-secret-name: "secret-sample"   #(7)
  csi.storage.k8s.io/controller-expand-secret-namespace: "default"    #(8)

Legend:

(1) StorageClass name

(2) Storage serial number

(3) HDP pool ID

(4) Port ID. Use a comma separator for multipath.

(5) Connection type between storage and nodes. fc and iscsi are supported. If blank, fc is set.

(6) Filesystem type. ext4 and xfs are supported. If blank, ext4 is set.

(7) Secret name

(8) Secret namespace
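
For reference, the following parameters fragment is a minimal sketch of the same StorageClass with an iSCSI connection. The port IDs (CL1-B, CL2-B) are placeholders, and the csi.storage.k8s.io secret name and namespace parameters are the same as in the sample above; adjust all values to your environment.

parameters:
  serialNumber: "54321"
  poolID: "1"
  portID: CL1-B,CL2-B
  connectionType: iscsi
  csi.storage.k8s.io/fstype: ext4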

StorageClass for VSSB

Parameter references for sc-sample.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sample-vssb                                               #(1)
  annotations:
    kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  storageType: vssb                                                  #(2)
  connectionType: fc                                                 #(3)  
  csi.storage.k8s.io/fstype: ext4                                    #(4)
  csi.storage.k8s.io/node-publish-secret-name: "secret-sample"       #(5)
  csi.storage.k8s.io/node-publish-secret-namespace: "default"        #(6)
  csi.storage.k8s.io/provisioner-secret-name: "secret-sample"        #(5)
  csi.storage.k8s.io/provisioner-secret-namespace: "default"         #(6)
  csi.storage.k8s.io/controller-publish-secret-name: "secret-sample" #(5)
  csi.storage.k8s.io/controller-publish-secret-namespace: "default"  #(6)
  csi.storage.k8s.io/node-stage-secret-name: "secret-sample"         #(5)
  csi.storage.k8s.io/node-stage-secret-namespace: "default"          #(6)
  csi.storage.k8s.io/controller-expand-secret-name: "secret-sample"  #(5)
  csi.storage.k8s.io/controller-expand-secret-namespace: "default"   #(6)

Legend:

(1) StorageClass name

(2) Storage type. This field must be set to "vssb" when using VSSB.

(3) Connection type between storage and nodes. fc and iscsi are supported. If blank, fc is set.

(4) Filesystem type. ext4 and xfs are supported. If blank, ext4 is set.

(5) Secret name

(6) Secret namespace

PersistentVolumeClaim settings

The PersistentVolumeClaim file contains volume information that is used by Storage Plug-in for Containers to create PersistentVolumes. The following sample provides information about the required parameters.

Parameter references for pvc-sample.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sample               #(1)
spec:
  accessModes:
  - ReadWriteOnce                #(2)
  resources:
    requests:
      storage: 1Gi               #(3)
  storageClassName: sc-sample    #(4)

Legend:

(1) PersistentVolumeClaim name

(2) Specify ReadWriteOnce or ReadOnlyMany. To use ReadOnlyMany, see ReadOnlyMany.

(3) Volume size

(4) StorageClass name

Usage restrictions for a PersistentVolumeClaim

  • If a failure occurs when creating a PersistentVolumeClaim, a PersistentVolumeClaim object will be created without the PersistentVolume. In this case, delete the PersistentVolumeClaim object using the command kubectl delete pvc <PVC_NAME>.
  • If a failure occurs when deleting a PersistentVolumeClaim, the PersistentVolumeClaim object is deleted but the PersistentVolume object remains, and any storage asset associated with the PersistentVolume object might also remain. In this case, see Viewing the volume properties of PersistentVolume and obtain the volume ID of the storage. Delete the PersistentVolume using the command kubectl delete pv <PV_NAME> (see the sketch after this list), and then delete the storage asset (LDEV). For details, see the user guide for the storage system in your environment.
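
The following is a sketch of the cleanup flow for the second case, assuming the leftover PersistentVolume is named pvc-cf8c6089-0386-4c39-8037-e1520a986a7d as in the command examples later in this chapter. The storage asset (LDEV) itself must then be deleted with your storage management software:

# kubectl get pv pvc-cf8c6089-0386-4c39-8037-e1520a986a7d -o yaml    # note the volume (LDEV) ID in the volume attributes
# kubectl delete pv pvc-cf8c6089-0386-4c39-8037-e1520a986a7d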

Pod settings

The Pod file contains volume information. Storage Plug-in for Containers mounts volumes based on this information.

Parameter references for pod-sample.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sample	        #(1)
spec:
  containers:
    - name: my-busybox
      image: busybox
      volumeMounts:
      - mountPath: "/data"	#(2)
        name: sample-volume
      command: ["sleep", "1000000"]
      imagePullPolicy: IfNotPresent
  volumes:
    - name: sample-volume
      persistentVolumeClaim:
        claimName: pvc-sample  #(3)

Legend:

(1) Pod name

(2) Path (path where the volume is mounted inside a container)

(3) PersistentVolumeClaim name

Command examples

The following are examples of creating and deleting a Secret, StorageClass, PersistentVolumeClaim, and Pod by using commands.

Note: If your environment is OpenShift, replace the Kubernetes command-line interface (CLI) commands with the equivalent OpenShift CLI commands. For more information about the OpenShift CLI, see the OpenShift CLI reference.

Create a Secret, StorageClass, PersistentVolumeClaim, and Pod

# kubectl create -f secret-sample.yaml
secret/secret-sample created

# kubectl get secret
NAME            TYPE     DATA   AGE
secret-sample   Opaque   3      34s

# kubectl create -f sc-sample.yaml
storageclass.storage.k8s.io/sc-sample created

# kubectl get sc
NAME        PROVISIONER            AGE
sc-sample   hspc.csi.hitachi.com   21s

# kubectl create -f pvc-sample.yaml
persistentvolumeclaim/pvc-sample created

# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-sample   Bound    pvc-cf8c6089-0386-4c39-8037-e1520a986a7d   1Gi        RWO            sc-sample      28s

# kubectl create -f pod-sample.yaml
pod/pod-sample created

# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
pod-sample   1/1     Running   0          20s
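
Optionally, confirm from inside the Pod that the volume is mounted at the expected path. This is a minimal check, assuming the sample names used above:

# kubectl exec pod-sample -- df -h /data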

Confirm the PersistentVolume information created by Storage Plug-in for Containers

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-cf8c6089-0386-4c39-8037-e1520a986a7d   1Gi        RWO            Delete           Bound    default/pvc-sample   sc-sample               35s

# kubectl describe pv pvc-cf8c6089-0386-4c39-8037-e1520a986a7d
Name:            pvc-cf8c6089-0386-4c39-8037-e1520a986a7d
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: hspc.csi.hitachi.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    sc-sample
Status:          Bound
Claim:           default/pvc-sample
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            hspc.csi.hitachi.com
    VolumeHandle:      60060e8018117a000051117a0000053f--spc-c44a7dc0f5
    ReadOnly:          false
    VolumeAttributes:   autoHG=true
                           connectionType=fc
                           hostModeOption=91
                           ldevIDDec=1343
                           ldevIDHex=05:3F
                           mode=normal
                           nickname=spc-c44a7dc0f5
                           ports=CL1-A,CL2-A
                           size=1Gi
                           storage.kubernetes.io/csiProvisionerIdentity=1585728584906-8081-hspc.csi.hitachi.com
Events:                <none>

Delete a Secret, StorageClass, PersistentVolumeClaim, and Pod

# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
pod-sample   1/1     Running   0          30s

# kubectl delete pod pod-sample
pod "pod-sample" deleted

# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-sample   Bound    pvc-cf8c6089-0386-4c39-8037-e1520a986a7d   1Gi        RWO            sc-sample      46s

# kubectl delete pvc pvc-sample
persistentvolumeclaim "pvc-sample" deleted

# kubectl get sc
NAME        PROVISIONER            AGE
sc-sample   hspc.csi.hitachi.com   53s

# kubectl delete sc sc-sample
storageclass.storage.k8s.io "sc-sample" deleted

# kubectl get secret
NAME            TYPE     DATA   AGE
secret-sample   Opaque   3      74s

# kubectl delete secret secret-sample
secret "secret-sample" deleted

Volume snapshot

This feature can create a snapshot that is a point-in-time image of a volume. A snapshot can be used to duplicate a previous state of an existing volume.

Note:
  • If the volume has been expanded, confirm that the expansion is complete before using this feature. See Volume expansion for details.
  • Flush the data before creating a snapshot to ensure data consistency. For example, temporarily remove the pod.
  • This feature is not supported for VSSB.

Before you begin

This feature requires the following resources:

  • StorageClass
  • PersistentVolumeClaim

If your environment is Kubernetes, install the Snapshot CRDs and the Snapshot Controller once per cluster (see https://github.com/kubernetes-csi/external-snapshotter). For the Snapshot CRDs, use v1. For the Snapshot Controller, use 4.x.x.

Note: If Snapshot Alpha or Beta CRDs are present in your environment, remove them before installing the Snapshot v1 CRDs.
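
One possible way to install them, assuming a local clone of the external-snapshotter repository checked out at a 4.x.x release (the directory layout can differ between releases, so verify the paths against the repository documentation for your version):

# git clone https://github.com/kubernetes-csi/external-snapshotter.git
# cd external-snapshotter
# git checkout <4.x.x release tag>
# kubectl apply -f client/config/crd                       # Snapshot v1 CRDs
# kubectl apply -f deploy/kubernetes/snapshot-controller   # Snapshot Controller
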
Parameter references for volumesnapshotclass-sample.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: snapshotclass-sample	#(1)
driver: hspc.csi.hitachi.com
deletionPolicy: Delete
parameters:
  poolID: "1"			    	#(2)
  csi.storage.k8s.io/snapshotter-secret-name: "secret-sample" #(3)
  csi.storage.k8s.io/snapshotter-secret-namespace: "default"  #(4)

Legend:

(1) VolumeSnapshotClass name

(2) Same poolID as the StorageClass

(3) Same Secret name as the StorageClass

(4) Same Secret namespace as the StorageClass

Parameter references for volumesnapshot-sample.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-sample                                 #(1)
spec:
  volumeSnapshotClassName: snapshotclass-sample         #(2)
  source:
    persistentVolumeClaimName: pvc-sample               #(3)

Legend:

(1) VolumeSnapshot name

(2) VolumeSnapshotClass name

(3) PersistentVolumeClaim name from which the snapshot is obtained

Parameter references for pvc-from-snapshot-sample.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snapshot-sample    #(1)
spec:
  dataSource:
    name: snapshot-sample           #(2)
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                 #(3)
  storageClassName: sc-sample      #(4)

Legend:

(1) PersistentVolumeClaim name

(2) VolumeSnapshot name

(3) Specify the size of the source volume. Obtain the size by using the command kubectl get pv <PV_NAME> -o yaml; the value is displayed in the size parameter.

Note: If the volume has been expanded, obtain the size by using the command kubectl get pv <PV_NAME>; the value is displayed in the CAPACITY column.

(4) Specify the same StorageClass name as the one used for dataSource.

Command examples
  • Create a VolumeSnapshotClass:

    # kubectl create -f volumesnapshotclass-sample.yaml 
  • Create a VolumeSnapshot:

    # kubectl create -f volumesnapshot-sample.yaml 
  • Confirm the completion of the VolumeSnapshot creation by verifying that the readyToUse parameter is true:

    # kubectl get volumesnapshot -o yaml

    Note: If the readyToUse parameter is false, identify the cause and solution by following these steps:
    1. Obtain the boundVolumeSnapshotContentName by using the command: kubectl get volumesnapshot -o yaml
    2. Confirm the error message by using the command: kubectl describe volumesnapshotcontent <VolumeSnapshotContentName>
  • Create a PersistentVolumeClaim from a snapshot:

    # kubectl create -f pvc-from-snapshot-sample.yaml
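  • Optionally, read the readyToUse field directly with a JSONPath query instead of the full YAML output. This is a minimal check, assuming the sample VolumeSnapshot name used above:

    # kubectl get volumesnapshot snapshot-sample -o jsonpath='{.status.readyToUse}'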

Volume cloning

This feature creates a clone, which is a duplicate of an existing volume. A clone can be consumed in the same way as any standard volume. It differs from a standard volume in that the backend device creates an exact duplicate of the specified volume at provisioning.
Note:
  • If the volume has been expanded, confirm that the expansion is complete before using this feature. See Volume expansion for details.
  • Flush the data before cloning to ensure data consistency. For example, temporarily remove the pod.
  • This feature is not supported for VSSB.

Before you begin

This feature requires the following resources:

  • StorageClass
  • PersistentVolumeClaim
Parameter references for pvc-from-pvc-sample.yaml

This YAML file is a manifest file for creating a clone from an existing volume "pvc-sample".

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-pvc-sample      #(1)
spec:
  dataSource:
    name: pvc-sample             #(2)
    kind: PersistentVolumeClaim
    apiGroup: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               #(3)
  storageClassName: sc-sample    #(4)

Legend:

(1) PersistentVolumeClaim name of clone

(2) PersistentVolumeClaim name of source

(3) Specify the size of the source volume. Obtain the size by using the command kubectl get pv <PV_NAME> -o yaml; the value is displayed in the size parameter.

Note: If the volume has been expanded, obtain the size by using the command kubectl get pv <PV_NAME>; the value is displayed in the CAPACITY column.

(4) Specify the same StorageClass name as the one used for dataSource.

Command examples
  • Create a PersistentVolumeClaim for a clone:

    # kubectl create -f pvc-from-pvc-sample.yaml 
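  • Optionally, confirm that the clone is bound in the same way as any other PersistentVolumeClaim. This is a minimal check, assuming the sample names used above:

    # kubectl get pvc pvc-from-pvc-sample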

Volume expansion

This feature can expand the capacity of an existing volume. There is no need to delete and recreate the Pod for volume expansion.
Caution: Confirm the completion of volume expansion with the command kubectl get pvc; the size is displayed in the CAPACITY column. Do not shut down the OS or drain the node before volume expansion completes.

Before you begin

This feature requires the following resources:

  • StorageClass
  • PersistentVolumeClaim
Note: Volume expansion has the following restrictions:
  • The minimum additional size for volume expansion is 1 GiB.
  • The maximum additional size for volume expansion is 7 TiB or a value that does not exceed the warning threshold of the pool capacity. To add more than 7 TiB, execute the command again.
  • Volume capacity cannot be reduced.
  • A PersistentVolume created by a StorageClass without the parameters for volume expansion cannot be expanded.
  • The size obtained by the command kubectl get pv <PV_NAME> -o yaml is not updated after the volume is expanded. If the volume has been expanded, obtain the size by using the command kubectl get pv <PV_NAME>; the value is displayed in the CAPACITY column.
Command examples
  • Expand the capacity of an existing volume pvc-sample to 5 GiB:

    # kubectl patch pvc pvc-sample --patch \
    '{"spec":{"resources":{"requests":{"storage": "5Gi"}}}}' 
  • Confirm the completion of volume expansion in the CAPACITY column:
    # kubectl get pv <PV_NAME>
    NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
    <PV_NAME>   5Gi        RWO            Delete           Bound    default/pvc-sample   sc-sample               35s
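
After the new size appears in the CAPACITY column, you can optionally confirm the expanded filesystem from inside a Pod that mounts the volume. This is a minimal sketch, assuming the pod-sample and /data mount path from the earlier examples:

# kubectl get pvc pvc-sample
# kubectl exec pod-sample -- df -h /data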
    

Raw block volume

Kubernetes supports block volumes in addition to filesystem volumes. This section describes how to apply a raw block volume.

Before you begin

This feature requires the StorageClass.

Parameter references for pvc-sample-block.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sample-block	#(1)
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi	#(2)
  storageClassName: sc-sample	#(3)

Legend:

(1) PersistentVolumeClaim name

(2) Volume size

(3) StorageClass name

Parameter references for pod-sample-block.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sample-block	           #(1)
spec:
  containers:
    - name: my-busybox
      image: busybox
      volumeDevices:
      - devicePath: "/block"	#(2)
        name: sample-volume
      command: ["sleep", "1000000"]
      imagePullPolicy: IfNotPresent
  volumes:
    - name: sample-volume
      persistentVolumeClaim:
        claimName: pvc-sample-block    #(3)

Legend:

(1) Pod name

(2) Device path (path where the raw block device is exposed inside the container)

(3) PersistentVolumeClaim name

Command examples
  • Create a PersistentVolumeClaim for a raw block volume:
    # kubectl create -f pvc-sample-block.yaml
  • Create a Pod for a raw block volume:
    # kubectl create -f pod-sample-block.yaml
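  • Optionally, confirm that the block device is visible at the devicePath inside the container (a minimal check, assuming the sample names above):
    # kubectl exec pod-sample-block -- ls -l /block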

ReadOnlyMany

You can mount a volume on one or many nodes in your Kubernetes cluster and perform read-only operations.

To create a PersistentVolumeClaim with ReadOnlyMany, you must create the PersistentVolumeClaim from an existing PVC.

Use the PersistentVolumeClaim manifest file used in the Volume cloning section and specify ReadOnlyMany, as shown in the following example.

Note: This feature is not supported for VSSB.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rox-sample
spec:
  dataSource: 
    name: pvc-sample
    kind: PersistentVolumeClaim
    apiGroup: ""
  accessModes:
  - ReadOnlyMany # Specify "ReadOnlyMany" here.
  resources:
    requests:
      storage: 1Gi
  storageClassName: sc-sample
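
The following command creates the PersistentVolumeClaim, assuming the manifest above is saved as pvc-rox-sample.yaml (a hypothetical file name). Pods on one or more nodes can then mount pvc-rox-sample with read-only access:

# kubectl create -f pvc-rox-sample.yaml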

Resource partitioning

By using this function, you can partition storage system resources for each Kubernetes cluster.

The following are examples of resource partitioning:

  • You can restrict the range of LDEV IDs added to a resource group for a specific Kubernetes cluster.
  • You can isolate the impact of one Kubernetes cluster on another.

Note: Resource partitioning is not supported for VSSB.

Before you use resource partitioning, the storage system settings, Secret settings, and StorageClass settings described in the following sections are required.

Supported configurations

The following are examples of configurations in which storage system resources can be partitioned.

(Figure: example of a supported configuration)

The following are examples of configurations that are not supported.

  • Example 1

    A configuration is not supported if the following are mixed for the same storage system in a single Kubernetes cluster: a Secret and StorageClass that are set for a resource group, and a Secret and StorageClass that are not set for a resource group.

    (Figure: example of unsupported configuration 1)

  • Example 2

    A configuration is not supported if Secrets and StorageClasses are set more than once for different resource groups of the same storage system in a single Kubernetes cluster.

    (Figure: example of unsupported configuration 2)

Storage system requirements and settings

Set your storage system to meet the following requirements:

  • Resource group: Only one resource group per Kubernetes cluster is supported. Virtual storage machines are not supported.
  • Storage system user group and storage system user: Storage system users must have access only to the resource group that you created. The storage system user must not have access to other resource groups.
  • Pool: Create a pool from pool volumes in the resource group that you created.
  • LDEV: Allocate the necessary number of undefined LDEV IDs to the resource group.
  • Host group: Allocate the necessary number of undefined host group IDs to the resource group for each storage system port defined in the StorageClass. The number of host group IDs must be equal to the number of hosts for all ports.

Secret settings

Specify the resource group ID of the storage system.

Example of Secret settings:

apiVersion: v1
kind: Secret
metadata:
  name: secret-sample
type: Opaque
data:
  url: aHR0cDovLzE3Mi4xNi4xLjE= 
  user: VXNlcjAx
  password: UGFzc3dvcmQwMQ==
stringData:
  resourceGroupID: "1"    # Specify resource group ID

StorageClass settings

If you use iSCSI as the storage system connection, specify the port IP addresses in numerical order. If you use FC as the storage system connection, no additional StorageClass setting is required.

Examples of StorageClass settings:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-sample
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  serialNumber: "54321"
  poolID: "1"
  portID : CL1-A,CL2-A
  connectionType: iscsi
  portIP: "192.168.10.10, 192.168.10.11"    # Specify iSCSI Port IP Addresses.
<...>

 
