Storage Plug-in for Containers Quick Reference Guide v1.0.0

Hitachi Storage Plug-in for Containers lets you create containers and run stateful applications inside those containers by using Hitachi VSP series volumes as dynamically provisioned persistent volumes. This Quick Reference Guide provides an implementation overview and describes the usage requirements, installation, and configuration of Storage Plug-in for Containers.

Overview

About Hitachi Storage Plug-in for Containers

Storage Plug-in for Containers is a software component that contains the libraries, settings, and commands you use to create containers that run your stateful applications. It enables stateful applications to persist and maintain data after the life cycle of the container has ended. Storage Plug-in for Containers provides persistent volumes from Hitachi VSP series storage.

Storage Plug-in for Containers uses built-in high availability to enable a Docker swarm manager or a Kubernetes master node to orchestrate storage tasks between hosts in a cluster. However, Storage Plug-in for Containers can also be used in non-clustered environments.

The following figures show examples of clustered containerized environments, one using a Docker swarm manager and one using a Kubernetes master node. In both examples, the Hitachi Configuration Manager server is optional.

Figure 1: Clustered implementation using Docker Swarm Manager

The following table describes the components that comprise a containerized environment using Docker.

Components of a containerized environment using Docker
Component | Purpose
conf | The configuration file for Storage Plug-in for Containers. All conf files in a cluster must be the same.
Container | Created by Storage Plug-in for Containers; contains the libraries and settings that are required to run stateful applications.
Docker swarm | Consists of a manager and worker nodes (in the example, Node 1 and Node 2) that run services:
  • A manager distributes tasks across the cluster and orchestrates the worker nodes (workers) that comprise the swarm.
  • Workers run the Docker containers assigned to them by a manager.
etcd | Stores LDEV information related to Hitachi storage; must be installed on all hosts in the cluster.
hctl | Provides the command line interface for creating, restoring, and removing snapshots, and for checking volumes.
Hitachi Thin Image | HTI is used for taking snapshots of container data.
Hitachi VSP series storage | Provides storage for the containers and supports snapshots of container data.
log | Log file for Storage Plug-in for Containers.
Storage Plug-in for Containers | For Docker, Storage Plug-in for Containers must be installed on all hosts in a cluster.
SVP/Hitachi Configuration Manager | Used for communication between Storage Plug-in for Containers and Hitachi VSP series storage.
Figure 2: Clustered implementation using Kubernetes Master node

The following table describes the components that comprise a containerized environment using Kubernetes.

Components of a containerized environment using Kubernetes
Component | Purpose
Hitachi VSP series storage | Provides storage for the containers and supports cloning of container data.
hspc.yaml | The configuration file for the Storage Plug-in for Containers pod.
Kubernetes cluster | A master node and a set of worker nodes that typically run in a distributed environment on multiple nodes.
log | Log file for Storage Plug-in for Containers.
PersistentVolumeClaim.yaml | The configuration file for the persistent volume claim.
StorageClass.yaml | The configuration file for the storage class.
Storage Plug-in for Containers | The master node deploys only one Storage Plug-in for Containers Pod in a cluster.
SVP/Hitachi Configuration Manager | Used for communication between Storage Plug-in for Containers and Hitachi VSP series storage.

Docker Swarm or Kubernetes framework

Storage Plug-in for Containers supports the managed plug-in system of Docker and the external provisioner of Kubernetes. Depending on your site's requirements, you can employ either orchestrator for your containerized environment. The framework option you choose depends on the current (or future planned) scale of your environment.

The following table summarizes the types of environment configurations that best fit each option.

Docker and Kubernetes options
Option | Details
Docker | What it is: With Docker Swarm manager, clients are entirely stateless and the entire state is managed on the Docker host. An odd number of managers is recommended in case of a network split.
  When to use it: Best for sites that have small-scale container needs.
Kubernetes | What it is: Kubernetes is a container-agnostic platform for deploying containers to production. It provides the means to deploy, scale, and monitor your containers.
  When to use it: Best for sites that currently have, or plan to have, large scaling needs (for example, a Web application with a large or growing volume of users).

About the environment setup tasks

Storage Plug-in for Containers enables dynamic operation of storage systems when containers are used. In order to use Storage Plug-in for Containers with Docker or Kubernetes, pre-installation tasks must be completed.

After the environment is set up, you can manage container data.

  1. Requirements
  2. Docker framework
  3. Kubernetes framework

Requirements

Before you install Hitachi Storage Plug-in for Containers, check that your server and storage meet the minimum requirements that are outlined in the following tables.

Server requirements

Before you prepare your Docker or Kubernetes framework for the installation of Storage Plug-in for Containers, ensure that the server meets the following requirements.

Server requirements
Component | Requirement
CPU | x86_64
Memory | 2 GB
RHEL | v7.0 or later
Docker | v1.13.0 or later
Multipathing software (optional) | DeviceMapper
etcd | v3 or later (applies to Docker swarm only)
Kubernetes | 1.6 only
User account | Root user required

Storage requirements

Before you prepare your Docker or Kubernetes framework for the installation of Storage Plug-in for Containers, ensure that your storage meets the following requirements.

Storage requirements
Component | Requirement
Model | VSP Gx00 / VSP G1x00 / VSP Fx00 / VSP F1x00
SVOS | 7.1 or later
Interface | FC/iSCSI
User account | Storage Administrator (View and Modify permissions are required)
License | Hitachi Dynamic Provisioning (HDP): required. Hitachi Thin Image (HTI): optional, but required for snapshots and clones.
Hitachi Configuration Manager (optional) | 8.5.2 or later

Pre-installation tasks

Server pre-installation

The following table outlines the pre-installation tasks for each server component.

Server pre-installation tasks
Component | Tasks
Docker 1.13+ | Install Docker on RHEL: https://docs.docker.com/engine/installation/linux/docker-ee/rhel/
FC | Check the HBA WWN on all hosts: # cat /sys/class/fc_host/host<number>/port_name
iSCSI | Create an iSCSI initiator and check its IQN on all hosts: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-iscsi.html
Hitachi Configuration Manager server (optional) | Refer to the Hitachi Command Suite Configuration Manager REST API Reference Guide.
Swarm only:
Docker Swarm | Set up the Docker swarm (see the sketch after this table): https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/
etcd | Install etcd: https://coreos.com/etcd/docs/latest/getting-started-with-etcd.html#setting-up-etcd (Note: You must install etcd on every node in the swarm.)
Kubernetes only:
Kubernetes 1.6 | Install Kubernetes: https://v1-6.docs.kubernetes.io/docs/getting-started-guides/kubeadm/
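The swarm setup referenced above typically amounts to initializing a manager and joining each worker (a generic sketch of standard Docker commands; the IP address and token are placeholders supplied by your environment):

  # docker swarm init --advertise-addr <manager_ip>
  # docker swarm join --token <worker_token> <manager_ip>:2377

Run the init command on the intended manager node; it prints the join token that each worker uses.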

Storage pre-installation

The following table outlines the pre-installation tasks to be completed for each storage component.

Storage pre-installation tasks
Component | Task
FC connection | 1. Create a host group. 2. Add the WWNs of all hosts that will join the swarm cluster to the host group.
iSCSI connection | 1. Create an iSCSI target. 2. Add the IQNs of all hosts that will join the swarm cluster to the iSCSI target.
Program products | Enable the Hitachi Dynamic Provisioning (HDP) license. Optional: Enable the Hitachi Thin Image (HTI) license.
Pool | Create an HDP pool. Optional: Create a snapshot (HTI) pool.

Best practice

The following is an important point to keep in mind when working in the Docker or Kubernetes framework.

To prevent data corruption, only one node should access a volume for Read/Write operations.

Note: For Docker, Storage Plug-in for Containers programmatically prevents a volume from being accessed from multiple nodes. This is not the case for Kubernetes (no such built-in handling exists).

Docker framework

The installation for the Docker framework involves installing Storage Plug-in for Containers and using the commands packaged with the plug-in to create a container, create a persistent volume, and attach the persistent volume to the container.

About installing Storage Plug-in for Containers

The file, hspc.tar.gz, contains all required components (binary files, scripts, etc.) that you will use to create a container and manage container data.

You can run Storage Plug-in for Containers from the Docker swarm manager (for clustered environments) or on a designated node (for non-clustered environments). In a Docker swarm, Storage Plug-in for Containers must exist on both the swarm manager and each worker node.

During the installation, you will work with a configuration file, config.json, which contains storage, host group, and other required settings that you will specify about your environment (for example, the HDP pool ID to use). After you modify it, it will be used by Storage Plug-in for Containers.

About configuring storage and host group settings

The config.json file contains storage, host group, and other settings that are necessary for Storage Plug-in for Containers to work with your environment.

The following table provides information about the required parameters that are associated with this file.

Parameters of the config.json file

Parameter | Description
base | Storage base configuration
  serialNumber | Storage serial number
  ip | Storage IP address*
  user | Storage user ID
  password | Storage user password
options | Storage configuration
  poolId | HDP pool ID
  snapshotPoolId | HTI pool ID
  scsiTarget | Host group settings
    portId | Host group port ID
    scsiTargetNumber | Host group target number
dataBaseIp | etcd address*

* IPv6 is not supported.
Example of config.json file with specified parameters
{
  "base": {
    "serialNumber": 54321,
    "ip": "192.168.0.1:443",
    "user": "User01",
    "password": "Password01"
  },
  "options": {
    "poolId": 4,
    "snapshotPoolId": 5,
    "scsiTarget": [{
      "portId": "CL7-A",
      "scsiTargetNumber": 86
    }]
  },
  "dataBaseIp": "localhost:2379"
}
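After editing, it can help to confirm that the file is still valid JSON before enabling the plug-in; for example, with a standard tool that is not part of the Storage Plug-in for Containers package:

  # python -m json.tool /opt/hitachi/hspc/config.json

If the file parses, the formatted JSON is printed; otherwise, an error points to the offending line.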

Install Storage Plug-in for Containers for Docker framework

This procedure describes how to install Storage Plug-in for Containers so you can create a container.

During the installation, you will use the config.json file to specify storage, host group, and other required environment settings. For information about this file and the parameters you will work with, see About configuring storage and host group settings.

Procedure

  1. Place the file, hspc.tar.gz, on the Docker swarm manager or designated node.

  2. Extract the file content: # tar -xvf hspc.tar.gz

  3. Install Storage Plug-in for Containers and create a container:

    #./installer_hspcd.sh
  4. Check if Storage Plug-in for Containers is installed:

    #docker plugin ls
    Note: If you have your own registry, use the -r option:
    #./installer_hspcd.sh -r <registry_ip>:<port>
    After completing this step, if you run #docker plugin ls, ENABLED should indicate 'false'.
  5. Complete one of the following options:

    • If you do not have your own registry, repeat steps 1-3 to install the plug-in on all hosts that will join the swarm cluster.
    • If you have your own registry, use the following commands:
      #docker plugin push <registry_ip>:<port>/hspc
      #docker plugin install <registry_ip>:<port>/hspc
      Afterward, complete steps 6, 7, and 9. Skip step 8.
  6. Copy configSample.json to config.json:

    #cd /opt/hitachi/hspc
    #cp configSample.json config.json
  7. Specify your environment settings (storage, hosts, and so on) by modifying the config.json file to include required parameters:

    #vi config.json
  8. Enable Storage Plug-in for Containers on all hosts:

    #docker plugin enable hspc
    The log file, hspc-d.log, is created. If you run #docker plugin ls, ENABLED should indicate 'true'.
  9. Confirm the existence of the log file:

    # tail -f /opt/hitachi/hspc/log/hspc-d.log

Manage data

After Storage Plug-in for Containers is installed for your Docker environment, you can complete tasks such as inspecting volumes and creating snapshots.

Tip: For help with command usage, run a command with the -h or --help option.
Data management tasks for the Docker environment
Task | Details
Manage volumes (inspect, create, and delete volumes) | Use Docker commands.
Manage snapshots (create and delete snapshots) | Use hctl commands.

Note:
  • The maximum number of concurrent volume creations/deletions is approximately 20 (depending on the workload of the Configuration Manager and the storage device).
  • Regarding Docker commands, the --replicas option of "docker service create" is not supported for use with Storage Plug-in for Containers.

Command references

For snapshots in a Docker environment, you will use several Hitachi-specific commands with Storage Plug-in for Containers: the hctl-prefaced commands.

There are also some Docker commands that include Hitachi specific options and information:

  • docker volume create

    The ext4 volume format is supported. It is also possible to run create with an existing volume name; in that case, the given options are ignored.

  • docker volume inspect
Note: For Docker-specific commands, such as those used for detaching a persistent volume and stopping a container, refer to the Docker Web site: https://docs.docker.com/reference/

The following tables describe commands, command options, and usage rules for hctl, docker volume create, and docker volume inspect.

hctl commands
Command | Usage
hctl snapshot create | Create a snapshot of a persistent volume. The maximum number of snapshots that can be created for storage is 8,192.
hctl snapshot rm | Delete a snapshot of a persistent volume.
hctl snapshot restore | Restore a snapshot of a persistent volume.

hctl command options
Option | Details
--sourceVolName <volume name> | Specifies the source volume name for a snapshot.
--mu <mu number> | Indicates the MU (mirror unit) number of the snapshot.
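A typical snapshot cycle with these commands might look like the following (a hedged sketch; the volume name vol1 and MU number 0 are placeholder values):

  # hctl snapshot create --sourceVolName vol1
  # hctl snapshot restore --sourceVolName vol1 --mu 0
  # hctl snapshot rm --sourceVolName vol1 --mu 0

You can confirm the MU number of an existing snapshot with docker volume inspect, which lists an "MU" value for each snapshot.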

Docker volume create command options
Option | Details
--driver hspc | Uses Storage Plug-in for Containers as the volume driver.
--name <volume name> | Applies <volume name> to a persistent volume. The following characters cannot be used: . / \
--opt size=<volume size with unit> | Specifies the volume size. The unit must be included (M, G, or T).
--opt mode=clone | Creates a clone of an existing volume. Use with the sourceVolName option.
--opt sourceVolName=<volume name> | Specifies the source volume name for a clone.
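Putting these options together (a hedged sketch; vol1 and vol1-clone are placeholder names):

  # docker volume create --driver hspc --name vol1 --opt size=1G
  # docker volume create --driver hspc --name vol1-clone --opt mode=clone --opt sourceVolName=vol1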

Docker volume inspect command output
Item | Details
"Driver": "hspc:latest",

"Driver" refers to "<drivername>:<tag>"

[Normal Mode]
"Options": {
            "size": "<SIZE>"
          },
[Clone Mode]
"Options": {
            "mode": "<CREATE MODE>",
            "sourceVolName": "<SOURCEVOLNAME>"}, 
"Options" is set by driver specific options in 'docker volume create'.
  • If a normal volume is created, only the "size" option is displayed.
  • If a clone volume is created, the "mode" and "sourceVolName" options are displayed.
"Status": {
 "Ldev": <LDEVID>, 
 "Size": "<SIZE>", 
 "Snapshots": [
   { 
     "MU": <MIRROR UNIT>,
     "PairStatus": "<PAIR STATUS>", 
     "SplitTime": "<SPLIT TIME>" 
   } 
 ], 
 "Status": "<VOLUME STATUS>", 
 "VolAttr": [<VOLUME ATTRIBUTE>] 
}
"Status" is a vendor specific value.
  • "Ldev" means LDEV ID. The ID is decimal.
  • "Size" means volume capacity. Size includes units down to second decimal places.
  • "Snapshots" is snapshot information, including split time, mu (mirror unit) and pair status (for example, 'PSUS'). If a snapshot does not exist, the "Snapshots" parameter is empty.
  • "Status" means LDEV status. (for example, NML (normal)/BLK (block)...)
  • "VolAttr" means volume attribute. (for example, HDP (Dynamic provisioning volume)/HTI (Snapshot or Clone volume)....)

Docker examples

The following figures show examples of tasks (creating a volume clone and restoring a snapshot for an Ubuntu container) using the commands in practice.

Figure 1: Volume clone
Figure 2: Snapshot restore
Figure 3: Snapshot restore (continued)
Figure 4: Snapshot restore (continued)
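In command form, the clone and restore flows shown in the figures might look like the following hedged sketch (the volume name datavol, container name app, and mount path are placeholders; confirm the MU number with docker volume inspect datavol before restoring):

  # docker volume create --driver hspc --name datavol --opt size=1G
  # docker run -d --name app -v datavol:/data ubuntu sleep infinity
  # hctl snapshot create --sourceVolName datavol
  # docker stop app
  # hctl snapshot restore --sourceVolName datavol --mu 0

Stopping the container before the restore keeps the volume from being accessed during the operation.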

Kubernetes framework

The installation for the Kubernetes framework involves installing Storage Plug-in for Containers and using the commands packaged with the plug-in to create a service account, Pod, storage class, and persistent volume claim.

Install Storage Plug-in for Containers for Kubernetes framework

The Storage Plug-in for Containers package, hspc.tar.gz, contains all required components (binary files, scripts, etc.) that you will use to create a service account, create a Storage Plug-in for Containers Pod, and then verify the outcome.

In a clustered environment, the installation must be completed on all nodes. During the installation, there are several key files that you will be working with. The following table summarizes each file.

Configuration files for the Kubernetes environment
File | Details
hspc-sa.yaml | Contains information that is used for the Kubernetes service account.
hspc-pod.yaml | Contains information that ensures the Storage Plug-in for Containers Pod works within the Kubernetes framework.
sc-sample.yaml | Contains information about the Kubernetes storage class and provides support for merged storage and merged pools.

Procedure

  1. Obtain hspc.tar.gz and extract the contents to a temp directory:

    # tar -xvf hspc.tar.gz
  2. Install the Kubernetes plug-in container image:

    #./installer_hspck.sh
    NoteIf you have your own registry, you can use push and pull:
    #docker push <registry_ip>:<port>/hspc
    #docker pull <registry_ip>:<port>/hspc
  3. Complete the following steps on the master node:

    1. Create a service account:

      # cd /opt/hitachi/hspc/yaml
      # kubectl create -f hspc-sa.yaml
    2. Check if the service account was created:

      # kubectl get sa
    3. Bind the cluster-admin cluster role to the service account you created at step a:

      # kubectl create clusterrolebinding hspc-bind --clusterrole=cluster-admin --serviceaccount=default:hspc
    4. Check if the bind was established:

      # kubectl get clusterrolebinding -o wide
    5. Create a Pod for Storage Plug-in for Containers:

      # kubectl create -f hspc-pod.yaml
    6. Check the node where the Pod is running:

      # kubectl get pod -o wide
    7. Check the log file of Storage Plug-in for Containers on the node you confirmed at step f:

      # tail -f /opt/hitachi/hspc/log/hspc-k.log

Manage data

After installing and working with Storage Plug-in for Containers for the Kubernetes environment, you can complete tasks such as creating a storage class or deleting a persistent volume claim.

Data management tasks for the Kubernetes environment
Task | Details
Create a storage class | Edit the file sc-sample.yaml. The storage class contains storage information.
Create a persistent volume claim | Edit the file pvc-sample.yaml. One persistent volume claim is required for each volume.
Delete a persistent volume claim | Delete the persistent volume claim and its persistent volume.

Usage restrictions for a persistent volume claim
Task | Notes
Create a persistent volume claim | If a failure occurs while creating a persistent volume claim, a persistent volume claim object is created but without the persistent volume. In this case, delete the persistent volume claim object using the following command:

  kubectl delete pvc <pvc name>

Delete a persistent volume claim | If a failure occurs while deleting a persistent volume claim, the persistent volume claim object is deleted but the persistent volume object remains, and any storage asset associated with the persistent volume object may also remain. In this case, delete the persistent volume using the following command:

  kubectl delete pv <pv name>

Also delete the storage asset (LDEV). Refer to the storage manual for your environment for details.

About configuring storage and host group settings

The StorageClass.yaml file contains storage, host group, and other settings that are necessary for Storage Plug-in for Containers to work with your environment.

The following table provides information about the required parameters that are associated with this file.

Parameter references for StorageClass.yaml

Parameter | Description
kind: StorageClass | -
apiVersion: storage.k8s.io/v1 | -
metadata: | -
  name: sc-sample | StorageClass name
provisioner: hitachi.io/hspc | Provisioner name
parameters: | -
  serialNumber: "54321" | Storage serial number
  ip: 172.16.1.1:23450 | IP address*
  user: User01 | Storage user ID
  password: Password01 | Storage password
  poolId: "1" | HDP pool ID
  scsiTargetId: CL1-A-2 | SCSI target ID
  iscsiTargetIQN: iqn.2014-04.jp.co.hitachi:xxx.h70.i.62510.1A.FF | iSCSI target IQN

* IPv6 is not supported.
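Assembled from the parameters above, a complete sc-sample.yaml might look like the following sketch (it uses the table's sample values; whether you supply scsiTargetId, iscsiTargetIQN, or both depends on your FC or iSCSI connection type):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-sample
provisioner: hitachi.io/hspc
parameters:
  serialNumber: "54321"       # storage serial number
  ip: 172.16.1.1:23450        # storage IP address (IPv6 is not supported)
  user: User01
  password: Password01
  poolId: "1"                 # HDP pool ID
  scsiTargetId: CL1-A-2       # SCSI target ID (FC)
  iscsiTargetIQN: iqn.2014-04.jp.co.hitachi:xxx.h70.i.62510.1A.FF  # iSCSI target IQN (iSCSI)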

About configuring persistent volume claim settings

The Persistent Volume Claim file, PVC.yaml, contains volume information that is used by Storage Plug-in for Containers to create persistent volumes.

The following table provides information about the required parameters that are associated with this file.

Parameter references for PVC.yaml

Parameter | Description
kind: PersistentVolumeClaim | -
apiVersion: v1 | -
metadata: | -
  name: pvc-sample | PVC name
spec: | -
  storageClassName: sc-sample | StorageClass name
  accessModes: | -
    - ReadWriteOnce | -
  resources: | -
    requests: | -
      storage: 100M | Volume size
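Assembled from the parameters above, a complete pvc-sample.yaml might look like the following sketch (sample values from the table):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sample
spec:
  storageClassName: sc-sample   # must match the StorageClass name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100M             # requested volume size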

Command references

The following table lists the commands you will use with Storage Plug-in for Containers to create a storage class and a persistent volume claim.

Commands for use with Kubernetes

Command | Usage
# kubectl create -f sc-sample.yaml | Create a storage class.
# kubectl create -f pvc-sample.yaml | Create a persistent volume claim.
# kubectl delete pvc <pvc name> | Delete a persistent volume claim.

The persistent volumes for Kubernetes that are created by Storage Plug-in for Containers use the parameters that are outlined in the following table. You can view applied values in your environment using the following command:

# kubectl describe pv <pv name>

Parameter references
Parameter | Notes
Name: pvc-<UUID> | -
Labels: <none> | -
hitachi.io/LDEVID=123 | LDEV ID
hitachi.io/mode=normal | Provision mode
hitachi.io/provisionerName=Hitachi | Provisioner name
hitachi.io/storageConfig={"serialNumber":54321,"ip":"172.16.1.2","secure":false,"user":"User01","password":"Password01","lock":false} | Storage configuration for Storage Plug-in for Containers
pv.kubernetes.io/provisioned-by=hitachi.io/hspc | -
StorageClass: sc-test | -
Status: Bound | -
Reclaim Policy: Delete | Only the Delete reclaim policy is supported.
Access Modes: RWO | -
Capacity: 1G | -
Message: | -
Source: | -
Events: <none> | -

Kubernetes examples

The following figures show examples of creating a persistent volume claim and deleting a persistent volume claim using the commands in practice.

Figure 1: Create a persistent volume claim
Figure 2: Delete a persistent volume claim
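In command form, the flows shown in the figures might look like this sketch (using the sample file and object names from the tables above):

  # kubectl create -f sc-sample.yaml
  # kubectl create -f pvc-sample.yaml
  # kubectl get pvc
  # kubectl delete pvc pvc-sample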

Uninstall Storage Plug-in for Containers

When needed, you can uninstall Storage Plug-in for Containers from the Docker or the Kubernetes framework.

Uninstall Storage Plug-in for Containers from the Docker framework

The following instructions describe how to uninstall Storage Plug-in for Containers from the Docker framework, which includes removing any containers, volumes, snapshots, and the plug-in.

Procedure

  1. Stop all containers/services which are using the volumes created by Storage Plug-in for Containers.

  2. Check the volume name: docker volume ls

  3. Check whether the specified volume has a snapshot:

    docker volume inspect <VOLUME_NAME>
    • If a snapshot exists, proceed to step 4.
    • If no snapshot exists, proceed to step 5.
  4. Delete the snapshot:

    hctl snapshot rm -sourceVolName <VOLUME_NAME> -mu <MU#>
  5. Delete the volume:

    docker volume rm <VOLUME_NAME>
    Note: If a clone copy is currently in process, wait until the process completes before deleting the volume.
  6. Complete the following steps on all hosts:

    1. Check the driver name: docker plugin ls

    2. Disable the plug-in:

      docker plugin disable <DRIVER_NAME>:<TAG>
      Note: If you use your own registry, add the registry IP address and port to the driver name.
    3. Remove the plug-in:

      docker plugin rm <DRIVER_NAME>:<TAG>
      Note: If you use your own registry, add the registry IP address and port to the driver name.

Uninstall Storage Plug-in for Containers from the Kubernetes framework

The following instructions describe how to uninstall Storage Plug-in for Containers from the Kubernetes framework, which includes removing any persistent volume claims, persistent volumes, storage classes, Storage Plug-in for Containers pods, cluster role bindings, and service accounts.

Procedure

  1. Delete all pods which are using the volumes created by Storage Plug-in for Containers.

  2. Check the Persistent Volume Claim (PVC) name: kubectl get pvc

  3. Delete the Persistent Volume Claim created by Storage Plug-in for Containers: kubectl delete pvc <PVC_NAME>

  4. Check the storage class (SC) name: kubectl get sc

  5. Delete the storage class created by Storage Plug-in for Containers:

    kubectl delete sc <SC_NAME>
  6. Delete the plug-in pod: kubectl delete pod hspc-pod

  7. Delete the plug-in bind: kubectl delete clusterrolebinding hspc-bind

  8. Delete the plug-in service account: kubectl delete sa hspc-sa

  9. Remove the plug-in container image on all nodes:

    docker rmi hspc:latest hspc:1.0.0

Troubleshoot Storage Plug-in for Containers

This chapter describes the error messages that may be returned while running commands. It also provides guidelines for troubleshooting volume failover in a Docker framework.

Error codes

The "Error code information" table lists the error codes that may be returned while running commands. Each error code includes the error details and suggested workaround. Errors are recorded to the log file that is applicable to the framework option that your site implemented.

Error logs and location
Log file | Location
hspc-d.log (Docker Swarm as the container orchestrator) | /opt/hitachi/hspc/log/
hspc-k.log (Kubernetes as the container orchestrator) | /opt/hitachi/hspc/log/

Error code information
Error code | Cause | Solution
HSPC0x00001002 | The storage device was locked by another user. | Wait until the storage device is unlocked.
HSPC0x00001003 | The LDEV is already paired as a local pair. | Delete the pair first.
HSPC0x00001004 | The LDEV is not defined. | Check if the LDEV is defined.
HSPC0x00001005 | An invalid size was specified. | Check if the size unit is one of the following: g, m, k, or t.
HSPC0x00001009 | Dynamic casting failed. | Contact Hitachi Vantara support.
HSPC0x0000100d | The resource lock timed out. | Wait until the resource is unlocked.
HSPC0x0000100e | The resource unlock timed out. | Wait for a while, then try again.
HSPC0x0000100f | A timeout occurred while waiting for the REST API job complete state. | Wait until the job completes.
HSPC0x00001010 | Unknown ASYNC API failure. | Try again.
HSPC0x00001013 | Either there is no free LDEV in the storage, or the database of Hitachi Configuration Manager is not ready. | Delete unnecessary LDEVs in the storage or refresh the Hitachi Configuration Manager database.
HSPC0x00001016 | A volume that is processing a clone or snapshot cannot be deleted. | If the clone process is running, wait for the process to complete. If a snapshot of the volume is in process, delete the snapshot first using the "hctl snapshot rm" command, then try again.
HSPC0x00001017 | A timeout occurred while waiting for SMPL. | Wait until the pair status changes to SMPL.
HSPC0x00001018 | The specified volume cannot be removed. | Check if the specified volume can be removed.
HSPC0x00001019 | The specified storage is not supported. | Check if the specified storage is supported.
HSPC0x0000101a | The REST server is unavailable. | Wait until the REST server is ready.
HSPC0x0000101b | An invalid HTTP status was received. | Check if the REST server is operating normally.
HSPC0x00002004 | The config.json file does not exist. | Check if config.json exists.
HSPC0x00002005 | A storage serial number is not specified in config.json. | Specify a storage serial number in config.json.
HSPC0x00002006 | A storage IP address is not specified in config.json. | Specify a storage IP address in config.json.
HSPC0x00002007 | A storage user is not specified in config.json. | Specify a storage user in config.json.
HSPC0x00002008 | A storage password is not specified in config.json. | Specify a storage password in config.json.
HSPC0x00002009 | A DP pool ID is not specified in config.json. | Specify a DP pool ID in config.json.
HSPC0x0000200a | A snapshot pool ID is not specified in config.json. | Specify a snapshot pool ID in config.json.
HSPC0x0000200b | A port ID is not specified in config.json. | Specify a port ID in config.json.
HSPC0x0000200c | A SCSI target number is not specified in config.json. | Specify a SCSI target number in config.json.
HSPC0x0000200d | A database (etcd) IP address is not specified in config.json. | Specify a database (etcd) IP address in config.json.
HSPC0x0000200e | The config.json file contains an invalid key value combination. | Check config.json for the key information and change it, if needed.
HSPC0x00003003 | No option is specified. | Specify create options (for example: [normal] -o size=1G [clone] -o mode=clone -o sourceVolName=source).
HSPC0x00003004 | The specified create option is invalid. | Specify the correct create option (for example: [normal] -o size=1G [clone] -o mode=clone -o sourceVolName=source).
HSPC0x00003005 | The specified create option key is invalid. | Specify the correct create option (for example: [normal] -o size=1G [clone] -o mode=clone -o sourceVolName=source).
HSPC0x00003006 | The size unit is missing from the create option. | Specify the g, m, k, or t unit for the size option.
HSPC0x00003008 | The clone mode option cannot be used with the size option. | Specify only the "mode" and "sourceVolName" options for clone mode.
HSPC0x0000300a | The source volume could not be found. | Check the volume name.
HSPC0x0000300b | The specified volume name includes an invalid character (. \ /). | Delete the invalid character from the volume name.
HSPC0x00004002 | Failed to open the specified device file. | Check if the specified device file is valid.
HSPC0x00005004 | The volume rescan timed out. | Wait until the host identifies the volume.
HSPC0x00005005 | The snapshot volume cannot be mounted. | Specify a volume other than a snapshot.
HSPC0x00005006 | The specified volume is already mounted to other paths. | Specify a volume that can be mounted.
HSPC0x00005007 | The specified volume is already unmounted. | Specify a volume that is not already unmounted.
HSPC0x00005008 | A device file with the specified LDEV ID could not be found. | Specify a valid device file.
HSPC0x00005009 | The rescan-wait process timed out because the specified volume was not detected. | Wait until the host identifies the volume.
HSPC0x0000500a | An unknown error occurred for the reference count. | Contact Hitachi Vantara support.
HSPC0x0000500b | The LDEV nickname of the actual device does not match the nickname that is saved in the database. | Contact Hitachi Vantara support.
HSPC0x00006003 | The specified snapshot pair belongs to a snapshot group that is not handled by Storage Plug-in for Containers. | Specify a snapshot pair that can be handled by Storage Plug-in for Containers.
HSPC0x00006004 | The specified command line argument is invalid. | Enter a valid command line argument.
HSPC0x00006005 | Required flags are either unspecified or they are set to default values. | Set values for required flags.
HSPC0x00006006 | The volume is being referenced by one or more nodes. | Unmount the volume before starting this task.
HSPC0x00007003 | The volume could not be found. | Check if the volume name exists.
HSPC0x00007004 | The volume already exists. | Check the volume name.
HSPC0x00007005 | The volume is being created, deleted, mounted, or unmounted. | Wait until the operation completes.
HSPC0x00008003 | The specified persistent volume cannot be removed. | Specify a persistent volume that can be removed.
HSPC0x00008004 | The status of the specified persistent volume is invalid. | Contact Hitachi Vantara support.
HSPC0x00008005 | Annotations could not be extracted from the persistent volume claim. | Contact Hitachi Vantara support.
HSPC0x00008006 | The specified persistent volume is provisioned by another provisioner. | Contact Hitachi Vantara support.
HSPC0x00008007 | The specified persistent volume reclaim policy is invalid. | Set the persistent volume reclaim policy to DELETE.
HSPC0x00008008 | A storage serial number is not specified in the config file for StorageClass. | Specify a storage serial number in the config file for StorageClass.
HSPC0x00008009 | A storage IP address is not specified in the config file for StorageClass. | Specify a storage IP address in the config file for StorageClass.
HSPC0x0000800a | A storage user is not specified in the config file for StorageClass. | Specify a storage user in the config file for StorageClass.
HSPC0x0000800b | A storage password is not specified in the config file for StorageClass. | Specify a storage password in the config file for StorageClass.
HSPC0x0000800c | A DP pool ID is not specified in the config file for StorageClass. | Specify a DP pool ID in the config file for StorageClass.
HSPC0x0000800e | A SCSI target number is not specified in the config file for StorageClass. | Specify a SCSI target number in the config file for StorageClass.
HSPC0x00008010 | A storage (volume size) is not specified in the config file for PersistentVolumeClaim. | Specify a storage (volume size) in the config file for PersistentVolumeClaim.
HSPC0x00008011 | An invalid serial number exists in the config file for StorageClass. | Specify an integer for the serial number in the config file for StorageClass.
HSPC0x00008012 | An invalid DP pool ID exists in the config file for StorageClass. | Specify an integer for the DP pool ID in the config file for StorageClass.
HSPC0x00008014 | An invalid SCSI target ID exists in the config file for StorageClass. | Check if the specified SCSI target ID is correct (for example: "CL1-A-1") in the config file for StorageClass.
HSPC0x00008015 | An invalid source volume ID exists in the config file for PersistentVolumeClaim. | Specify an integer for the source volume ID in the config file for PersistentVolumeClaim.
HSPC0x00008016 | An invalid port type was detected. | Check if the port type is either FC or iSCSI.
HSPC0x00008017 | An invalid value exists for the storage (volume size) in the config file for PersistentVolumeClaim. | Check if the value for the storage (volume size) is correct in the config file for PersistentVolumeClaim.
HSPC0x00008018 | The password for the provided storage serial number and user is invalid. | Check the password for the storage serial number and user.
HSPC0x00008019 | The specified LDEV is not defined. | Check whether the specified LDEV is installed in the storage.
HSPC0x0000801a | The number of SCSI targets and IQNs do not match. | Check StorageClass and specify the correct number of IQNs.
HSPC0x0000801b | When using the SVP REST API, an error occurs while obtaining iSCSI target information from storage. | Use only the Hitachi Configuration Manager for iSCSI targets.

Log file rotation

The log file size is checked every 30 minutes. If the log file size exceeds 100 MB, the log is rotated and the rotated file is named in the following format:

<current log file name>-YYYY-MM-DD-hhmmss

YYYY is the year, MM is the month, DD is the day, hh is the hour, mm is the minute, and ss is the second. For example, a rotated file named <current log file name>-2017-09-04-041503 was created on September 4, 2017 at 04:15:03.

Note: Old logs are deleted if the total number of current and old log files is more than 10.
Example log output

2017/11/09 16:32:15.012[12345][0052][ERROR]docker-driver/hrd.(*StorageDevice).createNormalVolume(215) [HSPC0x00001005] invalid size specified : check if the size unit is either g, m or k

Log output details
Item | Description
2017/11/09 | Date: YYYY/MM/DD
16:32:15.012 | Time: HH:MM:SS.mmm
[12345] | Process ID
[0052] | goroutine ID
[ERROR] | Log level: [INFO], [DEBUG], or [ERROR]
docker-driver/hrd.(*StorageDevice).createNormalVolume | Name of the function in the source code where the error occurred (for [ERROR] entries).
(215) | Line number in the source code where the error occurred (for [ERROR] entries).
[HSPC0x00001005] | Error code
invalid size specified | Root cause of the error.
check if the size unit is either g, m or k | Possible solution for the error. The message takes the form <RootCause> : <PossibleSolution>.

Volume failover and data recovery for Docker

When a host is down, persistent volumes are expected to failover to other hosts with containers. However, depending on the cause of the host failure, persistent volumes created by Storage Plug-in for Containers might not failover.

Storage Plug-in for Containers programmatically prevents a volume from being accessed from multiple nodes by using an internal parameter, "ReferenceCount". When a mount/unmount operation is called from Docker, this parameter adjusts accordingly. For example, when ReferenceCount is greater than 0, Storage Plug-in for Containers returns an error for the "mount" command from Docker to prevent multiple node access.

In certain situations, however, the ReferenceCount can be greater than 0 even though the volume is not actually in use. In this case, Storage Plug-in for Containers returns the error, [HSPC0x00005006], and the volume becomes unavailable.

If this scenario occurs at your site, complete the following steps to recover your data.

  1. Identify the volume you want to recover by locating the <volumeName>:
    #docker volume ls
  2. Create a clone:
    #docker volume create --name <newCloneName> -d hspc -o mode=clone -o sourceVolName=<volumeName>
  3. Run a container with the clone:
    #docker run -v <newCloneName>:/container/path <image>
    #docker run –v <newCloneName>:/container/path <image>