
Managing compute nodes and allocating volumes to compute nodes

Installing a compute node

A compute node is a node that runs user applications and issues input/output requests for user data to the storage node. The compute node is connected to the storage system through the compute network.

After the connection is complete, confirm the following requirements and schedule the tasks. Then, register the information about the compute node according to Registering information about compute nodes in this manual.

The following are the requirements for compute nodes.

For details about how to enable ALUA, see Appendix E. ALUA configuration guidelines. For details about other items, see the documentation for your OS.

  • Supporting and enabling ALUA: Asymmetric logical unit access (ALUA) must be supported. In addition, ALUA must be enabled.

  • SCSI timeout setting: The SCSI timeout setting of the compute node must be 120 seconds or longer to prevent I/O operations from stopping if a node failure occurs.1

  • Multipath setting (Linux DM-Multipath): Meet all of the following:

    • The retry count setting (no_path_retry) for the same path is 6 or more.

    • The path polling interval (polling_interval) setting value is 30 or more.

    • The setting value of fast_io_fail_tmo is other than off.2

    • The setting value of dev_loss_tmo is the OS maximum value.3

    • The failback setting is "immediate".4

    • The path_checker setting is readsector0.5

1. The SCSI Timeout setting value is the time until the OS determines that a SCSI command has not responded. When the OS detects no response to the SCSI command, it performs recovery processing for the failed path. If the failed path cannot be recovered by the recovery process, the path is blocked and switched to another path. During the recovery process, I/O operations for the relevant path are stopped. Therefore, in the application layer timeout design, it is necessary to consider the recovery processing time in addition to the SCSI Timeout setting value.

2. The default value for fast_io_fail_tmo is 5. If it is set to off, path switching does not occur until dev_loss_tmo seconds have elapsed after a path failure, so failover does not operate as expected.

3. The maximum value depends on your distribution. See your distribution manual for the maximum value.

4. Set the failback policy ("failback") of the path group to "immediate" (immediate failback). If the failback policy is not set to "immediate", I/O will still be issued to the non-priority path after recovery from a path fault, thus requiring a manual path switch operation.

5. If a value other than readsector0 is set, paths might be wrongly blocked when a failure occurs in a storage node of Virtual Storage Software block. For this reason, make sure that you set readsector0.
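As an illustration only, the DM-Multipath requirements above might be expressed in /etc/multipath.conf roughly as follows. The vendor/product matching strings and the dev_loss_tmo value are assumptions; use the device section and the values appropriate to your environment and distribution.

```
defaults {
    polling_interval    30            # 30 or more
    failback            immediate     # immediate failback (note 4)
}

devices {
    device {
        vendor           "HITACHI"    # assumed matching string; adjust
        product          ".*"         # assumed matching string; adjust
        no_path_retry    6            # 6 or more
        fast_io_fail_tmo 5            # must not be off (note 2)
        dev_loss_tmo     2147483647   # use the OS maximum value (note 3)
        path_checker     readsector0  # required (note 5)
    }
}
```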

Virtual Storage Software block supports the following VMware vStorage APIs for Array Integration (VAAI) primitives.

  • Block Zeroing (Supported): When a virtual machine is created, blocks are formatted (zeroed out). This primitive enables the storage to perform this procedure, speeding up provisioning of virtual machines. When used with the thin provisioning feature, the primitive can free the block areas that are usually reserved when a virtual machine is created, enabling efficient use of disk capacity.

  • Hardware Assisted Locking (Supported): If multiple virtual machines share one VMFS volume, SCSI reservation conflicts might occur and performance might degrade when Storage vMotion runs or a virtual machine is powered on. This primitive provides efficient locking, avoiding such problems.

  • Full Copy (Not supported): Traditionally, VMware ESXi copies data between volumes. This primitive offloads the operation to the storage, greatly reducing the time required for virtual machine cloning and Storage vMotion processing.

Registering information about compute nodes (CLI or REST API)

Register information about compute nodes.

The maximum number of registered compute nodes is 1024 per protection domain. This maximum does not change even if storage nodes are added or removed.

When using the GUI

You can use the GUI to perform the tasks described in this section unless the registration destination is a VPS.

For the procedure and details, see the following in the Hitachi Virtual Storage Software Block GUI Guide.

  • Registering compute nodes

  • When registering compute nodes to a VPS: Scope of the VPS

Before you begin

Required role: Storage

Procedure

  1. When registering a compute node to a VPS, verify the registration-target VPS ID and conditions set for the VPS (upper limit for the number of compute nodes).

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Register information about compute nodes.

Run either of the following commands, specifying the nicknames of the intended compute nodes and the type of the OS running on them.

    Conventions to be followed when setting a nickname

    • Number of characters: 1 to 229

• Characters that can be used: The first character can be a number (0 to 9), an uppercase letter (A to Z), a lowercase letter (a to z), or one of the symbols \ . : @ _. The second and subsequent characters can additionally be a hyphen (-).

    • Each compute node must have a unique nickname.

    REST API: POST /v1/objects/servers

    CLI: server_create

    Verify the job ID which is displayed after the command is run.
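The nickname conventions in the previous step can be checked locally before you run server_create. The regular expression below is my own rendering of the stated rules, not an official validator:

```python
import re

# Rendering of the stated conventions: the first character is a number,
# a letter, or one of \ . : @ _ ; the second and subsequent characters
# also allow '-'. Total length: 1 to 229 characters.
_NICKNAME_RE = re.compile(r'^[0-9A-Za-z\\.:@_][0-9A-Za-z\\.:@_\-]{0,228}$')

def is_valid_nickname(nickname: str) -> bool:
    """Return True if the nickname follows the conventions described above."""
    return bool(_NICKNAME_RE.match(nickname))
```

Uniqueness across compute nodes still has to be checked against the output of server_list; a regex alone cannot verify it.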

  3. Verify the state of the job.

    Run either of the following commands with the job ID specified.

REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.
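The run-a-command-then-verify-the-job pattern in steps 2 and 3 recurs throughout this chapter, so it can be wrapped in a small polling helper. This is a sketch: `fetch_job` stands in for whatever performs GET /v1/objects/jobs/<jobId> (or job_show) and returns the parsed response; the "Succeeded" state comes from this manual, while "Failed" is an assumed terminal failure state.

```python
import time

def wait_for_job(fetch_job, job_id, interval=5, timeout=600):
    """Poll a job until it reaches a terminal state.

    fetch_job: callable that takes a job ID and returns the job as a dict,
    e.g. the parsed body of GET /v1/objects/jobs/<jobId>.
    Returns the final job dict on success; raises on failure or timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job(job_id)
        state = job.get("state")
        if state == "Succeeded":
            return job
        if state == "Failed":  # assumed terminal failure state
            raise RuntimeError(f"job {job_id} failed: {job}")
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not complete within {timeout} seconds")
```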

  4. Obtain a list of compute nodes and verify that the information about the intended compute nodes is registered.

    REST API: GET /v1/objects/servers

    CLI: server_list

  5. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

Obtaining a list of information about compute nodes (CLI or REST API)

The following information can be obtained.

  • id: IDs (uuid) of compute nodes

  • nickname: Nicknames of compute nodes

  • osType: OS types of compute nodes

  • totalCapacity: Total capacity of the volumes on the storage pool allocated to the compute node

  • usedCapacity: Consumed amount of the volumes on the storage pool allocated to the compute node

  • numberOfPaths: Number of registered paths

  • vpsId: ID of the VPS to which compute nodes belong

  • vpsName: Name of the VPS to which compute nodes belong

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Obtain a list of information about compute nodes.

    REST API: GET /v1/objects/servers

    CLI: server_list

Obtaining information about individual compute nodes (CLI or REST API)

The following information can be obtained for the compute node with the ID specified.

  • numberOfVolumes: Number of allocated volumes

  • paths: List of information about registered paths (WWN or iSCSI name of the initiator for the intended compute node, list of IDs of compute ports of targets with which the applicable initiator negotiates)

  • id: ID (uuid) of the intended compute node

  • nickname: Nickname of the intended compute node

  • osType: OS type of the intended compute node

  • totalCapacity: Total capacity of the volumes on the storage pool allocated to the compute node

  • usedCapacity: Consumed amount of the volumes on the storage pool allocated to the compute node

  • numberOfPaths: Number of registered paths

  • vpsId: ID of the VPS to which the intended compute node belongs

  • vpsName: Name of the VPS to which the intended compute node belongs

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Verify the ID of the intended compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  2. Obtain information about the intended compute node.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>

    CLI: server_show

Editing information about compute nodes (CLI or REST API)

Edit information about the intended compute node. You can edit the nickname and OS type of the intended compute node.

Each compute node must have a unique nickname.

Caution

While the information about a compute node is being edited, I/O processing to volumes specified by volume paths on that compute node is temporarily stopped.

Before you begin

  • Required role: Storage

  • When editing compute nodes belonging to a VPS: Scope of the VPS

Procedure

  1. When editing a compute node that belongs to a VPS, verify the VPS ID.

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the ID of the intended compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  3. Edit information about the intended compute node.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: PATCH /v1/objects/servers/<id>

    CLI: server_set

    Verify the job ID which is displayed after the command is run.

  4. Verify the state of the job.

    Run either of the following commands with the job ID specified.

    REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

If the job state is "Succeeded", the job is completed.

  5. Obtain a list of compute nodes and verify that the information about the intended compute node is edited.

    REST API: GET /v1/objects/servers

    CLI: server_list

  6. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

Deleting information about individual compute nodes (CLI or REST API)

Delete information about the intended compute node. Deleting the compute node information also deletes all compute node initiator information, all compute node path information, and all volume path information.

Before you begin

Required role: Storage

Procedure

  1. When deleting a compute node that belongs to a VPS, verify the VPS ID.

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the ID of the intended compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  3. Delete information about the intended compute node.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: DELETE /v1/objects/servers/<id>

    CLI: server_delete

    Verify the job ID which is displayed after the command is run.

  4. Verify the state of the job.

    Run either of the following commands with the job ID specified.

REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  5. Obtain a list of compute nodes and verify that the information about the intended compute node is deleted.

    REST API: GET /v1/objects/servers

    CLI: server_list

  6. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

    If tasks are scheduled according to Scheduling the tasks in this manual, go to the next step. If no tasks are scheduled, this completes the procedure.

  7. From the Start menu, select [Administrative Tools], and then [Task Scheduler].

    The Task Scheduler window appears.

  8. Select [Task Scheduler Library] in the left side of the window, select the task created in Scheduling the tasks, and then click [Delete] from the right-click menu.

    Delete all the tasks created in Scheduling the tasks.

  9. Close the Task Scheduler window.

Registering information about the initiators for compute nodes (CLI or REST API)

On the compute node, determine the initiator name (iSCSI name or WWN) of that compute node, and then register the compute node initiator information in Virtual Storage Software block from the controller node.

You can use the GUI to perform the tasks described in this section unless the registration destination is a VPS. For the procedure and details, see the Hitachi Virtual Storage Software Block GUI Guide.

The maximum number of registered initiators is 4 per compute node.

The procedure differs depending on whether iSCSI connection or FC connection is made between the compute node and the storage node.

In the case of iSCSI connection (CLI or REST API)

Caution

The initiator name (iSCSI name) of the compute node to be registered with Virtual Storage Software block must be unique in the system. Verify this before registration. A compute node whose initiator name (iSCSI name) is the same as that of another compute node cannot see its volumes.

Also, the "iqn" or "eui" at the beginning of the initiator name (iSCSI name) cannot be uppercase. It must be specified in lowercase.
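Both caution items can be checked programmatically before registration. In this sketch, `existing_names` stands for the set of initiator names already registered (for example, gathered with hba_list), and the prefix check encodes the lowercase "iqn"/"eui" rule above:

```python
def check_iscsi_name(name, existing_names):
    """Return a list of problems with a candidate initiator iSCSI name.

    An empty list means no problem was found by these checks.
    """
    problems = []
    # The "iqn" / "eui" prefix must be lowercase.
    if name.split(".", 1)[0] not in ("iqn", "eui"):
        problems.append('name must start with a lowercase "iqn." or "eui." prefix')
    # The name must be unique in the system.
    if name in existing_names:
        problems.append("name is already registered for another compute node")
    return problems
```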

Before you begin

  • Required role: Storage

  • When registering initiators to a VPS: Scope of the VPS

  • The information about the applicable compute node must be registered beforehand.

Procedure

  1. Verify the initiator name (iSCSI name) of the applicable compute node.

    For details, see the documentation for the OS used on the compute node.

  2. Verify that the initiator name (iSCSI name) verified in step 1 is not the same as the initiator name (iSCSI name) of another compute node.

    If they are the same, change the initiator name (iSCSI name).

  3. When registering initiators to a VPS, verify the VPS ID and conditions set for the VPS (upper limit for the number of initiators).

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  4. Verify the ID of the applicable compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  5. Register information about the intended initiator.

Run either of the following commands with the ID of the compute node, connection protocol for the initiator, and iSCSI name of the initiator specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: POST /v1/objects/servers/<id>/hbas

    CLI: hba_create

    Verify the job ID which is displayed after the command is run.

  6. Verify the state of the job.

    Run either of the following commands with the job ID specified.

REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  7. Obtain a list of information about initiators and verify that the information about the intended initiator is registered.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

  8. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

In the case of FC connection (CLI or REST API) (Virtual machine)

Before you begin

  • Required role: Storage

  • When registering initiators to a VPS: Scope of the VPS

  • The information about the applicable compute node must be registered beforehand.

Procedure

  1. Verify the WWN by referring to the documentation and other materials from each HBA vendor.

  2. When registering initiators to a VPS, verify the VPS ID and conditions set for the VPS (upper limit for the number of initiators).

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  3. Verify the ID of the applicable compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  4. Register information about the intended initiator.

    Run either of the following commands with the ID of the compute node, connection protocol for the initiator, and the initiator name (WWN) specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: POST /v1/objects/servers/<id>/hbas

    CLI: hba_create

    Verify the job ID which is displayed after the command is run.

  5. Verify the state of the job.

    Run either of the following commands with the job ID specified.

REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  6. Obtain a list of information about initiators and verify that the information about the intended initiator is registered.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

  7. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

Obtaining a list of information about initiators for compute nodes (CLI or REST API)

The following information can be obtained.

  • id: IDs (uuid) of initiators

  • serverId: IDs (uuid) of compute nodes

  • name: WWN or iSCSI names of initiators

  • protocol: Connection protocols for initiators

  • portIds: List of IDs (uuid) of compute ports of targets with which initiators negotiate

  • vpsId: ID of the VPS to which initiators belong

  • vpsName: Name of the VPS to which initiators belong

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Verify the IDs of compute nodes.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  2. Obtain a list of information about initiators.

    Run either of the following commands with a compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

Obtaining information about individual initiators for compute nodes (CLI or REST API)

The following information can be obtained for the initiator with the specified ID.

  • id: ID (uuid) of the intended initiator

  • serverId: ID (uuid) of the applicable compute node

  • name: WWN or iSCSI name of the intended initiator

  • protocol: Connection protocol for the intended initiator

  • portIds: List of IDs (uuid) of compute ports of targets with which the intended initiator negotiates

  • vpsId: ID of the VPS to which the intended initiator belongs

  • vpsName: Name of the VPS to which the intended initiator belongs

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Verify the ID of the applicable compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  2. Verify the ID of the intended initiator.

    If you use the CLI to specify an initiator by WWN or iSCSI name, check the WWN or iSCSI name of the initiator.

    Run either of the following commands with the compute node ID specified.

If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

  3. Obtain information about the intended initiator.

    Specify the ID of the applicable compute node and ID of the intended initiator, and run either of the following commands.

    If you use the CLI, you can specify a nickname instead of the compute node ID, or a WWN or iSCSI name instead of the initiator ID.

    REST API: GET /v1/objects/servers/<id>/hbas/<hbaId>

    CLI: hba_show

Deleting information about the initiators for compute nodes (CLI or REST API)

Delete the initiator information of a compute node. Deleting information about an initiator also deletes all the related compute node path information.

Before you begin

Required role: Storage

Procedure

  1. When deleting initiators that belong to a VPS, verify the VPS ID.

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the ID of the applicable compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  3. Verify the ID of the intended initiator.

    If you use the CLI to specify an initiator by WWN or iSCSI name, check the WWN or iSCSI name of the initiator.

    Specify the ID of the applicable compute node and run either of the following commands.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

  4. Delete information about the intended initiator.

    Specify the ID of the applicable compute node and ID of the intended initiator, and run either of the following commands.

    If you use the CLI, you can specify a nickname instead of the compute node ID, or a WWN or iSCSI name instead of the initiator ID.

    REST API: DELETE /v1/objects/servers/<id>/hbas/<hbaId>

    CLI: hba_delete

    Verify the job ID which is displayed after the command is run.

  5. Verify the state of the job.

    Run either of the following commands with the job ID specified.

    REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  6. Obtain a list of information about initiators and verify that the information about the intended initiator is deleted.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

  7. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

Registering path information about compute nodes (CLI or REST API)

The maximum number of registered compute node paths is 4096 per compute node.

Depending on how parameters portId and hbaId are specified, the following are possible:

  • Both portId and hbaId are omitted: All the initiators of the intended compute node are allocated to all the compute ports.

  • Only hbaId is specified: The specified initiator is allocated to all the compute ports.

  • Only portId is specified: The specified compute port is allocated to all the initiators of the intended compute node.

  • Both portId and hbaId are specified: The initiator specified by hbaId is allocated to the compute port specified by portId.
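The four combinations above can be summarized as a small expansion rule. The helper below is an illustration (not part of the product CLI): it returns the initiator/compute-port pairs that a given portId/hbaId combination resolves to.

```python
def resolve_paths(hba_id, port_id, all_hba_ids, all_port_ids):
    """Return the (initiator, compute port) pairs one registration covers.

    Passing None for hba_id or port_id models omitting that parameter,
    which (per the rules above) expands to all initiators of the compute
    node or to all compute ports, respectively.
    """
    hbas = [hba_id] if hba_id is not None else list(all_hba_ids)
    ports = [port_id] if port_id is not None else list(all_port_ids)
    return [(h, p) for h in hbas for p in ports]
```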

Caution
  • If you have changed the path information of a compute node, perform a rescan of the storage in that compute node before you start I/O operations.

  • If using VMware ESXi as a compute node, set up a path between the compute node and all compute ports. Unless all paths have been set, some volumes might be invisible from a compute node.

    If not using VMware ESXi as a compute node, it is also recommended to set up a path between the compute node and all compute ports to prevent I/O performance from deteriorating.

Before you begin

  • Required role: Storage

  • The information about the intended compute node and its initiator must be registered beforehand.

  • When registering compute node path information to a VPS: Scope of the VPS

Procedure

  1. When registering compute node path information to a VPS, verify the registration-target VPS ID.

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the ID of the intended compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  3. When you specify the hbaId parameter, verify the ID of the initiator for the intended compute node.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

  4. When you specify the portId parameter, verify the ID of the compute port to be allocated to the intended compute node.

    REST API: GET /v1/objects/ports

    CLI: port_list

  5. Register compute node path information.

    Run either of the following commands with the compute node ID specified.

If you use the CLI, you can specify a nickname instead of the compute node ID. You can also specify the target compute port by its WWN or iSCSI name instead of its ID.

    REST API: POST /v1/objects/servers/<id>/paths

    CLI: path_create

    Verify the job ID which is displayed after the command is run.

  6. Verify the state of the job.

    Run either of the following commands with the job ID specified.

REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  7. Obtain a list of path information and verify that the intended path information is added.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/paths

    CLI: path_list

  8. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

Obtaining a list of path information about compute nodes (CLI or REST API)

The following information can be obtained.

  • id: Path IDs (a character string consisting of the initiator ID of the compute node and the ID of the target compute port, joined by a comma)

  • serverId: IDs (uuid) of compute nodes

  • hbaName: WWNs or iSCSI names of initiators for compute nodes

  • hbaId: IDs (uuid) of initiators for compute nodes

  • portId: IDs (uuid) of compute ports of targets with which initiators negotiate

  • portName: WWNs or iSCSI names of compute ports of targets with which initiators negotiate

  • portNickname: Nicknames of compute ports of targets with which initiators negotiate

  • vpsId: ID of the VPS to which compute node paths belong

  • vpsName: Name of the VPS to which compute node paths belong
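Because a path ID is the initiator ID and the target compute port ID joined by a comma, it can be split back into its components. A trivial helper as a sketch:

```python
def split_path_id(path_id):
    """Split a compute node path ID into its (hbaId, portId) components.

    The path ID is the initiator (HBA) ID and the target compute port ID
    joined by a comma, as described above.
    """
    hba_id, port_id = path_id.split(",", 1)
    return hba_id, port_id
```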

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Verify the IDs of compute nodes.

    REST API: GET /v1/objects/servers

    CLI: server_list

  2. Obtain a list of path information about compute nodes.

    Run either of the following commands with a compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/paths

    CLI: path_list

Obtaining specific path information about compute nodes (CLI or REST API)

The following information can be obtained for the compute node path with the specified ID.

  • id: Path ID (a character string consisting of the initiator ID of the compute node and the ID of the target compute port, joined by a comma)

  • serverId: IDs (uuid) of compute nodes

  • hbaName: WWNs or iSCSI names of initiators for compute nodes

  • hbaId: IDs (uuid) of initiators for compute nodes

  • portId: IDs (uuid) of compute ports of targets with which initiators negotiate

  • portName: WWNs or iSCSI names of compute ports of targets with which initiators negotiate

  • portNickname: Nicknames of compute ports of targets with which initiators negotiate

  • vpsId: ID of the VPS to which the compute node path belongs

  • vpsName: Name of the VPS to which the compute node path belongs

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Verify the ID of the applicable compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  2. Verify the ID of the initiator for the applicable compute node and the ID of the compute port.

    If you use the CLI to specify initiators and compute ports by WWN or iSCSI name, check the WWN or iSCSI name of the initiator and compute port.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/hbas

    CLI: hba_list

  3. Obtain path information about the applicable compute node.

    Run either of the following commands with the ID of the compute node, ID of the initiator, and ID of the compute port for the target operation specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID, a WWN or iSCSI name instead of the initiator ID, and a WWN or iSCSI name instead of the compute port ID.

    REST API: GET /v1/objects/servers/<id>/paths/<hbaId>,<portId>

    CLI: path_show

Deleting path information about compute nodes (CLI or REST API)

Caution
  • When compute nodes are clustered and the volumes recognized by the compute nodes are online, set the intended volume offline and then delete path information.

  • Before you delete path information from a compute node, verify whether the volumes that can be accessed from the compute node are in SCSI-2 Reserve status or SCSI-3 Persistent Reserve status. If the volumes are in either status, release them from the status, and then delete path information.

  • When you change the path information of a compute node, perform a rescan of the storage on that compute node. If deleted path information remains on the compute node, it might cause a malfunction.

Before you begin

  • Required role: Storage

  • When deleting path information of the compute nodes belonging to a VPS: Scope of the VPS

Procedure

  1. When deleting path information about compute nodes that belong to a VPS, verify the VPS ID.

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the ID of the applicable compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  3. Obtain a list of path information about compute nodes.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/paths

    CLI: path_list

  4. Delete path information from the applicable compute node.

    Run either of the following commands with the ID of the compute node, ID of the initiator, and ID of the compute port for the target operation specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID, a WWN or iSCSI name instead of the initiator ID, and a WWN or iSCSI name instead of the compute port ID.

    REST API: DELETE /v1/objects/servers/<id>/paths/<hbaId>,<portId>

    CLI: path_delete

    Verify the job ID which is displayed after the command is run.

  5. Verify the state of the job.

    Run either of the following commands with the job ID specified.

    REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  6. Obtain a list of path information and verify that the intended path information is deleted.

    Run either of the following commands with the compute node ID specified.

    If you use the CLI, you can specify a nickname instead of the compute node ID.

    REST API: GET /v1/objects/servers/<id>/paths

    CLI: path_list

  7. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.
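Steps 4 and 5 above can be sketched as one routine: issue the DELETE, then poll the job until it reaches a terminal state. The `api` callable, the error handling, and the timing values are assumptions; only the endpoints and the "Succeeded" state come from the procedure.

```python
import time

# Sketch of steps 4-5: delete a path, then poll the returned job.
# `api(method, path)` stands in for any HTTP client; the response
# shapes ({"jobId": ...}, {"state": ...}) are assumptions.

def delete_path_and_wait(api, server_id, hba_id, port_id,
                         interval=1.0, timeout=60.0):
    job = api("DELETE",
              f"/v1/objects/servers/{server_id}/paths/{hba_id},{port_id}")
    job_id = job["jobId"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = api("GET", f"/v1/objects/jobs/{job_id}")["state"]
        if state == "Succeeded":
            return job_id
        if state == "Failed":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish in {timeout}s")
```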

Allocating volumes to compute nodes (CLI or REST API)

Set paths (volume paths) between volumes and compute nodes.

The maximum number of volume paths that can be registered is as follows: 8,192 per compute node; 2,048 per storage controller; and 65,536 per storage cluster.

The following two combinations are possible for specifying parameters.

  • Combination of volumeId, serverId, and lun: Allocates a volume specified by volumeId to a compute node specified by serverId (lun is optional).

  • Combination of volumeIds, serverIds, and startLun: Allocates all the volumes specified with volumeIds to all the compute nodes specified with serverIds (startLun can be omitted).
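Expressed as request bodies, the two combinations might look as follows. The field names follow the text above; the concrete IDs are placeholders.

```python
# Single allocation: one volume to one compute node (lun is optional).
single = {"volumeId": "vol-01", "serverId": "srv-01", "lun": 5}

# Bulk allocation: every listed volume to every listed compute node
# (startLun can be omitted).
bulk = {
    "volumeIds": ["vol-01", "vol-02"],
    "serverIds": ["srv-01", "srv-02"],
    "startLun": 0,
}
```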

Caution
  • When using VMware ESXi as a compute node, set up paths between the compute node and all compute ports when registering path information for the compute node. Unless all paths have been set, some volumes might be invisible from the compute node.

    Even when not using VMware ESXi as a compute node, we recommend that you set up paths between the compute node and all compute ports to prevent deterioration in I/O performance.

    For how to set up a path, see Registering path information about compute nodes.

  • If you have changed the connection information between a volume and compute node, perform a rescan of the storage in that compute node before you start I/O operations.

  • If you set a volume path and omit lun or startLun, the smallest unused LUN is allocated automatically. However, if multiple volume paths are set at the same time without specifying lun, or if volume paths are set for multiple volumes, LUNs might not be assigned in the order in which the paths are set. To assign a specific LUN, set the volume path with lun specified.

  • If some volumes cannot be recognized by the OS, the system might behave as follows. Resolve the state where volumes cannot be recognized, and then perform a rescan of the storage on the compute node.

    • During recognition of LUNs in the order from the smallest, if an unrecognizable LUN exists, LUN recognition stops.

    • When LUN=0 cannot be recognized, LUN recognition stops.

  • Processing performance when registering volume paths

    The processing time varies depending on the number of volume paths registered on the applicable compute node.

    • When registering the first volume path, the processing completes in one or two seconds.

    • For every 1,000 registered volume paths, the processing time for registering one volume path will increase by 1 to 2 seconds.

    • When registering the 8,192th volume path (upper limit), the processing takes approximately 15 seconds.

  • If the following occurs, volumes might not be recognized from the host.

    • Capacity depletion of a storage controller

    If volumes cannot be recognized by the host, verify whether event log KARS06003-E has been issued. If it has, resolve the problem with the storage controller according to the indicated action.
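The automatic LUN selection described in the caution above (the smallest unused LUN is assigned when lun is omitted) can be modeled locally as follows. This is an illustration only; the storage cluster performs the real assignment.

```python
# Local model of automatic LUN selection: return the smallest
# non-negative LUN number that is not already in use.

def next_free_lun(used_luns):
    used = set(used_luns)
    lun = 0
    while lun in used:
        lun += 1
    return lun

print(next_free_lun([0, 1, 3]))  # 2: the smallest gap in the used set
```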

Before you begin

  • Required role: Storage

  • The intended volume must be created and the information about the intended compute node must be registered beforehand.

  • When registering volume paths to a VPS: Scope of the VPS

Procedure

  1. When registering a volume path to a VPS, verify the registration-target VPS ID and conditions set for the VPS (upper limit for the number of volume paths).

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the ID of the volume to be allocated to the compute node.

    If you use the CLI to specify a volume by name, check the name of the volume.

    REST API: GET /v1/objects/volumes

    CLI: volume_list

  3. Verify the ID of the compute node to which the volume is to be allocated.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  4. Allocate the volume to the compute node.

    REST API: POST /v1/objects/volume-server-connections

    CLI: volume_server_connection_create

    Verify the job ID which is displayed after the command is run.

  5. Verify the state of the job.

    Run either of the following commands with the job ID specified.

    REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  6. Obtain a list of information about allocation of volumes to compute nodes, and verify that the intended volume is allocated to the intended compute node.

    REST API: GET /v1/objects/volume-server-connections

    CLI: volume_server_connection_list

  7. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.
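Step 6 of the procedure above can be sketched as a membership check over the connection list. The list-of-dicts response shape is an assumption; the volumeId and serverId field names match those documented for volume-server-connections in this section.

```python
# Sketch of step 6: confirm the intended (volume, compute node) pair
# appears in the list returned by GET /v1/objects/volume-server-connections.
# The response structure is assumed for illustration.

def is_allocated(connections, volume_id, server_id):
    return any(c["volumeId"] == volume_id and c["serverId"] == server_id
               for c in connections)

connections = [{"volumeId": "vol-01", "serverId": "srv-01", "lun": 0}]
print(is_allocated(connections, "vol-01", "srv-01"))  # True
```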

Obtaining a list of information about allocation of volumes to compute nodes (CLI or REST API)

The following information can be obtained.

  • id: IDs of volume paths (each ID is a string consisting of a volume ID and a compute node ID joined by a comma (,))

  • serverId: IDs (uuid) of compute nodes

  • volumeId: IDs (uuid) of volumes

  • lun: LUN

  • vpsId: ID of the VPS to which volume paths belong

  • vpsName: Name of the VPS to which volume paths belong
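Because the id field is the volume ID and compute node ID joined by a comma, it can be split back into its parts with a hypothetical helper such as:

```python
# Hypothetical helper: split a volume-server-connection ID of the form
# "<volumeId>,<serverId>" back into its two components.

def split_connection_id(conn_id: str):
    volume_id, server_id = conn_id.split(",", 1)
    return volume_id, server_id

print(split_connection_id("vol-01,srv-01"))
```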

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Verify the IDs of volumes.

    REST API: GET /v1/objects/volumes

    CLI: volume_list

  2. Verify the IDs of compute nodes.

    REST API: GET /v1/objects/servers

    CLI: server_list

  3. Obtain a list of information about allocation of volumes to compute nodes.

    Run either of the following commands with the volume ID and compute node ID specified as query parameters.

    REST API: GET /v1/objects/volume-server-connections

    CLI: volume_server_connection_list
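Step 3 passes the volume ID and compute node ID as query parameters. A sketch of building that request URL; the parameter names volumeId and serverId are assumptions based on the field names used in this section.

```python
from urllib.parse import urlencode

# Build the list endpoint, optionally filtered by volume and/or
# compute node. Parameter names are illustrative assumptions.

def connection_list_url(volume_id=None, server_id=None):
    params = {}
    if volume_id:
        params["volumeId"] = volume_id
    if server_id:
        params["serverId"] = server_id
    base = "/v1/objects/volume-server-connections"
    return f"{base}?{urlencode(params)}" if params else base
```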

Obtaining information about allocation of individual volumes to individual compute nodes (CLI or REST API)

Obtain the information about allocation of volumes to compute nodes with the specified volume ID and compute node ID.

  • id: ID of the volume path (a string consisting of a volume ID and a compute node ID joined by a comma (,))

  • serverId: ID (uuid) of the compute node

  • volumeId: ID (uuid) of the volume

  • lun: LUN

  • vpsId: ID of a virtual private storage (VPS) to which volume paths belong

  • vpsName: Name of a virtual private storage (VPS) to which volume paths belong

Before you begin

Required role: Security, Storage, Monitor, Service, or Resource

Procedure

  1. Verify the ID of the intended volume.

    If you use the CLI to specify a volume by name, check the name of the volume.

    REST API: GET /v1/objects/volumes

    CLI: volume_list

  2. Verify the ID of the intended compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  3. Obtain information about allocation of the intended volume to the intended compute node.

    Specify the ID of the intended volume and ID of the intended compute node, and run either of the following commands.

    If you use the CLI, you can specify a name instead of the ID of the volume, and a nickname instead of the ID of the compute node.

    REST API: GET /v1/objects/volume-server-connections/<volumeId>,<serverId>

    CLI: volume_server_connection_show

Canceling allocation of volumes to compute nodes (CLI or REST API)

Disconnect the volume from the compute node by removing the path (volume path) between them. Before you run the command, verify that no I/O operation is being performed between the intended compute node and the intended volume.

Caution
  • When compute nodes are clustered and the volumes recognized by the compute nodes are online, set the intended volume offline and then cancel allocation of the volume.

  • Before you cancel allocation of a volume to a compute node, verify whether the volumes that can be accessed from the compute node are in SCSI-2 Reserve status or SCSI-3 Persistent Reserve status. If the volumes are in either status, release them from the status, and then cancel allocation of the intended volume.

  • When you change the path information of a compute node, perform a rescan of the storage on that compute node. If path information that has already been deleted remains on the compute node, it might cause a malfunction.

Before you begin

  • Required role: Storage

  • When removing volume paths belonging to a VPS: Scope of the VPS

Procedure

  1. When removing volume paths that belong to a VPS, verify the VPS ID.

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the ID of the intended volume.

    If you use the CLI to specify a volume by name, check the name of the volume.

    REST API: GET /v1/objects/volumes

    CLI: volume_list

  3. Verify the ID of the intended compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  4. Obtain a list of information about allocation of volumes to compute nodes.

    Run either of the following commands with the volume ID and compute node ID specified as query parameters.

    If you use the CLI, you can specify a name instead of the ID of the volume, and a nickname instead of the ID of the compute node.

    REST API: GET /v1/objects/volume-server-connections

    CLI: volume_server_connection_list

  5. Cancel allocation of the volume to the compute node.

    Specify the ID of the intended volume and ID of the intended compute node, and run either of the following commands.

    If you use the CLI, you can specify a name instead of the ID of the volume, and a nickname instead of the ID of the compute node.

    REST API: DELETE /v1/objects/volume-server-connections/<volumeId>,<serverId>

    CLI: volume_server_connection_delete

    Verify the job ID which is displayed after the command is run.

  6. Verify the state of the job.

    Run either of the following commands with the job ID specified.

    REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  7. Obtain a list of information about allocation of volumes to compute nodes and verify that the intended allocation is canceled.

    REST API: GET /v1/objects/volume-server-connections

    CLI: volume_server_connection_list

  8. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.

Releasing multiple connections between the volumes and compute nodes (CLI or REST API)

Release the connections by removing the paths (volume paths) between all specified volumes and all specified compute nodes. Before you execute the operation, verify that no I/O operation is being performed between the intended compute nodes and the intended volumes.

Caution
  • When compute nodes are clustered and the volumes recognized by the compute nodes are online, set the intended volume offline, and then cancel allocation of the volume.

  • Before you cancel allocation of a volume to a compute node, verify whether the volumes that can be accessed from the compute node are in SCSI-2 Reserve status or SCSI-3 Persistent Reserve status. If the volumes are in either status, release them from the status, and then cancel allocation of the intended volume.

  • When you change the path information of a compute node, perform a rescan of the storage on that compute node. If path information that has already been deleted remains on the compute node, it might cause a malfunction.

Before you begin

  • Required role: Storage

  • When removing volume paths belonging to a VPS: Scope of the VPS

Procedure

  1. When removing volume paths that belong to a VPS, verify the VPS ID.

    If you want to specify a VPS by its name in the CLI, verify the VPS name.

    REST API: GET /v1/objects/virtual-private-storages

    CLI: vps_list

  2. Verify the IDs of volumes.

    If you use the CLI to specify a volume by name, check the name of the volume.

    REST API: GET /v1/objects/volumes

    CLI: volume_list

  3. Verify the ID of the intended compute node.

    If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.

    REST API: GET /v1/objects/servers

    CLI: server_list

  4. Cancel allocation of the volumes to the compute nodes.

    Specify the IDs of the volumes and the IDs of the compute nodes for which you want to cancel allocation, and then run either of the following commands.

    If you use the CLI, you can specify a name instead of the ID of the volume, and a nickname instead of the ID of the compute node.

    REST API: POST /v1/objects/volume-server-connections/actions/release/invoke

    CLI: volume_server_connection_release_connections

    Verify the job ID which is displayed after the command is run.

  5. Verify the state of the job.

    Run either of the following commands with the job ID specified.

    REST API: GET /v1/objects/jobs/<jobId>

    CLI: job_show

    If the job state is "Succeeded", the job is completed.

  6. Obtain a list of information about allocation of volumes to compute nodes and verify that the intended allocation is canceled.

    REST API: GET /v1/objects/volume-server-connections

    CLI: volume_server_connection_list

  7. Back up the configuration information.

    Perform this step by referring to Backing up the configuration information.

    If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.
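A request body for the bulk release in step 4 (POST /v1/objects/volume-server-connections/actions/release/invoke) might look as follows. The volumeIds/serverIds field names mirror the bulk-allocation parameters described earlier in this section and should be treated as assumptions.

```python
# Sketch of the bulk-release request body: remove the volume paths
# between every listed volume and every listed compute node.
# IDs are placeholders.

release_body = {
    "volumeIds": ["vol-01", "vol-02"],
    "serverIds": ["srv-01"],
}
```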

Managing compute nodes

Overview

The following table describes the operations that you can perform for compute nodes.

For details about the procedure flow and prerequisites for operating a compute node, contact customer support.

This manual describes the procedure that can be performed by using the GUI.

Operation: Registering compute nodes

  Window to operate on: Compute Nodes window

  Operation icon:

    For List view: GUID-C7693379-8F7D-48EB-BE90-215912A0B620-low.png

    For Inventory view: GUID-F8EF01CF-A5F4-483D-A5A5-5A201E114537-low.png

  Dialog: Register Compute node

Operation: Editing compute nodes

  Window to operate on: Compute Nodes window, Compute Node detailed information window

  Operation icon: GUID-CC9ADA80-83D0-4F62-924F-03EC4A376BDB-low.png

  Dialog: Edit Compute node

Operation: Deleting compute nodes

  Window to operate on: Compute Nodes window, Compute Node detailed information window

  Operation icon: GUID-9F7B7CBB-407A-4873-8E0E-B636E2D3CB6C-low.png

  Dialog: Delete Compute nodes

Operation: Reconnecting compute nodes and all the compute ports in full mesh

  Window to operate on: Compute Node detailed information window

  Operation icon: GUID-E32D181E-D522-4EBF-B179-2E09F198E662-low.png *

  Dialog: Configure Port Connections

* This icon appears on the Compute Node detailed information window when the applicable compute node is not connected to the compute ports in full mesh.

In this manual, full mesh means that path information has been configured for all combinations of initiators and compute ports of a compute node.

Registering compute nodes (GUI)

Register a compute node, initiator information, and path information about the compute node as follows.

The path of the compute node is configured in full mesh.

Caution

Performing the following procedure enables setting the paths of a compute node in full mesh without physical wiring. If you want to set a compute node path between only a specific initiator and a specific compute port, do so from the REST API or CLI. For details about how to set a path of a compute node by using the REST API or CLI, see Compute node connection management in the Hitachi Virtual Storage Software Block REST API Reference and Hitachi Virtual Storage Software Block CLI Reference.

Before you begin

Required role: Storage

Procedure

  1. Open the Compute Nodes window, and click either of the following icons.

    For List view:

    GUID-C7693379-8F7D-48EB-BE90-215912A0B620-low.png

    For Inventory view:

    GUID-F8EF01CF-A5F4-483D-A5A5-5A201E114537-low.png

    The following dialog appears.

    GUID-87A2279F-E3A1-41EE-A21B-8E4061DA91FE-low.png
  2. Enter each of the following parameters:

    • COMPUTE NODE NAME: The nickname of the compute node

    • OS TYPE: The OS type of the intended compute node

    • WWNS:

      (Virtual machine) WWN in the case of FC connection

      (Bare metal) Not used. Do not enter this parameter.

    • ISCSI INITIATOR NAMES: iSCSI name in the case of iSCSI connection

    Clicking + Add WWN or + Add iSCSI Initiator Name displays an additional input field. To the right of the input field, there is an x icon. To remove the entry field, click the x icon.

  3. Click Submit.

    If you want to continue registering other compute nodes, click +Submit and add another compute node.

  4. When the following "Completed" message is displayed, processing is completed.

    • Successfully configured port connections.

Editing compute nodes (GUI)

Edit information about the intended compute node as follows.

Before you begin

  • Required role: Storage

Procedure

  1. In the Compute Nodes window or the Compute Node detailed information window, edit the information by using one of the following methods:

    GUID-CC9ADA80-83D0-4F62-924F-03EC4A376BDB-low.png

    • In the Compute Nodes window, select the editing-target compute node (one node), and then click the preceding icon shown to the right of Select All.

    • Change the view mode of the Compute Nodes window to Inventory view, and then click the preceding icon shown for the editing-target compute node.

    • In the Compute Node detailed information window for the editing-target compute node, click the preceding icon.

    The following dialog appears.

    GUID-E4FF7393-9F8C-448A-BB28-F19639A8E025-low.png
  2. You can edit the following parameters:

    • COMPUTE NODE NAME: The nickname of the compute node

    • OS TYPE: The OS type of the intended compute node

    • WWNS:

      (Virtual machine) WWN in the case of FC connection

      (Bare metal) Not used. Do not enter this parameter.

    • ISCSI INITIATOR NAMES: The iSCSI name in the case of iSCSI connection

    Clicking + Add WWN or + Add iSCSI Initiator Name displays an additional input field. To the right of the input field, there is an x icon. To remove the entry field, click the x icon.

  3. Click Submit.

  4. When one of the following "Completed" messages is displayed, processing is completed.

    • When you modified COMPUTE NODE NAME or OS TYPE: Successfully edited compute node info. (Compute node Name: XXX)

    • When you added or modified the HBA information: Successfully configured port connections.

    • When you only deleted the HBA information: Successfully deleted initiator of compute node.

Deleting compute nodes (GUI)

Delete information about the intended compute node as follows. Deleting the compute node information also deletes all of the compute node initiator information and all of the compute node path information.

Before you begin

  • Required role: Storage

Procedure

  1. In the Compute Nodes window or the Compute Node detailed information window, delete the compute node by using one of the following methods:

    GUID-9F7B7CBB-407A-4873-8E0E-B636E2D3CB6C-low.png

    • In the Compute Nodes window, select the deletion-target compute nodes (1 to 25 nodes), and then click the preceding icon shown to the right of Select All.

    • Change the view mode of the Compute Nodes window to Inventory view, and then click the preceding icon shown for each deletion-target compute node.

    • In the Compute Node detailed information window for each deletion-target compute node, click the preceding icon.

    The following dialog appears.

    GUID-8DCDA4F2-8724-4C36-B81B-DBA9D9F9AFB8-low.png
  2. Click Submit.

  3. When the following "Completed" message is displayed, processing is completed.

    • Successfully deleted compute nodes.

Reconnecting compute nodes and all the compute ports in full mesh (GUI)

Reconnect compute nodes and all the compute ports in full mesh as follows.

A compute node that is not in a full-mesh connection is marked as (not fullmeshed) in the PORT CONNECTIONS item on the Compute Node detailed information window. In this case, you can perform the following operation to reconnect it:

Caution

Performing the following procedure enables setting the paths of a compute node in full mesh without physical wiring. If you want to set a compute node path between only a specific initiator and a specific compute port, do so from the REST API or CLI. For details about how to set a path of a compute node by using the REST API or CLI, see Compute node connection management in the Hitachi Virtual Storage Software Block REST API Reference and Hitachi Virtual Storage Software Block CLI Reference.

Before you begin

  • Required role: Storage

Procedure

  1. In the Compute Node detailed information window, click the following icon.

    GUID-E32D181E-D522-4EBF-B179-2E09F198E662-low.png

    The following dialog appears.

    GUID-A4A738A1-E0FB-4DAC-8985-8FED81D60775-low.png
  2. Click Submit.

  3. When the following "Completed" message is displayed, processing is completed.

    • Successfully configured port connections.

Allocating and canceling allocation of volumes to and from compute nodes

Creating volumes and allocating them to compute nodes (GUI)

Create volumes and allocate them to compute nodes.

Before you begin

  • Required role: Storage

Procedure

  1. In the Compute Nodes window or the Compute Node detailed information window, allocate volumes to compute nodes by using one of the following methods:

    GUID-6A2ADB06-A9F2-4069-AE13-7E96418A1A3A-low.png

    • In the Compute Nodes window, select the allocation-target compute nodes (1 to 100 nodes), click the preceding icon shown to the right of Select All, and then, from the menu that appears, select Create and Attach Volumes.

    • Change the view mode of the Compute Nodes window to Inventory view, click the preceding icon shown for each allocation-target compute node, and then, from the menu that appears, select Create and Attach Volumes.

    • In the Compute Node detailed information window for each allocation-target node, click the preceding icon, and then, from the menu that appears, select Create and Attach Volumes.

    The following dialog appears.

    GUID-F8A560DE-0CA6-4B2C-A44D-447F674BA4F1-low.png
  2. Enter each of the following parameters:

    • CAPACITY: Logical capacity of the volume and its unit
    • NUMBER OF VOLUMES: The number of volumes to be created
    • VOLUME NAME: The name of the volume
    • SUFFIX START NUMBER: The first sequential number suffixed to the volume name or nickname when multiple volumes are created with the same name or nickname. If omitted, no number is added.
    • NUMBER OF DIGITS: The number of digits of a number suffixed to a name or nickname
    • VOLUME NICKNAME: The nickname of the volume. If omitted, VOLUME NAME is used.
    • START LUN: LUN start number

      If you specify the LUN start number, unused LUN numbers are allocated in ascending order from the specified start number. If omitted, unused LUNs are allocated automatically in ascending order.

  3. Click Submit.

  4. When the following "Completed" message is displayed, processing is completed.

    • Successfully attached volumes to compute nodes.

Allocating volumes to compute nodes (Volume) (GUI)

Set paths (volume paths) between volumes and compute nodes.

The following describes the steps to be performed from the Volumes window or Volume detailed information window. For the steps to be taken from the Compute Nodes window or Compute Node detailed information window, see Allocating volumes to compute nodes (Compute Node).

Before you begin

  • Required role: Storage

Procedure

  1. In the Volumes window or the Volume detailed information window, allocate volumes by using one of the following methods:

    GUID-38655E28-8964-4CA5-A83D-9F1E4576666E-low.png

    • In the Volumes window, select the allocation-target volumes (1 to 1,000 volumes), and then click the preceding icon shown to the right of Select All.

    • Change the view mode of the Volumes window to Inventory view, and then click the preceding icon shown for each allocation-target volume.

    • In the Volume detailed information window for each allocation-target volume, click the preceding icon.

    The following dialog appears.

    GUID-5B70FDDD-BAA6-42F1-A2C8-26E067819BD2-low.png
  2. Select one or more compute nodes (100 nodes at maximum) to be allocated, enter the following as required, and then click Submit.

    • START LUN: LUN start number

      If you specify the LUN start number, unused LUN numbers are allocated in ascending order from the specified start number. If omitted, unused LUNs are allocated automatically in ascending order.

  3. When the following "Completed" message is displayed, processing is completed.

    • Successfully attached volumes to compute nodes.

Allocating volumes to compute nodes (Compute Node) (GUI)

Set paths (volume paths) between volumes and compute nodes.

The following describes the steps to be performed from the Compute Nodes window or Compute Node detailed information window. For the steps to be taken from the Volumes window or Volume detailed information window, see Allocating volumes to compute nodes (Volume).

Before you begin

  • Required role: Storage

Procedure

  1. In the Compute Nodes window or the Compute Node detailed information window, allocate volumes to compute nodes by using one of the following methods:

    GUID-6A2ADB06-A9F2-4069-AE13-7E96418A1A3A-low.png

    • In the Compute Nodes window, select the allocation-target compute nodes (1 to 100 nodes), click the preceding icon shown to the right of Select All, and then, from the menu that appears, select Attach Volumes.

    • Change the view mode of the Compute Nodes window to Inventory view, click the preceding icon shown for each allocation-target compute node, and then, from the menu that appears, select Attach Volumes.

    • In the Compute Node detailed information window for each allocation-target node, click the preceding icon, and then, from the menu that appears, select Attach Volumes.

    The following dialog appears.

    GUID-416F1676-2853-46EE-A5BA-801F1B9D7BE3-low.png
  2. Select one or more volumes (1,000 volumes at maximum) to be allocated, enter the following as required, and then click Submit.

    • START LUN: LUN start number

      If you specify the LUN start number, unused LUN numbers are allocated in ascending order from the specified start number. If omitted, unused LUNs are allocated automatically in ascending order.

  3. When the following "Completed" message is displayed, processing is completed.

    • Successfully attached volumes to compute nodes.

Canceling allocation of volumes to compute nodes (GUI)

Cancel allocation of the volumes to the compute nodes as follows. Before you start operation, verify that no I/O operation is being performed between the intended compute node and the intended volume.

Before you begin

  • Required role: Storage

Procedure

  1. In the Compute Nodes window or the Compute Node detailed information window, cancel allocation by using one of the following methods:

    GUID-CD711DB6-F2C7-410E-AA5A-953383A0BF20-low.png

    • In the Compute Nodes window, select cancellation-target compute nodes (1 to 100 nodes), and then click the preceding icon shown to the right of Select All.

      Note

      In the Compute Nodes window, if you select multiple cancellation-target compute nodes and click the icon, a timeout error message (shown below) might be displayed. However, because this is due to a transient problem that does not affect cancellation of allocation between volumes and compute nodes, you do not need to take any action.

      GUID-452A45E3-AFED-4CC6-98FB-52072DEF10C7-low.png
    • Change the view mode of the Compute Nodes window to Inventory view, and then click the preceding icon shown for each cancellation-target compute node.

    • In the Compute Node detailed information window for each cancellation-target node, select the volumes (1 to 1,000 volumes), and then click the preceding icon shown to the right of Select All.

    • In the Compute Node detailed information window for each cancellation-target node, in the list of volumes displayed in Inventory view, click the preceding icon shown for each cancellation-target volume.

    The following dialog appears.

    If you operate from the Compute Nodes window:

    GUID-F03F3560-F4B4-472B-919D-D17A5F2C7D04-low.png

    If you operate from the Compute Node detailed information window:

    GUID-54B10587-148E-406C-9DB8-4ECA9F71745B-low.png
  2. If you operate from the Compute Nodes window, select volumes (1,000 volumes at maximum), and then click Submit.

    If you operate from the Compute Node detailed window, click Submit.

  3. When the following "Completed" message is displayed, processing is completed.

    • Successfully detached volumes.

 
