Overview of global-active device
An overview of the global-active device feature helps you to understand its components and capabilities.
About global-active device
Global-active device (GAD) enables you to create and maintain synchronous, remote copies of data volumes.
A virtual storage machine is configured in the primary and secondary storage systems using the actual information of the primary storage system, and the global-active device primary and secondary volumes are assigned the same virtual LDEV number in the virtual storage machine. This enables the host to see the pair volumes as a single volume on a single storage system, and both volumes receive the same data from the host.
A quorum disk, which can be located in a third, external storage system or in an iSCSI-attached host server, is used to monitor the GAD pair volumes. The quorum disk acts as a heartbeat for the GAD pair, with both storage systems accessing the quorum disk to check on each other. If communication between the storage systems fails, both systems run a series of checks against the quorum disk to identify the problem and determine which system can continue to receive host updates.
Alternate path software on the host runs in the Active/Active configuration. While this configuration works well at campus distances, at metro distances Hitachi Dynamic Link Manager is required to support preferred/nonpreferred paths and ensure that the shortest path is used.
If the host cannot access the primary volume (P-VOL) or secondary volume (S-VOL), host I/O is redirected by the alternate path software to the appropriate volume without any impact to the host applications.
Global-active device provides the following benefits:
- Continuous server I/O when a failure prevents access to a data volume
- Server failover and failback without storage impact
- Load balancing through migration of virtual storage machines without storage impact

Global-active device solutions
Fault-tolerant storage infrastructure
If a failure prevents host access to a volume in a GAD pair, read and write I/O can continue to the pair volume in the other storage system to provide continuous server I/O to the data volume.

Failover clustering without storage impact
In a server-cluster configuration with global-active device, the cluster software is used to perform server failover and failback operations, and the global-active device pairs do not need to be suspended or resynchronized.

Server load balancing without storage impact
When the I/O load on a virtual storage machine at the primary site increases, global-active device enables you to migrate the virtual machine to the paired server without performing any operations on the storage systems.

As shown in this example, the server virtualization function is used to migrate virtual machine VM3 from the primary-site server to the secondary-site server. Because the GAD primary and secondary volumes contain the same data, you do not need to migrate any data between the storage systems.

System configurations for GAD solutions
You have the option of implementing three different system configurations: a single-server configuration, a server-cluster configuration, and a cross-path configuration. The system configuration depends on the GAD solution that you are implementing.
The following table lists the GAD solutions and specifies the system configuration for each solution.
When GAD pairs in the Mirrored status coexist in a consistency group with GAD pairs in the Mirroring status, or with GAD pairs in the Suspended status, server access to the GAD volumes depends on the system configuration: with the cross-path configuration, which enables the servers at both the primary and secondary sites to access the volumes at both sites, the servers can continue to access the GAD volumes even in this situation. If you use a configuration other than the cross-path configuration, the servers cannot access the GAD volumes.
| GAD solution | Alternate path software | Cluster software | System configuration |
| --- | --- | --- | --- |
| Continuous server I/O (if a failure occurs in a storage system) | Required | Not required | Single-server configuration |
| Failover and failback on the servers without using the storage systems | Not required | Required | Server-cluster configuration |
| Migration of a virtual machine of a server without using the storage systems | Not required | Required | Server-cluster configuration |
| Both continuous server I/O (if a failure occurs in a storage system) and failover and failback on the servers without using the storage systems | Required | Required | Cross-path configuration |
In a single-server configuration, the primary and secondary storage systems connect to the host server at the primary site. If a failure occurs in one storage system, you can use alternate path software to switch server I/O to the other site.

In a server-cluster configuration, servers are located at both the primary and secondary sites. The primary storage system connects to the primary-site server, and the secondary storage system connects to the secondary-site server. The cluster software is used for failover and failback. When I/O on the virtual machine of one server increases, you can migrate the virtual machine to the paired server to balance the load.

In a cross-path configuration, primary-site and secondary-site servers are connected to both the primary and secondary storage systems. If a failure occurs in one storage system, alternate path software is used to switch server I/O to the paired site. The cluster software is used for failover and failback.

Global-active device and global storage virtualization
GAD operations are based on the global storage virtualization function. When virtual information is sent to the server in response to the SCSI Inquiry command, the server views multiple storage systems as multiple paths to a single storage system.
The global storage virtualization function is enabled when you install the license for Resource Partition Manager, which is provided with the Storage Virtualization Operating System (SVOS). For more information about Resource Partition Manager, see the Provisioning Guide for the storage system.
About the virtual ID
The server is able to identify multiple storage systems as a single virtual storage machine when the resources listed below are virtualized and the virtual identification (virtual ID) information is set. You can set virtual IDs on resource groups and on individual volumes, as described in the following table.
| Virtual information required by the server | Resource on which virtual IDs are set |
| --- | --- |
| Serial number | Resource group |
| Product | Resource group |
| LDEV ID* | Volume |
| Emulation type | Volume |
| Number of concatenated LUs of LUN Expansion (LUSE) | Volume |
| SSID | Volume |

\* A volume whose virtual LDEV ID has been deleted cannot accept I/O from a server. The virtual LDEV ID is temporarily deleted on a volume to be used as a GAD S-VOL because, when the pair is created, the P-VOL's virtual LDEV ID is set as the S-VOL's virtual LDEV ID.
When using global storage virtualization you can set the following:
- The same serial number or product as the virtual ID for more than one resource group
- Up to 15 virtual IDs for resource groups in a single storage system (VSP 5000 series)
- Up to 15 virtual IDs for resource groups in a single storage system (VSP E990)
- Up to seven types of virtual IDs for resource groups in a single storage system (VSP G/F350, G/F370, G/F700, G/F900)
- Virtual IDs for a maximum of 1,023 resource groups (excluding resource group #0)
- Virtual IDs for a maximum of 65,279 volumes
For instructions on setting virtual IDs, see the Command Control Interface Command Reference.
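For reference, virtual IDs are set with raidcom commands. The following is a minimal, hedged sketch only; the resource group name, serial number, model keyword, LDEV IDs, and instance number are hypothetical placeholders, and the exact syntax should be confirmed in the Command Control Interface Command Reference.

```
# Create a resource group (virtual storage machine) whose virtual ID reports
# the primary system's serial number and model (all values are placeholders).
raidcom add resource -resource_name HAGroup1 -virtual_type 411111 R900 -IH1

# Add an LDEV to that resource group.
raidcom add resource -resource_name HAGroup1 -ldev_id 0x4444 -IH1

# Set a virtual LDEV ID on the volume. If the volume already has a virtual
# LDEV ID, it must be removed first with raidcom unmap resource.
raidcom map resource -ldev_id 0x4444 -virtual_ldev_id 0x1111 -IH1
```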
GAD status monitoring
GAD operations are managed based on the following information:
- Pair status
- I/O mode of the P-VOL and S-VOL
- GAD status, which is a combination of the pair status and I/O mode
GAD status
It is important to understand what each GAD status means and what it tells you about the GAD pair.
The following table lists and describes the GAD statuses.
| GAD status | Description | Data redundancy | Updated volume | Volume with latest data |
| --- | --- | --- | --- | --- |
| Simplex | The volume is not a pair volume. | No | Not applicable | Not applicable |
| Mirroring | The pair is changing to Mirrored status. This status occurs when you create or resynchronize a GAD pair. | No | P-VOL and S-VOL | P-VOL |
| Mirrored | The pair is operating normally. | Yes | P-VOL and S-VOL | P-VOL and S-VOL |
| Quorum disk blocked or no quorum disk volume | The quorum disk is blocked, but the data is mirrored. Alternatively, the data is mirrored when no volume is set for the quorum disk. | Yes | P-VOL and S-VOL | P-VOL and S-VOL |
| Suspended | The pair is suspended. I/O from the server is sent to the volume with the latest data. When a failure occurs or the pair is suspended, the status changes to Suspended. The status changes to Suspended after the time specified for Read Response Guaranteed Time When Quorum Monitoring Stopped elapses. | No | P-VOL or S-VOL | P-VOL or S-VOL |
| Blocked | I/O is not accepted by either pair volume. If more than one failure occurs at the same time, the GAD status changes to Blocked. | No | None | P-VOL and S-VOL |
GAD status transitions
The GAD status changes depending on the pair operation and failure.
The following illustration shows the GAD pair status transitions.

If you resynchronize a pair specifying the P-VOL, I/O continues on the P-VOL. If you resynchronize a pair specifying the S-VOL, the primary and secondary roles of the volumes are swapped, and I/O continues on the new P-VOL.
If you suspend a pair specifying the P-VOL, I/O continues to the P-VOL. If you suspend a pair specifying the S-VOL, I/O continues to the S-VOL.
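In CCI, which volume the operation is specified against is expressed through the instance you run the command on and, for resynchronization, through the swap options. The following sketch is illustrative only; the group name oraHA and the instance numbers (0 for the primary side, 1 for the secondary side) are hypothetical, and the options should be confirmed in the Command Control Interface Command Reference.

```
# Suspend the pair specifying the P-VOL (I/O continues on the P-VOL).
pairsplit -g oraHA -IH0

# Suspend the pair specifying the S-VOL (I/O continues on the S-VOL).
pairsplit -g oraHA -RS -IH1

# Resynchronize specifying the P-VOL (I/O continues on the P-VOL).
pairresync -g oraHA -IH0

# Resynchronize specifying the S-VOL (swap resync: the S-VOL becomes
# the new P-VOL and I/O continues on the new P-VOL).
pairresync -g oraHA -swaps -IH1
```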
Pair status
You should understand the meaning of the pair status to understand the current state of a global-active device pair.
The following table lists and describes the pair statuses, which indicate the current state of a global-active device pair. As shown in the following table, the pair status terms displayed by the user interfaces are slightly different.
| Pair status (CCI) | Pair status (HDvM - SN) | Description |
| --- | --- | --- |
| SMPL | SMPL | The volume is not paired. |
| COPY | INIT/COPY | The initial copy or pair resynchronization is in progress (including creation of a GAD pair that does not perform data copy). A quorum disk is being prepared. |
| COPY | COPY | The initial copy is in progress; data is being copied from the P-VOL to the S-VOL (including creation of a GAD pair that does not perform data copy). |
| PAIR | PAIR | The pair is synchronized. |
| PSUS | PSUS* | The pair was suspended by the user. This status appears on the P-VOL. |
| PSUE | PSUE* | The pair was suspended due to a failure. |
| SSUS | SSUS* | The pair was suspended by the user, and update of the S-VOL is interrupted. This status appears on the S-VOL. |
| SSWS | SSWS* | The pair was suspended either by the user or due to a failure, and update of the P-VOL is interrupted. This status appears on the S-VOL. |

\* When a GAD pair is suspended, you can view the suspend type on the View Pair Properties window.
GAD suspend types
When a GAD pair is suspended, the suspend type is displayed in the Status field of the View Pair Properties window. The suspend type is not displayed by CCI.
The following table lists and describes the GAD suspend types.
| Suspend type | Volume | Description |
| --- | --- | --- |
| Primary Volume by Operator | P-VOL | The user suspended the pair from the primary storage system. The S-VOL suspend type is "by MCU". |
| Secondary Volume by Operator | P-VOL, S-VOL | The user suspended the pair from the secondary storage system. |
| by MCU | S-VOL | The secondary storage system received a request from the primary storage system to suspend the pair. The P-VOL suspend type is Primary Volume by Operator or Secondary Volume by Operator. |
| by RCU | P-VOL | The primary storage system detected an error condition at the secondary storage system, which caused the primary storage system to suspend the pair. The S-VOL suspend type is Secondary Volume Failure. |
| Secondary Volume Failure | P-VOL, S-VOL | The primary storage system detected an error during communication with the secondary storage system, or an I/O error during update copy. In this case, the S-VOL suspend type is usually Secondary Volume Failure. This suspend type is also used when the number of paths falls below the minimum number of paths setting on the Add Remote Connection window. |
| MCU IMPL | P-VOL, S-VOL | The primary storage system could not find valid control information in its nonvolatile memory during IMPL. This condition occurs only if the primary storage system is without power for more than 48 hours (that is, a power failure and fully discharged backup batteries). |
| Initial Copy Failed | P-VOL, S-VOL | The pair was suspended before the initial copy operation was complete. The data on the S-VOL is not identical to the data on the P-VOL. |
I/O modes
You should understand the I/O actions on the P-VOL and the S-VOL of a GAD pair.
The following table lists and describes the GAD I/O modes. As shown in the following table, the I/O mode terms displayed by the user interfaces are slightly different.
| I/O mode | CCI | HDvM - SN | Read processing | Write processing |
| --- | --- | --- | --- | --- |
| Mirror (RL) | L/M | Mirror (Read Local) | Sends data from the storage system that received the read request to the server. | Writes data to the P-VOL and then to the S-VOL. |
| Local | L/L | Local | Sends data from the storage system that received the read request to the server. | Writes data to the volume on the storage system that received the write request. |
| Block | B/B | Block | Rejected (replies to illegal requests). | Rejected (replies to illegal requests). |
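For reference, both the pair status and the I/O mode can be checked from CCI with the pairdisplay command. The sketch below is illustrative only: the group name and instance number are hypothetical, and the exact option set and output columns depend on the CCI version (see the Command Control Interface Command Reference).

```
# Display pair status and I/O mode for a hypothetical group "oraHA".
pairdisplay -g oraHA -fxce -IH0

# In the extended output, the Status column shows the pair status
# (PAIR, PSUS, PSUE, SSWS, and so on) and the R/W column shows the
# I/O mode (L/M = Mirror (RL), L/L = Local, B/B = Block).
```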
Relationship between GAD status, pair status, and I/O mode
You should understand the relationship between the GAD status, pair status, and I/O mode so that you can correctly interpret the state of your GAD pairs.
The following table lists the GAD statuses and describes the relationship between the GAD status, pair status, and I/O mode. "N" indicates that pair status or I/O mode cannot be identified due to a failure in the storage system.
| GAD status | When to suspend | P-VOL pair status | P-VOL I/O mode | S-VOL pair status | S-VOL I/O mode | Volume that has the latest data |
| --- | --- | --- | --- | --- | --- | --- |
| Simplex | Not applicable | SMPL | Not applicable | SMPL | Not applicable | Not applicable |
| Mirroring | Not applicable | INIT | Mirror(RL) | INIT | Block | P-VOL |
| Mirroring | Not applicable | COPY | Mirror(RL) | COPY | Block | P-VOL |
| Mirrored | Not applicable | PAIR | Mirror(RL) | PAIR | Mirror(RL) | P-VOL and S-VOL |
| Quorum disk blocked or no quorum disk volume | Not applicable | PAIR | Mirror(RL) | PAIR | Mirror(RL) | P-VOL and S-VOL |
| Suspended | Pair operation | PSUS | Local | SSUS | Block | P-VOL |
| Suspended | Failure | PSUE¹ | Local | PSUE | Block | P-VOL |
| Suspended | Failure | PSUE¹ | Local | SMPL | Not applicable | P-VOL |
| Suspended | Failure | PSUE¹ | Local | N | N | P-VOL |
| Suspended | Pair operation | PSUS | Block | SSWS | Local | S-VOL |
| Suspended | Failure | PSUE | Block | SSWS¹ | Local | S-VOL |
| Suspended | Failure | SMPL | Not applicable | SSWS¹ | Local | S-VOL |
| Suspended | Failure | N | N | SSWS¹ | Local | S-VOL |
| Blocked | Not applicable | PSUE | Block | PSUE | Block | P-VOL and S-VOL |
| Blocked | Not applicable | PSUE | Block | N | N | P-VOL and S-VOL |
| Blocked | Not applicable | N | N | PSUE | Block | P-VOL and S-VOL |

Notes: 1. If the server does not issue the write I/O, the pair status might be PAIR, depending on the failure location.
Global-active device and server I/O
I/O requests from the server to a GAD pair volume are managed according to the volume's I/O mode. The GAD status determines the I/O mode of the P-VOL and S-VOL of a pair.
Server I/O (GAD status: Mirrored)
When the GAD status is Mirrored, the I/O mode of the P-VOL and S-VOL is Mirror (RL).
As shown in the following figure, a write request sent to a GAD volume is written to both pair volumes, and then a write-completed response is returned to the host.

Read requests are read from the volume connected to the server and then sent to the server. There is no communication between the primary and secondary storage systems.

Server I/O (GAD status: Mirroring or Quorum disk blocked or no quorum disk volume)
When the GAD status is Mirroring, Quorum disk blocked, or no volume is set for the quorum disk, the I/O mode is Mirror(RL) for both the P-VOL and the S-VOL.
Write requests are written to both pair volumes and then the write-completed response is returned to the server.

Read requests are read by the P-VOL or S-VOL and then sent to the server.

Server I/O when the GAD status is Suspended
When the GAD status is Suspended, the I/O mode differs depending on where the latest data is.
When the GAD status is Suspended and the latest data is on the P-VOL, the I/O mode is as follows:
- P-VOL: Local
- S-VOL: Block
When the latest data is on the S-VOL, the I/O mode is as follows:
- P-VOL: Block
- S-VOL: Local
When the latest data is on the P-VOL, write requests are written to the P-VOL, and then the write-completed response is returned to the host, as shown in the following figure. The S-VOL's I/O mode is Block, so it does not accept I/O from the server, and the P-VOL's I/O mode is Local, so the data written to the P-VOL is not written to the S-VOL.

Read requests are read by the P-VOL and then sent to the host. There is no communication between the primary and secondary storage systems.

Server I/O when the GAD status is Blocked
When the GAD status is Blocked, the I/O mode of the P-VOL and S-VOL is Block. Neither volume accepts read/write processing.
Quorum disk and server I/O
The quorum disk is used to determine the storage system on which server I/O should continue when a path or storage system failure occurs.
The quorum disk is a volume virtualized from an external storage system. The primary and secondary storage systems check the quorum disk for the physical path statuses. Alternatively, a disk in an iSCSI-attached server can be used as a quorum disk if the server is supported by Universal Volume Manager.
When the primary and secondary storage systems cannot communicate, the storage systems take the following actions:

- The primary storage system cannot communicate over the data path and writes this status to the quorum disk.
- When the secondary storage system detects from the quorum disk that a path failure has occurred, it stops accepting read/write.
- The secondary storage system communicates to the quorum disk that it cannot accept read/write.
- When the primary storage system detects that the secondary storage system cannot accept read/write, the primary storage system suspends the pair. Read/write continues to the primary storage system.
If, within five seconds of the communication stoppage, the primary storage system cannot detect from the quorum disk that the secondary storage system cannot accept I/O, the primary storage system suspends the pair and I/O continues on the primary storage system.
If both systems simultaneously write to the quorum disk that communication has stopped, this communication stoppage is considered to be written by the system that receives the first write I/O after the communication stoppage.
In addition, you can create the GAD pair without setting a volume in an external storage system as the quorum disk volume.
For details about the GAD configuration without a volume set for the quorum disk, see GAD pairs without a volume set for the quorum disk.
GAD pairs without a volume set for the quorum disk
For VSP G/F350, G/F370, G/F700, G/F900, and VSP E990, an external storage system is required to use a quorum disk. In the past, this meant that even a GAD pair needed only temporarily to migrate data required an external storage system. A quorum disk on an external storage system is no longer required for data migration using GAD: if you do not set a volume for the quorum disk, you can create GAD pairs without using an external storage system.
In this configuration, I/Os from the server might stop if a failure occurs in a path or a storage system. Therefore, determine whether to set a volume using an external volume for the quorum disk or not according to the requirements of your planned usage. The following figure illustrates the configuration without a volume set for the quorum disk.

The GAD configuration without a volume set for the quorum disk supports the following migration situations (VSP G/F350, G/F370, G/F700, G/F900).
- Using GAD to migrate data from VSP F1500, VSP G1x00 to VSP G/F350, G/F370, G/F700, G/F900
- Using GAD to migrate data from VSP G200, G400, G600, G800, and VSP F400, F600, F800 to VSP G/F350, G/F370, G/F700, G/F900
- Using GAD to migrate data from VSP G/F350, G/F370, G/F700, G/F900 to another VSP G/F350, G/F370, G/F700, G/F900
- Using GAD to migrate data from VSP 5000 series to VSP E990
- Using GAD to migrate data from VSP G200, G400, G600, G800, and VSP F400, F600, F800 to VSP 5000 series
- Using GAD to migrate data from VSP G200, G400, G600, G800, and VSP F400, F600, F800 to VSP E990
- Using GAD to migrate data from VSP G/F350, G/F370, G/F700, G/F900 to VSP E990
- Using GAD to migrate data from VSP E990 to another VSP E990
When you do not set a volume for a quorum disk, you do not need the following components and steps:
- External storage system
- External port between storage systems (A) and (B)
- Path and a switch between storage system (A) and the external storage system
- Path and a switch between storage system (B) and the external storage system
The following describes the differences between the configuration with a volume set for the quorum disk and the configuration without a volume set for the quorum disk.
Operation when a failure occurs

As shown in the following figure and table, if a failure occurs in the primary storage system when an external storage system is not used for the quorum disk, the operation stops. Because this failure can occur, use the configuration without a volume set for the quorum disk only temporarily, for example when migrating data (VSP G/F350, G/F370, G/F700, G/F900, and VSP E990). If the operation must not stop because of a failure in the primary storage system, create a configuration with a volume set for the quorum disk (VSP 5000 series).
The figure shows failure points and the table describes whether the operation stops or not according to the location in which the failure occurred.

| Number in the figure | Failure location | Operation (with volumes set for quorum disks) | Operation (without volumes set for quorum disks) |
| --- | --- | --- | --- |
| 1 | Primary storage system | Continues | Stops |
| 2 | Secondary storage system | Continues | Continues |
| 3 | External storage system | Continues | Not applicable |
| 4 | Primary volume | Continues | Continues |
| 5 | Secondary volume | Continues | Continues |
| 6 | Quorum disk | Continues | Not applicable |
| 7 | Remote path from the primary storage system to the secondary storage system | Continues | Continues |
| 8 | Remote path from the secondary storage system to the primary storage system | Continues | Continues |
| 9 | Path between the primary storage system and the quorum disk | Continues | Not applicable |
| 10 | Path between the secondary storage system and the quorum disk | Continues | Not applicable |
| 11 | When the failures occur at the same time in the following locations: | Continues | Continues |
| 12 | When the failures occur at the same time in the following locations: | Stops | Not applicable |
Cost

A second difference between the configuration with a volume set for the quorum disk and the configuration without one is cost. If you do not set a volume for the quorum disk, you save the preparation cost because you do not need an external storage system or the paths to it. In addition, some steps for configuring the GAD environment are not necessary.
I/O stoppage detected in the counterpart system
When a stoppage is detected within 5 seconds in the counterpart system, the pair volume that will continue to receive read/write after the stoppage is determined based on the pair status.
- When the pair status is PAIR, read/write continues to the volume that wrote the communication stoppage to the quorum disk.
- When the pair status is INIT/COPY, read/write continues to the P-VOL. Read/write to the S-VOL remains stopped.
- When the pair status is PSUS, PSUE, SSWS, or SSUS, read/write continues to the volume whose I/O mode is Local. Read/write is stopped to the volume whose I/O mode is Block.
I/O stoppage not detected in the counterpart system
When a stoppage is not detected within 5 seconds in the counterpart system, the pair volume whose system wrote the communication stoppage to the quorum disk will continue to receive read/write after the stoppage.
Read/write processing depends on the pair status and I/O mode of the volume that did not detect the write as follows:
- When the pair status is PAIR, read/write continues.
- When the pair status is INIT/COPY, read/write continues to the P-VOL. Read/write to the S-VOL remains stopped.
- When the pair status is PSUS, PSUE, SSWS, or SSUS, read/write continues to the volume whose I/O mode is Local. Read/write is stopped to the volume whose I/O mode is Block. In addition, server I/O does not continue to a volume that should have notified the quorum disk that it cannot accept I/O but did not, because either a storage system failure occurred or the quorum disk is no longer accessible.
Server I/Os and data mirroring with blocked quorum disk or without quorum disk volumes
You should understand the server I/Os and data mirroring that occur when a failure occurs on the quorum disk or when no volume is set for the quorum disk.
GAD pairs that meet the following requirements can continue operation using the S-VOL if the P-VOL is blocked:
- The option for setting no LDEVs for the quorum disk is enabled when the quorum disk is created.
- The pair is created using the quorum ID assigned when the quorum disk is created.
If the quorum disk is blocked, GAD pairs can keep the same data in the P-VOL and S-VOL, but the operation stops if the P-VOL is blocked. To continue the operation, you must delete the GAD pair.
Server I/Os for GAD pairs and GAD pair data mirroring are as follows:
- When the quorum disk is blocked and the pair status is PAIR, or when no volume is set for the quorum disk and the pair status is PAIR: The primary and secondary storage systems communicate through the remote paths. Because the pair status and I/O mode of the P-VOL and S-VOL remain PAIR (Mirror(RL)), server I/O continues on both the P-VOL and the S-VOL. Data mirroring is maintained through the remote paths between the primary and secondary storage systems.
- When the quorum disk is blocked and the pair status is INIT/COPY: Server I/O continues on the P-VOL; however, the pair might be suspended if the quorum disk is blocked immediately after the pair status changes to COPY.
- When no volume is set for the quorum disk and the pair status is INIT/COPY: Server I/O continues on the P-VOL.
- When the pair is suspended (pair status PSUS, PSUE, SSWS, or SSUS) and the quorum disk is blocked, or when the pair is suspended and no volume is set for the quorum disk: Server I/O continues on the volume whose I/O mode is Local. I/O to the volume whose I/O mode is Block remains stopped, and data mirroring remains suspended.
- When the remote paths are disconnected after the quorum disk is blocked, or when the remote paths are disconnected and no volume is set for the quorum disk: The pair is suspended when the remote paths are disconnected. The P-VOL status and I/O mode change to PSUE (Local), and the S-VOL status and I/O mode change to PAIR (Block), so server I/O continues on the P-VOL. Depending on the timing of the remote path disconnection after the quorum disk is blocked, or if no volume is set for the quorum disk, the pair might instead be suspended with the status and I/O mode of both the P-VOL and the S-VOL changing to PSUE (Block).
Before the pair status of the S-VOL and the I/O mode change to PAIR (Block), reading data might be delayed. If you want to minimize the delay, set a smaller value for Read Response Guaranteed Time When Quorum Monitoring Stopped. The time between the remote path disconnection and the pair suspension is also shortened.
When you want to restore the remote path quickly and do not want to suspend pairs immediately after the remote path is disconnected, set a larger value for Read Response Guaranteed Time When Quorum Monitoring Stopped. If you set a value larger than the server timeout time, a timeout might occur on the server.
The following table lists the recommended values for Read Response Guaranteed Time When Quorum Monitoring Stopped.
| Setting value for Blocked Path Monitoring (sec) | Recommended setting value for Read Response Guaranteed Time When Quorum Monitoring Stopped (sec) |
| --- | --- |
| 40 (default) | 40 (default) |
| 2 to 5 | 5* |
| 6 to 25 | 6 to 25* |
| 26 to 44 | 26 to 44 |
| 45 | 45 |

\* A GAD pair might be suspended if remote path communication is blocked temporarily due to an MP or path failure. To avoid this, set Read Response Guaranteed Time When Quorum Monitoring Stopped to a value greater than the RIO MIH time, or at least 25 seconds. Note, however, that reading data might be delayed for up to the time set for Read Response Guaranteed Time When Quorum Monitoring Stopped.
Setting Read Response Guaranteed Time When Quorum Monitoring Stopped to the same value as Blocked Path Monitoring is recommended. Until the pair status and I/O mode of the S-VOL change to PSUE (Block), the read-data delay stays within the time set for Read Response Guaranteed Time When Quorum Monitoring Stopped. Note that if a value of 5 seconds or less is set for Blocked Path Monitoring, set Read Response Guaranteed Time When Quorum Monitoring Stopped to 5.
If a value of 46 seconds or greater is set for Read Response Guaranteed Time When Quorum Monitoring Stopped, GAD pair suspension caused by a remote path failure might be avoided. When you set a value of 46 or greater, make sure that the application timeout setting for server I/O is greater than this value, and make sure that multiple remote paths are configured (at least four paths are recommended). Reading data might be delayed until the time set for Read Response Guaranteed Time When Quorum Monitoring Stopped elapses.
Quorum disk status
You need to check the status of the quorum disk before replacing the external storage system currently used for the quorum disk while keeping the GAD pairs.
You can check the quorum disk status using the raidcom get quorum command. For details, see the Command Control Interface Command Reference.
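As a hedged illustration only (the quorum ID and instance number are placeholders, and the exact syntax and output columns are defined in the Command Control Interface Command Reference):

```
# Display the status of quorum disk ID 1 from CCI instance 0.
raidcom get quorum -quorum_id 1 -IH0

# The status field of the output corresponds to the statuses listed in
# the following table: NORMAL, TRANSITIONING, BLOCKED, REPLACING,
# FAILED, or "-" (no volume set for the quorum disk).
```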
You can replace the external storage system currently used by the quorum disk with a new external storage system while keeping GAD pairs.
There are five statuses for the quorum disk.
| Quorum disk status | Display by CCI | Description |
| --- | --- | --- |
| Normal | NORMAL | The quorum disk is operating normally. |
| Transitioning | TRANSITIONING | The status of the quorum disk is being changed. |
| Blocked | BLOCKED | The quorum disk is blocked. |
| Replacing | REPLACING | The quorum disk is being replaced. |
| Failed | FAILED | The primary and secondary storage systems are connected to different quorum disks. Specify the external volume again, so that they can be connected to the same quorum disk, and reconfigure the quorum disk. |
| - (Hyphen) | - (Hyphen) | No volume is set for the quorum disk. |
Recovering from Failed quorum disk status
You need to recover from a Failed status before you can replace the external storage system currently used by the quorum disk with a new external storage system while keeping GAD pairs.
When the status of the quorum disk is Failed, the primary storage system and the secondary storage system are connected to different quorum disks.
Procedure
Specify an external volume that allows both the primary and secondary storage systems to connect with the same quorum disk.
Initial copy and differential copy
There are two types of GAD copy operations that synchronize the data on the P-VOL and S-VOL of a pair: initial copy and differential copy.
Initial copy: All data in the P-VOL is copied to the S-VOL, which ensures that the data in the two volumes is consistent. The initial copy is performed when the GAD status changes from Simplex to Mirrored.
Differential copy: Only the differential data between the P-VOL and the S-VOL is copied. Differential copy is used when the GAD status changes from Suspended to Mirrored.
When a GAD pair is suspended, the storage systems record the update locations and manage the differential data. The following figure shows the differential copy operation for a pair in which the P-VOL received server I/O while the pair was suspended. If the S-VOL receives server I/O while a pair is suspended, the differential data is copied from the S-VOL to the P-VOL.
GAD consistency groups
You can manage multiple GAD pairs as a group by using consistency groups.
The GAD pairs in a GAD 3DC delta resync (GAD+UR) configuration must be registered to a consistency group.

Registering GAD pairs to consistency groups enables you to perform operations on all GAD pairs in a consistency group at the same time. In addition, when a failure occurs, the GAD pairs are suspended by consistency group (concurrent suspension).
For details about storage system support (microcode) for consistency groups, see Requirements and restrictions.
Operations on GAD pairs by consistency group
By registering multiple GAD pairs to a consistency group, you can resynchronize or suspend the GAD pairs by consistency group. You can resynchronize all GAD pairs registered to a consistency group by performing a single pair resynchronization operation. In addition, you can suspend all GAD pairs registered to a consistency group by performing a single pair suspension operation.
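Assuming the pairs were registered to a consistency group when they were created (for example, with the -fg option of the CCI paircreate command), a single group operation suspends or resynchronizes all of the pairs. The group name and instance number below are hypothetical, and the options should be verified in the Command Control Interface Command Reference.

```
# Suspend all GAD pairs in the consistency group at once.
pairsplit -g oraHA -IH0

# Resynchronize all GAD pairs in the consistency group at once.
pairresync -g oraHA -IH0
```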

For details about storage system support (microcode) for consistency groups, see Requirements and restrictions.
Suspension of GAD pairs by consistency group
When a failure occurs, suspension of GAD pairs by consistency group guarantees data consistency among primary volumes if the I/O mode of a primary volume changes to Block, or among secondary volumes if the I/O mode of a secondary volume changes to Block.
If some GAD pairs in a consistency group are suspended due to a failure, all GAD pairs in the consistency group to which the suspended GAD pairs are registered change to the suspended state. This is called concurrent suspension.
- The volumes that have the most recent data are aggregated to a single storage system. If a failure occurs in some pairs and all GAD pairs registered to a consistency group are in the Suspended state, the volumes that have the most recent data are aggregated to the storage system at either the primary site or the secondary site.
- Data consistency is guaranteed before and after the suspension of the GAD pairs. If all GAD pairs registered to a consistency group are in the Suspended state, only the volumes (of either the primary or the secondary site) that have the most recent data receive I/O from the server. The volumes at the other site stop receiving I/O from the server (including I/O for volumes where no failure occurred), and write processing on those volumes also stops. This ensures data consistency before and after the GAD pair suspension in the volumes that stopped receiving I/O from the server.

For example, a server issues write operations A to D. After the storage system receives write operation B, all GAD pairs registered to the consistency group change to the Suspended state because of an LDEV failure in the primary volume. In such a case, write operations A and B received before the GAD pairs changed to the Suspended state were completed for both the primary and secondary volume. Write operations C and D received after the GAD pairs changed to the Suspended state were completed only for the secondary volume.
Therefore, the volumes that have the most recent data are aggregated to the storage system at the secondary site.
For details about storage system support (microcode) for consistency groups, see Requirements and restrictions.
Use cases for consistency groups
You can use GAD consistency groups for many use cases, for example, batch failover or resuming operations by using consistent backup data.
Batch failover
By using consistency groups, you can perform a remote site batch failover operation for GAD pairs by consistency group.
When consistency groups are not used, a remote site failover operation is performed only for the applications that access the volume where the failure occurred.

When using consistency groups, if a failure occurs, you can perform a remote site failover operation for all applications that access the volume, together with the GAD pairs in the consistency group.

Resuming operations by using consistent backup data
Another use case for consistency groups is resuming operations by using consistent (older) data when the most recent data is inaccessible.
If GAD pairs change to the Suspended state, I/O from servers continues only for the volume that has the most recent data. While GAD pairs are in the Suspended state, if a failure occurs in the storage system that has the most recent data, thus making it impossible to access the most recent data, you can resume operations from the point when GAD pair suspension started by using the consistent data (old data).
For example, assume that GAD pairs changed to the Suspended state due to a path failure of the primary volume. At this point, the primary volume contains the write data up to data B.

Then, a failure occurred in the storage system of the secondary volume, making it impossible to access the most recent data in that volume. In such a case, after deleting the GAD pairs, you can resume the write processing for data C by using the primary volume.

GAD consistency group statuses
You can view the status of a consistency group by using Device Manager - Storage Navigator.
The following table describes the statuses of GAD consistency groups.
| Status | Description |
| --- | --- |
| SMPL | All volumes in the consistency group are not used as GAD pair volumes. |
| INIT/COPY | The initial copy or pair resynchronization of all GAD pairs in the consistency group is in progress (including creation of a GAD pair that does not perform data copy). A quorum disk is being prepared. |
| COPY | The initial copy of all GAD pairs in the consistency group is in progress; data is being copied from the P-VOL to the S-VOL (including creation of a GAD pair that does not perform data copy). |
| PAIR | All GAD pairs in the consistency group are synchronized, including pairs whose quorum disk is blocked. The data is duplicated. |
| PSUS | All GAD pairs in the consistency group were suspended by the user. This status appears when the volumes in the consistency group on the local storage system are P-VOLs. |
| PSUE | All GAD pairs in the consistency group were suspended due to a failure. |
| SSUS | All GAD pairs in the consistency group were suspended by the user, and update of the S-VOL is interrupted. This status appears when the volumes in the consistency group on the local storage system are S-VOLs. |
| SSWS | All GAD pairs in the consistency group were suspended either by the user or due to a failure, and update of the P-VOL is interrupted. This status appears when the volumes in the consistency group on the local storage system are S-VOLs. |
| Suspending | GAD pair suspension processing is being performed by consistency group. |
| Resynchronizing | GAD pair resynchronization processing is being performed by consistency group. |
| Mixed | More than one pair status exists in the consistency group. |
| Unknown | The consistency group status cannot be obtained. |
| Blank | The consistency group is not used. |
Global-active device components
A typical global-active device system consists of storage systems, paired volumes, a consistency group, a quorum disk, a virtual storage machine, paths and ports, alternate path software, and cluster software.
The following illustration shows the components of a typical global-active device system.

The primary and secondary storage systems should be of the same model type, but they do not have to be the same model. For example:
- If the primary storage system is a VSP 5000 series, the secondary storage system can be a VSP 5000 series.
- If the primary storage system is a VSP 5000 series, the secondary storage system can be a VSP E990, VSP G350, G370, G700, G900, VSP F350, F370, F700, F900, VSP G200, G400, G600, G800, or VSP F400, F600, F800.
- If the primary storage system is a VSP G350, G370, G700, G900 storage system, the secondary storage system can be a VSP G350, G370, G700, G900.
- If the primary storage system is a VSP E990, VSP F350, F370, F700, F900 storage system, the secondary storage system can be a VSP E990, VSP F350, F370, F700, F900.
- If the primary storage system is a VSP E990, the secondary storage system cannot be a VSP G1x00 or VSP F1500.
An external storage system or iSCSI-attached server, which is connected to the primary and secondary storage systems using Universal Volume Manager, is required for the quorum disk.
A global-active device pair consists of a P-VOL in the primary storage system and an S-VOL in the secondary storage system. For model connectivity support requirements, see System requirements.
A consistency group consists of multiple global-active device pairs. By registering GAD pairs to a consistency group, you can resynchronize or suspend the GAD pairs by consistency group.
For details about storage system support (microcode) for consistency groups, see Requirements and restrictions.
The quorum disk, required for global-active device, is used to determine the storage system on which server I/O should continue when a storage system or path failure occurs. The quorum disk is virtualized from an external storage system that is connected to both the primary and secondary storage systems. Alternatively, a disk in an iSCSI-attached server can be used as a quorum disk if the server is supported by Universal Volume Manager. If you do not set a volume for the quorum disk, you do not need to prepare a volume in an external storage system for the quorum disk. For details, see GAD pairs without a volume set for the quorum disk.
A virtual storage machine (VSM) is configured in the secondary storage system with the same model and serial number as the (actual) primary storage system. The servers treat the virtual storage machine and the storage system at the primary site as one virtual storage machine.
You can create GAD pairs using volumes in virtual storage machines. When you want to create a GAD pair using volumes in VSMs, the VSM for the volume in the secondary site must have the same model and serial number as the VSM for the volume in the primary site.
GAD operations are carried out between hosts and primary and secondary storage systems that are connected by data paths composed of one or more physical links.
The data path, also referred to as the remote connection, connects ports on the primary storage system to ports on the secondary storage system. Both Fibre Channel and iSCSI remote copy connections are supported. The ports have attributes that enable them to send and receive data. One data path connection is required, but you should use two or more independent connections for hardware redundancy.
Alternate path software is used to set redundant paths from servers to volumes and to distribute host workload evenly across the data paths. Alternate path software is required for the single-server and cross-path GAD system configurations.
Cluster software is used to configure a system with multiple servers and to switch operations to another server when a server failure occurs. Cluster software is required when two servers are in a global-active device server-cluster system configuration.
User interfaces for global-active device operations
Global-active device operations are performed using the management software and the command-line interface (CLI) software for the storage system.
Hitachi Command Suite
The Hitachi Command Suite (HCS) software enables you to configure and manage GAD pairs and to monitor and manage your global-active device environment.
- When one Device Manager server manages both global-active device storage systems, you can access all functions required for your GAD setup from the Set up Replication/GAD window in HCS. When the primary and secondary storage systems are managed by different instances of Device Manager, you can also configure GAD by using the Replication tab in HCS.
- When performing operations on GAD pairs during a failure, or when adding a ShadowImage or Thin Image pair volume to a GAD pair volume for additional data protection, you can perform the operation from the Replication tab in Device Manager.
- Hitachi Command Suite does not provide access to all global-active device operations. For example, the operation to forcibly delete a GAD pair can only be performed using Device Manager - Storage Navigator or CCI.
Command Control Interface
The Command Control Interface (CCI) command-line interface (CLI) software can be used to configure the global-active device environment, to create and manage GAD pairs, and to perform disaster recovery procedures.
Configuration workflow for global-active device
To start using GAD, you can use either of the following methods:
- Use CCI and Device Manager - Storage Navigator to perform the operations on the storage systems that make up the GAD configuration.
- Use Hitachi Command Suite and follow the instructions displayed on the screen to perform the initial settings of multiple storage systems and components. If you manage the storage systems at the primary site and the secondary site on a single Device Manager server, you can allocate the P-VOL and S-VOL to the host (excluding file servers) and create a pair in a single operation.
The following figure shows the workflow for configuring and starting GAD operations.

The following table lists the global-active device configuration tasks and indicates the location of the instructions for the tasks.
| Configuration task | Task performed on... | CCI | HCS |
| --- | --- | --- | --- |
| Installing global-active device | Primary and secondary storage systems | Not available for VSP F1500, VSP G1x00. For VSP 5000 series, VSP Fx00 models, VSP Gx00 models, see Creating the command devices. | VSP 5000 series: section on installing license keys using Device Manager - Storage Navigator in the Hitachi Command Suite User Guide |
| Creating command devices | Primary and secondary storage systems | | |
| Creating and executing CCI configuration definition files | Server (with HCS, this is the pair management server) | | |
| Connecting primary and secondary storage systems: adding remote connections | Primary and secondary storage systems | | |
| Creating the quorum disk: mapping the external volume | Primary and secondary storage systems | | |
| Creating the quorum disk: setting the quorum disk | Primary and secondary storage systems | | |
| Setting up the secondary storage system: creating a VSM | Secondary storage system | | |
| Setting up the secondary storage system: setting the GAD reserve attribute | Secondary storage system | | |
| Setting up the secondary storage system: adding an LU path to the S-VOL | Secondary storage system | | |
| Updating CCI configuration definition files | Server | | |
| Creating the GAD pair | Primary storage system | | |
| Adding an alternate path to the S-VOL | Server | | Section on optimizing HBA configurations in the Hitachi Command Suite User Guide |
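To make the CCI side of this workflow more concrete, the following hedged sketch shows the general shape of a configuration definition file and a GAD pair-creation command. Every value (file name, IP addresses, service ports, serial number, LDEV ID, quorum ID, and consistency group ID) is a hypothetical placeholder; the authoritative syntax is in the Command Control Interface manuals referenced in the table above.

```
# horcm0.conf on the pair management server (primary side) - illustrative only
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
localhost     31000     -1           3000

HORCM_CMD
#command device of the primary storage system (placeholder serial number)
\\.\CMD-411111:/dev/sd

HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)
oraHA        dev001     411111    01:10

HORCM_INST
#dev_group   ip_address   service
oraHA        localhost    31001
```

```
# After both HORCM instances are started, create the GAD pair from the
# primary side. -jq specifies the quorum ID; -fg registers the pair to
# a consistency group (all values are placeholders).
paircreate -g oraHA -f never -vl -jq 1 -IH0
# or, to register the pair to consistency group 2:
paircreate -g oraHA -fg never 2 -vl -jq 1 -IH0
```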