Planning for global-active device

You can prepare your storage systems for global-active device by configuring the primary and secondary storage systems, data paths, pair volumes, and quorum disk.

Storage system preparation

Before you can use global-active device on your storage systems, you must ensure that system requirements, configurations, power and physical cabling, memory requirements, cache requirements, host modes, and system operation modes are configured appropriately.

To prepare the storage systems for global-active device operations:

  • Make sure that the primary, secondary, and external storage systems meet the global-active device system requirements described in chapter 2.
  • Make sure that the primary storage system is configured to report sense information to the host. The secondary storage system should also be attached to a host server to report sense information in the event of a problem with an S-VOL or the secondary storage system.
  • If power sequence control cables are used, set the power source selection switch for the cluster to "Local" to prevent the server from cutting the power supply to the primary storage system. In addition, make sure that the secondary storage system is not powered off during GAD operations.
  • Establish the physical paths between the primary and secondary storage systems. Switches and channel extenders can be used. For details, see Planning physical paths.
  • Review the shared memory requirements for the primary and secondary storage systems in Requirements and restrictions. Make sure that the cache in both storage systems works normally. Pairs cannot be created if cache requires maintenance.

    Configure the cache on the secondary storage system so that it can adequately support the remote copy workload and all local workload activity. When the cache memory and the shared memory in the storage system become redundant, you can remove them. For instructions on adding and removing cache and shared memory, see Adding and removing cache and shared memory.

    When determining the amount of cache required for GAD, consider the amount of Cache Residency Manager data (VSP G1x00 and VSP F1500) that will also be stored in cache.

  • Make sure that the appropriate host modes and host mode options (HMOs) are set. For details, see the Provisioning Guide for the storage system.
    • HMO 78, the nonpreferred path option, must be configured to specify nonpreferred paths for HDLM operations.
    • HMOs 49, 50, and 51 can be used to improve the response time of host I/O for long-distance direct connections (up to 10 km using Long Wave).
  • Make sure that the appropriate system option modes (SOMs) are set on your storage systems. For details about SOMs that apply to remote copy operations, contact customer support.

Adding and removing cache and shared memory

You can add cache or shared memory in a storage system in which GAD pairs already exist if additional memory is required. Likewise, you can remove cache or shared memory if it becomes redundant.

Cache memory must be operational on both the primary and secondary storage systems; otherwise, creation of GAD pairs fails. Prepare enough cache memory on the secondary storage system to support both local workload activity and remote copy operations.

For VSP G1x00, VSP F1500, additional shared memory is required for both the primary and secondary storage systems.

For VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900, you can use GAD with the base shared memory only. Adding shared memory expands the capacity for creating pairs.

Adding and removing cache memory

You can add cache memory if the current cache capacity does not meet the requirements. You can remove cache memory if it is no longer needed.

  1. Identify the status of the GAD volumes in the storage system.

  2. If a GAD volume is in the COPY status, wait until the status changes to PAIR, or suspend the GAD pair.

    Do not add or remove cache memory when any volumes are in the COPY status.

  3. When the status of all volumes has been confirmed, cache memory can be added to or removed from the storage system by your service representative. Contact customer support for adding or removing cache memory.

  4. After the addition or removal of cache memory is complete, resynchronize the pairs that you suspended in step 2.

Adding shared memory

You can add shared memory if the size of your shared memory does not meet the requirements.

  1. Identify the status of the GAD volumes in the storage system.

  2. If a GAD volume is in the COPY status, wait until the status changes to PAIR, or suspend the GAD pair.

    Do not add shared memory when any volumes are in the COPY status.

  3. When the status of all volumes has been confirmed, shared memory can be added to the storage system by your service representative. Contact customer support for adding shared memory.

  4. After the addition of shared memory is complete, resynchronize the pairs that you suspended in step 2.

Removing shared memory used in 64KLDEV Extension (VSP G1x00 and VSP F1500)

You can remove shared memory used in 64KLDEV Extension if it becomes redundant.

  1. Identify the status of all volumes with an LDEV ID of 0x4000 or higher.

  2. If a volume with an LDEV ID of 0x4000 or higher is used by a GAD pair, delete the GAD pair.

    Do not remove shared memory used in 64KLDEV Extension when any volume with an LDEV ID of 0x4000 or higher is used by a GAD pair.
  3. When the status of all volumes with an LDEV ID of 0x4000 or higher has been confirmed, shared memory can be removed from the storage system by your service representative. Contact customer support for removing shared memory.

Removing shared memory used in TC/UR/GAD (VSP G1x00 and VSP F1500)

You can remove shared memory used by TC/UR/GAD if shared memory is redundant.

Use the following workflow to remove shared memory used in TC/UR/GAD (VSP G1x00 and VSP F1500):

Procedure

  1. Identify the status of all volumes.

  2. If a volume is used by a TC/UR/GAD pair, delete the TC/UR/GAD pair.

    Do not remove shared memory used in TC/UR/GAD when any volume is used by a TC, UR, or GAD pair.

  3. When the status of all volumes has been confirmed, shared memory can be removed from the storage system by your service representative. Contact customer support for removing shared memory.

Removing shared memory (VSP Gx00 models, VSP Fx00 models)

You can remove shared memory if it is redundant.

  1. Identify the status of all volumes in the storage system.

  2. If a volume is used by a GAD pair, delete the GAD pair.

    If you use VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900, you can skip this step.

  3. Shared memory can be removed from the storage system by your service representative. Contact customer support for removing shared memory.

Planning system performance

Remote copy operations can affect the I/O performance of host servers and the primary and secondary storage systems. You can minimize the effects of remote copy operations and maximize efficiency and speed by changing your remote connection options and remote replica options.

Your Hitachi Vantara account team can help you analyze your workload and optimize copy operations. Using workload data (MB/s and IOPS), you can determine the appropriate amount of bandwidth, number of physical paths, and number of ports for your global-active device system. When these are properly determined and sized, the data path should operate free of bottlenecks under all workload levels.

Setting preferred and nonpreferred paths

If the alternate paths that connect a server and a storage system in a GAD configuration include both a short-distance straight path and a long-distance cross path, you can improve overall system performance by setting the short-distance straight path as the preferred I/O path.

Setting the short-distance straight path as the preferred I/O path suppresses I/O on the less efficient long-distance cross path, which improves overall system performance.


Setting preferred and nonpreferred paths using ALUA

When you perform Asymmetric Logical Unit Access (ALUA) in a cross-path configuration, you can specify the preferred path to use for issuing an I/O request from a server to a storage system.

To specify the preferred path, you must enable the ALUA mode in the storage system, and use the asymmetric access status setting to set the path to use as the preferred path. You might need to restart the server after you make these changes in the storage system for the server to recognize the changes.

Note: If you add new LUNs, set the ALUA attribute of the new LUNs to match that of the existing LUNs. Otherwise, the settings on previously provisioned LUNs on the same host will be lost.

Setting preferred and nonpreferred paths using HDLM

You can use Hitachi Dynamic Link Manager (HDLM) to specify alternate paths to be used for normal global-active device operations by using host mode options.

Other paths are used when failures occur in the paths (including alternate paths) that should be used for normal operations. Host mode option (HMO) 78, the nonpreferred path option, must be configured to specify nonpreferred paths, which are used when failures occur.

For example, if servers and storage systems are connected in a cross-path configuration, I/O response is prolonged because the primary-site server is distant from the secondary storage system, and the secondary-site server is distant from the primary storage system. Normally in this case you use paths between the primary server and primary storage system and paths between the secondary server and secondary storage system. If a failure occurs in a path used in normal circumstances, you will use the paths between the primary server and secondary storage system, and paths between the secondary server and primary storage system.


When the settings are applied to HDLM, the attribute of the HDLM path to which HMO 78 was set changes to non-owner path. The attribute of the HDLM path to which HMO 78 was not set changes to owner path. For details, see the documents for HDLM version 8.0.1 or later.

Planning physical paths

When configuring physical paths to connect the storage systems at the primary and secondary sites, make sure that the paths can handle all of the data that could be transferred to the primary and secondary volumes under all circumstances.

When you plan physical paths, keep in mind the required bandwidth, Fibre Channel or iSCSI data path requirements, and whether you plan a direct connection, a connection using switches, or a connection using channel extenders.

Note: Use the same protocol for data paths between a host and a storage system and between primary and secondary storage systems. When different protocols are used in the data paths (for example, Fibre Channel data paths between the host and storage system and iSCSI data paths between the storage systems), make sure the timeout period for commands between the host and the storage system is equal to or greater than the timeout period for commands between the storage systems.

Determining the required bandwidth

You must have sufficient bandwidth to handle all data transfers at all workload levels. The amount of required bandwidth depends on the amount of server I/O to the primary volumes.

To identify the required bandwidth, you must first collect the write workload data under all workload conditions, including peak write workload, and then measure and analyze the data. You can use performance-monitoring software such as Hitachi Tuning Manager or Hitachi Performance Monitor to collect the workload data.
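
For example, if the peak write workload measured across all P-VOLs is 310 MB/s, the remote data path must sustain at least that rate. The following Python sketch shows one way to turn collected samples into a first bandwidth estimate; the sample values, the 20% headroom factor, and the conversion to Mbps are illustrative assumptions, not values defined in this guide.

    # Rough bandwidth estimate from collected write-workload samples (illustrative only).
    # The sample data and the headroom factor are assumptions; size the real data path
    # from your own measured peak write workload.
    def required_bandwidth_mbps(write_mb_per_s_samples, headroom=1.2):
        """Estimated line bandwidth (Mbps) that covers the peak write workload."""
        peak_mb_per_s = max(write_mb_per_s_samples)   # peak write workload in MB/s
        return peak_mb_per_s * 8 * headroom           # convert MB/s to Mbps and add headroom

    samples = [120, 180, 240, 310, 275, 190]          # write MB/s sampled during the busiest period
    print(f"Estimated bandwidth: {required_bandwidth_mbps(samples):.0f} Mbps")   # about 2976 Mbps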

Fibre Channel connections

You can use Fibre Channel connections for direct connections, switch connections, and extender connections.

Use short-wave (optical multi-mode) or long-wave (optical single-mode) optical fiber cables to connect the storage systems at the primary and secondary sites. The required cables and network relay devices differ depending on the distance between the primary and secondary storage systems, as described in the following table.


Distance between storage systems | Cable type | Network relay device
Up to 1.5 km | Short wave (optical multi-mode) | Switches are required if the distance is 0.5 to 1.5 km.
1.5 to 10 km | Long wave (optical single-mode)* | Not required.
10 to 30 km | Long wave (optical single-mode)* | Switches must be used.
30 km or longer | Communication line | An authorized third-party channel extender is required.

* Long wave cannot be used for FCoE (VSP G1x00 and VSP F1500).

No special settings are required for the storage system if switches are used in a Fibre Channel environment.

Long-wave (optical single-mode) cables can be used for direct connections at a maximum distance of 10 km. The maximum distance at which best performance can be achieved depends on the link speed, as shown in the following table. For details about the availability of serial-channel GAD connections, contact customer support.

Link speed | Maximum distance for best performance
1 Gbps (VSP G1x00 and VSP F1500) | 10 km
2 Gbps (VSP G1x00 and VSP F1500) | 6 km
4 Gbps | 3 km
8 Gbps | 2 km
16 Gbps | 1 km
32 Gbps (VSP Gx00 models) | 0.6 km

iSCSI data path requirements

You can use iSCSI connections for direct connections, switch connections, and extender connections.

The following requirements and cautions apply to systems using iSCSI data paths. For details about the iSCSI interface, see the Provisioning Guide.

Remote paths

Add only remote paths of the same protocol to a single path group. Make sure that Fibre Channel and iSCSI remote paths are not mixed in a path group.

Physical paths

  • Before replacing Fibre Channel or iSCSI physical paths, remove the GAD pair and the remote path that are using the physical path to be replaced.
  • Using the same protocol for the physical paths between the host and a storage system and for the physical paths between storage systems is recommended.

    If protocols are mixed, as in the following example, set the command timeout value between the host and a storage system to a value equal to or greater than the timeout value between storage systems.

    Example:

    - Physical path between the host and a storage system: Fibre Channel

    - Physical path between storage systems: iSCSI

Ports

  • When the parameter settings of an iSCSI port are changed, the iSCSI connection is temporarily disconnected and then reconnected. To minimize the impact on the system, change the parameter settings when the I/O load is low.
  • If you change the settings of an iSCSI port connected to the host, a log might be output on the host, but this does not indicate a problem. In a system that monitors system logs, an alert might be output. If an alert is output after you change the iSCSI port settings, check whether the host has reconnected.
  • When you use an iSCSI interface between storage systems, disable Delayed ACK in the Edit Ports window. By default, Delayed ACK is enabled.

    If Delayed ACK is enabled, it might take time for the host to recognize the volume used by a GAD pair. For example, when the number of volumes is 2,048, it takes up to 8 minutes.

  • Do not change the default setting (enabled) of Selective ACK for ports.
  • In an environment in which a delay occurs in a line between storage systems, such as long-distance connections, you must set an optimal window size of iSCSI ports in storage systems at the primary and secondary sites after verifying various sizes. The maximum value you can set is 1,024 KB. The default window size is 64 KB, so you must change this setting.
  • iSCSI ports do not support the fragmentation (splitting packets) functionality. When the value for the maximum transfer unit (MTU) of a switch is smaller than the MTU value of the iSCSI port, packets are lost, and communication might not be performed correctly. The MTU value for the iSCSI port must be greater than 1500. Set the same MTU value (or greater) for the switch as the iSCSI port. For more information about the MTU setting and value, see the switch manual.

    In a WAN environment in which the MTU value is smaller than 1500, fragmented data cannot be sent or received. In this environment, set a smaller value for the maximum segment size (MSS) of the WAN router according to the WAN environment, and then connect the iSCSI port. Alternatively, use iSCSI in an environment in which the MTU value is 1500 or greater.

  • When using a remote path on the iSCSI port for which virtual port mode is enabled, use the information about the iSCSI port that has virtual port ID (0). You cannot use virtual port IDs other than 0 as a virtual port.
  • On the VSP Gx00 models and VSP Fx00 models, a port can be used for connections to the host (target attribute) and to a storage system (initiator attribute). However, to minimize the impact on the system if a failure occurs either in the host or in a storage system, you should connect the port for the host and for the storage system to separate CHBs.

Network setting

  • Disable the spanning tree setting for a port on a switch connected to an iSCSI port. If the spanning tree function is enabled on a switch, packets are prevented from looping through the network when a link goes up or down; while this happens, packets might be blocked for about 30 seconds. If you need to enable the spanning tree setting, enable the Port Fast function of the switch.
  • In a network path between storage systems, if you use a line that has a slower transfer speed than the iSCSI port, packets are lost, and the line quality is degraded. Configure the system so that the transfer speed for the iSCSI ports and the lines is the same.
  • Delays in lines between storage systems vary depending on system environments. Validate the system to check the optimal window size of the iSCSI ports in advance. If the impact of the line delay is major, consider using devices for optimizing or accelerating the WAN.
  • When iSCSI is used, packets are sent or received using TCP/IP. Because of this, the amount of packets might exceed the capacity of a communication line, or packets might be resent. As a result, performance might be greatly affected. Use Fibre Channel data paths for critical systems that require high performance.

Connection types

Three types of connections are supported for GAD physical paths: direct, switch, and channel extender.

You can use Hitachi Command Suite or CCI to configure ports and topologies.

Establish bidirectional physical path connections from the primary to the secondary storage system and from the secondary to the primary storage system.

Direct connection

You can connect two storage systems directly to each other.


You can use the following host mode options (HMOs) to improve the response time of host I/O by improving the response time between the storage systems for long-distance direct connections (up to 10 km using Long Wave) when the open package is used.

  • HMO 49 (BB Credit Set Up Option1) (VSP G1x00 and VSP F1500)
  • HMO 50 (BB Credit Set Up Option2) (VSP G1x00 and VSP F1500)
  • HMO 51 (Round Trip Set Up Option)
Note: If you use iSCSI, these HMO settings do not take effect.

For more information about HMOs, see the Provisioning Guide for your storage system.

The fabric and topology settings depend on the settings of packages, the protocol used for the connection between the storage systems, and the setting of HMO 51. The link speed that can be specified differs for each condition.

Package name | Protocol | HMO 51 setting | Fabric setting | Topology: remote replication ports | Link speeds that can be specified
16FC8 (VSP G1x00 and VSP F1500) | 8 Gbps FC | OFF | OFF | FC-AL | 2, 4, or 8 Gbps
16FC8 (VSP G1x00 and VSP F1500) | 8 Gbps FC | ON | OFF | Point-to-Point | 2, 4, or 8 Gbps
16FC8 (VSP G1x00 and VSP F1500) | 8 Gbps FC | OFF | OFF | Point-to-Point | 2, 4, or 8 Gbps
16FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | OFF | FC-AL | 4 or 8 Gbps
16FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | ON | OFF | Point-to-Point | 4, 8, or 16 Gbps
16FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | OFF | Point-to-Point | 4, 8, or 16 Gbps
8FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | OFF | FC-AL | 4 or 8 Gbps
8FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | ON | OFF | Point-to-Point | 4, 8, or 16 Gbps
8FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | OFF | Point-to-Point | 4, 8, or 16 Gbps
8IS10 (VSP G1x00 and VSP F1500) | 10 Gbps iSCSI | N/A | N/A | N/A | 10 Gbps
CHB(FC32G) (VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900) | 32 Gbps FC | OFF | OFF | FC-AL | 4 or 8 Gbps
CHB(FC32G) (VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900) | 32 Gbps FC | ON | OFF | FC-AL | 4 or 8 Gbps
CHB(FC32G) (VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900) | 32 Gbps FC | OFF | OFF | Point-to-Point | 16 or 32 Gbps
CHB(FC32G) (VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900) | 32 Gbps FC | OFF | ON | Point-to-Point | 16 or 32 Gbps

Connection using switches

You can use host mode options to improve response times when switches are used for long-distance connections.

Note: You do not need to set the port attributes (Initiator, RCU Target, Target) on VSP Gx00 models and VSP Fx00 models.

Switches from some vendors (for example, McData ED5000) require F_port.

You can use the following host mode options (HMOs) to improve the response time of host I/O by improving the response time between the storage systems when switches are used for long-distance connections (up to approximately 500 km with a round-trip response time of 20 ms or less) and the open package is used.

  • HMO 49 (BB Credit Set Up Option1) (VSP G1x00 and VSP F1500)
  • HMO 50 (BB Credit Set Up Option2) (VSP G1x00 and VSP F1500)
  • HMO 51 (Round Trip Set Up Option)

For details about HMOs, see the Provisioning Guide for the storage system.

The fabric and topology settings depend on the package settings, the protocol used for the connection between the storage systems, and the HMO 51 setting. The link speed that can be specified differs for each condition.

Package name | Protocol | HMO 51 setting | Fabric setting | Topology: Initiator and RCU Target | Link speeds that can be specified
16FC8 (VSP G1x00 and VSP F1500) | 8 Gbps FC | OFF | ON | Point-to-Point | 2, 4, or 8 Gbps
16FC8 (VSP G1x00 and VSP F1500) | 8 Gbps FC | ON | ON | Point-to-Point | 2, 4, or 8 Gbps
16FC8 (VSP G1x00 and VSP F1500) | 8 Gbps FC | OFF | OFF | Point-to-Point | 2, 4, or 8 Gbps
8FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | ON | Point-to-Point | 4, 8, or 16 Gbps
8FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | ON | ON | Point-to-Point | 4, 8, or 16 Gbps
8FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | OFF | Point-to-Point | 4, 8, or 16 Gbps
16FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | ON | Point-to-Point | 4, 8, or 16 Gbps
16FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | ON | ON | Point-to-Point | 4, 8, or 16 Gbps
16FC16 (VSP G1x00 and VSP F1500) | 16 Gbps FC | OFF | OFF | Point-to-Point | 4, 8, or 16 Gbps
16FE10 (VSP G1x00 and VSP F1500) | 10 Gbps FCoE | OFF | ON | Point-to-Point | 10 Gbps
16FE10 (VSP G1x00 and VSP F1500) | 10 Gbps FCoE | ON | ON | Point-to-Point | 10 Gbps
8IS10 (VSP G1x00 and VSP F1500) | 10 Gbps iSCSI | N/A | N/A | N/A | 10 Gbps
CHB(FC32G) (VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900) | 32 Gbps FC | OFF | ON | Point-to-Point | 4, 8, 16, or 32 Gbps
CHB(FC32G) (VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900) | 32 Gbps FC | ON | ON | Point-to-Point | 4, 8, 16, or 32 Gbps

* 4HF32R (4 ports, FC 32 Gbps Ready Package) supports multiple transfer speeds. Depending on the mounted SFP, you can use either the 16 Gbps or the 32 Gbps protocol.

Connection using channel extenders

You should use channel extenders and switches for long-distance connections (up to 500 km with a round-trip time of 20 ms or less).

Set Fabric to ON and topology to Point-to-Point for the remote replication ports (Initiator and RCU Target).

Note: You do not need to set the port attributes (Initiator, RCU Target, Target) on the VSP Gx00 models and VSP Fx00 models.
Note
  • When the primary and secondary storage systems are connected using switches with a channel extender, and multiple data paths are configured, the capacity of data to be transmitted might concentrate on particular switches, depending on the configuration and the settings of switch routing. Contact customer support for more information.
  • Make sure that your channel extenders can support remote I/O. For details, contact customer support.
  • Create at least two independent physical paths (one per cluster) between the primary and secondary storage systems for hardware redundancy for this critical element.
  • If you plan to create more than 4,000 pairs, restrict the number of pairs to 4,000 or fewer per physical path to distribute the load across multiple physical paths.

Planning ports (VSP G1x00 and VSP F1500)

Data is transferred from Initiator ports in one storage system to RCU Target ports in the other system. After identifying the peak write workload, which is the amount of data transferred during peak periods, you can determine the amount of bandwidth and the number of Initiator and RCU Target ports required.

The following describes the port attributes that you must set on the VSP G1x00 and VSP F1500.

  • Initiator ports: Send remote copy commands and data to the RCU Target ports on a connected storage system. One Initiator port can be connected to a maximum of 64 RCU Target ports.
    Caution: Do not add or delete a remote connection or add a remote path while the SCSI path definition function is in use.
  • RCU Target ports: Receive remote copy commands and data from the Initiator ports on a connected storage system. One RCU Target port can be connected to a maximum of 16 Initiator ports.

    The number of remote paths that can be specified does not depend on the number of ports. The number of remote paths can be specified for each remote connection.

  • Target ports: Connect the storage system to the host servers. When a server issues a write request, the request is sent through a Target port on the storage system to a volume on the VSP G1x00 or VSP F1500.
  • External ports: Connect the storage system to external storage systems or iSCSI-attached servers configured using Universal Volume Manager. The external storage system or iSCSI-attached server for the GAD quorum disk is connected to an external port on the primary and secondary storage systems.

Fibre Channel used as remote paths

Before configuring a system that uses Fibre Channel, consider the following restrictions.

For details about Fibre Channel, see the Provisioning Guide for your system.

  • When you use Fibre Channel as a remote path, if you specify Auto for Port Speed, specify 10 seconds or more for Blocked Path Monitoring. If you want to specify 9 seconds or less, do not set Auto for Port Speed.
  • If the time specified for Blocked Path Monitoring is not long enough, speed negotiation might not complete within the monitoring period (for example, if the network speed slows down), and paths might be blocked as a result.

Planning the quorum disk

If you use an external storage system, it must be prepared for the GAD quorum disk. If you use a disk in a server as the quorum disk, you do not need to prepare the external storage system for the quorum disk.

Installation of the external storage system

Where you install the external storage system depends on the number of sites in your configuration.

In a three-site configuration, you install the external storage system at a third site away from the primary and secondary sites. I/O from servers continues if a failure occurs at the primary site, the secondary site, or the site where the external storage system is installed.

In a two-site configuration, you install the external storage system at the primary site. If a failure occurs at the secondary site, I/O from servers continues. However, if a failure occurs at the primary site, I/O from servers stops.

At the secondary site, you cannot install any external storage system for quorum disks.

Note: When you use iSCSI in the remote paths between the primary storage system and the external storage system for the quorum disk, or between the secondary storage system and the external storage system for the quorum disk, a quorum disk blockade might occur due to a single remote path failure.

Relationship between the quorum disk and number of remote connections

When you use multiple remote connections, prepare as many quorum disks as there are remote connections, so that a failure in one remote connection does not suspend the GAD pairs that use the other, normal remote connections.

In addition, configure each quorum disk together with one remote connection from the primary storage system to the secondary storage system and one remote connection from the secondary storage system to the primary storage system.

Tip: If you plan to manage many GAD pairs with one quorum disk and more than 8 physical paths are required for the remote connection, you can configure the system with one quorum disk for two or more remote connections.

When all paths used by a remote connection are blocked, the GAD pairs are suspended in units of quorum disks. In a configuration in which one quorum disk is shared by two remote connections, the GAD pairs that use remote connection 1 are suspended even if the failure occurred in remote connection 2. Also, when a failure occurs in the path from the primary site or the secondary site to the quorum disk, the GAD pairs that use the same quorum disk are suspended.


Suspended pairs depending on failure location (quorum disk not shared)

When the number of quorum disks equals the number of remote connections, only the GAD pairs that use the failed remote connection, quorum disk, or path to the quorum disk are suspended.

GAD pairs that use a normal remote connection, quorum disk, and path to the quorum disk remain mirrored. The following table shows the relationship between the failure location and the GAD pairs suspended by the failure.


# | Failure location | GAD pair 1 | GAD pair 2
1 | Remote connection 1 from the primary site to the secondary site | Suspended | Not suspended
2 | Remote connection 1 from the secondary site to the primary site | Suspended | Not suspended
3 | Remote connection 2 from the primary site to the secondary site | Not suspended | Suspended
4 | Remote connection 2 from the secondary site to the primary site | Not suspended | Suspended
5 | Path to quorum disk 1 | Not suspended* | Not suspended
6 | Quorum disk 1 | Not suspended* | Not suspended
7 | Path to quorum disk 2 | Not suspended | Not suspended*
8 | Quorum disk 2 | Not suspended | Not suspended*

* The GAD pair is not suspended, but the I/O mode of the S-VOL changes to Block for pairs created, resynchronized, or swap resynchronized on 80-04-2x or earlier (VSP G1x00 and VSP F1500) or on 83-03-3x or earlier (VSP Gx00 models).

Suspended pairs depending on failure location (quorum disk shared)

When a quorum disk is shared by more than one remote connection, all GAD pairs that share the quorum disk are suspended regardless of the failure location, as shown below.


# | Failure location | GAD pair 1 | GAD pair 2
1 | Remote connection 1 from the primary site to the secondary site | Suspended | Suspended
2 | Remote connection 1 from the secondary site to the primary site | Suspended | Suspended
3 | Remote connection 2 from the primary site to the secondary site | Suspended | Suspended
4 | Remote connection 2 from the secondary site to the primary site | Suspended | Suspended
5 | Path to quorum disk 1 | Not suspended* | Not suspended*
6 | Quorum disk 1 | Not suspended* | Not suspended*

* The GAD pair is not suspended, but the I/O mode of the S-VOL changes to Block for pairs created, resynchronized, or swap resynchronized on 80-04-2x or earlier (VSP G1x00 and VSP F1500) or on 83-03-3x or earlier (VSP Gx00 models).

Relationship between quorum disks and consistency groups

A single quorum disk can be shared by multiple consistency groups.

When creating GAD pairs to be registered to different consistency groups, you can specify the same quorum disk ID.


Pairs registered to the same consistency group must use the same quorum disk. When creating pairs in a single consistency group, you cannot specify multiple quorum disk IDs.


Response time from the external storage system

You should monitor the response time of the quorum disks regularly using Performance Monitor on the primary or secondary storage system to detect possible issues.

If the response time from the external storage system for the quorum disk is delayed by more than one second, GAD pairs might be suspended when certain failures occur. Specify "External storage Logical device Response time (ms)" as the monitoring object. If the response time exceeds 100 ms, review the configuration and consider the following actions:

  • Lower the I/O load, if the I/O load of volumes other than the quorum disk is high in the external storage system.
  • Remove the causes of the high cache load, if the cache load is high in the external storage system.
  • Lower the I/O load of the entire external storage system, when you perform maintenance of the external storage system. Alternatively, perform maintenance on the external storage system with settings that will minimize the impact to the I/O, referring to the documentation for the external storage system.

Cache pending rate of the CLPR to which the quorum disk is assigned

If the write-pending rate of the CLPR to which the quorum disk (external volume) on the primary or secondary storage systems is assigned is high, the I/O performance of the GAD pair volumes might decrease or the GAD pairs might be suspended by some failure.

To address this situation:

  1. Use Performance Monitor on the primary or secondary storage system to perform regular monitoring of the write-pending rate of the CLPR to which the quorum disks are assigned (specify Cache Write Pending Rate (%) on the monitoring objects). For details, see the Performance Guide for the storage system.
  2. If the write-pending rate exceeds 70%, review your configuration and consider the following actions:
    • Lower the I/O load in the storage system.
    • If the cache load is high:

      - Lower the I/O load.

      - Migrate the quorum disk to a CLPR for which the cache load is low.

      - Add cache memory to increase the cache capacity of the storage system.

    • The cache pending rate might exceed 70% temporarily due to failures on the primary and secondary storage systems. To prevent the I/O performance of the GAD pair volumes from decreasing or the GAD pairs from being suspended by failures related to this situation, the write-pending rate should be below 35% under normal conditions.

Planning GAD pairs and pair volumes

This section describes planning for differential data management, calculating the maximum number of GAD pairs, and the requirements for primary and secondary volumes related to the GAD configuration.

Differential data

Differential data is managed by the bitmap in units of tracks. A track that receives a write command while the pair is split is managed as differential data in the bitmap. When the pair is resynchronized, the differential data is copied to the S-VOL in units of tracks.

When a GAD pair contains a DP-VOL that is larger than 4,194,304 MB (8,589,934,592 blocks), the differential data is managed by the pool to which the GAD pair volume is related.

In this case, additional pool capacity (up to 4 pages, depending on the software configuration) is required for each increase of user data size by 4,123,168,604,160 bytes (~4 TB). For a GAD pair with a DP-VOL that is larger than 4,194,304 MB (8,589,934,592 blocks), data management might fail due to insufficient pool capacity. If this occurs, all of the P-VOL data (all tracks) is copied to the S-VOL when the pair is resynchronized.

For instructions on releasing the differential data (pages) managed in a pool, see Releasing the differential data managed in a pool.
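
As a rough illustration of the pool capacity described above, the following Python sketch estimates an upper bound on the additional pool pages needed to manage differential data for a DP-VOL larger than 4,194,304 MB. The function name and the example volume size are assumptions for illustration; the calculation simply applies the "up to 4 pages per 4,123,168,604,160 bytes of user data" figure from this section.

    import math

    # Upper-bound estimate of extra pool pages used for GAD differential data when a
    # DP-VOL is larger than 4,194,304 MB (8,589,934,592 blocks). Illustrative sketch only.
    THRESHOLD_BLOCKS = 8_589_934_592        # 4,194,304 MB expressed in 512-byte blocks
    BYTES_PER_UNIT = 4_123_168_604_160      # user data increment (~4 TB) per page group
    PAGES_PER_UNIT = 4                      # up to 4 pages per increment

    def extra_pool_pages(volume_blocks):
        if volume_blocks <= THRESHOLD_BLOCKS:
            return 0                        # smaller volumes use bitmap management instead
        volume_bytes = volume_blocks * 512
        return math.ceil(volume_bytes / BYTES_PER_UNIT) * PAGES_PER_UNIT

    print(extra_pool_pages(17_179_869_184)) # example: an 8,388,608 MB DP-VOL -> at most 12 pages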

Maximum number of GAD pairs

The maximum number of GAD pairs per storage system is specified in Requirements and restrictions. The maximum number of pairs per storage system is subject to restrictions, such as the number of cylinders used in volumes or the number of bitmap areas used in volumes.

When you create all pairs with DP-VOLs or external volumes, the maximum number of pairs is calculated by subtracting the number of quorum disks (at least one) from the maximum number of virtual volumes that can be defined in a storage system (total number of DP-VOLs plus external volumes: 63,231 for VSP G1x00 and VSP F1500).

In the calculation formulas below, "ceiling" is the function that rounds up the value inside the parentheses to the next integer. "Floor" is the function that rounds down the value inside the parentheses to the next integer.

Note: If the volume size is larger than 4,194,304 MB (8,589,934,592 blocks), bitmap area is not used. Therefore, the calculation for the bitmap areas is not necessary when creating GAD pairs with DP-VOLs that are larger than 4,194,304 MB (8,589,934,592 blocks).

Calculating the number of cylinders

To calculate the number of cylinders, start by calculating the number of logical blocks, which indicates volume capacity measured in blocks.

number-of-logical-blocks = volume-capacity-in-bytes / 512

Then use the following formula to calculate the number of cylinders:

number-of-cylinders = ceiling(ceiling(number-of-logical-blocks / 512) / 15)
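
The two formulas can be combined as in the following Python sketch; the 1 TiB example capacity is an assumption used only for illustration.

    import math

    # Number of cylinders for a volume, per the formulas above.
    def number_of_cylinders(volume_capacity_bytes):
        logical_blocks = volume_capacity_bytes // 512              # capacity in 512-byte blocks
        return math.ceil(math.ceil(logical_blocks / 512) / 15)

    print(number_of_cylinders(1_099_511_627_776))                  # example: 1 TiB volume -> 279,621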

Calculating the number of bitmap areas

Calculate the number of bitmap areas using the number of cylinders.

number-of-bitmap-areas = ceiling((number-of-cylinders × 15) / 122,752)

122,752 is the differential quantity per bitmap area. The unit is bits.

Note: You must calculate the number of required bitmap areas for each volume. If you calculate the total number of cylinders in multiple volumes and then use this number to calculate the number of required bitmap areas, the calculation results might be incorrect.

The following are examples of correct and incorrect calculations, assuming that one volume has 10,017 cylinders and another volume has 32,760 cylinders.

  • Correct:

    ceiling((10,017 × 15) / 122,752) = 2

    ceiling((32,760 × 15) / 122,752) = 5

    The calculation result is seven bitmap areas in total.

  • Incorrect:

    10,017 + 32,760 = 42,777 cylinders

    ceiling((42,777 × 15) / 122,752) = 6

    The calculation result is six bitmap areas in total.
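
The following Python sketch applies the bitmap-area formula per volume and reproduces the example above: calculating per volume gives 7 bitmap areas, while (incorrectly) summing the cylinders first gives 6.

    import math

    # Bitmap areas required for one volume: ceiling((cylinders x 15) / 122,752)
    def bitmap_areas(cylinders):
        return math.ceil((cylinders * 15) / 122_752)

    volumes = [10_017, 32_760]                          # cylinders per volume, from the example
    correct = sum(bitmap_areas(c) for c in volumes)     # 2 + 5 = 7 bitmap areas
    incorrect = bitmap_areas(sum(volumes))              # 6 bitmap areas (do not calculate this way)
    print(correct, incorrect)                           # 7 6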

Calculating the number of available bitmap areas

The total number of bitmap areas available in the storage system is:

  • VSP G200, VSP G350, VSP F350: 36,000
  • VSP G400, VSP G600, VSP G800, VSP F400, VSP F600, VSP F800, VSP G370, VSP G700, VSP G900, VSP F370, VSP F700, VSP F900, VSP G1000, VSP G1500, VSP F1500: 65,536

The number of bitmap areas is shared by TrueCopy, TrueCopy for Mainframe, Universal Replicator, Universal Replicator for Mainframe, and GAD. If you use these software products, subtract the number of bitmap areas required for these products from the total number of bitmap areas in the storage system, and then use the formula in the next section to calculate the maximum number of GAD pairs. For details about calculating the number of bitmap areas required for the other software products, see the appropriate user guide.

Calculating the maximum number of pairs

Use the following values to calculate the maximum number of pairs:

  • The number of bitmap areas required for pair creation.
  • The total number of bitmap areas in the storage system, or the number of available bitmap areas calculated in Calculating the number of available bitmap areas.

Calculate the maximum number of pairs using the following formula:

maximum-number-of-pairs-that-can-be-created = floor(total-number-of-bitmap-areas-in-storage-system / number-of-required-bitmap-areas)

Calculate the maximum number of pairs using the number of required bitmap areas that you already calculated and the number of bitmap areas in the storage system listed below. The number of bitmap areas in a storage system is determined by whether shared memory has been extended for GAD and by the storage system model. A worked calculation example follows the list.

Base (no extension): the number of bitmap areas varies depending on the model:

  • VSP G1x00, VSP F1500: 0
  • VSP G350, VSP F350: 3,712
  • VSP G370, VSP G700, VSP F370, VSP F700: 36,000
  • VSP G900, VSP F900: 65,536

With shared memory extension for GAD: the number of bitmap areas varies depending on the model:

  • VSP G350, VSP F350: 36,000
  • VSP G370, VSP G700, VSP G900, VSP F370, VSP F700, VSP F900, VSP G1x00, VSP F1500: 65,536
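
As a worked example of the formula above, the following Python sketch estimates the maximum number of pairs for a hypothetical configuration. The chosen bitmap-area total (65,536) and the per-volume cylinder count are illustrative assumptions taken from values in this section; subtract any bitmap areas already used by TrueCopy, Universal Replicator, or their mainframe versions before applying the formula.

    import math

    # maximum pairs = floor(available bitmap areas / bitmap areas required per volume)
    def max_gad_pairs(bitmap_areas_in_system, cylinders_per_volume, used_by_other_products=0):
        required = math.ceil((cylinders_per_volume * 15) / 122_752)    # per the bitmap-area formula
        available = bitmap_areas_in_system - used_by_other_products
        return available // required

    # Example: a storage system with 65,536 bitmap areas and volumes of 32,760 cylinders
    # (5 bitmap areas per volume) -> 13,107 pairs.
    print(max_gad_pairs(65_536, 32_760))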

S-VOL resource group and storage system: same serial number and model

You can create GAD pairs by specifying, as the S-VOL, a volume in a resource group that has the same serial number and model as the storage system.

In this case, for the P-VOL you must specify a volume in a resource group (virtual storage machine) whose serial number and model are the same as those of the secondary storage system.

When you create GAD pairs, the virtual LDEV ID of the P-VOL is copied to the virtual LDEV ID of the S-VOL. In this case, the copied virtual LDEV ID of the P-VOL is equal to the original virtual LDEV ID of the S-VOL. A volume in a resource group that has the same serial number and model as the storage system and whose original LDEV ID is equal to its virtual LDEV ID is treated as a normal volume, not as a volume virtualized by the global storage virtualization function.


If the virtual information copied from the P-VOL to the S-VOL does not meet the requirements for a normal volume, you cannot create GAD pairs; for example, when the copied virtual emulation type of the P-VOL is not the same as the original emulation type of the S-VOL. The virtual emulation type includes the virtual CVS attribute (-CVS). The storage system does not support LUSE, so LUSE configuration (*n) volumes are not supported as P-VOLs.