
Planning for TrueCopy

You must plan and prepare primary and secondary systems, pair volumes, data paths, and other elements to use TrueCopy.

Storage system preparation

The following preparations are required for the storage systems in a TrueCopy pair relationship.

  • Device Manager - Storage Navigator must be LAN-attached to the primary system and the secondary system. For details, see the System Administrator Guide for your storage system.
  • The primary and secondary systems must be set up for TrueCopy operations. For details, see Cache and shared memory requirements. Make sure to consider the amount of Cache Residency Manager data that will be stored in cache when determining the amount of cache for TrueCopy operations.
  • Make sure that the storage system is configured to report sense information to the host. The host must be connected to both the primary and secondary systems. If a dedicated host cannot be connected to the secondary system, connect the secondary system to a host at the primary site.
  • Install the data path between the primary and secondary systems. Distribute data paths between different storage clusters and extenders or switches to provide maximum flexibility and availability. The remote paths between the primary and secondary systems must be different than the remote paths between the host and secondary system. For details, see Data path requirements and configurations.

Cache and shared memory requirements

Cache must be operable for the primary and secondary systems. If not, pairs cannot be created. The secondary system cache must be configured to adequately support TrueCopy remote copy workloads and any local workload activity.

Note the following:

  • VSP E series: Only Basic shared memory can be used with TrueCopy. Adding shared memory expands the capacity of the pairs to be created.

  • VSP E series: Cache and shared memory that is no longer necessary can be removed.
Note

Neither cache nor shared memory can be added to or removed from the storage system when pair status is COPY. When either of these tasks is to be performed, first split any pairs in COPY status, and then resynchronize the pairs when the cache or shared memory operation is completed.

Adding and removing cache memory

Use the following workflow to add or remove cache memory in a storage system in which TC pairs already exist:

  1. Identify the status of the TC volumes in the storage system.
  2. If a TC volume is in the COPY status, wait until the status changes to PAIR, or split the TC pair.

    Do not add or remove cache memory when any volumes are in the COPY status.

  3. When the status of all volumes has been confirmed, contact customer support to arrange for your service representative to add or remove the cache memory.
  4. After the addition or removal of cache memory is complete, resynchronize the pairs that you split in step 2.
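The status check in steps 1 and 2 can be sketched as a small Python helper. The pair names and status strings below are illustrative only; in a real environment you would read pair status with a CCI command such as pairdisplay, split with pairsplit, and later resynchronize with pairresync.

```python
# Sketch of the decision logic in steps 1-2 of the cache-maintenance
# workflow above. Pair names and the status dictionary are hypothetical;
# real status comes from the storage system (for example, via CCI).

def pairs_to_split(pair_statuses):
    """Return the pairs that must be split before cache maintenance.

    Pairs in COPY status block the cache operation; pairs already in
    PAIR (or split) status need no action.
    """
    return [name for name, status in pair_statuses.items() if status == "COPY"]

statuses = {"TC_pair_01": "COPY", "TC_pair_02": "PAIR", "TC_pair_03": "COPY"}
split_before_maintenance = pairs_to_split(statuses)
# These pairs are split now and resynchronized after the cache work (step 4).
print(split_before_maintenance)  # ['TC_pair_01', 'TC_pair_03']
```

Keeping the list of pairs you split makes step 4 (resynchronizing exactly those pairs) mechanical.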

Adding shared memory

Use the following workflow to add shared memory to a storage system in which TC pairs already exist:

  1. Identify the status of the TC volumes in the storage system.
  2. If a TC volume is in the COPY status, wait until the status changes to PAIR, or split the TC pair.

    Do not add shared memory when any volumes are in the COPY status.

  3. When the status of all volumes has been confirmed, contact customer support to arrange for your service representative to add the shared memory.
  4. After the addition of shared memory is complete, resynchronize the pairs that you split in step 2.

Removing shared memory

You can remove shared memory if it is redundant.

Procedure

  1. Identify the status of all volumes in the storage system.

  2. If a volume is used by a TC pair, delete the TC pair.

  3. Contact customer support to arrange for your service representative to remove the shared memory.

Requirements for pairing VSP 5000 series with other storage systems

You can pair VSP 5000 series volumes with volumes in other storage systems. For details about available combinations of storage systems including the supported microcode versions, see System requirements and specifications.

Note

When specifying the VSP 5000 series serial number using CCI, add a "5" at the beginning of the serial number. For example, if the serial number is 12345, enter 512345.

VSP 5100 or VSP 5500 can create a TrueCopy pair with VSP E series, VSP G/F350, G/F370, G/F700, G/F900, VSP G1x00, VSP F1500, or VSP 5000 series. VSP 5200 or VSP 5600 can create a TrueCopy pair with VSP G1x00, VSP F1500, VSP 5000 series, or VSP E series.

When connecting to VSP E series, VSP G/F350, G/F370, G/F700, G/F900, the CTG ID for the P-VOL and the S-VOL must be the same. The range of values for the ID is as follows:

  • When connecting to VSP G350, VSP G370, VSP G700, VSP F350, VSP F370, VSP F700: 0 to 127
  • When connecting to VSP E990, VSP G900, or VSP F900: 0 to 255
  • When connecting to VSP E590 or VSP E790: 0 to 127

Remote replication options

Synchronous copy operations affect the I/O performance on the host and on the primary and secondary systems. TrueCopy provides options for monitoring and controlling the impact of copy operations and for maximizing the efficiency and speed of copy operations to achieve the best level of backup data integrity. You can set the remote replication options described in the following sections.

To optimize performance you also need to determine the proper bandwidth for your workload environment. For details, see Analyzing workload and planning data paths.

Round trip time option

When you set up the TrueCopy association between the primary and secondary systems, you specify a time limit in milliseconds (ms) for data to travel from the P-VOL to the S-VOL, which is called the round trip (RT) time. RT time is used to control the initial copy pace while update copy operations are in progress.

Note
  • If the difference between the RT time you set and the remote I/O response time is significant, the storage system slows down or can even interrupt the initial copy operation.

    An example of a significant difference is 1 ms RT time and 500 ms remote I/O response time.

  • If the difference between the RT time and the remote I/O response time is insignificant, initial copying is allowed to continue at the specified pace.

    An example of an insignificant difference is 1 ms RT time and 5 ms remote I/O response time.

  • You can adjust the RT time when the distance between the primary and secondary systems is long, or when there is a delay caused by the line equipment. There can be a delay in completing the initial copy operation if it is performed with the default RT time instead of the appropriate value.
  • The default RT time is 1 ms.

RT time can be set between 1 ms and 500 ms.

The following equation lets you set the appropriate RT time, in ms:

RT-time = RT-time-between-the-primary-and-secondary-storage-systems × number-of-responses + initial-copy-response-time (ms)

If the physical path between the primary and secondary storage systems uses Fibre Channel technology, the number of responses depends on the host mode option (HMO) 51 setting.

Host mode option 51    Number of responses
OFF                    2
ON                     1

When HMO 51 is OFF (default), you must double the RT time because each data transfer between the primary and secondary storage systems involves two response sequences for each command issued.

When HMO 51 is ON, you do not need to double the value of the RT time, because the sequence is one response for each command issued.

If the physical path between the primary and secondary storage systems uses iSCSI, the number of responses is proportional to the initial copy pace because the transferred data is divided into 64-KB blocks.

Initial copy pace    Number of responses
1                    6
2                    10
3                    14
4                    18
  • Use the ping command to measure the round trip time between the primary and secondary systems, or contact customer support. If you do not use channel extenders between the primary and secondary systems, specify 1 for the round trip time between the systems.
  • The initial-copy-response-time is the response time required for multiple initial copy operations.

Use the following equation to determine the initial copy response time from the initial copy pace, the maximum number of initial copy activities, and the bandwidth of the channel extender communication lines between the primary and secondary systems.

Initial copy response time equation:

initial-copy-response-time (ms) = (1 [MB] ÷ data-path-speed [MB/ms]) × initial-copy-pace × (maximum-initial-copy-activities ÷ number-of-data-paths-between-primary-and-secondary-systems)

Notes:

  1. When you connect the primary system and secondary system without channel extenders, set the data path speed between the primary and secondary systems to one of the following values according to link speed:
    • 4 Gbps: 0.34 MB/ms
    • 8 Gbps: 0.68 MB/ms
    • 10 Gbps: 0.85 MB/ms (VSP 5000 series)
    • 16 Gbps: 1.36 MB/ms
    • 32 Gbps: 2.72 MB/ms
  2. For details about initial-copy-pace, see the next table.
  3. For maximum-initial-copy-activities, use the value set up per storage system. The default is 64.
  4. If the maximum initial copy activities or the number of data paths between the primary and secondary systems is larger than 16, use 16 in the equation.

The following table shows the initial copy pace used in the initial copy response time equation. "Pace" in the column headings is the initial copy pace specified at the time of pair creation; "User-specified" means the specified value is used as is.

Interface                             Initial copy only                Initial copy and update copy at the same time
                                      Pace 1 to 4     Pace 5 to 15     Pace 1 to 2     Pace 3 to 15
Device Manager - Storage Navigator    User-specified  4                User-specified  2
CCI                                   User-specified  4                User-specified  2

The following table shows examples of RT time settings for multiple initial copy operations.

Round trip time         Data path speed          Number of     Initial      Maximum initial    Round trip time
between systems (ms)    between systems (MB/ms)  data paths    copy pace    copy activities    specified (ms)
0                       0.1                      4             4            64                 160
30                      0.1                      4             4            64                 220
100                     0.1                      4             4            64                 360
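As a cross-check, the example rows above can be reproduced in Python. The initial copy response time formula used here is inferred from the equation, its notes, and the example values in this section (it reproduces all three rows exactly); treat it as an illustrative sketch rather than an authoritative specification.

```python
# Reproduce the RT time examples above.
# RT-time = RT-between-systems x number-of-responses + initial-copy-response-time
# Responses per command: Fibre Channel depends on HMO 51; iSCSI depends on
# the initial copy pace (values taken from the tables in this section).

FC_RESPONSES = {"OFF": 2, "ON": 1}             # Fibre Channel, by HMO 51 setting
ISCSI_RESPONSES = {1: 6, 2: 10, 3: 14, 4: 18}  # iSCSI, by initial copy pace

def initial_copy_response_time(path_speed_mb_per_ms, pace,
                               max_copy_activities, num_paths):
    # Per note 4, cap max initial copy activities and data paths at 16.
    capped_activities = min(max_copy_activities, 16)
    capped_paths = min(num_paths, 16)
    return (1 / path_speed_mb_per_ms) * pace * (capped_activities / capped_paths)

def rt_time(rt_between_systems, responses, icrt):
    return rt_between_systems * responses + icrt

icrt = initial_copy_response_time(0.1, pace=4, max_copy_activities=64, num_paths=4)
print(icrt)                                     # 160.0
print(rt_time(0, FC_RESPONSES["OFF"], icrt))    # 160.0
print(rt_time(30, FC_RESPONSES["OFF"], icrt))   # 220.0
print(rt_time(100, FC_RESPONSES["OFF"], icrt))  # 360.0
```

The three printed RT times match the example table, which is how the formula was validated.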

Minimum number of remote paths option

When you set up the TC association between the primary and secondary systems, you specify the minimum number of remote paths to the secondary system using the Minimum Number of Paths option (range = 1-8, default = 1). If the number of remote paths in Normal status drops below the specified minimum, the primary storage system splits the pairs to prevent remote copy operations from impacting host performance in the primary storage system.

  • To maintain host performance in the primary storage system, set the minimum number of remote paths to at least 2 to ensure that remote copy operations are performed only when multiple paths are available.
  • To continue remote copy operations even when there is only one remote path in Normal status, set the minimum number of remote paths to 1. Use this setting only when keeping pairs synchronized is more important than maintaining high performance in the primary storage system.
Note

You can use the fence level option to keep a P-VOL and S-VOL synchronized even if the pair is split because the number of remote paths drops below the minimum setting. The fence level setting, which you specify when you create a pair, determines whether the P-VOL continues to accept write I/Os after the pair is split due to an error. For details, see Allowing I/O to the P-VOL after a split: Fence Level options.

Maximum initial copy activities option

TC initial copy activities can impact the performance of the primary site, depending on the amount of I/O activity and the number of pairs being created at the same time. The maximum initial copy activities option allows you to specify the maximum number of concurrent initial copy operations that the storage system can perform. For example, when the maximum initial copy activities is set to 64 and you add 65 TC pairs at the same time, the primary system starts the first 64 pairs and will not start the 65th pair until one of the first 64 pairs is synchronized.

You can also enable or disable the CU option for the maximum initial copy activities setting. If the CU option is enabled, you can specify the maximum concurrent initial copy operations for each CU (range = 1-16, default = 4), and if it is disabled, you cannot specify the setting separately for each CU. If the CU option is enabled and you set a value larger than the system setting for maximum initial copy activities for a CU, the system setting for maximum initial copy activities is observed.

The default maximum initial copy activities setting is 64 volumes. You can set a number from 1 to 512. If the maximum initial copy activities setting is too large, pending processes in the secondary site can increase, and this can impact the remote I/O response time to the update I/Os. You can change this setting using the Edit Remote Replica Options window. For instructions, see Setting the remote replication options.
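The throttling behavior described above can be illustrated with a toy model (pair numbering and the queue discipline shown are hypothetical; the storage system itself decides actual scheduling):

```python
from collections import deque

# Toy model of the maximum initial copy activities throttle described above:
# with the limit at 64, adding 65 pairs starts 64 initial copies immediately;
# the 65th waits until one of the running copies synchronizes.

def start_order(num_pairs, max_initial_copy_activities):
    """Return (started_immediately, queued) pair indexes."""
    pairs = deque(range(1, num_pairs + 1))
    running = [pairs.popleft()
               for _ in range(min(max_initial_copy_activities, num_pairs))]
    return running, list(pairs)

running, waiting = start_order(num_pairs=65, max_initial_copy_activities=64)
print(len(running))  # 64 initial copies start immediately
print(waiting)       # [65] -- starts only after one running copy synchronizes
```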

Blocked path monitoring option

The blocked path monitoring setting allows you to specify the time (in seconds) for the system to monitor blocked paths. The range is from 2 to 45 seconds. The default is 40 seconds.

If all paths are being monitored because of a path error, an I/O timeout might occur on the host. Therefore, the time you specify must be less than the host's I/O timeout setting.

If iSCSI is used in a remote path, the blocked path monitoring option must be set to at least 40 seconds (default). If blocked path monitoring is less than 40 seconds, the path might be blocked due to a delay in the network such as many switches in a spanning tree protocol (STP) network or a long distance connection.

Blocked path SIM monitoring option

The blocked path SIM monitoring setting allows you to specify the time (in seconds) for the system to monitor SIMs reported for blocked paths. The range is from 2 to 100 seconds. The default is 70 seconds.

The blocked path SIM monitoring setting must be larger than the blocked path monitoring setting.
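The constraints from the blocked path monitoring and blocked path SIM monitoring options can be collected into a small planning sketch (a convenience check only, not a vendor tool):

```python
# Validate blocked path monitoring settings against the rules above:
# - blocked path monitoring: 2-45 seconds (default 40); at least 40 when
#   iSCSI is used in a remote path
# - blocked path SIM monitoring: 2-100 seconds (default 70); must be
#   larger than the blocked path monitoring value

def validate_monitoring(blocked_path=40, blocked_path_sim=70, uses_iscsi=False):
    errors = []
    if not 2 <= blocked_path <= 45:
        errors.append("blocked path monitoring must be 2-45 seconds")
    if uses_iscsi and blocked_path < 40:
        errors.append("with iSCSI remote paths, use at least 40 seconds")
    if not 2 <= blocked_path_sim <= 100:
        errors.append("blocked path SIM monitoring must be 2-100 seconds")
    if blocked_path_sim <= blocked_path:
        errors.append("SIM monitoring must be larger than blocked path monitoring")
    return errors

print(validate_monitoring())  # [] -- the defaults satisfy every rule
print(validate_monitoring(blocked_path=30, uses_iscsi=True))
# ['with iSCSI remote paths, use at least 40 seconds']
```

Remember that the blocked path monitoring value must also stay below the host's I/O timeout, which only you can know for your environment.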

Services SIM of remote copy option

The services SIM of remote copy option allows you to specify whether services SIMs are reported to the host. During TC operations, the primary and secondary storage systems generate a service SIM each time the pair status of the P‑VOL or S‑VOL changes for any reason, including normal status transitions (for example, when a newly created pair becomes synchronized). SIMs generated by the primary storage system include the P‑VOL device ID (byte 13), and SIMs generated by the secondary storage system include the S‑VOL device ID (byte 13).

If you enable the services SIM of remote copy option for the storage system, all CUs will report services SIMs to the host. If desired, you can enable this option at the CU level to configure specific CUs to report services SIMs to the host.

Analyzing workload and planning data paths

You can optimize copy operations and system performance by carefully planning bandwidth, number of data paths, number of host interface paths, and number of ports. Check with customer support for more information.

  • Analyze write-workload. You need to collect workload data (MB/s and IOPS) and analyze your workload to determine the following parameters:
    • Amount of bandwidth
    • Number of data paths
    • Number of host interface paths
    • Number of ports used for TrueCopy operations on the primary and secondary systems

    Thorough analysis and careful planning of these key parameters can enable your system to operate free of bottlenecks under all workload conditions.

  • If you are setting up TrueCopy for disaster recovery, make sure that secondary systems are attached to a host server to enable both the reporting of sense information and the transfer of host failover information. If the secondary site is unattended by a host, you must attach the secondary storage systems to a host server at the primary site so that the system administrator can monitor conditions at the secondary site.

Data path requirements and configurations

A data path must be designed to adequately manage all possible amounts of data that could be generated by the host and sent to the P-VOL and S-VOL. This topic provides requirements and planning considerations for the key elements of the data path.

Note
  • Create at least two independent data paths (one per cluster) between the primary and secondary systems for hardware redundancy for this critical element.
  • When creating more than 4,000 pairs, distribute the load across the physical paths so that a maximum of 4,000 pairs use any one physical path.
  • In a disaster recovery scenario, the same write-workload will be used in the reverse direction. Therefore, when planning TrueCopy for disaster recovery, configure the same number of secondary-to-primary data paths as primary-to-secondary copy paths to maintain normal operations during disaster recovery. Reverse direction paths must be set up independently of the primary-to-secondary paths.
  • When you set up secondary-to-primary data paths, specify the same combination of CUs or CU Free and the same path group ID as specified for the primary-to-secondary paths.

Bandwidth requirements

Sufficient bandwidth must be present to handle data transfer of all workload levels. The amount of bandwidth required for your TrueCopy system is based on the amount of I/O sent from the host to the primary system. You determine required bandwidth by measuring write-workload. Workload data is collected using performance monitoring software. Consult customer support for more information.

Fibre Channel requirements

The primary and secondary systems must be connected using multimode or single-mode optical fibre cables. As shown in the following table, the cables and data path relay equipment required depend on the distance between the P-VOL and S-VOL storage systems.

Distance                                     Fibre cable type                                               Data path relay equipment
0 km to 1.5 km (4,920 feet)                  Multimode shortwave Fibre Channel interface cables             Switch required for 0.5 km to 1.5 km
1.5 km to 10 km (4,920 feet to 6.2 miles)    Single-mode longwave optical fibre cables (Note 2)             Not required
10 km to 30 km (6.2 miles to 18.6 miles)     Single-mode longwave Fibre Channel interface cables (Note 2)   Switch required
Greater than 30 km (18.6 miles) (Note 1)     Communications lines                                           Approved third-party channel extender products (Note 3)

Notes:

  1. TrueCopy operations typically do not exceed 30 km.
  2. Longwave cannot be used for FCoE.
  3. For more information about approved channel extenders, contact Hitachi Vantara.

With Fibre Channel connections using switches, no special settings are required for the physical storage system.

Direct connections up to 10 km with single-mode longwave Fibre Channel interface cables are supported. Link speed determines the maximum distance you can transfer data and still achieve good performance. The following table shows maximum distances at which performance is maintained per link speed, over single-mode longwave Fibre Channel.

Link speed    Distance at which maximum performance is maintained
4 Gbps        3 km
8 Gbps        2 km
16 Gbps       1 km
32 Gbps       0.6 km

Customer support can provide the latest information about the availability of serial-channel TrueCopy connections.

Supported data path configurations for Fibre Channel

Three Fibre Channel configurations are supported for TrueCopy: direct connection, switch connection, and extender connection.

Hitachi Device Manager - Storage Navigator or CCI is used to set port topology.

For direct and switch connections, host I/O response time on long distance connections (longwave, up to 10 km for direct connection and up to 100 km for switch connection) can be improved by using host mode option 51, the Round Trip Set Up option, which improves the I/O response time between storage systems. Depending on the storage systems you are using, the option must be set either on both the primary and secondary systems or only on the secondary storage system.

  • VSP G1x00, and VSP F1500: Has to be set on both primary and secondary systems.
  • VSP 5000 series, VSP E series, VSP G/F350, G/F370, G/F700, G/F900, VSP G200, G400, G600, G800, VSP F400, F600, F800: Can be set on the secondary system only, but it is advantageous to set HMO 51 on both primary and secondary systems in situations in which your systems reverse directions (the primary becomes the secondary), for example.

For extender connections, a Hitachi Vantara-approved channel extender is required.

Direct connection

The following figure shows a direct connection, in which two devices are connected directly together.


As shown in the following table, Fab settings, topology settings, and available link speeds depend on the packages and protocols used for the storage system connections, as well as whether host mode option 51 is used. Host mode option 51 (Round Trip Set Up) improves host I/O response time for long distance (up to 10 km) direct connections.

Note

If you connect storage systems using iSCSI, host mode option settings become invalid.

Package name    Protocol    Host mode option 51    Fab    Bidirectional port topology    Available link speed
CHB(FC32G)      32GbpsFC    OFF                    OFF    FC-AL                          4 Gbps, 8 Gbps
CHB(FC32G)      32GbpsFC    ON                     OFF    FC-AL                          4 Gbps, 8 Gbps
CHB(FC32G)      32GbpsFC    OFF                    OFF    Point-to-Point                 16 Gbps, 32 Gbps
CHB(FC32G)      32GbpsFC    ON                     OFF    Point-to-Point                 16 Gbps, 32 Gbps

Switch connection

The following figure shows a switch connection.

Some switch vendors require F port connectivity (for example, McData ED5000).

As shown in the following table, Fab settings, topology settings, and available link speed depend on the settings of the packages and protocols used for the storage system connections, as well as whether the host mode option 51 is used.

For details about HMOs, see the Provisioning Guide for the storage system.

Package name                                                     Protocol    HMO 51 setting    Fabric setting    Topology: Initiator and RCU Target    Link speed that can be specified
CHB(FC32G)                                                       32GbpsFC    OFF               ON                Point-to-Point                        4, 8, 16, 32 Gbps
CHB(FC32G) (VSP G/F350, G/F370, G/F700, G/F900, VSP E series)    32GbpsFC    ON                ON                Point-to-Point                        4, 8, 16, 32 Gbps
Caution

HMO 51 should be used only with configurations spanning more than 1 km. If HMO 51 is set with a distance of less than 1 km, a data transfer error might occur on the secondary system.

Extender connection

The following figure shows an extender connection, in which channel extenders and switches are used to connect the devices across large distances. Host Mode Option 51 (Round Trip Set Up) improves host I/O response time for long distance (100 km) switch connections. Make sure that the extender supports remote I/O. For more information contact customer support.


Set the Fabric to ON for the bidirectional port, and then set the topology to Point-to-Point.

Caution

Data traffic might concentrate on one switch when you perform the following actions:
  • Use a switch to connect the primary system and the secondary systems with an extender
  • Gather several remote copy paths in one location

If you are using a Hitachi switch to make the connection, contact customer support.

iSCSI requirements and cautions

For the iSCSI interface, direct, switch, and channel extender connections are supported. The following table lists the requirements and cautions for systems using iSCSI data paths. For details about the iSCSI interface, see the Provisioning Guide for your storage system.

Item

Requirement

iSCSI front-end director

The 8IS10 (10 Gbps) front-end director (FED) is required for remote copy connections.

Remote paths

Add only remote paths of the same protocol to a single path group. Make sure that Fibre Channel and iSCSI remote paths are not mixed in a path group.

If iSCSI is used for a remote path, the blocked path monitoring remote replica option must be set to at least 40 seconds (default). If blocked path monitoring is less than 40 seconds, the path might be blocked due to a delay in the network such as many switches in a spanning tree protocol (STP) network or a long distance connection. For instructions, see Setting the remote replication options.

Physical paths

  • Before replacing Fibre Channel or iSCSI physical paths, remove the TC pair and the remote path that are using the physical path to be replaced.
  • It is recommended that you use the same protocol in the physical paths between the host and the storage system and between storage systems.

    As in the example below, if protocols are mixed, set the same or a greater command timeout value between the host and a storage system than between storage systems.

    Example:

    - Physical path between the host and a storage system: Fibre Channel

    - Physical path between storage systems: iSCSI

Ports

  • When the parameter settings of an iSCSI port are changed, the iSCSI connection is temporarily disconnected and then reconnected. To minimize the impact on the system, change the parameter settings when the I/O load is low.
  • If you change the settings of an iSCSI port connected to the host, a log might be output on the host, but this does not indicate a problem. In a system that monitors system logs, an alert might be output. If an alert is output, change the iSCSI port settings, and then check if the host is reconnected.
  • When you use an iSCSI interface between storage systems, disable Delayed ACK (Edit Ports window in HDvM - SN or raidcom modify port -delayed_ack_mode disable). By default, Delayed ACK is enabled.

    If Delayed ACK is enabled, it might take time for the host to recognize the volume used by a TC pair. For example, when the number of volumes is 2,048, it takes up to 8 minutes.

  • Do not change the default setting (enabled) of Selective ACK for ports.
  • In an environment in which a delay occurs in a line between storage systems, such as long-distance connections, you must set an optimal window size of iSCSI ports in storage systems at the primary and secondary sites after verifying various sizes. The maximum value you can set is 1,024 KB. The default window size is 64 KB, so you must change this setting.
  • iSCSI ports do not support fragment processing (dividing a packet). When the maximum transmission unit (MTU) of a switch is smaller than that of an iSCSI port, packets might be lost, and data cannot be transferred correctly. The MTU value for the switch must be the same as or greater than the MTU value for the iSCSI port. For details of the MTU setting and value, see the user documentation for the switch.

    The MTU value for the iSCSI port must be greater than 1500. In a WAN environment in which the MTU value is 1500 or smaller, fragmented data cannot be transferred. In this case, lower the maximum segment size (MSS) of the WAN router according to the WAN environment, and then connect to an iSCSI port. Alternatively, use a WAN environment in which the MTU value is greater than 1500.

  • When using a remote path on the iSCSI port for which virtual port mode is enabled, use the information about the iSCSI port that has virtual port ID (0). You cannot use virtual port IDs other than 0 as a virtual port.
  • On VSP Gx00 models, VSP Fx00 models, and VSP E series, a port can be used for connections to the host (target attribute) and to a storage system (initiator attribute). However, to minimize the impact on the system if a failure occurs either on the host or in a storage system, you should connect the port for the host and the port for the storage system to separate CHBs.

Network setting

  • Disable the spanning tree setting for a port on a switch connected to an iSCSI port. If the spanning tree function is enabled on a switch, packets might be blocked for about 30 seconds when a link goes up or down, so that they do not loop through the network. If you need to enable the spanning tree setting, enable the Port Fast function of the switch.
  • In a network path between storage systems, if you use a line that has a slower transfer speed than the iSCSI port, packets are lost, and the line quality is degraded. Configure the system so that the transfer speed for the iSCSI ports and the lines is the same.
  • Delays in lines between storage systems vary depending on system environments. Validate the system to check the optimal window size of the iSCSI ports in advance. If the impact of the line delay is major, consider using devices for optimizing or accelerating the WAN.
  • When iSCSI is used, packets are sent or received using TCP/IP. Because of this, the amount of packets might exceed the capacity of a communication line, or packets might be resent. As a result, performance might be greatly affected. Use Fibre Channel data paths for critical systems that require high performance.

Fibre Channel used as remote paths

Before configuring a system that uses Fibre Channel remote paths, you need to consider the following restrictions.

For details about Fibre Channel, see the Provisioning Guide for your system.

  • When you use Fibre Channel as a remote path, if you specify Auto for Port Speed, specify 10 seconds or more for Blocked Path Monitoring. If you want to specify 9 seconds or less, do not set Auto for Port Speed.
  • If the time specified for Blocked Path Monitoring is not long enough, the network speed might be slowed down or the period for speed negotiation might be exceeded. As a result, paths might be blocked.

Ports

Data is transferred along the data path from the bidirectional ports on the primary storage system to the bidirectional ports on the secondary storage systems. The amount of data each of these ports can transfer is limited.

Therefore, you must know the amount of data that will be transferred (that is, the write-workload) during peak periods. You can then ensure that bandwidth meets data transfer requirements, that the primary storage system has a sufficient number of bidirectional ports, and that the secondary storage system has a sufficient number of bidirectional ports to handle peak workload levels.

Port requirements (VSP 5000 series)

Data is sent from the primary storage system port through the data path to the port on the secondary storage system, and can also be sent in the reverse direction.

  • One secondary system port can be connected to a maximum of 16 ports on a primary system.
  • The number of remote paths that can be specified does not depend on the number of ports configured for TrueCopy. You can specify the number of remote paths for each remote connection.
  • Do not add or delete a remote connection or add a remote path at the same time that the SCSI path definition function is in use.

Port attributes (VSP 5000 series)

Plan and define the following Fibre Channel port attributes for TrueCopy:

  • Target port: Connects the storage system and a host. When the host issues a write request, the request is sent to a volume on the system through a target port on the storage system.
  • Bidirectional port: Connects the storage system to remote copy and external storage systems, operating as an initiator port or a target port. A bidirectional port can have the following three roles. Host server connections can be shared through a port set as Bidirectional; however, this is not recommended because it can degrade performance.
    • Initiator ports, which send data. One initiator port can be connected to a maximum of 64 RCU target ports. Configure initiator ports on both the primary and secondary systems for TrueCopy disaster recovery operations.
    • RCU target ports, which receive data. Configure RCU target ports on both the primary and secondary systems for TrueCopy disaster recovery operations.
    • External port: Required for Universal Volume Manager copy operations. This port is not used for TrueCopy copy operations.

Pair and pair volumes planning

Before you create pairs and pair volumes, you should understand requirements, options, and settings that you need.

Pair volume requirements and recommendations

Use the following requirements and recommendations to prepare TrueCopy volumes:

  • A volume can be assigned to only one pair.
  • Logical units on the primary and secondary storage systems must be defined and formatted prior to pairing.
  • The P-VOL and S-VOL must have the same capacity.
  • TrueCopy requires a one-to-one relationship between the P-VOL and S-VOL. The P-VOL cannot be copied to more than one S-VOL, and an S-VOL cannot have more than one P-VOL.
  • Logical Unit (LU) types
    • (VSP 5000 series) TrueCopy supports the basic LU types that can be configured on VSP G1x00 and VSP F1500 (for example, OPEN-3, OPEN-E, OPEN-8, OPEN-9, OPEN-L, and OPEN-V).
    • Pair volumes must consist of LUs of the same type and capacity (for example, OPEN-3 to OPEN-3).
    • (VSP 5000 series) Multi-platform volumes (for example, 3390-3A/B/C) cannot be assigned to pairs. Contact customer support for the latest information about supported devices for your platform.
  • TrueCopy operates on volumes rather than on files. Multi-volume files require special attention. For complete duplication and recovery of a multi-volume file (for example, a large database file that spans several volumes), make sure that all volumes of the file are copied to TrueCopy S-VOLs.
  • TrueCopy pair volumes can be shared with non-TrueCopy Hitachi software products. For details, see Sharing TrueCopy volumes.
  • TrueCopy supports Virtual LVI/LUN. This allows you to configure LUs that are smaller than fixed-size LUs. When custom-size LUs are assigned to a TrueCopy pair, the S-VOL must have the same capacity as the P-VOL.
  • Before creating multiple pairs during the Create Pairs operation, make sure to set up S-VOL LUNs to allow the system to correctly match them to P-VOLs.

    In HDvM - SN, even though you select multiple volumes as P-VOLs, you specify just one S-VOL. The system automatically assigns subsequent secondary system LUs as S-VOLs based on the option you specify for Selection Type. These are the options:

    • Interval: The interval that you specify is skipped between LU numbers on the secondary system.

      For example, suppose you specify LU 01 as the initial (base) S-VOL and specify 3 for Interval. Secondary system LU 04 is then assigned to the next P-VOL, LU 07 to the subsequent P-VOL, and so on. To use Interval, set up the secondary system LU numbers so that they follow the interval.

    • Relative Primary Volume: The difference between the LUN numbers of two successive P-VOLs is calculated, and S-VOLs are assigned the closest corresponding LUN numbers.

      For example, if the LUN numbers of three P-VOLs are 1, 5, and 6, and you set the LUN number of the initial S-VOL (Base Secondary Volume) to 2, the LUN numbers of the three S-VOLs are set to 2, 6, and 7, respectively.

  • Because the contents of the P-VOL and S-VOL are identical, the S-VOL can be considered a duplicate of the P-VOL. Because the host operating system does not allow duplicate volumes, the host system administrator must take precautions to prevent system problems related to duplicate volumes. You must define the S-VOLs so they do not auto mount or come online to the same host at the same time as the P-VOLs.

    TrueCopy does not allow the S-VOL to be online (except when the pair is split). If the S-VOL is online, the TrueCopy paircreate operation will fail.

    Caution: When P-VOLs and S-VOLs are connected to the same hosts, define the S-VOLs to remain offline even after the hosts are restarted. If a pair is released and a host is subsequently restarted, the S-VOL should remain offline to prevent errors due to duplicate volumes.
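
The Selection Type options described earlier (Interval and Relative Primary Volume) can be sketched in Python. This is an illustrative model only; the function names and standalone logic are not part of the product:

```python
from typing import List

def assign_svols_interval(base_lun: int, interval: int, count: int) -> List[int]:
    """Interval: skip the specified interval between S-VOL LU numbers,
    starting from the base S-VOL."""
    return [base_lun + i * interval for i in range(count)]

def assign_svols_relative(pvol_luns: List[int], base_lun: int) -> List[int]:
    """Relative Primary Volume: preserve the LUN-number differences
    between successive P-VOLs when assigning S-VOLs."""
    svols = [base_lun]
    for prev, cur in zip(pvol_luns, pvol_luns[1:]):
        svols.append(svols[-1] + (cur - prev))
    return svols

# Examples from the text:
print(assign_svols_interval(1, 3, 3))       # [1, 4, 7]
print(assign_svols_relative([1, 5, 6], 2))  # [2, 6, 7]
```

Both examples reproduce the assignments described in the text (base LU 01 with interval 3, and P-VOLs 1, 5, 6 with base S-VOL 2).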

Allowing I/O to the S-VOL

By specifying the Read option for the S-VOL, you can read the S-VOL from the host while the pair is split without deleting the pair from the secondary storage system. If you split the pair by specifying the Secondary Volume Write option in HDvM - SN, or by using the pairsplit -rw command in CCI, you can write to the S-VOL. In this case, S-VOL and P-VOL track maps keep track of differential data and are used to re-synchronize the pair. Enabling Secondary Volume Write is done during the pairsplit operation.

  • You can write to the S-VOL when the split operation is performed from the primary system.
  • When you resync a pair with the Secondary Volume Write option enabled, the secondary system sends S-VOL differential data to the primary system. This data is merged with the P-VOL differential data, and out-of-sync tracks are determined and updated on both systems, ensuring proper resynchronization.
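
The resynchronization step above (merging S-VOL differential data into the P-VOL differential data to find out-of-sync tracks) can be sketched as follows. The integer-bitmap representation is a simplification for illustration, not the product's internal format:

```python
def merge_differential(pvol_bitmap: int, svol_bitmap: int) -> int:
    """Merge P-VOL and S-VOL differential bitmaps with a bitwise OR.
    A set bit marks a track written on either side while the pair
    was split; every such track must be copied during resync."""
    return pvol_bitmap | svol_bitmap

def out_of_sync_tracks(merged: int) -> list:
    """Return the track numbers flagged as out of sync."""
    return [i for i in range(merged.bit_length()) if merged >> i & 1]

# Tracks 0 and 2 written on the P-VOL, track 1 written on the S-VOL:
merged = merge_differential(0b101, 0b010)
print(out_of_sync_tracks(merged))  # [0, 1, 2]
```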

Allowing I/O to the P-VOL after a split: Fence Level options

You can specify whether the host is denied access or continues to access the P-VOL when the pair is split due to an error. This is done with the Primary Volume Fence Level setting. You specify one of the following Fence Level options during the initial copy and resync operations. You can also change the Fence Level option outside these operations.

  • Data – the P-VOL is fenced if an update copy operation fails. This prevents the host from writing to the P-VOL during a failure. This setting should be considered for the most critical volumes for disaster recovery. This setting reduces the amount of time required to analyze the consistency of S-VOL data with the P-VOL during disaster recovery efforts.
  • Status – the P-VOL is fenced only if the primary system is not able to change S-VOL status to Suspend when an update copy operation fails. If the primary system successfully changes S-VOL pair status to Suspend, subsequent write I/O operations to the P-VOL will be accepted, and the system will keep track of updates to the P-VOL. This allows the pair to be resynchronized quickly. This setting also reduces the amount of time required to analyze S-VOL consistency during disaster recovery.
  • Never – the P-VOL is never fenced. This setting should be used when I/O performance outweighs data recovery. "Never" ensures that the P-VOL remains available to applications for updates, even if all TrueCopy copy operations have failed. The S-VOL might no longer be in sync with the P-VOL, but the primary system keeps track of updates to the P-VOL while the pair is suspended. Host failover capability is essential if this fence level setting is used. For disaster recovery, the consistency of the S-VOL is determined by using the sense information transferred by host failover or by comparing the S-VOL contents with other files confirmed to be consistent.
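
The three Fence Level behaviors above can be summarized as a small decision function. This is an illustrative sketch; the function and parameter names are hypothetical, not a product API:

```python
def pvol_write_rejected(fence_level: str, svol_suspend_succeeded: bool) -> bool:
    """Return True if host writes to the P-VOL are rejected (fenced)
    after an update copy failure, based on the Fence Level setting."""
    if fence_level == "Data":
        return True                        # always fenced on copy failure
    if fence_level == "Status":
        return not svol_suspend_succeeded  # fenced only if Suspend failed
    if fence_level == "Never":
        return False                       # never fenced
    raise ValueError(f"unknown fence level: {fence_level}")

print(pvol_write_rejected("Status", svol_suspend_succeeded=True))  # False
print(pvol_write_rejected("Data", svol_suspend_succeeded=True))    # True
```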

Differential data

Differential data is managed with bitmaps in units of tracks. Tracks that receive a write instruction while a pair is split are recorded in differential bitmaps, which are then used to resynchronize the pair.

  • If your primary system is a model other than VSP G1x00 and VSP F1500, and the secondary system is VSP G1x00 or VSP F1500, specify track as the differential data management unit on the primary system. VSP 5000 series supports only tracks; therefore, if you specify cylinders, TCz pairs cannot be created.
  • CCI allows you to specify either track or cylinder, but only track is used.
  • If you create a TC pair using a DP-VOL larger than 4,194,304 MB (8,589,934,592 blocks), the differential data is managed by the pool to which the TC pair volume is related. In this case, additional pool capacity (up to 4 pages, depending on the software configuration) is required for each 4,123,168,604,160-byte increase in user data.
    Note: To release the differential data (pages) managed by the pool, use the following procedure:
    1. Delete all the pairs that use the V-VOL for which you want to release the pages.
    2. Set system option mode 755 to OFF.

      This action enables zero pages to be reclaimed.

    3. Restore the blocked pool.
    4. Release the V-VOL pages.

      For Device Manager - Storage Navigator, use the Reclaim Zero Pages window.

      For CCI, use the raidcom modify ldev command.

    You must release differential data pages when you downgrade to a firmware version that does not support TC pair creation using DP-VOLs larger than 4,194,304 MB. The amount of time required to release differential data pages depends on the number of specified volumes, the DP-VOL capacity, the number of allocated pages, the storage system's workload, and the type of storage system. In some cases, it can take days to release all the differential data pages.

  • After you create a TC pair using a DP-VOL larger than 4,194,304 MB (8,589,934,592 blocks), data management might fail due to insufficient pool capacity. In this case, all P-VOL data is copied to the S-VOL in units of tracks when you perform the TC pair resync operation.
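
The 4,194,304 MB threshold determines whether differential data is managed by bitmap areas or by the pool, and the text gives an upper bound on the extra pool capacity. A quick check, as an illustrative sketch (constant and function names are hypothetical):

```python
import math

BITMAP_LIMIT_BLOCKS = 8_589_934_592  # 4,194,304 MB in 512-byte blocks
POOL_UNIT_BYTES = 4_123_168_604_160  # user-data increment per pool-page unit
MAX_PAGES_PER_UNIT = 4               # up to 4 pages per unit (config-dependent)

def pool_managed(capacity_blocks: int) -> bool:
    """True if a DP-VOL of this size has its differential data
    managed by the pool instead of by bitmap areas."""
    return capacity_blocks > BITMAP_LIMIT_BLOCKS

def max_extra_pool_pages(user_data_bytes: int) -> int:
    """Upper bound on additional pool pages needed for differential data."""
    return math.ceil(user_data_bytes / POOL_UNIT_BYTES) * MAX_PAGES_PER_UNIT

print(pool_managed(8_589_934_592))  # False (exactly at the limit)
print(pool_managed(8_589_934_593))  # True
```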

Maximum number of pairs supported

The maximum number of pairs per storage system is subject to restrictions, such as the number of cylinders used in volumes or the number of bitmap areas used in volumes. The maximum number of pairs might be smaller than the number listed in the System requirements and specifications table because the amount of used bitmap area differs depending on the user environment (volume size).

TrueCopy supports a maximum of 65,280 pairs. If Command Control Interface is used, a command device must be defined for each product and the maximum number of pairs is calculated by subtracting 1 from the maximum number of pairs shown in the specification.

The number of LDEVs, not the number of LUs, is used to determine the maximum number of pairs.

Calculating the maximum number of pairs

You must calculate the maximum number of pairs that you can create on the storage system. The maximum number is based on the following:

  • The number of cylinders in the volumes, which must be calculated.
  • The number of bitmap areas required for a TrueCopy volume, which is calculated using the number of cylinders.

    If the volume size is larger than 4,194,304 MB (8,589,934,592 blocks), the bitmap area is not used. Therefore, you do not need to calculate the maximum number of pairs when creating TC pairs with a DP-VOL whose size is larger than 4,194,304 MB (8,589,934,592 blocks).

Note: In the following formulas, ceil() rounds the result within the parentheses up to the nearest integer, and floor() rounds the result within the parentheses down to the nearest integer.

Procedure

  1. Calculate the number of cylinders.

    1. Calculate the number of logical blocks, which is the volume capacity measured in blocks.

      Number of logical blocks = Volume capacity (bytes) / 512
    2. Calculate the number of cylinders.

      For OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, OPEN-K:

      Number of cylinders = ceil ( (ceil (Number of logical blocks / 96) ) / 15)

      For OPEN-V:

      Number of cylinders = ceil ( (ceil (Number of logical blocks / 512) ) / 15)

  2. Calculate the number of bitmap areas per volume.

    In the following calculation, differential data is measured in bits. 122,752 bits is the amount of differential data per bitmap area.

    For OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, OPEN-K, and OPEN-V:

    Number of bitmap areas = ceil ( (Number of cylinders × 15) / 122,752)

    Note

    Performing this calculation on combined volumes can produce inaccurate results. Perform the calculation for each volume separately, and then total the bitmap areas. The following examples show correct and incorrect calculations, using two volumes: one of 10,017 cylinders and one of 32,760 cylinders.

    Correct calculation

    ceil ((10,017 × 15) / 122,752) = 2

    ceil ((32,760 × 15) / 122,752) = 5

    Total: 7

    Incorrect calculation

    10,017 + 32,760 = 42,777 cylinders

    ceil ((42,777 × 15) / 122,752) = 6

    Total: 6

  3. Calculate the maximum number of pairs, which is restricted by the following:

    • The number of bitmap areas required for TrueCopy (calculated above).
    • The total number of bitmap areas in the storage system. The number of bitmap areas is as follows:
      • VSP 5000 series: 65,536
      • VSP E series: 65,536
      • VSP G370, VSP G700, VSP G900, VSP F370, VSP F700, VSP F900: 65,536
      • VSP G350, VSP F350: 36,000

      Bitmap areas are also used by TrueCopy for Mainframe, Universal Replicator, Universal Replicator for Mainframe, and global-active device.

      Therefore, when you use these software applications together, subtract the number of bitmap areas used by each application from the total number of bitmap areas for the storage system before calculating the maximum number of pairs for TrueCopy using the following formula.

      Also, when TrueCopy shares a volume with Universal Replicator or Universal Replicator for Mainframe, regardless of whether the shared volume is primary or secondary, subtract the number of bitmap areas used by each application from the total number of bitmap areas for the storage system before calculating the maximum number of pairs for TrueCopy. For more information about calculating the number of bitmap areas required for each software application, see the relevant user guide.

    Use the following formula:

    Maximum number of pairs = floor (Total number of bitmap areas in the storage system / Required number of bitmap areas)

    (VSP 5000 series) If the calculated maximum number of pairs exceeds the total number of LDEVs of the storage system, and the total number of LDEVs of the storage system is less than 65,280, then the total number of LDEVs of the storage system becomes the maximum number of pairs.

    (VSP E series, VSP G/F350, G/F370, G/F700, G/F900) Calculate the maximum number of pairs using the already calculated necessary number of bitmap areas and the number of bitmap areas in storage systems listed in the following table. The number of bitmap areas in a storage system is determined by the storage system model and the availability of control memory extended for TC.

    (VSP E series, VSP G350, G370, G700, G900, and VSP F350, F370, F700, F900)

    Number of bitmap areas in storage systems, by extension status of control memory for TC:

    Base (no extension):

    • VSP G350, VSP F350: 3,712
    • VSP G370, VSP G700, VSP F370, VSP F700: 36,000
    • VSP E series, VSP G900, VSP F900: 65,536

    With extension:

    • VSP G350, VSP F350: 36,000
    • VSP E series, VSP G370, VSP G700, VSP G900, VSP F370, VSP F700, VSP F900: 65,536
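
The procedure above can be implemented end to end. This sketch reproduces the worked bitmap-area example from the text (2 + 5 = 7 areas for the two sample volumes) and applies the max-pairs formula; function names are illustrative only:

```python
import math

DIFF_BITS_PER_AREA = 122_752  # bits of differential data per bitmap area

def cylinders(logical_blocks: int, emulation: str = "OPEN-V") -> int:
    """Step 1: number of cylinders. The inner divisor is 512 for OPEN-V
    and 96 for the other emulation types (OPEN-3, OPEN-8, and so on)."""
    divisor = 512 if emulation == "OPEN-V" else 96
    return math.ceil(math.ceil(logical_blocks / divisor) / 15)

def bitmap_areas(cyls: int) -> int:
    """Step 2: bitmap areas required for one volume."""
    return math.ceil(cyls * 15 / DIFF_BITS_PER_AREA)

def max_pairs(total_areas: int, volumes_cyls: list) -> int:
    """Step 3: floor(total areas / required areas). Compute areas per
    volume and then total them -- never sum cylinders first, because
    that understates the requirement."""
    required = sum(bitmap_areas(c) for c in volumes_cyls)
    return total_areas // required

# Worked example from the text: volumes of 10,017 and 32,760 cylinders.
print(bitmap_areas(10_017))                  # 2
print(bitmap_areas(32_760))                  # 5
print(max_pairs(65_536, [10_017, 32_760]))   # 65,536 // 7 = 9362
```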

Initial copy priority option and scheduling order

When you create more pairs than the Maximum Initial Copy Activities setting allows, you can control the order in which the initial copy operations are performed by using the Initial Copy Priority option.

The following two examples illustrate how to use the Initial Copy Priority option.

Note: The Initial Copy Priority option can be specified only by using HDvM - SN. When you create pairs using CCI, the initial copy operations are performed in the order in which the commands are issued.
Example 1: Creating more pairs than the Maximum Initial Copy Activities setting

In this example, you are creating four pairs at the same time, and the Maximum Initial Copy Activities option is set to 2. To control the order in which the pairs are created, you set the Initial Copy Priority option in the Create TC Pairs window as shown in the following table.

P-VOL      Initial Copy Priority setting
LDEV 00    2
LDEV 01    3
LDEV 02    1
LDEV 03    4

The following table shows the order in which the initial copy operations are performed and the Initial Copy Priority settings for the P-VOLs.

Order of the initial copy operations    P-VOL      Initial Copy Priority setting
1                                       LDEV 02    1
2                                       LDEV 00    2
3                                       LDEV 01    3
4                                       LDEV 03    4

Because the Maximum Initial Copy Activities setting is 2, the initial copy operations for LDEV 02 and LDEV 00 are started at the same time. When one of these initial copy operations is completed, the initial copy operation for LDEV 01 is started. When the next initial copy operation is completed, the initial copy operation for LDEV 03 is started.

Example 2: New pairs added with initial copy operations in progress

In this example, you have already started the initial copy operations for the four pairs shown above (LDEVs 00-03) with the Maximum Initial Copy Activities option set to 2, and then you create two more pairs (LDEVs 10 and 11) while the initial copy operations for the first four pairs are still in progress. To control the order in which the pairs are created, you set the Initial Copy Priority option for the new pairs as shown in the following table.

P-VOL      Initial Copy Priority setting
LDEV 10    2
LDEV 11    1

The two new initial copy operations are started after all four of the previously scheduled initial copy operations are completed. The following table shows the order in which the initial copy operations are performed for all six pairs and the Initial Copy Priority setting for each pair.

Order of the initial copy operations    P-VOL      Initial Copy Priority setting    Remarks
1                                       LDEV 02    1                                Previously scheduled.
2                                       LDEV 00    2                                Previously scheduled.
3                                       LDEV 01    3                                Previously scheduled.
4                                       LDEV 03    4                                Previously scheduled.
5                                       LDEV 11    1                                Scheduled later.
6                                       LDEV 10    2                                Scheduled later.
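
The scheduling behavior in both examples can be modeled simply: previously scheduled pairs keep their place, and each new batch is ordered by its Initial Copy Priority values. This is an illustrative sketch, not product code:

```python
def schedule_initial_copies(batches):
    """Each batch is a dict of {pvol_name: initial_copy_priority}.
    Pairs created later start only after previously scheduled initial
    copies; within a batch, lower priority values run first."""
    order = []
    for batch in batches:
        order.extend(sorted(batch, key=batch.get))
    return order

batch1 = {"LDEV 00": 2, "LDEV 01": 3, "LDEV 02": 1, "LDEV 03": 4}
batch2 = {"LDEV 10": 2, "LDEV 11": 1}
print(schedule_initial_copies([batch1, batch2]))
# ['LDEV 02', 'LDEV 00', 'LDEV 01', 'LDEV 03', 'LDEV 11', 'LDEV 10']
```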

Restrictions when creating an LU whose LU number is 2048 or greater

A pair can be created using LUs whose LU numbers are 2048 to 4095 if the source storage system is a VSP 5000 series with DKCMAIN program version 90-02-0x-xx/xx or later.

Do not try to create a pair using LUs whose LU numbers are 2048 to 4095 unless the storage system to which you are connecting is also a VSP 5000 series with DKCMAIN program version 90-02-0x-xx/xx or later. Failures, such as Pair Suspend, might occur if you try to create a pair using LUs whose LU numbers are 2048 or greater and the storage system to which you are connecting is one of the following:

  • A storage system other than a VSP 5000 series
  • A VSP 5000 series whose DKCMAIN program version is earlier than 90-02-0x-xx/xx

For VSP 5000 series whose DKCMAIN program version is 90-02-0x-xx/xx or later, up to 4096 LU paths are possible for a Fibre Channel port or iSCSI port.

  • If you set a host group for a Fibre Channel port, up to 4096 LU paths can be set for a host group. In addition, up to 4096 LU paths can be set for a port through the host group.
  • If you configure an iSCSI target for an iSCSI port, you can configure up to 4096 LU paths for an iSCSI target. In addition, up to 4096 LU paths can be set for a port through the iSCSI target.

The following lists the LU numbers that can be used to create a pair and the number of LU paths that can be set for a port, for each combination of source storage system, target storage system, and target DKCMAIN program version.

Source: VSP 5100, VSP 5500 (DKCMAIN 90-02-0x-xx/xx or later)

  • Target: VSP G1x00, VSP F1500
    • DKCMAIN earlier than 80-06-7x-xx/xx: pair creation disabled; LU paths disabled
    • DKCMAIN 80-06-7x-xx/xx or later: LU numbers 0 to 2047; 0 to 2048 LU paths per port
  • Target: VSP G/F350, G/F370, G/F700, G/F900
    • DKCMAIN 88-04-0x-xx/xx or later: LU numbers 0 to 2047; 0 to 2048 LU paths per port
  • Target: VSP E590, VSP E790, VSP E990
    • Any DKCMAIN version: LU numbers 0 to 2047; 0 to 2048 LU paths per port
  • Target: VSP 5100, VSP 5500
    • DKCMAIN earlier than 90-02-0x-xx/xx: LU numbers 0 to 2047; 0 to 2048 LU paths per port
    • DKCMAIN 90-02-0x-xx/xx or later: LU numbers 0 to 4095; 0 to 4096 LU paths per port
  • Target: VSP 5200, VSP 5600: not supported (pair creation disabled; LU paths disabled)

Source: VSP 5100, VSP 5500 (DKCMAIN 90-08-02-xx/xx or later)

  • Target: VSP 5200, VSP 5600
    • Any DKCMAIN version: LU numbers 0 to 4095; 0 to 4096 LU paths per port

Source: VSP 5200, VSP 5600 (any DKCMAIN version)

  • Target: VSP G1x00, VSP F1500
    • DKCMAIN earlier than 80-06-87-xx/xx: pair creation disabled; LU paths disabled
    • DKCMAIN 80-06-87-xx/xx or later: LU numbers 0 to 2047; 0 to 2048 LU paths per port
  • Target: VSP E590, VSP E790, VSP E990
    • DKCMAIN 93-05-03-xx/xx or later: LU numbers 0 to 2047; 0 to 2048 LU paths per port
  • Target: VSP 5100, VSP 5500
    • DKCMAIN earlier than 90-08-01-xx/xx: pair creation disabled; LU paths disabled
    • DKCMAIN 90-08-01-xx/xx or later: LU numbers 0 to 4095; 0 to 4096 LU paths per port
  • Target: VSP 5200, VSP 5600
    • Any DKCMAIN version: LU numbers 0 to 4095; 0 to 4096 LU paths per port

Consistency group planning

You determine which storage system pairs to include in each consistency group based on business criteria for keeping the status consistent across a group of pairs, and for performing specific operations at the same time on all pairs in the group.

Consistency groups allow you to perform one operation on all pairs in the group at the same time. Consistency groups also ensure that all pairs are managed in a consistent status.

A consistency group has the following characteristics:

  • A maximum of four storage system pairings can be placed in one consistency group.
  • A consistency group can consist of the following:
    • TC pairs only using one primary and one secondary storage system
    • TC pairs only using more than one primary and secondary storage system
    • TCz pairs only using one primary and one secondary storage system
    • TCz pairs only using more than one primary and secondary storage system
    • Both TC and TCz pairs using one primary and one secondary storage system
    • Both TC and TCz pairs using more than one primary and secondary storage system
Note: If you connect with VSP 5000 series, VSP E series, VSP Gx00 models, or VSP Fx00 models, the CTG ID for the P-VOL and S-VOL in a pair must be the same. The range of values for the ID is as follows:
  • When connecting to VSP 5000 series: 0 to 255
  • When connecting to VSP E990, VSP G900, or VSP F900: 0 to 255
  • When connecting to VSP E590, VSP E790, VSP G350, VSP G370, VSP G700, VSP F350, VSP F370, or VSP F700: 0 to 127
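
The CTG ID ranges above can be checked with a small helper. This is an illustrative sketch with hypothetical names; the model groupings follow the note:

```python
CTG_ID_MAX = {
    # Models allowing CTG IDs 0 to 255
    "VSP 5000 series": 255, "VSP E990": 255, "VSP G900": 255, "VSP F900": 255,
    # Models allowing CTG IDs 0 to 127
    "VSP E590": 127, "VSP E790": 127, "VSP G350": 127, "VSP G370": 127,
    "VSP G700": 127, "VSP F350": 127, "VSP F370": 127, "VSP F700": 127,
}

def ctg_id_valid(model: str, ctg_id: int) -> bool:
    """True if the CTG ID is in range for the connected model.
    The same CTG ID must be used for both the P-VOL and the S-VOL."""
    return 0 <= ctg_id <= CTG_ID_MAX[model]

print(ctg_id_valid("VSP G900", 200))  # True
print(ctg_id_valid("VSP G350", 200))  # False
```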

Consistency group for pairs in one primary and one secondary storage system

You can create, update, and copy TC pairs or both TC pairs and TCz pairs in a consistency group of one primary storage system and one secondary storage system.

(VSP Gx00 models, VSP Fx00 models, and VSP E series) [Figure]

Figure notes:

  1. The TC pair is created in the consistency group specified from CCI.
  2. I/O requests are received from each application in the open-system server to update data in each volume.
  3. The TC copy operation is performed in the consistency group.

    For information on creating a TC pair and assigning it to a consistency group using CCI, see the Command Control Interface User and Reference Guide and the Command Control Interface Command Reference.

(VSP 5000 series) TC and TCz pairs between one primary system and secondary system can be placed in the same consistency group, as shown in the following figure.

[Figure]

Figure notes:

  1. TC pairs are assigned to a consistency group using CCI.

    TCz pairs are assigned to a consistency group using Business Continuity Manager (BCM).

  2. Open and mainframe volumes (P-VOLs) receive I/O requests from their applications at the primary (main) site, and data in the volumes is updated.
  3. TC or TCz runs copy operations in the consistency group.

    For information on creating a TC pair and assigning it to a consistency group using CCI, see the Command Control Interface User and Reference Guide and the Command Control Interface Command Reference.

    For information on creating a TCz pair and assigning it to a consistency group using BCM, see the Hitachi Business Continuity Manager User Guide and the Hitachi Business Continuity Manager Reference Guide.

Consistency group for pairs in multiple primary and secondary storage systems

You can create, update, and copy TC pairs or both TC pairs and TCz pairs in a consistency group of multiple primary storage systems and multiple secondary storage systems.

(VSP Gx00 models, VSP Fx00 models, and VSP E series) [Figure]

Figure notes:

  1. CCI uses a consistency group that consists of multiple primary and secondary storage systems. Business Continuity Manager cannot be used with multiple systems.
  2. I/O requests are received from each application in the open-system server to update data in each volume.
  3. The TC copy operation is performed in the consistency group.

    When a pair is created, the pair is assigned to a consistency group. For information on creating consistency groups of multiple primary and secondary storage systems and assigning TC pairs to a consistency group, see the Command Control Interface Installation and Configuration Guide and the Command Control Interface Command Reference.

(VSP 5000 series) TC and TCz pairs in multiple primary and secondary systems can be placed in the same consistency group. A maximum of four storage system pairings can be placed in the same consistency group.

In a consistency group for multiple primary and secondary storage systems, you cannot use Business Continuity Manager to perform operations, including registrations, for TrueCopy for Mainframe pairs.

[Figure]

Figure notes:

  1. CCI manages the consistency group that contains multiple storage systems.
  2. Open and mainframe primary volumes (P-VOLs) receive I/O requests from their applications at the primary (main) site, and data in the volumes is updated.
  3. TrueCopy or TrueCopy for Mainframe runs the copy operation in the consistency group.

When the open or mainframe host system guarantees the update order, data consistency in P-VOLs and S-VOLs is ensured. When the host system does not guarantee update order, data consistency is not ensured.

System configurations for consistency groups

Data consistency between secondary volumes in a consistency group of multiple primary and secondary storage systems is guaranteed for various system configurations.

The guaranteed range of data consistency between secondary volumes, by system configuration and by the update sequence of data in the higher system*:

Open server only
  • Update sequence of data is guaranteed between servers: TC secondary volumes in multiple storage systems at secondary sites

(VSP 5000 series) Open server/mainframe host mixed
  • Update sequence of data is guaranteed between open servers and mainframe hosts: TC and TCz secondary volumes in multiple storage systems at secondary sites
  • Update sequence of data is not guaranteed between open servers and mainframe hosts: No consistency between TC and TCz secondary volumes
  • Update sequence of data is guaranteed between open servers: TC secondary volumes in multiple storage systems at secondary sites
  • Update sequence of data is guaranteed between mainframe hosts: TCz secondary volumes in multiple storage systems at secondary sites

(VSP 5000 series) Mainframe host only
  • Update sequence of data is guaranteed between mainframe hosts: TCz secondary volumes in multiple storage systems at secondary sites

* If the update sequence of data in a higher system is not guaranteed (data update sequence is unnecessary), data consistency between secondary volumes is not guaranteed.

Registering pairs to a new consistency group when creating a new TC pair

You can configure a consistency group of multiple primary and secondary storage systems when creating new TC pairs.

The consistency group of multiple primary and secondary storage systems can consist of TC pairs only.

Procedure

  1. Create CCI configuration definition file C for a configuration of multiple primary and secondary storage systems.

  2. Specify the consistency group for registration, and register TC pairs using configuration definition file C created in step 1.

Registering pairs to a new consistency group when creating a new TC or TCz pair

You can configure a consistency group of multiple primary and secondary storage systems when creating new TC or TCz pairs.

The consistency group of multiple primary and secondary storage systems can consist of a combination of TC and TCz pairs.

Procedure

  1. Create CCI configuration definition file C for a configuration of multiple primary and secondary storage systems.

  2. Specify the consistency group for registration, and register TC or TCz pairs using configuration definition file C created in step 1.

Registering pairs to a new consistency group when using existing TC pairs

You can configure a consistency group of multiple primary and secondary storage systems when using existing TC pairs.

The consistency group of multiple primary and secondary storage systems consists of TC pairs only.

Procedure

  1. Create CCI configuration definition file A.

  2. In CCI, split pairs using CCI configuration definition file A created in step 1.

  3. In CCI, resume pair operation using CCI configuration definition file A without specifying a consistency group.

  4. In CCI, split pairs using CCI configuration definition file A.

  5. Create CCI configuration definition file C for a configuration of multiple pairs of storage systems.

  6. In CCI, register pairs to a consistency group, and resume pair operation using CCI configuration definition file C.

Next steps

After removing existing TC pairs, you can use the procedure to register pairs to a new consistency group when creating TC pairs.

Registering pairs to a new consistency group when using existing TC or TCz pairs

You can configure a consistency group of multiple primary and secondary storage systems when using existing TC or TCz pairs.

The consistency group of multiple primary and secondary storage systems consists of a combination of TC and TCz pairs.

Procedure

  1. Create CCI configuration definition file A.

  2. In CCI, split pairs using CCI configuration definition file A created in step 1.

  3. In CCI, resume pair operation using CCI configuration definition file A without specifying a consistency group.

  4. In CCI, split pairs using CCI configuration definition file A.

  5. Create CCI configuration definition file C for a configuration of multiple pairs of storage systems.

  6. In CCI, register pairs to a consistency group, and resume pair operation using CCI configuration definition file C.

Next steps

After removing existing TC or TCz pairs, you can use the procedure to register pairs to a new consistency group when creating TC or TCz pairs.

Registering pairs to an existing consistency group when creating a new TC pair

You can register TC pairs in a consistency group of multiple primary and secondary storage systems to an existing consistency group when you create a new TC pair.

The consistency group of multiple primary and secondary storage systems consists of TC pairs.

Procedure

  1. Add information of a TC pair you want to add to CCI configuration definition file B to create CCI configuration definition file C.

  2. In CCI, create a TC pair using CCI configuration definition file C.

Registering pairs to an existing consistency group when creating a new TC or TCz pair

You can register TC or TCz pairs in a consistency group of multiple primary and secondary storage systems to an existing consistency group when you create a new TC or TCz pair.

The consistency group of multiple primary and secondary storage systems consists of a combination of TC and TCz pairs.

Procedure

  1. Add information of a TC or TCz pair you want to add to CCI configuration definition file B to create CCI configuration definition file C.

  2. In CCI, create a TC or TCz pair using CCI configuration definition file C.

Registering pairs to an existing consistency group when using existing TC pairs

You can register TC pairs in a consistency group of multiple primary and secondary storage systems to an existing consistency group when using existing TC pairs.

The consistency group of multiple primary and secondary storage systems consists of TC pairs.

Procedure

  1. Create CCI configuration definition file A.

  2. In CCI, split pairs using CCI configuration definition file A.

  3. In CCI, resume pair operation using CCI configuration definition file A without specifying a consistency group.

  4. In CCI, split pairs using CCI configuration definition file A.

  5. Use CCI configuration definition file B to split pairs in the existing configuration of multiple primary and secondary storage systems.

  6. Add the information for the TC pair that you want to add to CCI configuration definition file B for the existing configuration of multiple primary and secondary storage systems to create CCI configuration definition file C.

  7. In CCI, create a TC pair using CCI configuration definition file C.
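The CCI command flow for steps 2 through 7 might be sketched as follows. The group names, device name, CTG ID, and HORCM instance numbers are hypothetical (this sketch assumes instance 0 reads file A, instance 1 reads file B, and instance 2 reads file C), and the commands require a live storage system, so this is an illustration rather than a runnable script:

```sh
# Steps 2-4: quiesce the pairs defined in configuration definition file A
pairsplit  -g grpA -IH0
pairresync -g grpA -IH0        # resynchronize without specifying a CTG
pairsplit  -g grpA -IH0

# Step 5: split the pairs in the existing multi-system configuration
# (configuration definition file B)
pairsplit -g grpB -IH1

# Step 7: after building file C (file B plus the new pair), create the
# pair in the existing consistency group (CTG ID 5 is an example)
paircreate -g grpC -d dev_new -vl -fg never 5 -IH2
```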

Next steps

After deleting the existing TC pairs, you can use the procedure for registering pairs to an existing consistency group when creating new TC pairs.

Registering pairs to an existing consistency group when using existing TC or TCz pairs

When using existing TC or TCz pairs, you can register them to an existing consistency group that spans multiple primary and secondary storage systems.

The consistency group of multiple primary and secondary storage systems consists of a combination of TC and TCz pairs.

Procedure

  1. Create CCI configuration definition file A.

  2. In CCI, split pairs using CCI configuration definition file A.

  3. In CCI, resume pair operation using CCI configuration definition file A without specifying a consistency group.

  4. In CCI, split pairs using CCI configuration definition file A.

  5. Use CCI configuration definition file B to split pairs in the existing configuration of multiple primary and secondary storage systems.

  6. Add the information for the TC or TCz pair that you want to add to CCI configuration definition file B for the existing configuration of multiple primary and secondary storage systems to create CCI configuration definition file C.

  7. In CCI, create a TC or TCz pair using CCI configuration definition file C.

Next steps

After deleting the existing TC or TCz pairs, you can use the procedure for registering pairs to an existing consistency group when creating new TC or TCz pairs.

Consistency group requirements

Requirements are provided for the following consistency group (CTG) configurations.

Requirements for a CTG for one primary and one secondary system

  • A pair can be assigned to one consistency group only.
  • (VSP 5000 series) A maximum of 256 (00 to FF) consistency groups can be created. A maximum of 8,192 pairs can be registered to a consistency group.
  • (Virtual Storage Platform G/F350, G/F370, G/F700, G/F900, VSP E series) For the maximum number of consistency groups and the maximum number of TC pairs that you can create, see System requirements and specifications and Maximum number of pairs supported.
  • (VSP 5000 series) TC pairs and TCz pairs can be contained in a consistency group.
    Note: If the primary and secondary storage systems are not the same model, the maximum number of pairs in a consistency group and the maximum number of consistency groups are the smaller of the two systems' values.
  • Assign an unused consistency group ID in the range 00-FF.
  • When using a volume in a virtual storage machine, if you want to create a consistency group of one primary and one secondary storage system, use volumes in the same virtual storage machine to create a pair. If you register a pair created using different virtual storage machine volumes to a consistency group, the consistency group is regarded as a consistency group of multiple primary and secondary storage systems.
  • When assigning pair volumes that belong to a virtual storage machine to a consistency group consisting of one primary and one secondary system, all the P-VOLs in the consistency group must belong to the same virtual storage machine.
  • If you use Command Control Interface to resynchronize a TCz pair in an open/mainframe consistency group with one primary system and one secondary system, all pairs in the consistency group are resynchronized. A TC pair is resynchronized along with the others, even if its S-VOL is being accessed by a host. Make sure to check the status of all pairs in the consistency group before resynchronizing.
  • If you use Command Control Interface to delete a TCz pair in an open/mainframe consistency group with one primary system and one secondary system, only the TCz pairs are deleted. Use CCI to delete the TC pairs.
  • (VSP 5000 series) To set up or use TrueCopy Synchronous pairs with TC open/MF consistency groups specified, you must install TrueCopy Synchronous. TrueCopy consistency groups are the same as the open/MF consistency groups described in the Hitachi TrueCopy® for Mainframe User Guide. For details about TrueCopy consistency groups, see Consistency group planning.

Requirements for a CTG for multiple primary and secondary systems

  • All requirements for a consistency group between one primary and one secondary system apply to a consistency group between multiple primary and secondary systems.
  • For VSP 5100 and VSP 5500, the primary and secondary systems must be VSP 5000 series, VSP G1x00, VSP F1500, VSP, VSP G/F350, G/F370, G/F700, G/F900, or VSP E series. No other models can be used.
  • For VSP 5200 and VSP 5600, the primary and secondary systems must be VSP 5000 series. No other models can be used.
  • A consistency group can consist of a maximum of four primary and four secondary (paired) systems.
  • The microcode or firmware for both primary and secondary systems must support consistency groups between multiple primary and secondary systems. If it does not, pair creation results in failure.
    • If a storage system at the primary site does not support the consistency group functionality for multiple primary and secondary storage systems, a pair for a consistency group of one primary and one secondary storage system is created.
    • If a storage system at the secondary site does not support the consistency group functionality for multiple primary and secondary storage systems, no pairs can be created.
  • You must install the CCI version that supports a consistency group containing multiple primary systems and secondary systems.
  • Pair operations can be performed only from CCI. Pair operations from Device Manager - Storage Navigator are not supported.
  • Cascade configurations with Universal Replicator pairs are not supported.
  • (VSP 5000 series) Compatible FlashCopy® configurations are not supported.
  • You can assign pair volumes that belong to multiple virtual storage machines to a consistency group consisting of multiple primary and secondary systems.

Assigning pairs to a consistency group

The procedure to assign pairs depends on the number of storage systems in the consistency group.

Assigning pairs belonging to one primary system and secondary system

The method for assigning pairs to a consistency group differs according to the management software used to create the pairs:

  • When using Device Manager - Storage Navigator, only consistency group 127 is supported.
  • When using CCI, see the Command Control Interface User and Reference Guide.
  • (VSP 5000 series) When using Business Continuity Manager, see the Business Continuity Manager User Guide.

Assigning pairs belonging to multiple primary and secondary systems

Assigning pairs in multiple primary and secondary systems to a consistency group depends on whether you are assigning to a new consistency group or an existing consistency group.

You can use CCI when creating and assigning pairs to a consistency group on multiple storage systems. Business Continuity Manager is not supported for this configuration.

Assigning TC and TCz pairs to the same consistency group

TrueCopy pairs can be in the same consistency group as TrueCopy for Mainframe pairs. Determine an unused consistency group ID in advance by using CCI or Business Continuity Manager, and use the same ID for both pair types.

Before defining pairs in CCI, specify the consistency group ID. In Business Continuity Manager, use the Copy Group Attributes (TC) window to set the consistency group ID, and then define pairs. For details about the Copy Group Attributes (TC) window, see the Business Continuity Manager User Guide.

When a split operation is performed for each group through CCI or Business Continuity Manager, the TrueCopy pairs and TrueCopy for Mainframe pairs assigned to the same consistency group are split, and the data of both pair types is guaranteed up to the time the split operation is accepted. The YKFREEZE and YKRUN commands are not required.

If write I/Os are received on the P-VOL of a TrueCopy pair or TrueCopy for Mainframe pair in the target consistency group during the processing of step 2 through step 5, and the pair receiving the I/Os has not yet been split, the pair is split first and then the write I/O operations are performed.

Data consistency is ensured because the pairs are split before the write I/O operations: the new data is recorded as differential data instead of being copied to the S-VOL.

The following figure shows the pair split processing for each group when TrueCopy pairs #1 and #2 and TrueCopy for Mainframe pairs #3 and #4 are assigned to the same consistency group.

[Figure: pair split processing for each group]

The following is the procedure for pair split processing for each group when TrueCopy and TrueCopy for Mainframe pairs are assigned in the same consistency group.

Procedure

  1. Accept a split operation for each group through CCI or Business Continuity Manager.

  2. Start the split operation for each group.

  3. Report the completion of the split operation to the requester of the split operation.

  4. Split all TrueCopy pairs and TrueCopy for Mainframe pairs belonging to the target consistency group asynchronously.

  5. Complete the split operations of all pairs belonging to the target consistency group.

Using a new CTG

You can assign new pairs or existing pairs to a new consistency group.

To assign new pairs to a new consistency group
  1. Create CCI configuration definition file C for a multiple primary and secondary system configuration.
  2. Perform the paircreate operation according to configuration definition file C created in Step 1.
To assign existing pairs to a new consistency group
  1. Create CCI configuration definition file A to use for CCI pair operations.
  2. Perform the pairsplit operation according to configuration definition file A created in Step 1.
  3. Perform the pairresync operation without designating a consistency group. Do this using configuration definition file A.
  4. Perform the pairsplit operation again using configuration definition file A.
  5. Create CCI configuration definition file C for the multiple primary and secondary system configuration.
  6. Perform the pairresync operation using configuration definition file C, registering the pairs to the new consistency group.
Tip: After deleting the existing pairs, you can perform the steps for assigning new pairs to a new consistency group.
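As an illustration of the new-pair case, the paircreate operation in step 2 might look like the following. The group name, fence level, and CTG ID are hypothetical, and the commands require a live storage system:

```sh
# Create the new pairs defined in configuration definition file C and
# assign them to a new consistency group (-fg <fence> <CTG ID>)
paircreate -g grpC -vl -fg never 10

# Confirm the pair and consistency group status
pairdisplay -g grpC -fcx
```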

Using an existing CTG

You can assign new pairs or existing pairs to an existing consistency group.

To assign new pairs to an existing consistency group
  1. Add the new pair information to the existing configuration definition file B, which consists of pairs in multiple storage systems.
  2. Copy the result to create CCI configuration definition file C.
  3. Perform the paircreate operation using configuration definition file C, registering the pairs to the consistency group.
To assign existing pairs to an existing consistency group
  1. Create CCI configuration definition file A to use with CCI for pair operations.
  2. Perform the pairsplit operation on pairs that you want to register in the existing CTG with multiple systems. Do this using configuration definition file A.
  3. Perform the pairresync operation without designating a consistency group. Do this using configuration definition file A.
  4. Perform the pairsplit operation again using configuration definition file A.
  5. Perform the pairsplit operation to the existing configuration definition file B, which consists of the pairs in the multiple primary and secondary system configuration.
  6. Add pair information to existing configuration definition file B.
  7. Delete and then re-create the pairs, registering them in configuration definition file C.
Tip: After deleting the existing pairs, you can perform the steps for assigning new pairs to an existing consistency group.

Split behaviors for pairs in a CTG

When the pairs in a consistency group receive updates while in the process of being split or suspended, or when they are about to be split or suspended, S-VOL data consistency is managed as follows:

  • If I/O processing is in progress on pairs in the same consistency group, and the split or suspend operation begins, the I/O processing completes first, and then the split/suspend operation is performed.

    The following figure shows that I/O processing is completed first, and then the pair split operation for the pair on Volume B is completed.

[Figure: I/O processing completes before the pair on Volume B is split]

The following figure shows the data in track 2 being copied to the S-VOL while the data in track 3 becomes differential data. In this example, track 2 is in use for I/O processing to the volume in the consistency group when the split command is issued to the pair.

[Figure: copied data (track 2) and differential data (track 3) at the time of the split]
  • If a split operation is in progress when I/O processing on the pairs begins, the split operation on the pairs is given priority. After the pair is split, the I/O processing begins.
  • Data consistency cannot be ensured when all of the following conditions exist:

    - A port is blocked.

    - A split command is in progress.

    - I/O processing begins.

    In such a case, resynchronize the consistency group, and then issue the split command again.

Host access after split

When splitting the pair using the pairsplit command, you can specify settings for read/write access control for the P-VOL and S-VOL in consistency groups after pair split.

These settings are specified using CCI or Business Continuity Manager.

  • The CCI settings for TC are optional.
  • (VSP 5000 series) The Business Continuity Manager settings for TCz are required.
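In CCI, these split-time access settings are specified as pairsplit options, for example as follows; the group name is hypothetical and the commands require a live storage system:

```sh
pairsplit -g grpA -p    # write access to the P-VOL prohibited
pairsplit -g grpA -r    # read access permitted on the S-VOL
pairsplit -g grpA -rw   # read/write access permitted on the S-VOL
```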

The following tables show the effects of the settings on read and write access.

| Interface | Setting | TC P-VOL: Read | TC P-VOL: Write | TCz P-VOL: Read | TCz P-VOL: Write |
| --- | --- | --- | --- | --- | --- |
| CCI (pairsplit command) | Write access prohibited (-p option) | Y | N | Y | N |
| CCI (pairsplit command) | No option selected | Y | Y | Y | Y |
| (VSP 5000 series) Business Continuity Manager | Write access prohibited | Y | N | Y | N |
| (VSP 5000 series) Business Continuity Manager | Write access permitted | Y | Y | Y | Y |

| Interface | Setting | TC S-VOL: Read | TC S-VOL: Write | TCz S-VOL: Read | TCz S-VOL: Write |
| --- | --- | --- | --- | --- | --- |
| CCI (pairsplit command) | Read access permitted (-r option) | Y | N | N | N |
| CCI (pairsplit command) | Read/Write access permitted (-rw option) | Y | Y | Y | Y |
| CCI (pairsplit command) | No option selected | Y | N | N | N |
| (VSP 5000 series) Business Continuity Manager | Write access prohibited | Y | N | N | N |
| (VSP 5000 series) Business Continuity Manager | Write access permitted | Y | Y | Y | Y |

Pair status before and after a split operation (VSP 5000 series)

Pairs in the same consistency group must be in PAIR/Duplex status when you begin the split operation in order to maintain consistency. Otherwise, when the operation completes, pair status will be inconsistent.

This is shown in the following tables. The row and column headings show the pair statuses before the split operation on the consistency group; the table cells show the statuses after the split operation.

For CCI

| TC pairs | TCz pairs: All = Duplex | TCz pairs: Some = Duplex, some = Suspend | TCz pairs: All = Suspend |
| --- | --- | --- | --- |
| All = PAIR | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend |
| Some = PAIR, some = PSUS | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend |
| All = PSUS | TC: PSUS, TCz: Duplex | TC: PSUS, TCz: Duplex/Suspend | TC: PSUS, TCz: Suspend |

For BCM

| TC pairs | TCz pairs: All = Duplex | TCz pairs: Some = Duplex, some = Suspend | TCz pairs: All = Suspend |
| --- | --- | --- | --- |
| All = PAIR | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend |
| Some = PAIR, some = PSUS | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend |
| All = PSUS | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend | TC: PSUS, TCz: Suspend |

Consistency group 127

When you create pairs using Device Manager - Storage Navigator, they can be assigned to only one consistency group, 127. (You can also use CCI to assign pairs to CTG 127.) With CTG 127, you can ensure the following:

  • When a pair is split or suspended for any reason, you can ensure that all P-VOLs in the group become suspended.
  • If data paths between the secondary and primary system fail, you can ensure that all S-VOLs are placed in PSUE status.

For more information, see CTG 127 behavior and restrictions when a pair is suspended.

Procedure

  1. Turn Function Switch 30 On.

    • Turn on the switch in the primary and secondary systems to get the desired result in each system.
    • Turn on the switch in the system where you want the behavior: either consistent P-VOL suspensions for the primary system, or consistent S-VOL PSUE status for the secondary system.
  2. Create the pairs and assign them to CTG 127.

    • In CCI, assign the pairs to this group number when you create the pairs.
    • In Device Manager - Storage Navigator, pairs are automatically assigned to CTG 127 when the pairs are created and function switch 30 is On.
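In CCI, step 2 might look like the following; the group name and fence level are hypothetical, the command requires a live storage system, and 127 is the consistency group ID required by this feature:

```sh
# Create the pairs and assign them to consistency group 127
paircreate -g grp127 -vl -fg never 127
```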

CTG 127 behavior and restrictions when a pair is suspended

Note the following behaviors and restrictions regarding the consistent suspending of all P-VOLs when a pair suspends.

  • When a failure occurs or if a pair is suspended by CCI, all P-VOLs will be suspended.
  • When P-VOLs and S-VOLs are registered in CTG 127, and both volumes are paired bidirectionally, all of the target pair volumes are registered in CTG 127 when takeover takes place.
  • The maximum number of pairs in CTG 127 is 4,096.
  • For P-VOLs to be suspended, a failure must occur, and then a write I/O operation must occur in any of the pairs.
  • When P-VOL status is PAIR and S-VOL status is PSUE, if a write I/O is executed, all P-VOLs registered in CTG 127 are suspended due to failure.

    When P-VOL status is PAIR and S-VOL status is PSUE, you can restore PAIR status to the S-VOL by suspending the P-VOL and then resynchronizing the pair. With CCI, use the -l option.

  • When the S-VOL is suspended due to an intermittent communication failure, the P-VOL might not be suspended (P-VOL with no I/O processing stays in PAIR).
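The P-VOL suspend-and-resynchronize recovery described above might be performed with CCI commands such as the following; the group name is hypothetical, the commands require a live storage system, and -l performs the operation from the locally attached (P-VOL) side:

```sh
pairsplit  -g grp127 -l   # suspend the pair from the P-VOL side
pairresync -g grp127 -l   # resynchronize; the S-VOL returns to PAIR
```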

CTG 127 behavior and restrictions when the data path fails

Note the following behaviors and restrictions regarding the consistent changing of all S-VOLs to PSUE status when the secondary system is disconnected.

  • S-VOLs must be in PAIR or COPY status in order to change to PSUE status. They cannot be in PSUS or SSUS status.
  • All connections to the primary system must be disconnected.

    S-VOLs cannot be changed to PSUE status if the MinimumPath field is set to a value other than 1 on the primary system (RCU Option dialog box).

  • Changing the status to PSUE might take up to 10 minutes if there are many pairs.
  • All S-VOLs will be changed to PSUE even if all data paths are recovered in the middle of the process.
  • If the data paths are disconnected for a short time (less than one minute), S-VOLs might not change to PSUE status because the storage system does not detect the disconnection.
  • After a power outage, all S-VOLs registered in CTG 127 will be changed to PSUE status.
  • If write I/O is executed when the P-VOL is in PAIR status and the S-VOL is in PSUE status, the secondary system does not accept updates, and the primary system suspends the P-VOL.
  • Remote I/O (RIO), which is issued during the change to PSUE status, is accepted by the secondary system.
  • When the status of a pair is changing to PSUE:

    - It cannot be resynchronized.

    - It cannot be created and registered in CTG 127.

    However, a pair can be deleted when status is changing to PSUE.

  • In a bidirectional configuration, if all data paths for the primary system of the reverse direction pair are disconnected when pair status is changing to PSUE, the disconnection might not be detected.
  • If all the data paths for TrueCopy pairs are disconnected, but the paths used for UR pairs are connected, failure suspend does not occur and S-VOLs cannot be changed to PSUE status.
  • If you turn off the power of the primary system when S-VOLs are in PAIR status, all the data paths for the primary system will be disconnected and all the S-VOLs registered in CTG 127 will be changed to PSUE status.

Resynchronizing and removing pairs using Business Continuity Manager (VSP 5000 series)

When you use Business Continuity Manager to resynchronize a TrueCopy for Mainframe pair in an Open/MF consistency group that consists of one primary storage system and one secondary storage system, all pairs in the consistency group are resynchronized. Any TrueCopy pair is also resynchronized at the same time, even if a host is accessing its S-VOL. Reconfirm the status of all TrueCopy pairs and all TrueCopy for Mainframe pairs in the consistency group before you resynchronize.

When you use Business Continuity Manager to remove a TrueCopy for Mainframe pair in an Open/MF consistency group that consists of one primary storage system and one secondary storage system, only the TrueCopy for Mainframe pairs in the consistency group are removed. To remove the TrueCopy pairs at the same time, use CCI.

Host failover software

Host failover software transfers information between host servers at the primary and secondary sites. It is a critical component of a disaster recovery solution.

  • When TrueCopy is used as a disaster recovery tool, host failover is required to ensure effective recovery operations.
  • When TrueCopy is used as a data migration tool, host failover is recommended.

TrueCopy does not provide host failover functions. Use the failover software most suitable for your platform and requirements (for example, Microsoft Cluster Server).