
Planning for Universal Replicator

Planning the Universal Replicator system is tied to your business requirements and production system workload. You must define your business requirements for disaster downtime and measure the amount of changed data your storage system produces over time. Using this information, you can calculate the size of journal volumes and the amount of bandwidth required to handle the transfer of data over the data path network.

Planning and design

Use the information you develop during your planning and design activities to work with your Hitachi Vantara account team to determine your UR implementation plan.

Plan and design activities
  • Assess your organization’s business requirements to determine the recovery requirements.
  • Measure the write workload (MB/sec and IOPS) of your host applications to begin matching actual data loads with the planned UR system.
  • Use the collected data along with your organization’s recovery point objective (RPO) to size UR journal volumes. Journal volumes must have enough capacity to hold accumulating data over extended periods.

    The sizing of journal volumes is influenced by the amount of bandwidth. These factors are interrelated. You can adjust journal volume size in conjunction with bandwidth to fit your organization’s needs.

  • Use IOPS to determine data transfer speed into and out of the journal volumes. Data transfer speed is determined by the number of Fibre Channel or iSCSI ports you assign to UR, and by RAID group configuration. You need to know port transfer capacity and the number of ports that your workload data will require.
  • Use collected workload data to size bandwidth for the Fibre Channel data path. As mentioned, bandwidth and journal volume sizing, along with data transfer speed, are interrelated. Bandwidth can be adjusted with the journal volume capacity and data transfer speed you plan to implement.
  • Design the data path network configuration, based on supported configurations, Fibre Channel switches, and the number of ports required for data transfer.
  • Plan data volumes (primary and secondary volumes) based on the sizing of P-VOLs and S-VOLs, RAID group configurations, and other considerations.
  • Review host OS requirements for data and journal volumes.
  • Adjust cache memory capacity for UR.

Assessing business requirements for data recovery

In a UR system, the journals remain fairly empty when the data path is able to transfer the updated data to the secondary site. However, if a path failure occurs, or if the amount of write-data exceeds bandwidth for an extended period of time, data flow can stop. Updated data that cannot be transferred to the secondary storage system accumulates in the master journal.

Use the following information to size the journals so they can hold the amount of data that can accumulate:

  • The amount of changed data that your application generates. Measure the write-workload to gather this information.
  • The maximum amount of time that journals can accumulate updated data. This information depends on your operation’s recovery point objective (RPO).

Determining your RPO

Your operation’s RPO is the maximum time that can pass after a failure or disaster occurs before data loss is greater than the operation can tolerate.

For example, if a disaster occurs at 10:00 AM and the operation can tolerate a loss of up to one hour of data, then the system must be corrected by 11:00 AM.

For proper journal sizing, the journal must have enough capacity to hold the maximum amount of data that can accumulate during the RPO period. In this example, that is one hour of update data; if the RPO is 4 hours, the journal must be sized to hold 4 hours of update data.

To assess RPO, you must know the host application’s write-workload.

By measuring write workload and IOPS, you can analyze the number of transactions the write workload represents, determine how many transactions the operation could lose and still remain viable, and determine the amount of time required to recover lost data from log files or to re-enter lost data. The result is your RPO.

Write-workload

Write-workload is the amount of data that changes in your production system, in MB per second. Write-workload varies according to the time of day, week, month, and quarter, which is why workload is measured over an extended period.

With the measurement data, you can calculate workload averages, locate peak workload, and calculate peak rolling averages, which show an elevated average. Use this data to calculate the amount of data that accumulates over your RPO time, for example, 2 hours. This is a base capacity for your journal volumes or represents a base amount of bandwidth that your system requires.
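
To make these statistics concrete, the following Python sketch computes the average, peak, and peak rolling average from write-workload measurements, and the base journal capacity for a 2-hour RPO. The sample values, one-minute sampling interval, and three-sample rolling window are invented for illustration.

```python
# Sketch: derive workload statistics from measured write-workload samples.
# Sample data and the rolling window size are illustrative assumptions.

samples_mb_s = [12, 18, 25, 30, 22, 45, 38, 20, 15, 28]  # one value per minute

def rolling_averages(samples, window):
    """Average over each run of `window` consecutive samples."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

average = sum(samples_mb_s) / len(samples_mb_s)
peak = max(samples_mb_s)
peak_rolling = max(rolling_averages(samples_mb_s, window=3))

rpo_hours = 2
# Capacity that accumulates if no data can be transferred for the RPO period.
base_journal_mb = peak_rolling * 3600 * rpo_hours

print(f"average={average:.1f} MB/s, peak={peak} MB/s, "
      f"peak rolling average={peak_rolling:.1f} MB/s")
print(f"base journal capacity for {rpo_hours}-hour RPO: {base_journal_mb:,.0f} MB")
```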

Whether you size for the average, rolling average, or peak workload depends on the amount of bandwidth you provide the data path (which is also determined by write-workload). Bandwidth and journal volume capacity work together and depend on your strategy for protecting data.

Measuring write-workload

Workload data is collected using Hitachi Performance Monitor or your operating system’s performance-monitoring feature. You will use IOPS to set up a proper data transfer speed, which you ensure through RAID group configuration and by establishing the number of Fibre Channel or iSCSI ports your UR system requires. Each RAID group has a maximum transaction throughput; the ports and their microprocessors have an IOPS threshold.

Workload and IOPS collection is best performed during the busiest time of month, quarter, and year. This helps you to collect data that shows your system’s actual workloads during high peaks and spikes, when more data is changing, and when the demands on the system are greatest. Collecting data over these periods ensures that the UR design you develop will support your system in all workload levels.

Data transfer speed considerations

The ability of your UR system to transfer data in a timely manner depends on the following two factors:

  • RAID group configuration
  • Fibre Channel or iSCSI port configuration

You must plan both of these elements to handle the amount of data and number of transactions your system will generate under extreme conditions.

RAID group configuration

A RAID group can consist of physical volumes with different rotational speeds, physical volumes of different capacities, and physical volumes in different RAID configurations (for example, RAID-1 and RAID-5). The data transfer speed of a RAID group is affected by its physical volumes and RAID configuration.

Fibre Channel or iSCSI port configuration

Your Fibre Channel or iSCSI ports have an IOPS threshold of which you should be aware so that you can configure an appropriate number of Fibre Channel or iSCSI ports.

You can use the performance monitoring information for the number of IOPS your production system generates to calculate the number of Fibre Channel or iSCSI ports the UR system requires.
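
A minimal Python sketch of this calculation follows; the per-port IOPS threshold and the peak workload are placeholder assumptions, not published port specifications.

```python
import math

# Sketch: estimate the number of UR ports needed from measured peak IOPS.
PORT_IOPS_THRESHOLD = 50_000  # assumed per-port limit; use your port's actual capacity
peak_write_iops = 120_000     # from performance monitoring

ports_required = math.ceil(peak_write_iops / PORT_IOPS_THRESHOLD)
print(f"UR ports required: {ports_required}")  # ceil(120,000 / 50,000) = 3
```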

Sizing journal volumes

Journal volumes should be sized to meet all possible data scenarios, based on your business requirements. If the amount of data exceeds journal capacity, performance problems and suspensions result.

Journal volumes cannot be registered if capacity is lower than 10 GB (VSP 5000 series) or 1.5 GB (VSP E series).

Only DP-VOLs can be registered in journals. Therefore, a Dynamic Provisioning pool must provide 10 GB (VSP 5000 series) or 1.5 GB (VSP E series) of journal volume capacity for each journal.

Procedure

  1. Follow the instructions for Measuring write-workload.

  2. Use your system’s peak write-workload and your organization’s RPO to calculate journal size. For example:

    RPO = 2 hours
    Write-workload = 30 MB/second

    Calculate write-workload for the RPO. In this example, write-workload over the two-hour period is calculated as follows:

    30 MB/second × 60 seconds = 1,800 MB/minute
    1,800 MB/minute × 60 minutes = 108,000 MB/hour
    108,000 MB/hour × 2 hours = 216,000 MB

    Basic journal volume size = 216,000 MB (216 GB)
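
The same arithmetic, expressed as a short Python sketch using the values from the example:

```python
# Sketch: journal capacity = write-workload accumulated over the RPO period.
rpo_hours = 2
write_workload_mb_s = 30

mb_per_hour = write_workload_mb_s * 60 * 60   # 108,000 MB/hour
basic_journal_mb = mb_per_hour * rpo_hours    # 216,000 MB
print(f"Basic journal volume size: {basic_journal_mb:,} MB "
      f"({basic_journal_mb // 1000} GB)")     # 216 GB
```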

Results

Journal volume capacity and bandwidth size work together. Your strategy for protecting data might allow you to adjust bandwidth or the size of your journal volumes. For details about sizing strategies, see Five sizing strategies.

Next steps

Note: Journal data stored in the master journal volume is not deleted until the data is restored to the secondary volume. Therefore, if the restore journal volume is larger than the master journal volume, the master journal volume becomes full first. If you are planning for disaster recovery, the secondary storage system must be able to handle the production workload, and the restore journals must therefore be the same size as the master journals.

Planning journals

UR manages pair operations for data consistency through the use of journals. UR journals enable update sequence consistency to be maintained across a group of volumes.

Understanding the consistency requirements for an application (or group of applications) and their volumes will indicate how to structure journals.

For example, databases are typically implemented in two sections. The bulk of the data is resident in a central data store, while incoming transactions are written to logs that are subsequently applied to the data store.

If the log volume "gets ahead" of the data store, it is possible that transactions could be lost at recovery time. Therefore, to ensure a valid recovery image on a replication volume, it is important that both the data store and logs are I/O consistent by placing them in the same journal.

Use the following information about journal volumes and journals to plan your journals:

  • A journal consists of one or more journal volumes and associated data volumes.
  • A journal can have only P-VOLs/master journals, or S-VOLs/restore journals.
  • A journal cannot belong to more than one storage system (primary or secondary).
  • All the P-VOLs, or S-VOLs, in a journal must belong to the same storage system.
  • Data volumes in different virtual storage machines cannot be registered in the same journal.
  • Master and restore journal IDs that are paired can be different.

    If using a consistency group ID, the consistency group ID of the P-VOL and S-VOL must be the same.

  • Each pair relationship in a journal is called a mirror. Each pair is assigned a mirror ID. The maximum number of mirror IDs is 4 (0 to 3) per system.
  • When UR and URz are used in the same system, individual journals must be dedicated either to one or the other, not both.
  • Master and restore journals are managed according to the journal ID.
  • Review journal specifications in System requirements.
  • A journal can contain a maximum of 2 journal volumes.

Planning journal volumes

In addition to sizing journal volumes, you must also consider the following requirements and restrictions:

  • Only DP-VOLs whose emulation type is OPEN-V can be used for journal volumes.

    Exceptions are DP-VOLs with the Data Direct Mapping attribute enabled or capacity saving enabled, and the deduplication system data volume; these cannot be used as journal volumes.

  • Volumes in a virtual storage machine cannot be used as journal volumes.
  • A journal ID can be used in one virtual storage machine only.
  • Volumes to which an LU path or a namespace for NVMe is set from a host cannot be registered as journal volumes.

    An LU path or a namespace for NVMe cannot be set on journal volumes.

  • Journal volumes must be registered in a journal before the initial copy operation is performed.
  • Journal volumes must be registered on both the primary and secondary storage systems.
  • You can register two journal volumes in a journal in the primary storage system and in the secondary storage system, but we recommend using one journal volume in each system. The second journal volume becomes the reserve journal volume and is not used for normal operations.
  • Journal volumes should be sized according to RPO and write-workload. For details, see Sizing journal volumes.
  • Journal volume capacity:
    • Journal volumes in a journal can have different capacities.
    • A master journal volume and the corresponding restore journal volume can have different capacities.
    • The displayed journal volume capacity is the master journal capacity and restore journal capacity. The reserve journal volume is not included in the displayed journal volume capacity.
    • Journal volume capacity is not included in accounting capacity.
    • In the GUI documents, the journal volume capacity is called the journal capacity.
    • In the CCI documents, the journal volume capacity is called the "capacity for the journal data on the journal volume" and the "capacity of the data block size of the journal volume".
    • See the Provisioning Guide for information about adding capacity to a journal volume.
  • The number of journal volumes in the master journal does not have to be equal to the number of volumes in the restore journal.
  • A data volume and its associated journal volume can belong to only one journal.
  • (VSP E series) Data volumes and journal volumes in the same journal must belong to the same controller.
  • (VSP E series) Do not register a volume to a journal during quick formatting. Doing so stalls the operation.
  • Journal volumes consist of two areas: One area stores journal data, and the other area stores metadata for remote copy.
  • If you expand the capacity of a journal volume while remote copy is in progress, only the journal data area of the added capacity is used; its metadata area is unavailable. To make the metadata area available, split and then resynchronize all pairs in the journal group.
  • If you expand a journal volume whose size exceeds 36 GB, the journal data used for the expansion must be restored to the S-VOL before the extended capacity can be used. This might take some time.

Planning journal volumes for delta resync

For the 3DC multi-target configuration using delta resync, use the following formula to determine the journal volume capacity in the Universal Replicator primary site (TrueCopy secondary site).

Perform the following calculations A and B, and use the larger result:

A. journal-volume-capacity > (VH-L - VL-R) × t

where:

  • VH-L: data transfer speed between the host and the primary system
  • VL-R: data transfer speed between the primary system and the secondary system

  • t: the duration of the peak data transfer workload

B. journal-volume-capacity > (VH-L × t) × 1.5

where:

  • VH-L: data transfer speed between the host and the primary system
  • t: the time it takes until the delta resync operation is performed

Formula B includes the factor 1.5 because delta resync fails if, when the UR delta resync P-VOL is updated, journal data occupies more than 70% of the journal volume capacity at the UR delta resync primary site (TC secondary site).
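
A minimal Python sketch of both formulas, with illustrative transfer speeds and durations; V(H-L) and V(L-R) follow the definitions above.

```python
# Sketch: delta resync journal sizing; all input values are examples.
v_h_l = 50          # V(H-L): host -> primary transfer speed, MB/s
v_l_r = 30          # V(L-R): primary -> secondary transfer speed, MB/s
t_peak = 2 * 3600   # formula A: duration of the peak workload, seconds
t_delta = 1 * 3600  # formula B: time until the delta resync is performed, seconds

capacity_a = (v_h_l - v_l_r) * t_peak
# Factor 1.5 keeps journal usage under the 70% threshold at which delta resync fails.
capacity_b = (v_h_l * t_delta) * 1.5

required_mb = max(capacity_a, capacity_b)
print(f"A = {capacity_a:,} MB, B = {capacity_b:,.0f} MB; "
      f"size the journal volume above {required_mb:,.0f} MB")
```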

Planning pair volumes

The following information can help you prepare volumes for configuration. For more information, see system requirements and specifications in Requirements and specifications.

  • Each P-VOL requires one S-VOL only, and each S-VOL requires one P-VOL only.
  • The emulation and capacity of the S-VOL must be the same as those of the P-VOL.
  • When the S-VOL is connected to the same host as the P-VOL, the S-VOL must be defined to remain offline.
  • The LUN paths must be defined for both P-VOL and S-VOL, or both P-VOL and S-VOL must be defined as namespaces on the NVM subsystems to which NVM subsystem ports have been added.
  • When creating multiple pairs in the same operation using Device Manager - Storage Navigator, make sure that you set up S-VOL LUNs in a way that allows the system to correctly match them to selected P-VOLs.

    Even though you select multiple volumes as P-VOLs in the Device Manager - Storage Navigator Create UR Pairs procedure, you are able to specify only one S-VOL. The system automatically assigns LUs on the secondary storage system as S-VOLs for the other selected P-VOLs according to LUN.

    You will have two options for specifying how the system matches S-VOLs to P-VOLs.

    - Interval: The interval you specify will be skipped between LU numbers in the secondary storage system.

    For example, suppose you specify LU 01 as the initial (base) S-VOL, and specify 3 for Interval. This results in secondary storage system LU 04 being assigned to the next P-VOL, 07 assigned to the subsequent P-VOL, and so on. To use Interval, you set up secondary storage system LU numbers according to the interval between them.

    - Relative Primary Volume. The difference is calculated between the LDEV numbers of two successive P-VOLs. S-VOLs are assigned according to the closest LUN number.

    For example, if the LUN numbers of three P-VOLs are 1, 5, and 6, and you set the LUN number of the initial S-VOL (Base Secondary Volume) to 2, the LUN numbers of the three S-VOLs are set to 2, 6, and 7, respectively. Both matching options are illustrated in the sketch following this list.

  • You can create a UR pair using a TrueCopy initial copy, which takes less time. To do this, system option 474 must be set on the primary and secondary storage systems. Also, a script is required to perform this operation. For more on system option 474 and how to do this operation, contact customer support.
  • UR supports the Virtual LUN feature, which allows you to configure custom LUs that are smaller than standard LUs. When custom LUs are assigned to a UR pair, the S-VOL must have the same capacity as the P-VOL. For details about Virtual LUN feature, see the Provisioning Guide for your storage system.
  • Identify the volumes that will become the P-VOLs and S-VOLs. Note the port, host group ID, iSCSI target ID, and LUN ID of each volume. This information is used during the initial copy operation. When you use a volume connected to a host using NVMe-oF for a pair volume, specify the pair volume as a dummy LU. For details, see the relevant topic in the Command Control Interface User and Reference Guide.
  • You can create multiple pairs at the same time. Review the prerequisites and steps in Creating a UR pair.
  • When you create a UR pair, you will have the option to create only the relationship, without copying data from P-VOL to S-VOL. You can use this option only when data in the two volumes is identical.
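
The following Python sketch illustrates the two S-VOL matching options described above, reproducing the documented examples. The helper functions are hypothetical, not part of Device Manager - Storage Navigator.

```python
# Sketch of the two S-VOL matching options (values from the examples above).

def match_by_interval(num_pvols, base_svol_lun, interval):
    """Interval: step through secondary LU numbers by the specified interval."""
    return [base_svol_lun + i * interval for i in range(num_pvols)]

def match_relative(pvol_luns, base_svol_lun):
    """Relative Primary Volume: preserve the LUN differences between P-VOLs."""
    offsets = [lun - pvol_luns[0] for lun in pvol_luns]
    return [base_svol_lun + offset for offset in offsets]

print(match_by_interval(3, base_svol_lun=1, interval=3))  # [1, 4, 7]
print(match_relative([1, 5, 6], base_svol_lun=2))         # [2, 6, 7]
```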

Maximum number of pairs allowed

The maximum number of pairs might be smaller than the number listed in System requirements because the amount of used bitmap area differs depending on the user environment (volume size). The maximum number for your storage system is limited by:

  • The number of cylinders in the volumes, which must be calculated.
  • The number of bitmap areas required for Universal Replicator data and journal volumes. This is calculated using the number of cylinders.

    If the volume size is larger than 4,194,304 MB (8,589,934,592 blocks), the bitmap area is not used. Therefore, it is not necessary to calculate the maximum number of pairs when creating UR pairs with DP-VOL whose size is larger than 4,194,304 MB (8,589,934,592 blocks).

    Note: When Advanced System Setting No. 5 is enabled, the bitmaps for all pairs created with DP-VOLs smaller than 262,668 cylinders (4 TB) are managed in hierarchical memory instead of shared memory when a pair is created or resynchronized. In this case, the bitmap area in shared memory is not used, so you do not need to calculate the maximum number of pairs when Advanced System Setting No. 5 is enabled.
    Note: When Advanced System Setting No. 6 is enabled, the bitmaps for all pairs created with DP-VOLs smaller than 262,668 cylinders are managed in hierarchical memory instead of shared memory when a pair is created. In this case, the bitmap area in shared memory is not used, so you do not need to calculate the maximum number of pairs when Advanced System Setting No. 6 is enabled.
Caution: The bitmap areas that are used for UR are also used for URz, TC, TCz, and GAD. If you use UR with any of these products, use the total number of bitmap areas for all pairs to calculate the maximum number of pairs. In addition, if UR and TC share the same volume, use the total number for both pairs regardless of whether the shared volume is primary or secondary.

Calculating maximum number of pairs

The calculations in this topic use the following conventions:

  • ceil (<value>) indicates that the value enclosed in parentheses must be rounded up to the next integer, for example: ceil (2.2) = 3
  • Number of logical blocks indicates volume capacity measured in blocks.

    Number of logical blocks = volume capacity (in bytes) / 512

Calculating the number of cylinders

Use the following formula:

Number of cylinders = (ceil ( (ceil (number of logical blocks / 512)) / 15))

Calculating the number of required bitmap areas

Use the following formula:

ceil ((number of cylinders × 15) / 122,752)

where:

  • number of cylinders × 15 indicates the number of slots
  • 122,752 is the number of slots that a bitmap area can manage

    Performing this calculation on the combined capacity of multiple volumes produces inaccurate results. Perform the calculation for each volume separately, and then total the bitmap areas. The following examples show correct and incorrect calculations for two volumes: one of 10,017 cylinders and another of 32,760 cylinders.

    Correct calculation

    ceil ((10,017 × 15) / 122,752) = 2

    ceil ((32,760 × 15) / 122,752) = 5

    Total: 7

    Incorrect calculation

    10,017 + 32,760 = 42,777 cylinders

    ceil ((42,777 × 15) / 122,752) = 6

    Total: 6
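
These formulas can be verified with a short Python sketch; the volume sizes are the ones used in the example above.

```python
import math

# Sketch of the cylinder and bitmap-area formulas.

def cylinders(logical_blocks):
    return math.ceil(math.ceil(logical_blocks / 512) / 15)

def bitmap_areas(num_cylinders):
    # 15 slots per cylinder; one bitmap area manages 122,752 slots.
    return math.ceil((num_cylinders * 15) / 122_752)

# Correct method: calculate per volume, then total the bitmap areas.
volumes_cyl = [10_017, 32_760]
print(sum(bitmap_areas(c) for c in volumes_cyl))  # 2 + 5 = 7

# Incorrect method: summing cylinders first understates the requirement.
print(bitmap_areas(sum(volumes_cyl)))             # 6
```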

Calculating the maximum number of pairs

The maximum number of pairs is determined by the following:

  • The number of bitmap areas required for Universal Replicator (previously calculated).
  • The total number of bitmap areas in VSP 5000 series: 65,536.
  • The total number of bitmap areas in VSP E series: 65,536.

    Bitmap areas reside in an additional shared memory, which is required for Universal Replicator.

    • Bitmap areas are used by TrueCopy, Universal Replicator, TrueCopy for Mainframe, Universal Replicator for Mainframe, and global-active device. Therefore, the number of bitmap areas used by these other program products (if any) must be subtracted from the total number of bitmap areas (65,536 for example), with the difference used to calculate the maximum number of pairs available for Universal Replicator.
    • If TrueCopy and Universal Replicator share the same volume, you must use the total number of bitmap areas for both pairs regardless of whether the shared volume is primary or secondary.
  • If you are using a CCI command device, the maximum number of pairs supported is one less than the maximum supported by the storage system.

Calculate the maximum number of pairs using the following formula.

Maximum number of pairs = floor(number of available bitmap areas / required number of bitmap areas)

For VSP 5000 series, if the calculated maximum number of pairs exceeds the total number of LDEVs, and the total LDEVs are less than 65,280, then the total LDEV number is the maximum number of pairs that can be created.

For VSP E series, if the calculated maximum number of pairs exceeds the maximum listed for the model in System requirements, the model's listed maximum applies.
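
A minimal Python sketch of the calculation; the bitmap areas consumed by other program products and per pair are assumed values.

```python
import math

# Sketch: maximum UR pairs from available bitmap areas (illustrative inputs).
TOTAL_BITMAP_AREAS = 65_536     # VSP 5000 series and VSP E series
used_by_other_products = 1_024  # assumed TC/TCz/URz/GAD consumption
areas_per_pair = 7              # from the bitmap-area calculation above

available = TOTAL_BITMAP_AREAS - used_by_other_products
max_pairs = math.floor(available / areas_per_pair)

uses_cci_command_device = True  # one pair fewer is supported in that case
if uses_cci_command_device:
    max_pairs -= 1
print(f"maximum UR pairs: {max_pairs:,}")
```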

Maximum initial copy operations and priorities

During configuration, you specify the maximum number of initial copies that can run at one time; the system allows up to 128 concurrent initial copies for UR. This limit exists for performance reasons: the more initial copies that run concurrently, the slower each one performs.

You will also specify the priority for each initial copy during the create pair operation. Priority is used when you are creating multiple initial copies during an operation. Creating multiple initial copies in one operation is possible because you can specify multiple P-VOLs and S-VOLs in the Paircreate dialog box. The pair with priority 1 runs first, and so on.

When you create more pairs than the maximum initial copy setting, the pairs with priorities within the maximum number run concurrently, while pairs with priorities beyond the maximum number wait. When one pair completes, a waiting pair begins, and so on.

If you perform a pair operation for multiple pairs (for a specific kind of data, for example), and then perform another operation for multiple pairs (for another kind of data, for example), the pairs in the first operation are completed in the order of their assigned priorities. The system begins processing pairs in the second set when the number of pairs left in the first set drops below the maximum number of initial copy setting. The following figure illustrates how the maximum number of initial copy setting works to control the impact of concurrent operations.
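
The throttling behavior can be sketched in Python. The pair names, priorities, and two-copy limit below are invented, and the sketch assumes the oldest running copy finishes first, whereas real completion depends on copy progress.

```python
from collections import deque

# Sketch: initial copies run in priority order, at most max_concurrent at a time.

def schedule(pairs, max_concurrent):
    queue = deque(sorted(pairs, key=lambda p: p["priority"]))
    running, events = [], []
    while queue or running:
        # Start waiting pairs while copy slots are free.
        while queue and len(running) < max_concurrent:
            pair = queue.popleft()
            running.append(pair)
            events.append(f"start {pair['name']} (priority {pair['priority']})")
        # Assume the oldest running copy completes, freeing a slot.
        events.append(f"finish {running.pop(0)['name']}")
    return events

pairs = [{"name": f"pair{i}", "priority": i} for i in range(1, 6)]
for event in schedule(pairs, max_concurrent=2):
    print(event)
```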

See the step for Priority in the procedure in Creating a UR pair.

Restrictions when creating an LU whose LU number is 2048 or greater

A pair can be created using LUs whose LU numbers are 2048 to 4095 if you connect VSP 5000 series, whose DKCMAIN program version is 90-02-0x-xx/xx or later, as the source storage system.

Do not try to create a pair using LUs whose LU numbers are 2048 to 4095 unless the storage system to which you are connecting is also VSP 5000 series, whose DKCMAIN program version is 90-02-0x-xx/xx or later. Failures, such as Pair Suspend, might occur if you try to create a pair using LUs whose LU numbers are 2048 or greater and the storage system to which you are connecting is one of the following:

  • A storage system other than a VSP 5000 series
  • A VSP 5000 series whose DKCMAIN program version is earlier than 90-02-0x-xx/xx.

For VSP 5000 series whose DKCMAIN program version is 90-02-0x-xx/xx or later, up to 4096 LU paths are possible for a Fibre Channel port or iSCSI port. These restrictions do not apply to NVMe-oF ports.

  • If you set a host group for a Fibre Channel port, up to 4096 LU paths can be set for a host group. In addition, up to 4096 LU paths can be set for a port through the host group.
  • If you configure an iSCSI target for an iSCSI port, you can configure up to 4096 LU paths for an iSCSI target. In addition, up to 4096 LU paths can be set for a port through the iSCSI target.

For VSP E series, up to 2048 LU paths are possible. These restrictions do not apply to NVMe-oF ports.

  • If you set a host group for a Fibre Channel port, up to 2048 LU paths can be set for a host group. In addition, up to 2048 LU paths can be set for a port through the host group.
  • If you configure an iSCSI target for an iSCSI port, you can configure up to 2048 LU paths for an iSCSI target. In addition, up to 2048 LU paths can be set for a port through the iSCSI target.

The following table lists LU numbers that can be used when different target storage systems and DKCMAIN program versions are connected to VSP 5000 series.

Source storage system* | Target storage system | Target DKCMAIN program version | LU numbers that can create a pair | LU paths that can be set for a port
VSP 5100, VSP 5500 | VSP | 70-06-63-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5100, VSP 5500 | VSP G1x00, VSP F1500 | 80-06-7x-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5100, VSP 5500 | VSP 5100, VSP 5500 | Earlier than 90-02-0x-xx/xx | 0 to 2047 | 0 to 2048
VSP 5100, VSP 5500 | VSP 5100, VSP 5500 | 90-02-0x-xx/xx or later | 0 to 4095 | 0 to 4096
VSP 5100, VSP 5500 | VSP 5200, VSP 5600 | Any | 0 to 4095 | 0 to 4096
VSP 5100, VSP 5500 | VSP G/F350, G/F370, G/F700, G/F900 | 88-04-0x-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5100, VSP 5500 | VSP E590, VSP E790 | 93-03-22-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5100, VSP 5500 | VSP E990 | 93-02-01-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5100, VSP 5500 | VSP E1090 | 93-06-22-x0/00 or later | 0 to 2047 | 0 to 2048
VSP 5200, VSP 5600 | VSP G1x00, VSP F1500 | 80-06-87-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5200, VSP 5600 | VSP 5100, VSP 5500 | 90-08-01-xx/xx or later | 0 to 4095 | 0 to 4096
VSP 5200, VSP 5600 | VSP 5200, VSP 5600 | Any | 0 to 4095 | 0 to 4096
VSP 5200, VSP 5600 | VSP G/F350, G/F370, G/F700, G/F900 | 88-08-04-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5200, VSP 5600 | VSP E590, VSP E790, VSP E990 | 93-05-03-xx/xx or later | 0 to 2047 | 0 to 2048
VSP 5200, VSP 5600 | VSP E1090 | 93-06-22-x0/00 or later | 0 to 2047 | 0 to 2048

* For details about the supported DKCMAIN microcode versions for the source and target storage systems, see your system requirements.

Disaster recovery considerations

Disaster recovery preparation begins when you plan the UR system. The following are the main tasks for preparing for disaster recovery:

  • Identify the data volumes that you want to back up for disaster recovery.
  • Pair the identified volumes using UR.
  • Establish file and database recovery procedures.
  • Install and configure host failover software for error reporting communications (ERC) between the primary and secondary sites.

For more information about host failover error reporting, see Host failover software. Also, review UR disaster recovery operations to become familiar with disaster recovery processes.

Host failover software

Host failover software is a critical component of any disaster recovery effort. When a primary storage system fails to maintain synchronization of a UR pair, the primary storage system generates sense information. This information must be transferred to the secondary site using the host failover software for effective disaster recovery. CCI provides failover commands that interface with industry-standard failover products.

Cache and additional shared memory

Cache must be operable on both the pair's primary and secondary systems; otherwise, pairs cannot be created. The secondary system cache must be configured to adequately support Universal Replicator remote copy workloads and any local workload activity.

(VSP E series) Perform the following two calculations and add the smaller result as the cache memory capacity for Universal Replicator. You can remove cache memory or shared memory that is no longer necessary.

  • 1 GB × number-of-journals
  • 25% of the cache memory mounted on the storage system
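
For example, a short Python sketch of this rule, with an assumed journal count and mounted cache capacity:

```python
# Sketch: additional cache for UR = the smaller of the two calculations above.
num_journals = 8
mounted_cache_gb = 512  # total cache mounted on the storage system

additional_cache_gb = min(1 * num_journals, 0.25 * mounted_cache_gb)
print(f"add {additional_cache_gb:.0f} GB of cache for UR")  # min(8, 128) -> 8 GB
```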

The following workflows describe how to add and remove the cache memory or shared memory when it is used with UR pairs.

Adding and removing cache memory

Use the following workflow to add or remove cache memory in a storage system in which UR pairs already exist:

Procedure

  1. Identify the status of the UR volumes in the storage system.

  2. If a UR volume is in the COPY status, wait until the status changes to PAIR, or split the UR pair.

    Do not add or remove cache memory when any volumes are in the COPY status.
  3. When the status of all volumes has been confirmed, cache memory can be added to or removed from the storage system by your service representative. Contact customer support for adding or removing cache memory.

  4. After the addition or removal of cache memory is complete, resynchronize the pairs that you split in step 2.

Adding shared memory

Use the following workflow to add shared memory to a storage system in which UR pairs already exist:

Procedure

  1. Identify the status of the UR volumes in the storage system.

  2. If a UR volume is in the COPY status, wait until the status changes to PAIR, or split the UR pair.

    Do not add shared memory when any volumes are in the COPY status.
  3. When the status of all volumes has been confirmed, shared memory can be added to the storage system by your service representative. Contact customer support for adding shared memory.

  4. After the addition of shared memory is complete, resynchronize the pairs that you split in step 2.

Removing shared memory

You can remove shared memory if it is redundant.

Procedure

  1. Identify the status of all volumes in the storage system.

  2. If a volume is used by a UR pair, delete the UR pair.

  3. (VSP E series) If you used journals exceeding the maximum number, release all the registered journals.

    See the system requirements table for the maximum number of journals.
  4. Shared memory can be removed from the storage system by your service representative. Contact customer support for removing shared memory.

Sharing volumes with other product volumes

Universal Replicator volumes can be shared with other product volumes. Sharing pair volumes enhances replication solutions, for example, when Universal Replicator and TrueCopy or ShadowImage volumes are shared.

Planning UR in multiple storage systems using a consistency group

You can perform copy operations simultaneously on multiple UR pairs residing in multiple primary and multiple secondary storage systems by placing journals in the primary storage systems in a CCI consistency group. Data update order in copy processing is guaranteed to the secondary storage systems.

With multiple systems, the journals in the paired secondary storage systems are automatically placed in the consistency group.

With multiple systems, you can also place the journals from both open and mainframe systems in the same CCI consistency group.

In addition, Universal Replicator volumes in multiple systems can be shared with other UR pairs and with TrueCopy pairs. For details, see Configurations with TrueCopy.

A UR system can include a maximum of four primary storage systems and a maximum of four secondary storage systems.

VSP 5100, VSP 5500 and VSP E series can connect to the following:

  • VSP G/F350, G/F370, G/F700, G/F900
  • VSP G200, G400, G600, G800, VSP F400, F600, F800
  • VSP G1x00, VSP F1500
  • VSP 5100, VSP 5500
  • HUS VM

VSP 5200, VSP 5600 can connect to the following:

  • VSP 5000 series
  • VSP E series

VSP E1090 can connect to the following:

  • VSP 5000 series
  • VSP E series

VSP E990 cannot connect to HUS VM, VSP G1x00, or VSP F1500.

Any combination of one to four primary and one to four secondary storage systems can be used. For example, you can include journals from four primary storage systems and four secondary storage systems, or from two primary storage systems and one secondary storage system, and so on.

The following figure shows a sample configuration, which is composed of two primary storage systems and two secondary storage systems.


When data is sent to the secondary storage systems, the systems check the time stamps, which are added when data is written by the hosts to the P-VOLs. The secondary storage systems then restore the data to the S-VOLs in chronological order to ensure that the update sequence is maintained.
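
Conceptually, the restore ordering resembles the following Python sketch; the entries and their fields are invented for illustration and do not reflect the internal journal format.

```python
# Sketch: apply journal entries in (time stamp, sequence number) order so the
# S-VOLs receive updates in the same order the hosts wrote them.

entries = [
    {"ts": 100, "seq": 2, "write": "B"},
    {"ts": 100, "seq": 1, "write": "A"},
    {"ts": 101, "seq": 1, "write": "C"},
]

for entry in sorted(entries, key=lambda e: (e["ts"], e["seq"])):
    print(f"apply {entry['write']} (ts={entry['ts']}, seq={entry['seq']})")
```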

Requirements and recommendations for multiple system CTGs

Note the following when planning for multiple-system consistency groups:

  • When using HDvM - SN, management clients are required at the primary and secondary sites.
  • CCI is recommended on the host at the primary and secondary sites.
  • Journal data is updated in the secondary storage system based on the time stamp and the sequence number issued by the host with write requests to the primary storage system. Time and sequence information remain with the data as it moves to the master and restore journals and then to the secondary volume.
  • With CCI consistency groups, when a pair is split from the S-VOL side (P-VOL status = PAIR), each storage system copies the latest data from the P-VOLs to the S-VOLs. P-VOL time stamps might differ by storage system, depending on when they were updated.
  • Disaster recovery can be performed with multiple storage systems, including those with UR and URz journals, using CCI. See Switching host operations to the secondary site for information.
  • An error in one journal can cause suspension of all journals. For details, see General troubleshooting.
  • The time stamps issued by the mainframe host are not used when the URz journal is included in a CCI consistency group.
  • If you create a URz pair in a configuration that combines multiple primary and secondary storage systems, the URz pair volume cannot be shared with a Compatible FlashCopy® volume.
  • Restoring data to the secondary storage system is performed when the time stamp of the copied journal is updated. The recommended interval between time stamps is one second.

    Consider the following before setting the interval:

    • I/O response time slows when time stamps are updating among multiple storage systems. If you shorten the interval, more time stamps are issued, resulting in an I/O response time that is even slower.
    • If the interval is lengthened, the amount of time that journal data can accumulate increases, which results in an increased amount of data to be copied.
    • None of the above is true during the initial copy or resynchronization. During these operations, lengthening the interval between time stamps does not result in more accumulated journal data, because data restoring takes place regardless of time stamp.
  • The recommended method for executing CCI commands is the in-band (host-based) method. This prevents I/O response from deteriorating, which can occur with the out-of-band (LAN-based) method.
  • In a configuration in which multiple storage systems in primary and secondary sites are combined, configure the remote copy environment of each storage system as equally as possible. If the following conditions exist, the restoration performance of each journal is degraded, and journal data is accumulated:
    • The copy performance between the primary and secondary sites of some pairs is lower than other storage systems.
    • A problem occurs in a line between pairs.
  • It is not possible to register a journal to multiple CCI consistency groups.

Registering multiple journals to a CCI consistency group

Generally, only one journal should be registered to a CCI consistency group (CTG). However, in the configurations shown in the following figures, journals from a maximum of four storage systems can be registered to one CCI CTG. When the primary-site and secondary-site storage systems run the same program products, you can register any of the UR system's journals.

For example, you can configure a CCI CTG with four storage systems each at the primary and secondary sites, or with two storage systems at the primary site and one storage system at the secondary site.

In the following figures, multiple journals are registered to a consistency group.

(Figures: Configuration of a consistency group with multiple journals, examples 1 to 3)

Planning for other storage systems

You should be aware of differences between your storage system and other storage systems if you want to pair volumes between them.

  • You can perform remote copy operations when connecting VSP 5000 series or VSP E series to other storage systems. The supported models differ depending on the model and microcode version. For details, see System requirements.

    Data can be copied from VSP 5100 or VSP 5500 to and from the following storage systems:

    • VSP 5000 series
    • VSP G1x00, VSP F1500
    • VSP
    • VSP E series
    • VSP G130, VSP G/F350, G/F370, G/F700, G/F900

    Data can be copied from VSP 5200 or VSP 5600 to and from the following storage systems:

    • VSP 5000 series
    • VSP G1x00, VSP F1500
    • VSP E series

    For information about VSP 5000 series UR×UR support, contact customer support.

    Data can be copied from VSP E series to the following storage systems:

    • VSP G/F350, G/F370, G/F700, G/F900
    • VSP G200, G400, G600, G800, VSP F400, F600, F800
    • VSP G1x00, VSP F1500 (connection with VSP E990 is not supported)
    • VSP 5100, VSP 5500
    • HUS VM (connection with VSP E990 is not supported)
    Note: When the MCU is a VSP 5000 series system and the RCU is one of the following systems, asynchronous copy performance might be degraded to approximately 30% to 50% of the performance achieved when VSP 5000 series systems are connected to each other:
    • When MCU is VSP 5100 or VSP 5500:
      • VSP F1500 and VSP G1x00
      • VSP
      • VSP E series
      • VSP G130, VSP G/F350, G/F370, G/F700, G/F900
    • When MCU is VSP 5200 or VSP 5600:
      • VSP F1500 and VSP G1x00
      • VSP E series
  • A remote path must be connected between the current storage system and the other storage systems. For configuration instructions, see Configuring primary and secondary storage systems for UR .
  • When connecting to another storage system, the number of usable volumes varies depending on the current storage system model.
  • When connecting to another storage system, contact your Hitachi Vantara representative for information regarding supported microcode versions.
  • When using the previous model storage system at the secondary site, make sure the primary and secondary storage systems have unique serial numbers.
    Note: When you specify the VSP 5000 series serial number in CCI commands, add a "5" at the beginning of the serial number. For example, for serial number 12345, enter 512345.
  • VSP 5000 series, VSP G1x00, VSP F1500, VSP E series, VSP G/F900, and VSP can be used in 3-data-center (3DC) cascade or multi-target configurations using VSP 5100, VSP 5500. VSP 5000 series and VSP E series can be used in 3-data-center (3DC) cascade or multi-target configurations using VSP 5200, VSP 5600. These configurations are used when combining TrueCopy and Universal Replicator systems. See Configurations with TrueCopy to review these configurations.

    There are no restrictions for combining primary and secondary sites among VSP 5100, VSP 5500, VSP G1x00, VSP F1500, VSP E series, VSP G/F900, and VSP in 3DC configurations.

    VSP can be used only when you set up the configuration using VSP 5100, VSP 5500, VSP G1x00, or VSP F1500. For information about VSP 5000 series UR×UR support, contact customer support.

  • If you connect with VSP E series, the CTG ID for the P-VOL and S-VOL in a pair must match:
    • Connecting to VSP E590, VSP E790: Set CTG ID between 0 and 127.
    • Connecting to VSP G/F350, VSP G/F370, VSP G/F700: Set CTG ID between 0 and 127.
    • Connecting to VSP E990, VSP E1090: Set CTG ID between 0 and 255.
    • Connecting to VSP G/F900: Set CTG ID between 0 and 255.
    Note: To avoid operational errors, set the CTG ID and the journal ID to the same value.

    For a 3DC cascade or multi-target configuration combined with TrueCopy, all storage systems in the configuration must be VSP E series, VSP F900, VSP G900, VSP F800, VSP G800, VSP F1500, VSP G1x00, VSP 5000 series, or HUS VM.

Preparing the storage systems for UR

Use the following guidelines to ensure that your storage systems are ready for UR:

  • Identify the locations where your UR primary and secondary data volumes will be located, and then install and configure the storage systems.
  • Make sure that primary and secondary storage systems are properly configured for UR operations, for example, cache memory considerations. See the entry for Cache and Nonvolatile Storage in the requirements table, System requirements. Also consider the amount of Cache Residency Manager data to be stored in cache when determining the required amount of cache.
  • Make sure that the required system option modes for your UR configuration have been set on the primary and secondary storage systems. For details, contact customer support.
  • Make sure that primary storage systems are configured to report sense information to the host. Secondary storage systems should also be attached to a host server to enable reporting of sense information in the event of a problem with an S-VOL or secondary storage system. If the secondary storage system is not attached to a host, it should be attached to a primary site host server so that monitoring can be performed.
  • If power sequence control cables are used, set the power select switch for the cluster to LOCAL to prevent the primary storage system from being powered off by the host. Make sure the secondary storage system is not powered off during UR operations.
  • Install the UR remote copy connections (Fibre Channel or iSCSI cables, switches, and so on) between the primary and secondary storage systems.
  • When setting up data paths, distribute them between different storage clusters and switches to provide maximum flexibility and availability. The remote paths between the primary and secondary storage systems must be separate from the remote paths between the host and secondary storage system.

Advanced system settings

Advanced system settings allow the storage systems to be configured to specific customer operating requirements. The advanced system settings can be used with Universal Replicator in the following conditions:

  • Creating a delta resync configuration with Universal Replicator and TrueCopy, or global-active device.
  • Configuring split options for mirrors.
  • Switching the control of differential bitmaps of volumes used in a Universal Replicator for Mainframe pair.
  • Expanding the capacity of a DP-VOL used as a volume of a Universal Replicator pair.

The advanced system settings are described below. You can change the advanced system settings in the Edit Advanced System Settings window. For more information about changing the advanced system settings, see the System Administrator Guide.

Note: Ensure that the MCU and RCU have the same advanced system settings.
Advanced System Setting No. 5 (default: OFF)

Switch the control of differential bitmaps of volumes used for TC/TCMF/UR/URMF/GAD pairs whose capacity is 4 TB or less (for open volumes) or 262,668 Cyl or less (for mainframe volumes) at creation or resynchronization of pairs.

When enabled, for a TC, TCMF, UR, URMF, or GAD pair that uses an open volume (DP-VOL only) with user capacity of 4,194,304 MB or less, or a mainframe volume with user capacity of 262,668 Cyl or less, differential data for the target volume is managed by hierarchical difference at new pair creation or pair resynchronization (hierarchical difference management).

In addition, for a TC, TCMF, UR, URMF, or GAD pair that uses an open volume (DP-VOL only) with user capacity exceeding 4,194,304 MB, or a mainframe volume with user capacity exceeding 262,668 Cyl, differential data for the target volume is managed by hierarchical difference at new pair creation regardless of this setting.

Advanced System Setting No. 6 (default: OFF)

Switch the control of differential bitmaps of volumes used for TC/TCMF/UR/URMF/GAD pairs whose capacity is 4 TB or less (for open volumes) or 262,668 Cyl or less (for mainframe volumes) at creation of pairs.

When enabled, for a TC, TCMF, UR, URMF, or GAD pair that uses an open volume (DP-VOL only) with user capacity of 4,194,304 MB or less, or a mainframe volume with user capacity of 262,668 Cyl or less, differential data for the target volume is managed by hierarchical difference at new pair creation (hierarchical difference management).

In addition, for a TC, TCMF, UR, URMF, or GAD pair that uses an open volume (DP-VOL only) with user capacity exceeding 4,194,304 MB, or a mainframe volume with user capacity exceeding 262,668 Cyl, differential data for the target volume is managed by hierarchical difference at new pair creation regardless of this setting.

How this setting works with setting No. 5 is described in the following table.

No. 5 | No. 6 | Description
Disabled | Disabled | Create operation: shared memory (SM) difference management is applied at new pair creation. Resync operation: the management method of the existing pair changes from hierarchical differences to SM differences when the pair is resynchronized and the pair status changes to PAIR after the No. 5 and No. 6 settings are made.
Disabled | Enabled | Create operation: hierarchical difference management is applied at new pair creation. Resync operation: the differential data management method of the existing pair is not changed.
Enabled | Disabled | Create operation: hierarchical difference management is applied at new pair creation. Resync operation: the management method of the existing pair changes from SM differences to hierarchical differences when the pair is resynchronized and the pair status changes to PAIR after the No. 5 and No. 6 settings are made.

Note:
  • If the user capacity of a volume used in a TC/TCMF, UR/URMF, or GAD pair exceeds 4,194,304 MB for an open volume (only DP-VOL) or 262,668 Cyl for a mainframe volume, the differential data management for the target volume is enabled by the hierarchical difference at the new pair creation regardless of the settings of the advanced system settings No. 5 and No. 6.
  • Configure the same values for advanced system settings No. 5 and No. 6 on both the primary and secondary storage systems.
  • If the system option mode (SOM) 1198 or 1199 is applied, the difference management method with SOM 1198 or 1199 takes precedence. For more information, see the System Administrator Guide.
Advanced System Setting No. 14 (VSP 5000 series; default: OFF)

After delta resync, the pair status remains COPY during journal data copy.

  • Enabled: When a delta resync is performed in a 3DC multi-target configuration with TC and UR, the pair status remains COPY during journal data copy.
  • Disabled: When a delta resync is performed in a 3DC multi-target configuration with TC and UR, the pair status changes directly to PAIR.

This setting corresponds to system option mode 1015 for VSP G1x00, VSP F1500, and previous models.

Advanced System Setting No. 15 (VSP 5000 series; default: OFF)

One minute after remote path failure detection, the mirror is split.

  • Enabled: When a remote path failure is detected, the mirror is split if the remote path is not restored within one minute after the detection.
  • Disabled: When a remote path failure is detected, the mirror is split if the remote path is not restored within the path monitoring time set by the mirror option.

This setting is effective only when No. 16 is enabled. When No. 16 is disabled, the mirror is not split even if a remote path failure is detected. This item corresponds to system option mode 448 for VSP G1x00, VSP F1500, and previous models.

Advanced System Setting No. 16 (VSP 5000 series; default: OFF)

After remote path failure detection, the mirror is split.

  • Enabled: After a remote path failure is detected, the mirror is split.
  • Disabled: Even if a remote path failure is detected, the mirror is not split.

This item corresponds to system option mode 449 for VSP G1x00, VSP F1500, and previous models. However, Enabled and Disabled have the opposite meanings to system option mode 449.

How this setting works with setting No. 15 is described in the following table.

No. 15 | No. 16 | Description
Disabled | Disabled | Even if a remote path failure is detected, the mirror is not split.
Enabled | Disabled | Even if a remote path failure is detected, the mirror is not split.
Disabled | Enabled | After remote path failure detection, the mirror is split if the remote path is not restored within the path monitoring time.
Enabled | Enabled | After remote path failure detection, the mirror is split if the remote path is not restored within one minute after the detection.
Advanced System Setting No. 17 (VSP 5000 series; default: OFF)

The copy pace for the mirror option (Medium) becomes one level faster.

When enabled, the pace for copying data during initial copy becomes one level faster when the copy pace for the journal option is Medium. Use this item to make the initial copy operation in Medium speed mode perform faster.

This setting corresponds to system option mode 600 for VSP G1x00, VSP F1500, and previous models.

Advanced System Setting No. 18 (VSP 5000 series; default: OFF)

The copy pace for the mirror option (Medium) becomes two levels faster.

When enabled, the pace for copying data during initial copy becomes two levels faster when the copy pace for the journal option is Medium. Use this item to make the initial copy operation in Medium speed mode perform faster.

This setting corresponds to system option mode 601 for VSP G1x00, VSP F1500, and previous models.