
Hitachi Block Workflows

This section describes high-level workflows for Hitachi Block-based data protection. These workflows focus on basic data protection scenarios involving primary and secondary LDEVs located on block storage devices. For guidance on protecting supported application data located on a block storage device, refer to the relevant Protector Application Guide listed in Related documents.

How to snapshot a Hitachi Block LDEV with Thin Image

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node that will act as a proxy for the Hitachi Block storage device. Note that for a Thin Image snapshot, the source and destination LDEVs are located on the same device.
  • The storage device has been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting an LDEV allocated from a Hitachi Block storage device. This is useful when Protector has no way of interacting with the application or OS that is using the LDEV. The snapshot will be crash consistent, because Protector is not able to orchestrate the snapshot operation in conjunction with applications using the LDEV. Thin Image hardware snapshots of the P-VOL are created as S-VOLs residing within the same storage device. For more information, refer to About Thin Image differential and refreshed snapshots. The data flow and policy are as follows:

Hitachi Block Snapshot Data Flow

Hitachi Block Snapshot Policy
Classification:
  • Type: Hitachi Block
  • Specify additional selections: Selected
  • Logical Devices: 10323/10

Operation (assigned to the Hitachi Block Device node):
  • Type: Snapshot
  • Mode: Hardware
  • Hardware Type: Hitachi Block
  • RPO: 10 mins
  • Retention: 1 hour
  • Run Options: Run on RPO
  • Quiesce configured applications before backup: Not selected
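
The RPO and Retention values above also determine roughly how many Thin Image snapshots coexist on the storage device at any one time. The following minimal sketch illustrates that arithmetic; the helper function is illustrative only and is not part of Protector.

    from datetime import timedelta

    def concurrent_snapshots(rpo: timedelta, retention: timedelta) -> int:
        # A new snapshot is taken every RPO interval and each one is kept for
        # the retention period, so roughly retention / RPO snapshots coexist
        # (plus the one currently being created).
        return int(retention / rpo) + 1

    # Values from the example policy above: RPO = 10 mins, Retention = 1 hour.
    print(concurrent_snapshots(timedelta(minutes=10), timedelta(hours=1)))  # -> 7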

Procedure

  1. Locate the node in the Nodes Inventory that will control the Hitachi Block Device via a CMD (Command Device) interface and check that it is authorized and online.

    This node is used by Protector to orchestrate snapshot creation and is identified as the Proxy Node when creating the Hitachi Block Device node in the next step. This node is known as an ISM node. The ISM node does not appear in the data flow.
  2. Create a new Hitachi Block Device node (unless one already exists) using the Hitachi Block Device Node Wizard and check that it is authorized and online. This node is where the production LDEV to be snapshotted is located.

    For a snapshot using a Hitachi Block classification, a Hitachi Block Device node is required. The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node.
  3. Define a policy as shown in the table above using the Policy Wizard, Hitachi Block Classification Wizard and Snapshot Operation Wizard.

    The Hitachi Block classification is grouped under Physical classifications. See How to create a policy.
  4. Using the Data Flow Wizard, draw a data flow as shown in the figure above, showing only the Hitachi Block Device source node.

    At this stage the snapshot icon is not shown. See How to create a data flow.
  5. Assign the Snapshot operation to the Hitachi Block Device source node. The Block-Snapshot policy will then be assigned automatically.

    See How to apply a policy to nodes on a data flow. The Hitachi Block Snapshot Configuration Wizard is displayed.
  6. Select the Snapshot Pool from one of the available Thin Image or hybrid pools.

  7. Leave the remaining Advanced Options at their default settings, then click OK.

    The snapshot icon is now shown superimposed over the source node.
  8. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
    Note: If the Quiesce configured applications before backup option was not deselected in the Snapshot Operation Wizard, then a compiler warning message will be generated because Protector will not be able to quiesce applications using the LDEV.
  9. Locate the active data flow in the Monitor Inventory and open its Monitor Details page.

    The policy will be invoked repeatedly according to the RPO specified. The policy can also be manually triggered from the source node in the monitor data flow. You may want to trigger the policy manually to create an initial snapshot. See How to trigger an operation from an active data flow.
  10. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • Snapshot jobs appearing in the Jobs area below the data flow that cycle through stages and end in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being snapshotted.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  11. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and snapshots via the Hitachi Block Snapshots Inventory, to ensure snapshots are being created.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The retention period of individual snapshots can be modified here if required. New snapshots will appear in the Hitachi Block Snapshots Inventory periodically as dictated by the RPO of the policy. Old snapshots will be removed periodically as dictated by the Retention Period of the policy.

How to snapshot a file system with Thin Image

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the Hitachi Block storage device.
  • The Protector Client software has been installed on the destination node that will act as a proxy for the Hitachi Block storage device. Note that for a Thin Image snapshot, the source and destination LDEVs are located on the same device.
  • The storage device has been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting data that resides on a file system created on an LDEV allocated from a Hitachi Block storage device. Thin Image hardware snapshots of the P-VOL are created as S-VOLs residing within the same storage device. For more information, refer to About Thin Image differential and refreshed snapshots. The data flow and policy are as follows:

Hardware Snapshot Data Flow

Path Snapshot Policy
Classification:
  • Type: Path
  • Include: E:\testdata (E: is where the Hitachi Block LDEV is mounted)

Operation (assigned to the OS Host node):
  • Type: Snapshot
  • Mode: Hardware
  • Hardware Type: Hitachi Block
  • RPO: 10 mins
  • Retention: 1 hour
  • Run Options: Run on RPO
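
Note that the hardware snapshot is taken at the LDEV level, so the whole P-VOL mounted at E: is captured, not just E:\testdata. The short sketch below simply shows how an included path maps back to its mount point; the helper function is hypothetical and only illustrates the idea.

    import ntpath

    def drive_of(path: str) -> str:
        # Return the drive letter a Windows-style path resides on.
        drive, _ = ntpath.splitdrive(path)
        return drive

    # The Path classification includes E:\testdata, but the Thin Image snapshot
    # captures the whole P-VOL behind the E: mount.
    print(drive_of(r"E:\testdata"))  # -> 'E:'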

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the production LDEV to be snapshotted is mounted.

    For a file system snapshot using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the node in the Nodes Inventory that will control the Hitachi Block Device via a CMD (Command Device) interface and check that it is authorized and online.

    This node is used by Protector to orchestrate snapshot creation and is identified as the Proxy Node when creating the Hitachi Block Device node in the next step. This node is known as an ISM node. The ISM node does not appear in the data flow.
  3. Create a new Hitachi Block Device node (unless one already exists) using the Hitachi Block Device Node Wizard and check that it is authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. Note that this node does not appear in the snapshot data flow diagram, but is identified when assigning the snapshot policy.
  4. Define a policy as shown in the table above using the Policy Wizard, Path Classification Wizard and Snapshot Operation Wizard.

    The Path classification is grouped under Physical classifications. See How to create a policy.
  5. Using the Data Flow Wizard, draw a data flow as shown in the figure above, showing only the OS Host source node.

    At this stage the snapshot icon is not shown. See How to create a data flow.
  6. Assign the Snapshot operation to the OS Host source node. The Path-Snapshot policy will then be assigned automatically.

    See How to apply a policy to nodes on a data flow. The Hitachi Block Snapshot Configuration Wizard is displayed.
  7. Select the Snapshot Pool from one of the available Thin Image or hybrid pools.

    Caution

    Filling a Thin Image pool to capacity will invalidate all snapshot data contained within that pool. All snapshots in the pool will have to be deleted before snapshotting can be resumed.

  8. Leave the remaining Advanced Options at their default settings, then click OK.

    The snapshot icon is now shown superimposed over the source node.
  9. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  10. Locate the active data flow in the Monitor Inventory and open its Monitor Details page.

    The policy will be invoked repeatedly according to the RPO specified. The policy can also be manually triggered from the source node in the monitor data flow. An initial snapshot will be taken shortly after rules distribution has completed. See How to trigger an operation from an active data flow.
  11. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • Snapshot jobs appearing in the Jobs area below the data flow that cycle through stages and end in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being snapshotted.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  12. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and snapshots via the Hitachi Block Snapshots Inventory, to ensure snapshots are being created.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The retention period of individual snapshots can be modified here if required. New snapshots will appear in the Hitachi Block Snapshots Inventory periodically as dictated by the RPO of the policy. Old snapshots will be removed periodically as dictated by the Retention Period of the policy.

How to replicate a Hitachi Block LDEV with ShadowImage

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the destination node that will act as a proxy for the Hitachi Block storage device. Note that for a ShadowImage replication, the source and destination LDEVs are located on the same device.
  • The storage device has been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting an LDEV allocated from a Hitachi Block storage device. This is useful when Protector has no way of interacting with the application or OS that is using the LDEV. The replication will be crash consistent, because Protector is not able to orchestrate the replication operation in conjunction with applications using the LDEV. A ShadowImage hardware replication of the P-VOL is created as an S-VOL residing within the same storage device. For more information, refer to About ShadowImage replication. The data flow and policy are as follows:

ShadowImage Replication Data Flow

Hitachi Block Replication Policy
Classification:
  • Type: Hitachi Block
  • Specify additional selections: Selected
  • Logical Devices: 10323/10

Operation (assigned to the Hitachi Block Device source and destination nodes):
  • Type: Replicate
  • Run Options: Run on Schedule (see the schedule below)
  • Quiesce configured applications before backup: Not selected

Schedule item (applies to the Replicate operation above):
  • Type: Trigger
  • Days: Select All
  • Weeks: Select All
  • Time: Scheduled Time
  • Start Time: 15:00
  • Duration: 00:00
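
With this Trigger schedule (all days, all weeks, start time 15:00, duration 00:00) the batch replication is refreshed once a day at 15:00. The sketch below works out the next trigger time for such a schedule; it is purely illustrative and not Protector code.

    from datetime import datetime, time, timedelta

    def next_trigger(now: datetime, start: time = time(15, 0)) -> datetime:
        # Next run of a Trigger schedule set to all days, all weeks, at 15:00.
        candidate = datetime.combine(now.date(), start)
        return candidate if candidate > now else candidate + timedelta(days=1)

    print(next_trigger(datetime(2024, 1, 1, 16, 30)))  # -> 2024-01-02 15:00:00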

Procedure

  1. Locate the node in the Nodes Inventory that will control the Hitachi Block Device via a CMD (Command Device) interface and check that it is authorized and online.

    This node is used by Protector to orchestrate replication and is identified as the Proxy Node when creating the Hitachi Block Device node in the next step. This node is known as an ISM node. The ISM node does not appear in the data flow.
  2. Create a new Hitachi Block Device node (unless one already exists) using the Hitachi Block Device Node Wizard and check that it is authorized and online. This node is where the production LDEV to be replicated is located.

    For a replication using a Hitachi Block classification, a Hitachi Block Device node is required. The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. This node appears in the replication data flow as both the source and the destination node.
  3. Define a policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Hitachi Block classification using the Hitachi Block Classification Wizard.

      The Hitachi Block classification is grouped under Physical in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.

      In this example, ShadowImage replication will run as a batch operation based on a Trigger schedule. Continuous ShadowImage could also be implemented by using a continuous mover on the dataflow.
    3. Define a Trigger schedule using the Schedule Wizard; accessed by clicking on Manage Schedules.

      See How to create a schedule.
  4. Using the Data Flow Wizard, draw a data flow as shown in the figure above, with the Hitachi Block Device source node connected to the same Hitachi Block Device via a Batch mover.

    ShadowImage is an in-system replication technology, so the Hitachi Block Device node is where both the source (P-VOL) and destination (S-VOL) volumes are located. See How to create a data flow.
  5. Assign the Block-Replicate policy to the Hitachi Block Device source node.

    See How to apply a policy to nodes on a data flow.
  6. Assign the Replicate operation to the Hitachi Block Device destination node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  7. Set the replication type to In System Clone, then choose a Pool from one of the available Dynamic Pools. Leave the remaining parameters at their default settings and click OK.

  8. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
    Note: If the Quiesce configured applications before backup option was not deselected in the Replicate Operation Wizard, then a compiler warning message will be generated because Protector will not be able to quiesce applications using the LDEV.
  9. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create a replication according to the schedule specified in the policy. The policy can also be manually triggered from the source node in the monitor data flow.
    Note: No replication will be created until it is first triggered manually or by the schedule.
    See How to trigger an operation from an active data flow.
  10. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • Replication jobs appearing in the Jobs area below the data flow that cycle through stages and end in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  11. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the replication is being created and refreshed.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication process can be paused and resumed from here if required. A new ShadowImage replication will appear in the Hitachi Block Replications Inventory and be updated periodically as dictated by the schedule for the policy operation. The previous replication will be overwritten upon each refresh.

How to tear down the S-VOLs of a replication removed from a data flow

Before you begin

Either:

  • The corresponding replication operation must be removed from the dataflow where it is defined and that dataflow must be reactivated

or:

  • The dataflow defining the replication operation must be permanently deactivated.

A Hitachi Block replication that was defined within or adopted by Protector must be explicitly removed from the underlying hardware as follows:

Procedure

  1. Locate the replication record (corresponding to the replication operation that has been removed) in the Hitachi Block Replications Inventory.

    Replications that are eligible for teardown are marked with an icon in the top right corner of the tile.
  2. Select the replication record to tear down, then click Teardown from the context menu.

  3. The Teardown Hitachi Block Replication Dialog is displayed. If you are sure you want to proceed, type the word 'TEARDOWN', then click Teardown.

  4. Go to the Jobs Inventory to ensure that a teardown job has been initiated and wait for it to complete.

    The replication entry is not removed from the replications inventory until the teardown operation is completed successfully. If the teardown is unsuccessful, review the Logs Inventory to find out why. The teardown operation must be re-initiated by the user once the problem is resolved.
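
If you script around this step, the usual pattern is to poll the teardown job until it leaves the in-progress state. The sketch below is a generic polling loop; get_job_status is a placeholder for whatever mechanism you use to read the Jobs Inventory and is not a Protector function.

    import time

    def wait_for_job(get_job_status, job_id, poll_seconds=30, timeout_seconds=3600):
        # Poll a job until it is no longer in progress or the timeout expires.
        deadline = time.monotonic() + timeout_seconds
        while time.monotonic() < deadline:
            status = get_job_status(job_id)  # e.g. 'In Progress', 'Completed', 'Failed'
            if status != 'In Progress':
                return status
            time.sleep(poll_seconds)
        raise TimeoutError(f"Job {job_id} still in progress after {timeout_seconds}s")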

How to reactivate a replication operation that has been accidentally deactivated

A Hitachi Block replication that was defined within or adopted by Protector may have been accidentally deactivated. One of the following three cases will apply:

Case 1: Replication operation has not been removed but the data flow is deactivated

Protector considers a replication to have been removed from a dataflow only if the link between the source and destination has been removed or the source and/or destination node has been removed.

Note: It is possible to edit the replication parameters as long as any changes are supported by the hardware for that replication type. Protector will still consider the replication instance to be the same.
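
The identity rule described in the note above can be summarised as follows. The class and function names are illustrative only and are not Protector APIs.

    from dataclasses import dataclass

    @dataclass
    class ReplicationOperation:
        source: str         # source node name on the data flow
        destination: str    # destination node name on the data flow
        link_present: bool  # is the mover between them still drawn?

    def is_same_replication(old: ReplicationOperation, new: ReplicationOperation) -> bool:
        # The replication survives as long as the source node, the destination
        # node and the link between them are unchanged; editing other
        # parameters does not create a new replication instance.
        return (old.source == new.source
                and old.destination == new.destination
                and new.link_present)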

Procedure

  1. If none of the above have occurred then the data flow can simply be reactivated by the user via the Data Flows Inventory.

    Because the replication has not been torn down, Protector will effectively re-adopt the corresponding replication from the storage hardware.

Case 2: Replication operation has been removed and the data flow has been reactivated

Protector considers a replication to have been removed from a dataflow if the link between the source and destination has been removed or the source and/or destination node has been removed.

Procedure

  1. If this is the case, then the data flow must have a new replication operation added back in and then be reactivated by the user via the Data Flows Inventory. Because Protector considers this new replication operation as an entirely new instance, the replication pair must be created from scratch on the storage array. The old replication becomes a static copy.

Case 3: The data flow containing the replication has been deleted

Protector considers the replication to have been removed.

Procedure

  1. If this is the case, then a new data flow must be created containing a new replication operation and then be reactivated by the user via the Data Flows Inventory.

    Because Protector considers this new replication operation as an entirely new instance, the replication pair must be created from scratch on the storage array. The old replication becomes a static copy.

How to create and use a Hitachi Block Host node

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node that will act as a proxy for the Hitachi Block storage device.
  • The storage device has been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

A block host node can be used in a data flow to represent a host machine that has LDEVs mounted on it that require protection. The block host can be used as a convenient alternative to specifying the hardware paths in the Hitachi Block Classification Wizard. Typically the block host node is used to represent an application server that is not directly supported by a Protector classification.
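
When the policy is later applied (see step 5 of the procedure below), the LDEVs to protect are effectively the union of those listed on the block host node and, optionally, those listed in the classification. The following is a minimal sketch of how the two selection modes combine; the function name and the second LDEV identifier are hypothetical.

    def effective_ldevs(host_ldevs, classification_ldevs, specify_additional):
        # 'Use Hitachi Block Host selections': only the LDEVs listed on the host node.
        # 'Specify additional selections': the host node's LDEVs plus any listed in
        # the classification's Logical Devices field.
        host_ldevs = set(host_ldevs)
        return host_ldevs | set(classification_ldevs) if specify_additional else host_ldevs

    # "10323/10" comes from the earlier example policy; "10323/11" is hypothetical.
    print(effective_ldevs({"10323/10"}, {"10323/11"}, specify_additional=True))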

Procedure

  1. Identify the existing Hitachi Block Storage node where the LDEVs that require protection reside and ensure it is authorized and online.

  2. Create a new Hitachi Block Host node using the Hitachi Block Host Node Wizard. This node will represent the host where the production LDEVs to be protected are mounted.

    1. Specify a Node Name that reflects the name or purpose of the host it represents. This is where the LDEVs are mounted.

    2. Allocate the block host node to the same Access Control Resource Group as that of the block device node specified in the next step.

    3. Select the Hitachi Block Device. This is the storage device where the LDEVs reside.

    4. Optionally, specify the LDEVs mounted on the host that are to be protected. Normally you would specify these in the Hitachi Block Classification Wizard, but it is often more logical to capture this information here.

  3. Check that the newly created Hitachi Block Host node is authorized and online in the Nodes Inventory.

  4. Place the block host node on data flows in the same way that you would a block device source node.

    Note: You can only use a block host node as a source node. You cannot, for example, replicate to a block host node.
  5. Create a policy that includes a snapshot and/or replicate operation, and an associated block classification using the Hitachi Block Classification Wizard.

    1. Either: Select Use Hitachi Block Host selections to indicate that you want only the LDEVs specified in the Hitachi Block Host Node Wizard to be protected.

    2. Or: Select Specify additional selections to indicate that you want the LDEVs specified in the Hitachi Block Host Node Wizard to be protected in addition to any specified in the Logical Devices field of the classification.

  6. Assign the snapshot or replicate operation to the block host node in the data flow. See How to apply a policy to nodes on a data flow.

  7. Activate the data flow as normal. See How to activate a data flow.

How to replicate a file system with ShadowImage

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the Hitachi Block storage device.
  • The Protector Client software has been installed on the destination node that will act as a proxy for the Hitachi Block storage device. Note that for a ShadowImage replication, the source and destination LDEVs are located on the same device.
  • The storage device has been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting data that resides on a file system created on an LDEV allocated from a Hitachi Block storage device. A ShadowImage hardware replication of the P-VOL is created as an S-VOL residing within the same storage device. For more information, refer to About ShadowImage replication. The data flow and policy are as follows:

ShadowImage Replication Data Flow

Path Replication Policy
Classification:
  • Type: Path
  • Include: E:\testdata (E: is where the Hitachi Block LDEV is mounted)

Operation (assigned to the OS Host and Hitachi Block Device nodes):
  • Type: Replicate
  • Run Options: Run on Schedule (see the schedule below)

Schedule item (applies to the Replicate operation above):
  • Type: Trigger
  • Days: Select All
  • Weeks: Select All
  • Time: Scheduled Time
  • Start Time: 15:00
  • Duration: 00:00

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the production LDEV to be replicated is mounted.

    For a file system replication using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the node in the Nodes Inventory that will control the Hitachi Block Device via a CMD (Command Device) interface and check that it is authorized and online.

    This node is used by Protector to orchestrate replication and is identified as the Proxy Node when creating the Hitachi Block Device node in the next step. This node is known as an ISM node. The ISM node does not appear in the data flow.
  3. Create a new Hitachi Block Device node (unless one already exists) using the Hitachi Block Device Node Wizard and check that it is authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. This node appears in the replication data flow as the destination node.
  4. Define a policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Path classification using the Path Classification Wizard.

      The Path classification is grouped under Physical in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.

      In this example, ShadowImage replication will run as a batch operation based on a Trigger schedule. Continuous ShadowImage could also be implemented by using a continuous mover on the dataflow.
    3. Define a Trigger schedule using the Schedule Wizard; accessed by clicking on Manage Schedules.

      See How to create a schedule.
  5. Using the Data Flow Wizard, draw a data flow as shown in the figure above, with the OS Host source node connected to the Hitachi Block Device via a Batch mover.

    ShadowImage is an in-system replication technology, so the Hitachi Block Device node is where both the source (P-VOL) and destination (S-VOL) volumes are located. See How to create a data flow.
  6. Assign the Path-Replicate policy to the OS Host source node.

    See How to apply a policy to nodes on a data flow.
  7. Assign the Replicate operation to the Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  8. Set the replication type to In System Clone, then choose a Pool from one of the available Dynamic Pools. Leave the remaining parameters at their default settings and click OK.

  9. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  10. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create a replication according to the schedule specified in the policy. The policy can also be manually triggered from the source node in the monitor data flow.
    Note: No replication will be created until it is first triggered manually or by the schedule.
    See How to trigger an operation from an active data flow.
  11. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • Replication jobs appearing in the Jobs area below the data flow that cycle through stages and end in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  12. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the replication is being created and refreshed.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication process can be paused and resumed from here if required. A new ShadowImage replication will appear in the Hitachi Block Replications Inventory and be updated periodically as dictated by the schedule for the policy operation. The previous replication will be overwritten upon each refresh.

How to replicate a file system with Refreshed Thin Image

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the Hitachi Block storage device.
  • The Protector Client software has been installed on the destination node that will act as a proxy for the Hitachi Block storage device. Note that for a Refreshed Thin Image replication, the source and destination LDEVs are located on the same device.
  • The storage device has been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting data that resides on a file system created on an LDEV allocated from a Hitachi Block storage device. A Refreshed Thin Image hardware replication of the P-VOL is created as an S-VOL residing within the same storage device. For more information, refer to About Thin Image differential and refreshed snapshots. The data flow and policy are as follows:
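
Unlike the differential snapshot workflows above, which retain several point-in-time S-VOLs according to the retention period, a refreshed snapshot maintains a single S-VOL that is overwritten on each scheduled run. The toy sketch below illustrates that behaviour; it is not Protector code.

    class RefreshedSnapshot:
        # Toy model: a single S-VOL that is re-snapped on each scheduled run,
        # so only the most recent point in time is kept.
        def __init__(self):
            self.point_in_time = None

        def refresh(self, timestamp):
            self.point_in_time = timestamp  # the previous image is overwritten

    svol = RefreshedSnapshot()
    for run in ("2024-01-01 15:00", "2024-01-02 15:00"):
        svol.refresh(run)
    print(svol.point_in_time)  # only the latest scheduled run survives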

Refreshed Thin Image Replication Data Flow

Path Replication Policy
Classification:
  • Type: Path
  • Include: E:\testdata (E: is where the Hitachi Block LDEV is mounted)

Operation (assigned to the OS Host and Hitachi Block Device nodes):
  • Type: Replicate
  • Run Options: Run on Schedule (see the schedule below)

Schedule item (applies to the Replicate operation above):
  • Type: Trigger
  • Days: Select All
  • Weeks: Select All
  • Time: Scheduled Time
  • Start Time: 15:00
  • Duration: 00:00

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the production LDEV to be replicated is mounted.

    For a file system replication using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the node in the Nodes Inventory that will control the Hitachi Block Device via a CMD (Command Device) interface and check that it is authorized and online.

    This node is used by Protector to orchestrate replication and is identified as the Proxy Node when creating the Hitachi Block Device node in the next step. This node is known as an ISM node. The ISM node does not appear in the data flow.
  3. Create a new Hitachi Block Device node (unless one already exists) using the Hitachi Block Device Node Wizard and check that it is authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. This node appears in the replication data flow as the destination node.
  4. Define a policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Path classification using the Path Classification Wizard.

      The Path classification is grouped under Physical in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.

      Refreshed Thin Image replication runs as a batch operation based on a Trigger schedule.
    3. Define a Trigger schedule using the Schedule Wizard; accessed by clicking on Manage Schedules.

      See How to create a schedule.
  5. Using the Data Flow Wizard, draw a data flow as shown in the figure above, with the OS Host source node connected to the Hitachi Block Device via a Batch mover.

    Refreshed Thin Image is an in-system replication technology, so the Hitachi Block Device node is where both the source (P-VOL) and destination (S-VOL) volumes are located. See How to create a data flow.
  6. Assign the Path-Replicate policy to the OS Host source node.

    See How to apply a policy to nodes on a data flow.
  7. Assign the Replicate operation to the Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  8. Set the replication type to Refreshed Snapshot, then choose a Pool from one of the available Thin Image Pools. Leave the remaining parameters at their default settings and click OK.

  9. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  10. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create a replication according to the schedule specified in the policy. The policy can also be manually triggered from the source node in the Monitor Details.
    Note: No replication will be created until it is first triggered manually or by the schedule.
    See How to trigger an operation from an active data flow.
  11. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • Replication jobs appearing in the Jobs area below the data flow that cycle through stages and end in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  12. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the replication is being created and refreshed.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication process can be paused and resumed from here if required. A new Refreshed Thin Image replication will appear in the Hitachi Block Replications Inventory and be updated periodically as dictated by the schedule for the policy operation. The previous replication will be overwritten upon each refresh.

How to replicate a file system with Universal Replicator

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the primary Hitachi Block storage device.
  • The Protector Client software has been installed on the nodes that will act as proxies for both primary and secondary Hitachi Block storage devices. Note that for a Universal Replicator replication, the source and destination LDEVs are located on different devices.
  • The primary and secondary storage devices have been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting data that resides on a file system created on an LDEV allocated from a Hitachi Block storage device. A Universal Replicator hardware replication of the P-VOL is created as an S-VOL residing within a different storage device. For more information, refer to About Universal Replicator. The data flow and policy are as follows:
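
Universal Replicator is asynchronous: host writes are captured in a journal on the primary array and applied to the S-VOL on the secondary array as the journal drains, which is why source and destination journals are selected later in this procedure. The toy sketch below illustrates the idea only; it is not how Protector or the array firmware is implemented.

    from collections import deque

    source_journal = deque()
    svol = {}

    def host_write(block, data):
        # The write is acknowledged once it is captured in the source journal.
        source_journal.append((block, data))

    def drain_journal():
        # A background process applies journal entries to the remote S-VOL.
        while source_journal:
            block, data = source_journal.popleft()
            svol[block] = data

    host_write(0, "A")
    host_write(1, "B")
    drain_journal()
    print(svol)  # {0: 'A', 1: 'B'} -- the S-VOL catches up as the journal drains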

Universal Replicator Replication Data Flow

Path Replication Policy
Classification:
  • Type: Path
  • Include: E:\testdata (E: is where the Hitachi Block LDEV is mounted)

Operation (assigned to the OS Host and secondary Hitachi Block Device nodes):
  • Type: Replicate
  • Refresh Options: Select a schedule for 'Refresh on schedule'
  • Source Options: Leave Quiesce configured applications before backup deselected (see the note in the compile and activate step below)

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the primary LDEV to be replicated is mounted.

    For a file system replication using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the nodes in the Nodes Inventory that will control the primary and secondary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.

    These nodes are used by Protector to orchestrate replication of the primary LDEV to the secondary and are identified as the Proxy Node when creating the primary and secondary Hitachi Block Device nodes in the next step. These nodes are known as ISM nodes. The ISM nodes do not appear in the data flow.
  3. Create new primary and secondary Hitachi Block Device nodes (unless they already exist) using the Hitachi Block Device Node Wizard and check that they are authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. The secondary Hitachi Block Device node appears in the replication data flow as the destination node. The primary Hitachi Block Device node is represented in the data flow by the OS Host node where the primary LDEV is mounted.
  4. Define a policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Path classification using the Path Classification Wizard.

      The Path classification is grouped under Physical in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.

      When an application is used to automatically select the P-VOLs used in this continuous replication, a trigger schedule can be defined that invokes the application to re-evaluate the P-VOLs involved.
  5. Using the Data Flow Wizard, draw a data flow as shown in the figure above, with the OS Host source node connected to the secondary Hitachi Block Device via a Continuous mover.

    Universal Replicator is a remote replication technology, so the Hitachi Block Device node shown on the data flow is where the destination volume (S-VOL) is located. See How to create a data flow.
  6. Assign the Path-Replicate policy to the OS Host source node.

    See How to apply a policy to nodes on a data flow.
  7. Assign the Replicate operation to the Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  8. Set the replication type to Asynchronous Remote Clone, then:

    1. Choose a Pool from one of the available Dynamic Pools.

    2. Select a Source Journal on the primary Hitachi Block Device node.

    3. Select a Destination Journal (the secondary Hitachi Block Device node is selected implicitly).

    4. Leave the remaining parameters at their default settings and click OK.

  9. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
    Note: Because Universal Replicator cannot guarantee that the quiesce period constraint for Microsoft VSS can be met, the Rules Compiler will generate a warning if Quiesce configured applications before backup is selected in the policy's Replicate operation.
  10. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create and then maintain the replication according to the policy.
  11. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will see:
    • An initial replication job appearing in the Jobs area below the data flow that cycles through stages and ends in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  12. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the replication is being created and maintained.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication process can be paused and resumed from here if required. A Universal Replicator replication will appear in the Hitachi Block Replications Inventory and will be updated as and when writes to the primary are made.

How to replicate a file system with TrueCopy

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the primary Hitachi Block storage device.
  • The Protector Client software has been installed on the nodes that will act as proxies for both primary and secondary Hitachi Block storage devices. Note that for a TrueCopy replication, the source and destination LDEVs are located on different devices.
  • The primary and secondary storage devices have been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting data that resides on a file system created on an LDEV allocated from a Hitachi Block storage device. A TrueCopy hardware replication of the P-VOL is created as an S-VOL residing within a different storage device. For more information, refer to About TrueCopy replication. The data flow and policy are as follows:
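
TrueCopy is synchronous: each host write completes only after it has been applied to both the P-VOL and the remote S-VOL, so the two copies never diverge. The toy sketch below illustrates the idea only; it is not how Protector or the array firmware is implemented.

    pvol = {}
    svol = {}

    def synchronous_write(block, data):
        pvol[block] = data
        svol[block] = data  # the remote write completes before the host I/O is acknowledged

    synchronous_write(0, "A")
    assert pvol == svol  # zero data loss: both copies are always identical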

TrueCopy Replication Data Flow

Path Replication Policy
Classification:
  • Type: Path
  • Include: E:\testdata (E: is where the Hitachi Block LDEV is mounted)

Operation (assigned to the OS Host and secondary Hitachi Block Device nodes):
  • Type: Replicate
  • Refresh Options: Select a schedule for 'Refresh on schedule'

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the primary LDEV to be replicated is mounted.

    For a file system replication using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the nodes in the Nodes Inventory that will control the primary and secondary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.

    These nodes are used by Protector to orchestrate replication of the primary LDEV to the secondary and are identified as the Proxy Node when creating the primary and secondary Hitachi Block Device nodes in the next step. These nodes are known as ISM nodes. The ISM nodes do not appear in the data flow.
  3. Create new primary and secondary Hitachi Block Device nodes (unless they already exist) using the Hitachi Block Device Node Wizard and check that they are authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. The secondary Hitachi Block Device node appears in the replication data flow as the destination node. The primary Hitachi Block Device node is represented in the data flow by the OS Host node where the primary LDEV is mounted.
  4. Define a policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Path classification using the Path Classification Wizard.

      The Path classification is grouped under Physical in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.

      When an application is used to automatically select the P-VOLs used in this continuous replication, a trigger schedule can be defined that invokes the application to re-evaluate the P-VOLs involved.
  5. Using the Data Flow Wizard, draw a data flow as shown in the figure above, with the OS Host source node connected to the secondary Hitachi Block Device via a Continuous mover.

    TrueCopy is a remote replication technology, so the Hitachi Block Device node shown on the data flow is where the destination secondary volume (S-VOL) is located. See How to create a data flow.
  6. Assign the Path-Replicate policy to the OS Host source node.

    See How to apply a policy to nodes on a data flow.
  7. Assign the Replicate operation to the Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  8. Set the replication type to Synchronous Remote Clone, then choose a Pool from one of the available Dynamic Pools. Leave the remaining parameters at their default settings and click OK.

  9. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  10. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create and then maintain the replication according to the policy.
  11. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • An initial replication job appearing in the Jobs area below the data flow that cycles through stages and ends in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  12. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the replication is being created and maintained.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication process can be paused and resumed from here if required. A TrueCopy replication will appear in the Hitachi Block Replications Inventory and will be updated as and when writes to the primary are made.

How to replicate a file system with Global-Active Device

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the primary Hitachi Block storage device.
  • The Protector Client software has been installed on the nodes that will act as proxies for both primary and secondary Hitachi Block storage devices. Note that for a Global-Active Device replication, the source and destination LDEVs are located on different devices.
  • The primary and secondary storage devices have been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting data that resides on a file system created on an LDEV allocated from a Hitachi Block storage device. A Global-Active Device hardware replication of the P-VOL is created as an S-VOL residing within a different storage device. For more information, refer to About Global-Active Device replication. The data flow and policy are as follows:
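
Global-Active Device keeps the P-VOL and S-VOL simultaneously readable and writable at both sites, with a quorum (selected later in this procedure) used to arbitrate which side keeps serving I/O if the sites lose contact. The toy sketch below illustrates the write path only; it is not how Protector or the array firmware is implemented.

    site_a = {}
    site_b = {}

    def active_active_write(block, data):
        # Both copies are updated before the write is acknowledged, so hosts
        # can read and write the same virtual volume at either site.
        site_a[block] = data
        site_b[block] = data

    active_active_write(0, "A")
    assert site_a == site_b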

Global-Active Device Replication Data Flow
GUID-99603CFA-1FA4-4533-9C5D-F891CA41CAC1-low.png

Path Replication Policy
Classification TypeParameterValue
PathIncludeE:\testdata

(E: is where the Hitachi Block LDEV is mounted)

Operation TypeParameterValueAssigned Nodes
ReplicateRefresh OptionsSelect a schedule for ‘Refresh on schedule’OS Host,

Secondary Hitachi Block Device

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the primary LDEV to be replicated is mounted.

    For a file system replication using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the nodes in the Nodes Inventory that will control the primary and secondary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.

    These nodes are used by Protector to orchestrate replication of the primary LDEV to the secondary and are identified as the Proxy Node when creating the primary and secondary Hitachi Block Device nodes in the next step. These nodes are known as ISM nodes. The ISM nodes do not appear in the data flow.
  3. Create new primary and secondary Hitachi Block Device nodes (unless they already exist) using the Hitachi Block Device Node Wizard and check that they are authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. The secondary Hitachi Block Device node appears in the replication data flow as the destination node. The primary Hitachi Block Device node is represented in the data flow by the OS Host node where the primary LDEV is mounted.
  4. Define a policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Path classification using the Path Classification Wizard.

      The Path classification is grouped under Physical in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.

      When an application is used to automatically select the P-VOLs used in this continuous replication, a trigger schedule can be defined that invokes the application to re-evaluate the P-VOLs involved.
  5. Draw a data flow as shown in the figure above using the Data Flow Wizard, that shows the OS Host source node connected to the secondary Hitachi Block Device via a Continuous mover.

    Global-Active Device is a remote replication technology, so the Hitachi Block Device node shown on the data flow is where the destination (S-VOL) volume is located. See How to create a data flow.
  6. Assign the Path-Replicate policy to the OS Host source node.

    See How to apply a policy to nodes on a data flow.
  7. Assign the Replicate operation to the Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  8. Set the replication type to Active-Active Remote Clone, then:

    1. Choose a Pool from one of the available Dynamic Pools.

    2. Choose a Target Quorum from one of those listed.

    3. Leave the remaining parameters at their default settings and click OK.

  9. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  10. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create and then maintain the replication according to the policy.
  11. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • An initial replication job appearing in the Jobs area below the data flow that cycles through stages and ends in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  12. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the replication is being created and maintained.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication process can be paused and resumed from here if required. A Global-Active Device replication will appear in the Hitachi Block Replications Inventory and will be updated as and when writes to the primary or secondary are made.

How to implement 3DC multi-target with delta UR replication

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the primary Hitachi Block storage device.
  • The Protector Client software has been installed on the nodes that will act as proxies for primary, secondary and tertiary Hitachi Block storage devices. Note that for GAD, TC and UR replications, the source and destination LDEVs are located on different devices.
  • The primary, secondary and tertiary storage devices have been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when protecting data that resides on a file system created on an LDEV allocated from a Hitachi Block storage device. A GAD or TC replication of the P-VOL at the primary site is created as an S-VOL at the secondary site. A UR replication of the P-VOL is created as an S-VOL at the tertiary site. A Delta UR replication is created between the S-VOLs at the secondary and tertiary sites (this remains suspended unless primary site failure occurs). For more information, refer to About three datacentre multi-target with delta.
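
The relationship between the three replication legs, and the replication types assigned to them later in this procedure, can be summarized as follows. This is purely an illustrative sketch in code form, not a Protector API; it simply restates the topology described above, with the first leg configured as GAD (as it is in this example).

```python
# Illustrative summary of the 3DC multi-target topology described above.
# Not a Protector API; it only restates the data flow in code form.
replication_legs = [
    {"source": "Primary P-VOL", "target": "Secondary S-VOL",
     "technology": "GAD (or TC)",
     "wizard_type": "Active-Active Remote Clone (GAD, as configured in this example)",
     "state": "active"},
    {"source": "Primary P-VOL", "target": "Tertiary S-VOL",
     "technology": "UR",
     "wizard_type": "Asynchronous Remote Clone",
     "state": "active"},
    {"source": "Secondary S-VOL", "target": "Tertiary S-VOL",
     "technology": "Delta UR",
     "wizard_type": "Asynchronous Remote Failover",
     "state": "suspended unless primary site failure occurs"},
]

for leg in replication_legs:
    print(f"{leg['source']} -> {leg['target']}: {leg['technology']} ({leg['state']})")
```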

Note: Protector currently only supports the setup of 3DC Multi-target with Delta Replication. In the event of a primary, secondary or tertiary site failure, the Monitor Details data flow will display notifications indicating any problems with the corresponding movers, and appropriate messages will appear in the Logs Inventory.
  • For primary site failure:
    1. The Delta UR failover link will be invoked automatically by the underlying hardware storage devices to provide near immediate protection of the secondary site.
    2. The data flow should be dissociated from Protector before the hardware storage devices are recovered, following procedures defined in the relevant storage device operating manuals. See How to dissociate a replication from Protector.
    3. The data flow for the recovered replication should be re-adopted into Protector and reactivated. See How to adopt a replication into Protector.
  • For secondary or tertiary site failure, the data flow should remain active in Protector. Once the hardware storage devices are recovered, Protector will clear its notifications and resume.

The data flow and policy are as follows:

3DC Multi-target with Delta Replication Data Flow
GUID-C7AAE59A-4395-479D-8EFE-7C6F0561D405-low.png

Path Replication Policy
Classification TypeParameterValue
PathIncludeE:\testdata

(E: is where the Hitachi Block LDEV is mounted)

Operation TypeParameterValueAssigned Nodes
ReplicateRun OptionsN/A

(GAD is a continuous replication, so the Run option is ignored)

Secondary Hitachi Block Device (from the primary)

ReplicateRun OptionsN/A

(UR is a continuous replication, so the Run option is ignored)

Tertiary Hitachi Block Device (from the primary)

ReplicateRun OptionsN/A

(Delta UR is a continuous replication, so the Run option is ignored)

Tertiary Hitachi Block Device (from the secondary)

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the primary LDEV to be replicated is mounted.

    For a file system replication using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the nodes in the Nodes Inventory that will control the primary, secondary and tertiary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.

    These nodes are used by Protector to orchestrate replication of the primary LDEV to the secondary and tertiary sites, and are identified as the Proxy Node when creating the primary and secondary Hitachi Block Device nodes in the next step. These nodes are known as ISM nodes. The ISM nodes do not appear in the data flow.
  3. Create new primary, secondary and tertiary Hitachi Block Device nodes (unless they already exist) using the Hitachi Block Device Node Wizard and check that they are authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. The secondary and tertiary Hitachi Block Device nodes appear in the replication data flow as the destination nodes. The primary Hitachi Block Device node is represented in the data flow by the OS Host node where the primary LDEV is mounted.
  4. Define a policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Path classification using the Path Classification Wizard.

      The Path classification is grouped under Physical in the Policy Wizard.
    2. Define three Replicate operations (these represent the primary to secondary GAD or TC, primary to tertiary UR and secondary to tertiary Delta UR replications) using the Replicate Operation Wizard.

      GAD and UR replications run as continuous operations and thus no schedule needs to be defined.
  5. Draw a data flow as shown in the figure above using the Data Flow Wizard, that shows the OS Host source node connected to the secondary and tertiary Hitachi Block Devices via Continuous movers, and the secondary connected to the tertiary Hitachi Block Device via a Failover mover.

    GAD and UR are remote replication technologies, so the Hitachi Block Device nodes shown on the data flow are where the secondary and tertiary destination (S-VOL) volumes are located. See How to create a data flow.
  6. Assign the Path-Replicate policy to the OS Host source node.

    See How to apply a policy to nodes on a data flow.
  7. Assign the first Replicate operation to the secondary Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  8. Set the replication type to Active-Active Remote Clone, then:

    1. Choose a Pool from one of the available Dynamic Pools.

    2. Choose a Target Quorum from one of those listed.

    3. Leave the remaining parameters at their default settings and click OK.

  9. Assign the second Replicate operation to the tertiary Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  10. Set the replication type to Asynchronous Remote Clone, then:

    1. Choose a Pool from one of the available Dynamic Pools.

    2. Choose a Source Journal from one of those listed for the primary node.

    3. Choose a Destination Journal from one of those listed.

    4. Leave the remaining parameters at their default settings and click OK.

  11. Assign the third Replicate operation to the tertiary Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  12. Set the replication type to Asynchronous Remote Failover, then:

    1. Choose a Pool from one of the available Dynamic Pools.

    2. Choose a Source Journal from one of those listed for the tertiary node.

    3. Leave the remaining parameters at their default settings and click OK.

      Protector will automatically use the same Destination Journal as selected for the Asynchronous Remote Clone replication configured in the preceding steps.
      Note: If you specify a Mirror Unit for this Asynchronous Remote Failover replication, then it must differ from the one selected for the Asynchronous Remote Clone replication in the preceding steps.
  13. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  14. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create and then maintain the replication accordingly.
  15. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • Initial replication jobs appearing in the Jobs area below the data flow that cycle through stages and end in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  16. Review the status of each Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the GAD and UR replications are being created and maintained.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication processes can be paused and resumed from here if required. There will be three replication records in the Hitachi Block Replications Inventory corresponding to the GAD, the active UR and the suspended failover UR replication.

How to synchronize snapshots with a replication

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the primary Hitachi Block storage device.
  • The Protector Client software has been installed on the nodes that will act as proxies for both primary and secondary Hitachi Block storage devices. Note that for a TrueCopy replication, the source and destination LDEVs are located on different devices.
  • The primary and secondary storage devices have been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task demonstrates how snapshot and replication operations can be synchronized with one another to ensure that all are performed at the same point in time and thus capture the identical state of the source data. For more information refer to About synchronization groups.

Here, a TrueCopy replication of the P-VOL is created as an S-VOL residing within a different storage device. Synchronized TI snapshots are then created on the primary and secondary Block Storage devices. For more information, refer to About local and remote snapshots. The data flow and policy are as follows:

TrueCopy Replication with Local and Remote Thin Image Snapshots Data Flow
GUID-54F1F5BE-46A0-4F46-A9B7-052EB8448010-low.png
Path Replication Policy
Classification TypeParameterValue
PathIncludeE:\testdata

(E: is where the Hitachi Block LDEV is mounted)

Operation TypeParameterValueAssigned Nodes
ReplicateRun OptionsN/A

(TrueCopy is a continuous replication, so the Run option is ignored)

OS Host,

Secondary Hitachi Block Device

Snapshot

(on local device)

ModeHardwareOS Host
Hardware TypeHitachi Block
Retention2 hours
RPO10 mins
Run OptionsRun on Schedule

(see synch group schedule below)

Snapshot

(on remote device)

ModeHardwareSecondary Hitachi Block Device
Hardware TypeHitachi Block
Retention1 hour

(this can differ from the local snapshot)

RPO10 mins

(this must match the local snapshot)

Run OptionsRun on Schedule

(see synch group schedule below)

Synchronization Group Schedule
Schedule Item TypeParameterValuePolicy Operations
TriggerN/A

(this schedule defines a synchronization group name for local and remote snapshots. All parameters are ignored.)

N/ASnapshot (local),

Snapshot (remote)
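
As a rough sizing guide, the number of snapshots retained at any one time is approximately the retention period divided by the RPO. A minimal sketch of that arithmetic for the values in the policy above:

```python
# Approximate number of concurrently retained snapshots = retention / RPO.
# Values are taken from the policy tables above, expressed in minutes.
rpo_minutes = 10
local_retention_minutes = 2 * 60   # local snapshot retention: 2 hours
remote_retention_minutes = 1 * 60  # remote snapshot retention: 1 hour

local_count = local_retention_minutes // rpo_minutes    # 12
remote_count = remote_retention_minutes // rpo_minutes  # 6

print(f"~{local_count} local and ~{remote_count} remote snapshots retained at any time")
```

This kind of estimate can help when checking that the Thin Image or hybrid pools selected later in this procedure have sufficient capacity.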

Procedure

  1. Locate the source node in the Nodes Inventory and check that it is authorized and online. This node is where the primary LDEV to be replicated is mounted.

    For a file system replication using a Path classification, a basic OS Host node is required. It is not necessary to create the source node in this case since all Protector client nodes default to this type when installed. See How to authorize a node.
  2. Locate the nodes in the Nodes Inventory that will control the primary and secondary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.

    These nodes are used by Protector to orchestrate replication of the primary LDEV to the secondary and are identified as the Proxy Node when creating the primary and secondary Hitachi Block Device nodes in the next step. These nodes are known as ISM nodes. The ISM nodes do not appear in the data flow.
  3. Create new primary and secondary Hitachi Block Device nodes (unless they already exist) using the Hitachi Block Device Node Wizard and check that they are authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. The secondary Hitachi Block Device node appears in the replication data flow as the destination node. The primary Hitachi Block Device node is represented in the data flow by the OS Host node where the primary LDEV is mounted.
  4. Define a policy as shown in the table above using the Policy Wizard. This policy contains operations for the replication, local and remote snapshots. See How to create a policy.

    1. Define a Path classification using the Path Classification Wizard.

      The Path classification is grouped under Physical in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.

      TrueCopy replication runs as a continuous operation and thus no schedule needs to be defined.
    3. Define a local Snapshot operation using the Snapshot Operation Wizard.

      Thin Image snapshots run based on the RPO. However, we also want to synchronize the local and remote snapshots. This is done by defining a trigger schedule that is applied to both the local and remote snapshot operations.
    4. Define a Trigger schedule using the Schedule Wizard; accessed by clicking on Manage Schedules in the Snapshot Operation Wizard for the local snapshot.

      Only the trigger schedule name is required; the parameters are not relevant here since the RPO of the local snapshot dictates when the local and remote snapshot operations are triggered. See How to create a schedule.
    5. Define a remote Snapshot operation using the Snapshot Operation Wizard.

      To synchronize the local and remote snapshots, apply the same trigger schedule to this snapshot operation that was applied to the local snapshot operation.
      Note: The local and remote snapshots must have the same RPO, otherwise a rules compiler error will be generated.
  5. Draw a data flow as shown in the figure above using the Data Flow Wizard, that shows the OS Host source node connected to the secondary Hitachi Block Device via a Continuous mover.

    TrueCopy is a remote replication technology, so the Hitachi Block Device node shown on the data flow is where the destination (S-VOL) volume is located. See How to create a data flow.
  6. Assign the Path-Replicate-Snapshot-Snapshot policy to the OS Host source node.

    See How to apply a policy to nodes on a data flow.
  7. Assign the local Snapshot operation to the OS Host source node.

    The Hitachi Block Snapshot Configuration Wizard is displayed.
  8. Select the Snapshot Pool by selecting one of the available Thin Image or hybrid pools.

  9. Leave the remaining Advanced Options at their default settings, then click OK.

    The snapshot icon GUID-BE9FE34F-7D75-4E2B-A3B9-A068835415B2-low.png is now shown superimposed over the source node.
  10. Assign the remote Snapshot operation to the remote Hitachi Block Device node.

    The Hitachi Block Snapshot Configuration Wizard is displayed.
  11. Select the Snapshot Pool by selecting one of the available Thin Image or hybrid pools.

  12. Leave the remaining Advanced Options at their default settings, then click OK.

    The snapshot icon GUID-BE9FE34F-7D75-4E2B-A3B9-A068835415B2-low.png is now shown superimposed over the remote Hitachi Block Device node.
  13. Assign the Replicate operation to the remote Hitachi Block Device node.

    The Hitachi Block Replication Configuration Wizard is displayed.
  14. Set the replication type to Synchronous Remote Clone, then choose a Pool from one of the available Dynamic Pools. Leave the remaining parameters at their default settings and click OK.

  15. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  16. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create and then maintain the replication according to the policy. Snapshot operations will be triggered synchronously on the source and destination nodes according to the RPO.
  17. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • An initial replication job appearing in the Jobs area below the data flow that cycles through stages and ends in Progress - Completed.
    • Repeated replication and snapshot jobs appearing for the source node in the Jobs area triggered according to the RPO.
    • Repeated snapshot jobs appearing for the destination node in the Jobs area synchronized to the local snapshot.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  18. Review the status of the local and remote Hitachi Block Devices via the relevant Hitachi Block Device Details to ensure the replication and snapshots are being created and maintained.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. The replication process can be paused and resumed from here if required. A TrueCopy replication will appear in the remote Hitachi Block Replications Inventory and will be updated as and when writes to the primary are made. New snapshots will appear in the local and remote Hitachi Block Snapshots Inventory periodically as dictated by the RPO of the policy. Old snapshots will be removed periodically as dictated by the Retention Period of the policy.

How to automatically mount a snapshot or replication

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the source node where the production Hitachi Block LDEV is mounted. Note that the LDEV is actually located on the primary Hitachi Block storage device.
  • The Protector Client software has been installed on the nodes that will act as proxies for both primary and secondary Hitachi Block storage devices. Note that for a TrueCopy replication, the source and destination LDEVs are located on different devices.
  • The primary and secondary storage devices have been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.

This task describes the steps to follow when replicating data on an Hitachi Block storage LDEV, then using the replication for repurposing or performing a proxy backup. A TrueCopy replication of the P-VOL is created as an S-VOL and the S-VOL is then automatically mounted on a designated host machine. For more information, refer to About TrueCopy replication and About the automated Mount operation. The data flow and policy are as follows:

TrueCopy Replication and Automated Mount Data Flow
GUID-E3D3DAF5-8608-4DD2-ACE8-BC7E0CE68588-low.png

Path Replication Policy
Classification TypeParameterValue
PathIncludeE:\testdata

(E: is where the Hitachi Block LDEV is mounted)

Operation TypeParameterValueAssigned Nodes
ReplicateRun OptionsN/A

(TrueCopy is a continuous replication, so the Run option is ignored)

OS Host,

Secondary Hitachi Block Device

MountRun OptionsRun on Schedule

(see schedule below)

Secondary Hitachi Block Device
Source OptionsPre Script

Post Script

(scripts are user defined and application dependent; an illustrative post script sketch follows the schedule table below)

Schedule Item TypeParameterValuePolicy Operations
Trigger TimeDaysSelect AllMount

(See above)

WeeksSelect All
TimeScheduled Time
Start Time13:00
Duration00:00
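
The pre and post scripts referenced in the Mount operation above are user defined and run around the scheduled mount. The sketch below is purely illustrative: the mount drive letter (X:), the backup target (Y:\proxy_backups) and the script name are assumptions, not values supplied by Protector, and where the script runs depends on your configuration. A real proxy backup script would be tailored to the application that owns the data.

```python
# post_mount_backup.py - purely illustrative proxy backup post script.
# Assumed, not supplied by Protector: the S-VOL mount point (X:\testdata)
# and the backup target (Y:\proxy_backups). Adjust both for your environment.
import datetime
import pathlib
import shutil

MOUNTED_SVOL = pathlib.Path(r"X:\testdata")      # assumed mount location of the S-VOL
BACKUP_ROOT = pathlib.Path(r"Y:\proxy_backups")  # assumed backup target

def main() -> None:
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    destination = BACKUP_ROOT / f"testdata_{stamp}"
    # Copy the mounted replication contents to the backup target.
    shutil.copytree(MOUNTED_SVOL, destination)
    print(f"Proxy backup written to {destination}")

if __name__ == "__main__":
    main()
```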

Procedure

  1. Follow the steps shown in How to replicate a file system with TrueCopy to create a TrueCopy replication, but with the addition of the Mount operation as follows:

  2. Add the Mount operation to the Path-Replicate policy as shown in the table above using the Policy Wizard. See How to create a policy.

    1. Define a Mount operation using the Mount Operation Wizard.

      The Mount operation is initiated using a Trigger schedule.
    2. Define the Trigger schedule shown in the table above, using the Schedule Wizard which is accessed by clicking on Manage Schedules.

      See How to create a schedule.
  3. Assign the Mount operation to the Hitachi Block Device node.

    The Hitachi Block Mount Configuration Wizard is displayed.
  4. Set the mount operation type to Proxy Backup and click Next.

    Proxy Backup can be used in conjunction with a continuous replication because it only pauses the replication while the proxy backup script is running.
  5. Set the mount level to OS and click Next.

  6. Set the Host to the node where the replication S-VOL is to be mounted and click Next.

  7. Set the Mount Location to Drive starting at letter, select an available drive letter, then click Finish.

  8. Compile and activate the data flow, checking carefully that there are no errors.

    A compiler warning (10366) will be generated stating that the replication will stop copying data to the destination while performing the proxy backup. This is expected behaviour. See How to activate a data flow.
  9. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    The policy will be invoked automatically to create and then maintain the replication according to the policy. The proxy backup operation will run based on the specified schedule, pausing the replication while it executes.
  10. Watch the active data flow via the Monitor Details to ensure the policy is operating as expected.

    For a healthy data flow you will periodically see:
    • An initial replication job appearing in the Jobs area below the data flow that cycles through stages and ends in Progress - Completed.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
    For a problematic data flow you may see:
    • Permanent Node Status icons appearing over nodes, with associated warning messages displayed to the right of the data flow area.
    • Backup jobs appearing in the Jobs area below the data flow that cycle through stages and terminate in Progress - Failed.
    • Warning and error messages appearing in the Logs area below the data flow indicating failed events.
  11. Review the status of the Hitachi Block Device via the relevant Hitachi Block Device Details and replications via the Hitachi Block Replications Inventory, to ensure the replication is being created and maintained.

    Hitachi Block Devices require ongoing surveillance to ensure that they are operating correctly and sufficient resources are available to store your data securely. See How to view the status of a Hitachi Block storage device. A TrueCopy replication will appear in the Hitachi Block Replications Inventory and will be updated as and when writes to the primary are made.

How to mount an Hitachi Block snapshot or replication

Before you begin

It is assumed that a file system path policy that creates hardware snapshots or a replication has been implemented and that at least one snapshot or replication has been created on the designated Block storage device. See How to snapshot a file system with Thin Image or How to replicate a file system with ShadowImage for an example of how to do this.

Note: It is not possible to mount the S-VOL of a GAD replication, paused or otherwise.

This task describes the steps to follow when mounting a file system path snapshot or replication from a Block storage device to a node other than the one from which it originated:

Procedure

  1. Identify the destination where the data set is to be restored. Here we will mount a snapshot or replication to a destination machine and volume.

    Depending on the scenario, you can mount the snapshot or replication to its original node as a different volume or to a different node entirely. You can control the level of the mount operation, from simply adding the snapshot to a host group through to mounting it on the host OS.
  2. Ensure that the restore location is prepared to receive the snapshot or replication data set by locating the node in the Nodes Inventory and checking it is authorized and online.

    Note
    • For Host and OS level mounting, the mount location must have Protector Client software installed.
    • SAN level mount does not specify a host so Protector Client software does not need to be installed.
  3. Locate the data set to be mounted by navigating to the Hitachi Block Snapshots Inventory or Hitachi Block Replications Inventory for the Hitachi Block storage device in question.

    See How to view the status of a Hitachi Block storage device.
  4. Select the snapshot or replication that you want to mount, then click Mount to open the Hitachi Block Snapshot or Replication Mount Wizard.

    1. Select the mount level (SAN, Host or OS).

    2. Choose Automatically discover or a Selected Host Group.

    3. For SAN mount click Finish, for other mount types click Next.

    4. Specify the Host (i.e. the target machine).

    5. For Host mount click Finish, for OS mount click Next.

    6. Specify the Mount Location.

    7. For OS mount click Finish.

    The Jobs Inventory is displayed and a mount job will appear that cycles through stages and ends in Progress - Completed.
  5. Once the mount process is complete, further steps may be needed to fix-up the data set before using it. In this example we will assume that no additional work is required other than inspecting the restored data on the target machine.

    The amount of fix-up work required depends on the applications accessing the restored data.
    Note: This example mounts data created using a Path classification. If you are backing up one of the application types directly supported by Protector, then you should use one of the Application classifications and refer to the appropriate Application Guide listed in Related documents.
  6. Mounted snapshots or replications have a mount icon GUID-FF937C6E-2C7B-44F8-A45E-A600DC6AB088-low.png displayed on the corresponding tile in the Hitachi Block Snapshots Inventory or Hitachi Block Replications Inventory. When you have finished with the mounted snapshot or replication, click Unmount to unmount it.

How to revert a file system path from a snapshot or local replication

Before you begin

It is assumed that a file system path policy that creates snapshots (TI) or a local replication (RTI or SI) has been implemented and that at least one snapshot or replication has been created on the designated Block storage device. See How to snapshot a file system with Thin Image, How to replicate a file system with Refreshed Thin Image or How to replicate a file system with ShadowImage for an example of how to do this.

This task describes the steps to follow when reverting a file system path snapshot or local replication from a Block storage device to the node from which the snapshot originated:

Procedure

  1. Identify the destination where the data set is to be restored. You can only revert the snapshot or local replication to its original node and volume, so the destination will be the same machine and volume from which the data originated.

  2. Ensure that the restore location is prepared to receive the reverted data set by locating the node in the Nodes Inventory and checking it is authorized and online.

    The revert location must have Protector Client software installed.
  3. Stop any applications that access the revert location and ensure the filesystem is unmounted from the OS.

    For supported applications, these additional steps are described in the appropriate Application Guide (see Related documents). For other applications, consult the vendor's documentation.
  4. The existing snapshot or local replication operation (and any replications immediately upstream or downstream of it) should be paused while the revert is performed.

    Caution: If snapshots and replications are combined on a data flow, it is not possible to deactivate the scheduling of snapshots without also tearing down any replications on that data flow.
    See How to deactivate an active data flow.
  5. Locate the data set to be reverted by navigating to the Hitachi Block Snapshots Inventory or Hitachi Block Replications Inventory for the Hitachi Block storage device in question.

    See How to view the status of a Hitachi Block storage device.
  6. Select the snapshot or local replication that you want to revert to, then click Revert to open the Hitachi Block Revert Wizard. The word 'REVERT' must be typed in to enable the revert operation to proceed.

    Caution: The process of reverting data will overwrite all of the original data that exists at the revert location.

    Ensure that any critical data is copied to a safe location or is included in the data set being restored.

  7. Once the revert process is complete, further steps may be needed to fix-up the data set before using it. In this example we will assume that no additional work is required other than inspecting the restored data on the target machine.

    Note: When reverting a volume on a Windows machine, it is necessary to perform a reboot to ensure the volume is remounted correctly. For dynamic disks, if the reverted volume's status is indicated as 'Healthy (At Risk)', it will be necessary to Offline and then Online the volume via the Windows Disk Management console (a scripted alternative is sketched after this procedure).
    The amount of fix-up work required depends on the applications accessing the restored data.
    Note: This example restores data created using a Path classification. If you are backing up one of the application types directly supported by Protector, then you should use one of the Application classifications and refer to the appropriate Application Guide listed in Related documents.
  8. Restart any applications that access the restored data.

    For supported applications, these additional steps are described in the appropriate Application Guide (see Related documents). For other applications, consult the vendor's documentation.
  9. Resume any backup policies for the reverted data set.

    Data flows can be reactivated via the Data Flows Inventory.
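
The note in step 7 above recommends using the Windows Disk Management console to Offline and then Online a reverted dynamic volume. If you prefer to script that step, a minimal sketch using diskpart follows; the drive letter E: is an assumption taken from the earlier examples, and the behaviour of the offline/online volume commands should be confirmed for your Windows version before relying on them.

```python
# Purely illustrative: cycle a reverted volume offline/online via diskpart,
# equivalent to the Offline/Online actions in the Disk Management console.
# The drive letter E: is an assumption; run with administrative rights and
# verify diskpart behaviour on your Windows version first.
import pathlib
import subprocess
import tempfile

DISKPART_COMMANDS = "select volume E\noffline volume\nonline volume\n"

def cycle_volume() -> None:
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as handle:
        handle.write(DISKPART_COMMANDS)
        script_path = pathlib.Path(handle.name)
    # diskpart /s <file> runs the listed commands non-interactively.
    subprocess.run(["diskpart", "/s", str(script_path)], check=True)

if __name__ == "__main__":
    cycle_volume()
```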

How to swap (takeover/takeback) a replication

Before you begin

It is assumed that you have implemented a simple file system path, active-passive (i.e. TC or UR) replication policy with production applications accessing the primary volume(s) of the replication pair. See How to replicate a file system with TrueCopy for an example of how to do this.

If you are swapping an active-active (GAD) replication then additional steps may be required, especially if using a cross-path setup (refer to About Hitachi block based replication swapping (takeover/takeback) and About Global-Active Device Cross Path).

In the case of primary site maintenance, application failure, primary volume failure or disaster recovery, it may be necessary to move production to the secondary site, resolve the issue at the primary site and then move production back to the primary site.

Procedure

  1. Move production from the primary to the secondary site by performing a Swap as follows:

    1. Stop any applications that access the primary volumes to be taken over and unmount the filesystem from the OS.

    2. Locate the replication to be taken over by navigating to the Hitachi Block Replication Details (Storage) on the secondary device. See How to view the status of a Hitachi Block storage device.

      You will see that the replication's Type (displayed on the corresponding tile in the Hitachi Block Replications Inventory) is Active Full Copy and the Swapped state (displayed on the Hitachi Block Replication Details (Storage)) is No.
    3. From the Hitachi Block Replication Details (Storage) click the Swap button.

      The Hitachi Block Replication Swap Wizard will appear, warning that swapping can cause data loss; however, as long as access to the primary volumes has been stopped, it is safe to proceed.
    4. Select a direction from the “Direction” dropdown. This is the intended final direction of the replication once the swap operation is complete.

    5. Type the word 'SWAP' into the Confirm Swap field and click OK.

      The Jobs Inventory is displayed and a new job will appear indicating that a Swap Replication operation is in progress. Click on the Job Type in the table to open the Job Details which list the log messages relating to the swap operation.
    6. Return to the Hitachi Block Replication Details (Storage) and review the replication's status:

      • If the swap is successful then the Swapped state is set to Yes, indicating that the replication is now reversed (S-VOL to P-VOL) and is back in PAIR status. A Swapped status badge (see Monitor Status Badges) will also appear above the replication's mover on the Monitor Details.
      • If the swap cannot be completed due to a P-VOL or data link fault then the Swapped state is set to No and Suspend for Swap state is set to Yes, indicating that the swap is not yet complete and is in SSWS status. Further action will be required on the primary block storage device or data link before the replication process can be re-established, but the secondary will be writeable.
      Note: The flow direction of a replication pair should ONLY be determined by referring to the Summary - Swapped field on the Hitachi Block Replication Details (Storage) for the secondary Block storage device. Primary and secondary volume information shown in the replication's Session Log Details and associated Log Attachments Dialog should not be used to infer the flow direction following a swap.
    7. Start any applications that access the secondary volumes that have been taken over and resume production at the secondary site.

  2. Perform any maintenance and recovery tasks at the primary site, resolve any faults with the data link between sites, then go back to the Hitachi Block Replication Details (Storage) for the secondary to determine the status of the S-VOLs. Perform one of the following actions as appropriate:

    1. If the replication is Swapped (S-VOL status = PAIR) then proceed with moving production back to the primary site when ready, as detailed in step 3 below.

    2. If the replication is Suspended for Swap (S-VOL status = SSWS) then click the Unsuspend button. The swap operation will be completed as described above. Production at the secondary site can now continue with replication to the primary site in operation.

    3. If the S-VOL status is some value other than PAIR or SSWS, then you will need to run the following CCI command sequence from outside Protector to recover the replication pairing: pairsplit -R, pairsplit -S, paircreate (an illustrative sketch of this sequence follows this procedure).

  3. Move production back to the primary site when ready to resume normal operations by performing a Swap as follows:

    1. Stop any applications that access the secondary volumes to be taken back.

    2. Locate the replication to be taken back by navigating to the Hitachi Block Replication Details (Storage) on the secondary device. See How to view the status of a Hitachi Block storage device.

      You will see that the replication's Type (displayed on the corresponding tile in the Hitachi Block Replications Inventory) is Active Full Copy and the Swapped state (displayed on the Hitachi Block Replication Details (Storage)) is Yes.
    3. From the Hitachi Block Replication Details (Storage) click the Swap button.

      The Hitachi Block Replication Swap Wizard will appear, warning that swapping can cause data loss; however, as long as access to the secondary volumes has been stopped, it is safe to proceed.
    4. Type the word 'SWAP' into the Confirm Swap field and click OK.

      The Jobs Inventory is displayed and a new job will appear indicating that a Swap VSP Replication operation is in progress. Click on the Job Type in the table to open the Job Details which list the log messages relating to the swap operation.
    5. Return to the Hitachi Block Replication Details (Storage) and review the replication's status:

      When the swap (takeback) is completed the Swapped state will be set to No, indicating that the replication is now normal (P-VOL to S-VOL) and is back in PAIR status. The Swapped status badge will disappear from above the replication's mover on the Monitor Details.
    6. Start any applications that access the primary volumes that have been taken back and resume production at the primary site.
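
Step 2 above mentions recovering the pairing with CCI when the S-VOL is in an unexpected state. The sketch below simply sequences the three commands named there (pairsplit -R, pairsplit -S, paircreate). The device group name, HORCM instance and paircreate options are assumptions, not values taken from this guide; confirm the exact options against the CCI documentation for your replication type before running anything.

```python
# Illustrative sketch of the CCI recovery sequence referred to in step 2:
#   pairsplit -R, then pairsplit -S, then paircreate.
# The device group name, HORCM instance and paircreate options below are
# assumptions; adjust them to match your HORCM configuration and consult
# the CCI documentation for the correct options for TC, UR or GAD pairs.
import subprocess

GROUP = "EXAMPLE_GROUP"  # assumed device group name from the HORCM files
INSTANCE = "-IH0"        # assumed HORCM instance selector

RECOVERY_SEQUENCE = [
    ["pairsplit", INSTANCE, "-g", GROUP, "-R"],  # release the pairing from the S-VOL side
    ["pairsplit", INSTANCE, "-g", GROUP, "-S"],  # return both volumes to simplex
    ["paircreate", INSTANCE, "-g", GROUP, "-vl", "-f", "never"],  # re-create the pair (options assumed)
]

for command in RECOVERY_SEQUENCE:
    subprocess.run(command, check=True)
```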

How to expand a journal

Before you begin

It is assumed that you have one or more journals defined on the storage system and visible in the Journals inventory. (If the journal is not visible, then it may be necessary to manually refresh the inventory.)

If the journal is currently used by one or more replication pairs in PAIR/PAIR status, then it will be necessary to pause those replication pairs for the expansion to take full effect. Once expanded, the pairs can be resynchronized to PAIR/PAIR status, if desired.

Only journals composed of DP-VOLs can be resized by Protector.

Procedure

  1. If necessary, pause any replications using the journal to be expanded.

  2. Locate the journal to be expanded by navigating to the Journals inventory for the relevant storage device.

  3. Click the Expand button. The Hitachi Block Journal Expansion Dialog will appear.

  4. Enter a new size for the journal into the New Journal Size field, and click OK.

  5. Go to the Jobs Inventory to ensure that a journal expansion job has been initiated, and wait for it to complete.

  6. If the journal expansion is unsuccessful, review the Logs Inventory to find out why. The journal expansion operation must be re-initiated by the user once the problem is resolved.

  7. If desired, resume any replications using the journal that was expanded.

How to adopt a replication into Protector

Before you begin

It is assumed that the following tasks have been performed:

  • The Protector Master software has been installed and licensed on a dedicated node. See Installation Tasks and License Tasks.
  • The Protector Client software has been installed on the nodes that will act as proxies for both primary and secondary Hitachi Block storage devices.
  • The primary and secondary storage devices have been set up as per the Protector requirements and prerequisites. Refer to Hitachi Block prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups. Refer to How to configure basic role based access control.
  • Read About Hitachi Block replication adoption to understand how adoption works, its prerequisites, limitations and behaviour.

This task describes the steps to follow when adopting a replication that has been set up on the underlying hardware, outside of Protector. For more information, refer to About Hitachi Block based backup technologies. The data flow and policy in this example are as follows:

Adopted TrueCopy Replication Data Flow
GUID-636D6B75-37E3-48AE-95FD-CFBF3D4F5A74-low.png
Hitachi Block Replication Policy
Classification TypeParameterValue
Hitachi BlockLogical Devices212418/100

212418/101

Note: If you want to add source volumes to a replication policy after it has been adopted, then the Adopt existing replication option in the Hitachi Block Replication Configuration Wizard must remain selected when you subsequently reactivate the data flow with the modified policy settings.
Operation TypeParameterValueAssigned Nodes
ReplicateRun OptionsN/A

(TrueCopy is a continuous replication, so the Run option is ignored)

Primary Hitachi Block Device,

Secondary Hitachi Block Device

To adopt a replication, perform the following steps:

Procedure

  1. Locate the nodes in the Nodes Inventory that will control the primary and secondary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.

    These nodes are used by Protector to orchestrate replication of the primary LDEV(s) to the secondary LDEV(s) and are identified as the Proxy Node when creating the primary and secondary Hitachi Block Device nodes in the next step. These nodes are known as ISM nodes. The ISM nodes do not appear in the data flow.
  2. Create new primary and secondary Hitachi Block Device nodes (unless they already exist) using the Hitachi Block Device Node Wizard and check that they are authorized and online.

    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. See How to add a node and How to authorize a node. The primary and secondary Hitachi Block Device nodes appear in the replication data flow as the source and destination nodes.
  3. In the Policies Inventory, create a new policy. See How to create a policy.

    1. Add an Hitachi Block classification, select Specify additional selections and specify the LDEV(s) or Host Group of the primary volume(s) in the Logical Devices field.

    2. Add a Replicate operation. Select Run on Schedule and define a suitable schedule if a batch replication is being adopted.

    3. Click Finish to create the policy.

  4. In the Data Flows Inventory, create the replication data flow, corresponding to the one you want to adopt. See How to create a data flow.

    1. Place the corresponding Hitachi Block source and destination nodes in the Data Flow workspace.

    2. Connect the two nodes using a Batch or Continuous mover, as appropriate to the replication type being adopted.

    3. Select the source node and assign the Hitachi Block-Replicate policy defined above.

    4. Select the destination node and assign the Replicate operation.

      The Hitachi Block Replication Configuration Wizard is displayed.
    5. Select the type of replication on the left hand side of the dialog and then the Adopt existing replication option on the right.

      Note: Refreshed Snapshot replications using Thin Image cannot be adopted.
      Only the fields required for identifying the adopted replication are enabled. Those that have been disabled will be populated automatically once the replication has been adopted. Refer to the table in About Hitachi Block replication adoption to understand how the policy and data flow attributes are interpreted during the adoption process.
    6. Enter the parameters required to identify the adopted replication on the hardware.

  5. Compile and activate the data flow, checking carefully that there are no errors or warnings.

    See How to activate a data flow.
  6. Locate the active data flow in the Monitor Inventory and open its Monitor Details.

    1. If you are adopting a batch replication, select the source node and click Trigger Operation.

      The Trigger Operation Dialog is displayed.
    2. Select the replication operation and click OK to trigger it.

  7. Go to the Logs Inventory, identify the corresponding session and open the Session Log Details by clicking GUID-A59E3959-C86D-4020-AAB0-E67127247523-low.png View Session to the left of one of the related messages.

    If the adoption did not complete successfully, you will see one or more of the log messages listed in the table in About Hitachi Block replication adoption. Make the necessary changes to the data flow and/or policies, recompile and activate the rules and try again.
  8. Once the adoption process has completed successfully, go to the Hitachi Block Replication Details (Storage) for the corresponding replication and review the information to ensure that the desired replication has been adopted. See How to view the status of a Hitachi Block storage device.

    The secondary LDEVs are listed under Phase - Logical Devices. The Adopted attribute will be set to true. If the wrong replication was adopted because the wrong Mirror Unit number was specified, it can be changed in the Data Flow Wizard. Recompile and activate the rules and adopt the correct replication. The previously (erroneously) adopted replication will be left intact on the hardware but discarded by Protector.

How to dissociate a replication from Protector

An Hitachi Block replication that has been defined within Ops Center Protector can be dissociated without being removed from the underlying hardware.

Procedure

  1. Go to the Hitachi Block Replications Inventory or Hitachi Block Replication Details (Storage) and locate the adopted replication that you want to dissociate.

  2. Select the replication(s) and select Dissociate from the context menu.

  3. A warning dialog is displayed. If you are sure you want to proceed, type the word 'DISSOCIATE', then click OK.

    The replication entry is immediately removed from the list. However the dissociated data flow and policy definition will remain and must also be removed.
  4. Go to the Data Flows Inventory and delete the dissociated data flows (assuming they are not involved in other policies still being managed by Protector).

  5. Go to the Policies Inventory and delete the policies for the dissociated replications (assuming they are not involved in other policies still being managed by Protector).

 
