Data Flows

Data Flow Concepts

This section describes Ops Center Protector's data flow management features.

About data flows

A Data Flow is a diagrammatic representation of the nodes involved in a data protection scenario, where each node is represented by an icon. Data flow diagrams identify both physical and logical entities and the connections between them. The data to be protected flows from a Source Node to a Destination Node by way of a Mover; the direction of movement is indicated by an arrow on the connector between nodes. Data is transferred in scheduled batches, indicated by a solid Batch mover. For host-based backups, data transmitted across a network can be compressed to reduce bandwidth utilization, and bandwidth throttling schedules can be applied to movers to ensure that data protection activity does not degrade normal network performance.

Node Groups can be placed on data flows so that multiple nodes having common properties can be treated as one entity.

Each node in a data flow plays a part in implementing the data protection scenario by having a Policy assigned to it (see About policies). Once a data flow is constructed, it must be Activated before it becomes operational. An active data flow can be Deactivated to stop that data protection process.

Note: Deactivating a hardware storage data flow marks replications within it as eligible for teardown. The actual teardown process must be initiated by the user via the user interface. See About two-step teardown.
The process of compiling a data flow performs validity checks on the data flow and assigned policies, then generates a set of rules for each node in the data flow. The compiled rules are distributed to the affected nodes and activated; the participating nodes use these rules to act autonomously. The operation of a data flow can be monitored in real-time using the same data flow diagram rendered as a mimic display (see About monitoring).

Data flow topologies generally fall into one of the following groups (although combinations of these are also possible):

  • One-to-one - data from a single source is backed up to a single destination.
  • Many-to-one - data from multiple sources is backed up to a single destination.
  • One-to-many - data from a single source is backed up to multiple destinations.
  • Many-to-many - data from multiple sources is backed up to multiple destinations.
  • Cascaded - data from a source is backed up to one destination then forwarded on to a second destination.

About two-step teardown

Two-step teardown reduces the possibility of inadvertently tearing down Block Storage replications due to accidental deactivation of a data flow, or activation of an erroneous data flow.

With two-step teardown, when a data flow containing a replication operation is deactivated, or a data flow from which a replication operation has been removed is reactivated, the replication is flagged as eligible for teardown in the Restore and Storage inventories. When a replication operation is deactivated, the underlying replication on the hardware continues to operate as normal, except that no further batch resynchronizations are scheduled.

The final teardown operation must be explicitly initiated by the user, via the Storage inventory.

If a user-initiated teardown operation fails, it is not automatically retried and must be re-initiated by the user. Teardown failure may occur with GAD 3DC data flows, where teardowns must be performed in a specific order. Prior to the introduction of two-step teardown, automatic retries were performed indefinitely until successful.

Note: A data flow that is deactivated and then reactivated, without the replication operation having been removed, re-instantiated or manually torn down in the meantime, is in effect re-adopted. Re-adoption removes the eligible for teardown flag from the replication, as if the data flow had never been deactivated.

It is possible to disable two-step teardown on a per ISM basis via a configuration file, so different teardown policies can be implemented within the same environment.

About data flow implementation

Data flows drawn in the Protector UI are abstractions that hide the underlying hardware and software implementation. Before a data flow can be constructed, Protector Client software must be installed on the physical nodes connected to the storage devices, and the equivalent nodes must be created in the Nodes Inventory. The relationship between storage devices, Clients and nodes appearing in the Nodes Inventory is described in About nodes.

As an example of how a data flow is implemented, consider the backup of a file path on a server to a repository. In this case, the source and destination storage devices are file system volumes mounted on separate servers. Protector Client software must be installed on both servers. These Protector Clients are automatically detected by the Protector Master and appear in the Nodes Inventory as OS Hosts. The source OS Host can be used 'as-is' to identify the volume and file path in the backup policy. The Repository node is created via the UI, with the destination OS Host node selected as the proxy when configuring the node. When the rules for this data flow are activated, the source server transmits the files identified in the policy over the network to the repository on the destination server. The figure below shows the data flow as it appears on the UI with the underlying implementation shown beneath.

Files on a server backed up to a repository

A more complex example is a data flow representing application data, stored on a source block storage device, being replicated to a destination block storage device. The source block device has LDEVs mounted to an application server machine that are being used to store the application data. Protector Client software must be installed on the application server and on servers at the source and destination sites that are designated to control the block devices. The application server must have prerequisites installed to enable Protector to interact with the application software. The servers (ISMs) at the source and destination sites must have prerequisites installed to enable Protector to interact with the block devices. All three Protector Clients are automatically detected by the Protector Master and appear in the Nodes Inventory as OS Hosts. Source and destination Block Device nodes are created via the UI, with the source and destination OS Host nodes representing the ISMs selected as proxies when configuring the nodes. When the rules for this data flow are activated, the source (VSP 1) will replicate the LDEVs identified in the policy over the data link to the destination (VSP 2). The figure below shows the data flow as it appears on the UI with the underlying implementation shown beneath. Note that the source Block Device node is required by Protector to control the replication, but does not appear on the data flow.

Block based application replication

About many-to-one data flow topologies

Many-to-one topologies should be used only where absolutely necessary. These topologies allow data from multiple nodes to be stored on a single destination node. For example, two or more source nodes replicate to a common backup server.

Consider whether the source nodes should be treated as a cohesive node group rather than individually.

If the source nodes are not functionally related then consider drawing separate data flows for each one; this will allow rules activation/deactivation for each flow to be decoupled.

Note: Many-to-one policies preclude the use of the Mount operation (for proxy backup or repurposing) on a destination Hitachi Block device. A Mount operation is only valid when one replication is applied to the destination node.
Many-to-one data flow

About one-to-many data flows

One-to-many topologies are one of the standard building blocks for creating complex data flows. These topologies allow data from one node to be stored on multiple destination nodes. For example, one source node replicates to a DR server and a repurposing server. A policy is applied to the source node and all destination nodes.

One-to-many data flow

About cascading data flows

In a cascading topology, a source node sends data to a primary destination node and then on to a secondary destination node. The primary use case is to copy the data to an intermediary site and then onto a remote site. The secondary use case is for local repurposing, where multiple copies of the same data can be mounted to different servers and used by different sets of users. The primary and secondary destination nodes both store the data. By sending all data to local and remote sites, the disaster recovery options are improved. A policy is applied to the source node, the local and the remote destination nodes.

Cascading data flow

About parallel versus serial data flows

Ops Center Protector is designed to allow data to be filtered according to defined policies, and for data to be either sent directly to a node or routed through one or more intermediate nodes.

Typically, a set of policies can be implemented by using parallel or serial data flows with different tradeoffs. Consider a scenario where the following backup policies are required:

  • The entire contents of a source machine need to be backed up to a destination machine for disaster recovery purposes.
  • Only databases from that same source machine need to be backed up to a different destination machine for testing purposes.
About the parallel data flow solution

For the parallel solution the source node has two policies assigned: one for the whole machine (replicating all attached logical devices to myVSP1) and one for the databases (replicating only a subset of those logical devices to myVSP2). Because there are two movers in parallel, data that is common to both policies must be transmitted by the source node twice. This could be considered wasteful as it doubles the computing load and network bandwidth usage. However:

  • This topology is more fault tolerant since one destination node does not depend on the other destination node to implement its policy.
  • For Host based data flows, very fine grain data classification filters, separate bandwidth throttling and/or scheduling windows can be applied to each branch of the data flow.
Parallel solution
About the serial data flow solution

For the serial solution the same policies are applied as in the parallel solution; the difference is that the data going to the second destination node is routed through the first destination node. This reduces the data traffic on the source node, since it only has to transmit the data once. However, the policy on the second destination node now requires the first destination to be running and available on the network.
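
As a rough illustration of the bandwidth tradeoff between the two solutions, the sketch below compares the data transmitted by the source node in each case. This is a conceptual illustration only; the volume sizes are assumptions, not figures taken from this example.

```python
# Rough comparison of source-node traffic for the parallel and serial solutions.
# All sizes are illustrative assumptions (GB transferred per resynchronization).

whole_machine_gb = 500   # data covered by the disaster recovery policy
databases_gb = 200       # subset of the above covered by the testing policy

# Parallel: the source sends the full set to one destination and the database
# subset to the other, so common data is transmitted twice.
parallel_source_traffic_gb = whole_machine_gb + databases_gb

# Serial: the source sends the full set once; the first destination forwards
# the database subset to the second destination.
serial_source_traffic_gb = whole_machine_gb
serial_forwarded_traffic_gb = databases_gb

print(f"Parallel - source transmits {parallel_source_traffic_gb} GB")
print(f"Serial   - source transmits {serial_source_traffic_gb} GB; "
      f"first destination forwards {serial_forwarded_traffic_gb} GB")
```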

Serial solution

About best practices for drawing data flows

It is important to create data flows that are well formed, understandable, manageable and maintainable. To achieve these goals it is advisable to adhere to a number of guidelines when constructing them.

The Data Flow Wizard has an intentionally small workspace area. This is done to dissuade users from drawing overly complex data flows. It is possible to pan and zoom the workspace to aid working on lower resolution displays. This feature should not be used to create a large drawing area.

Data flows should always be drawn with data flowing in the reading direction appropriate for the locale. The general convention for such diagrams is left to right. Although top to bottom data flows can be drawn, mover labels will not be displayed in this format.

Position nodes on diagrams so that adequate space is left between them for the operation name, operation type and mover label to be drawn below the movers.

Make full use of the ability to name and describe nodes, node groups, policies, operations, data flows, movers and schedules. Use consistent naming conventions and descriptions that convey intent; this will greatly enhance understanding of the purpose of a data flow for other users and will aid ongoing maintenance. Devise and adhere to a common naming convention based on the guidelines described in About best practices for naming objects.

A central principle for determining which nodes and policies should be placed on each data flow is to consider the granularity with which you want to be able to activate and deactivate individual data protection policies. If you combine too many flows and policies into a single diagram, then it will not be possible to deactivate a single policy without deactivating many others. For this reason it is recommended that you:

  • Consider how you may want to activate and deactivate separate policies.
  • Aim, wherever possible, to have only one flow and one policy per diagram.
  • Only place related nodes(1) and policies on the same data flow. Note that a policy can contain multiple operations, and for hardware orchestration data flows, all operations on a PVOL should be contained within the same data flow.
  • Separate unrelated policies sharing the same data flow into separate diagrams.
  • Only use one-to-many topologies if all the participating nodes and policies are related.
  • Consider placing multiple instances of the same node on one diagram to improve presentation.
  • Use node groups for source nodes that share identical policies.
  • Use separate data flows in place of many-to-one topologies, unless absolutely necessary.

1. Related nodes and policies typically form part of the same business process.

About best practices for naming objects

Providing meaningful names for the objects you create in Protector is one of the best ways to ensure that your data protection strategy is understandable, manageable and maintainable. The following guidelines will help you achieve this:

  • Adopt a naming convention that results in similar concepts being listed together when sorted alphabetically. For example VSPCorporate and VSPUKOffice are preferable to CorpVSP and UKOfficeVSP.
  • In general, Protector names should use nouns to describe objects and verbs to describe operations that are specific to the business processes in which they are used rather than Protector terminology.
  • Names should be unique enough so as to avoid misunderstanding and confusion.

    For example: Site1Broadstone and Site11London are preferable to Site1 and Site11.

  • Node names should describe the node's role in the organization and possibly its location. Avoid describing its type or other generic attributes that can be determined from the Node Details.

    For example: VSPTexasSalesCorp is preferable to PrimaryVSPG400.

  • Node Group names should describe the node group's role in the organization and possibly its location. Avoid describing its type or other generic attributes that can be determined from the Node Group Details.

    For example: ClusterDbLegal is preferable to OracleRAC.

  • Policy names should describe what the policy does, specifically in the context of the organization's data protection strategy. Avoid describing its classifications, filters, operations, scheduling or other generic attributes that can be determined from the Policy Details.

    For example: OracleDRMajorHeadOffice is preferable to UniversalRepOraclePolicy.

  • Operation names should describe what the operation does, specifically within the context of the policy in which it is used. Avoid describing its type, scheduling or other generic attributes that can be determined from the Policy Operation Details.

    For example: ReplicateLocal and ReplicateOffsite are preferable to Replicate1 and Replicate2.

  • Data Flow names should describe the data protection policy that is being implemented, specifically within the context of the organization's data protection strategy. Avoid describing attributes that can be determined from the Data Flows Inventory, Data Flow Details or Monitor Details.

    For example: OracleFailOverToSubOffice is preferable to ReplicateOracleDataFlow.

  • Mover labels should describe the purpose of the mover and/or the association between the nodes it connects. Avoid describing the operation name or operation type since these already appear in the Data Flow Details.

    For example: Fine Grain Protection is preferable to Weekly backup.

  • Schedule names should describe the purpose of the schedule. If the schedule is related to a specific policy then name it after that policy. Avoid describing its type or other attributes that can be determined from the Schedule Details.

    For example: BackupWindowOvernight is preferable to Weekdays10PMTo6AM.

Note: The naming conventions used elsewhere in this guide are designed to aid understanding of Protector concepts, tasks and workflows. They do not represent good naming conventions when using the product in a production environment.

About Repository and HCP based backups

Repository-based backups are created by replicating data to a disk-based store. Restore point snapshots are created in the repository as required. The repository snapshot is assigned a retention period and is available for full or partial restore until it is retired (deleted from the store). The contents of a Generation 1 repository store, created using a batch mover, can also be tiered to HCP for long term retention. For Generation 2 HCP there is no need for a repository between the source node and HCP; backups can go straight from the source node to HCP. However, tiering is not supported on Generation 2 HCP.

The concept of Generation 1 and Generation 2 HCP nodes applies to Protector only and differentiates between an older HCP interface implementation and the current one. Generation 1 HCP is deprecated and should not be used for new configurations. In addition, you cannot upgrade Generation 1 implementations to Generation 2. Note also that backups to Generation 2 HCP and backups to HCP Cloudscale are not the same.

The concept of Generation 1 and Generation 2 repositories is also specific to Protector. Again, Generation 1 repositories are deprecated, should not be used for new configurations, and cannot be upgraded to Generation 2 repositories. Generation 2 repositories have many advantages over Generation 1 repositories, especially around parallelism.

About Repository based batch backup

The repository is updated periodically using scheduled resynchronizations. This method involves a scan of the source machine’s file system. With batch resynchronization, the changed blocks are transferred. By default, the block size that is transferred is 2 MB; it is 16 KB if fine change detection is configured in the store template for Gen 1 repositories or in the Policy definition for Gen 2 repositories. An entire file is transferred if it is less than one block. Batch backup is useful for data that does not change often, such as data contained on the operating system disk.
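
To make the effect of the block size concrete, the sketch below estimates the data moved by one batch resynchronization for a workload of small, scattered writes. The block sizes follow the defaults described above; the write pattern is an illustrative assumption.

```python
# Estimate of data moved by one batch resynchronization when small writes are
# scattered across the file system. Each dirty block is transferred whole.
# Block sizes follow the defaults described above; the write count is assumed.

DEFAULT_BLOCK = 2 * 1024 * 1024   # 2 MB transfer block (default)
FINE_BLOCK = 16 * 1024            # 16 KB transfer block (fine change detection)

scattered_writes = 1000           # small writes, each landing in a different block

def transferred_bytes(block_size: int) -> int:
    return scattered_writes * block_size

print(f"2 MB blocks : {transferred_bytes(DEFAULT_BLOCK) / 1e6:.0f} MB transferred")
print(f"16 KB blocks: {transferred_bytes(FINE_BLOCK) / 1e6:.0f} MB transferred")
```

For this write pattern the 16 KB block size moves roughly 16 MB instead of roughly 2 GB, which is why fine change detection pays off for frequent, widely distributed changes, as discussed in the next paragraph.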

Fine change detection should be used with the batch mover if there are frequent, widely distributed changes to files. Typically, however, content on file servers does not benefit from fine change detection, because Microsoft Office documents are automatically compressed by the applications. This means that the smallest modification causes a completely different file to be created, negating any benefit from detecting changed blocks.

Host based granular file I/O data capture

The data capture system is highly granular, with the option to tightly define the type of data that is transferred within the policy classification. Unlike systems that replicate all contents on the volume, Ops Center Protector's Host Based data protection technologies replicate only specified data, thereby saving bandwidth and storage space.

About Repository based source side deduplication

Source side deduplication is a mechanism within Protector that improves network bandwidth utilization by avoiding sending the same data multiple times.

In simple terms, as soon as a block on the source machine is detected as changed during the post-scan in a batch policy, that block is transferred to the repository. Source side deduplication uses Single Instance Store (SIS), a one-by-one method of scanning and transferring data from source to repository during initial synchronization and subsequent resynchronizations. If, for example, twenty 5 TB source nodes (each having much of the data duplicated across them) are backed up, the repository will ingest data from the first source node, go to the second source node and compare its data with what is already ingested, then transfer only non-duplicate data. This process is repeated with the remaining source nodes.
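
The sketch below illustrates the single-instance principle in miniature: each block is keyed by a content digest, and a block that is already present in the store is not transferred again. This is a conceptual illustration only, not Protector's actual SIS implementation.

```python
import hashlib

# Conceptual single-instance store: blocks are keyed by a content digest, so
# duplicate data arriving from later source nodes is not ingested again.
# This is an illustration of the principle, not Protector's SIS implementation.

store = {}                # digest -> block already held in the repository
transferred_bytes = 0

def ingest(block: bytes) -> None:
    global transferred_bytes
    digest = hashlib.sha256(block).hexdigest()
    if digest not in store:          # only non-duplicate data is transferred
        store[digest] = block
        transferred_bytes += len(block)

source_node_1 = [b"os-image" * 1000, b"app-data" * 1000]
source_node_2 = [b"os-image" * 1000, b"reports " * 1000]  # shares the OS image block

for node in (source_node_1, source_node_2):
    for block in node:
        ingest(block)

print(f"unique blocks stored: {len(store)}, bytes transferred: {transferred_bytes}")
```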

Even if some duplicate files get through, with source side dedupe enabled, SIS post processing will remove those duplicates. SIS post processing is applied to the dataset once per day during repository cleanup.

The changed block transfer method is far more efficient and has less impact than traditional backup systems.

About Repository to Repository backup

Note: Repository to Repository backups are only supported within the same generation of repository, i.e. Generation 1 to Generation 1 or Generation 2 to Generation 2.

Ops Center Protector can replicate data being sent to an on-site repository to an off-site repository, without needing to gather the data from the source machine(s) again. This minimises the load on the source machines and transfers the data off-site quickly and efficiently.

Using Smart Repository Sync, the secondary repository creates backups from the primary repository by taking the incremental changes required to create a new snapshot; the source node is not involved in this process. The secondary repository has the following capabilities:

  • You select which backups are sent to the secondary repository. You may choose to send only a subset of the primary's backups.
  • You do not have to use the same schedule. For instance, you may back up to the primary repository every hour but to the secondary once a day.
  • You can apply different retention. For example, you may keep backups on the primary repository for a week and on the secondary repository for 6 months.

Repository to repository backups should be scheduled such that the secondary (off-site) repository takes the latest completed backup from the primary (on-site) repository. The secondary repository backup is therefore scheduled to run on completion of the primary's backup.

When the policy is first triggered, the on-site repository will be resynchronised with the source. The empty off-site repository will then be synchronised with the on-site repository. Depending on the amount of backup data and the bandwidth of the network between the on-site and off-site repositories, this initial synchronisation process can take a considerable time (many hours) to complete. To overcome this, a technique called 'repository seeding' (see How to seed an offsite repository) may be used to efficiently set up an off-site data store, reducing the time and bandwidth required to load the initial backup into the secondary repository. Once seeded, the amount of data transferred between the on-site and off-site repositories is much reduced and depends only on the data change rate of the source machine(s).

Note: If the on-site repository contains large amounts of non-critical or legacy data that does not require additional off-site protection, then it is recommended that you review your local backup policies and repository architecture prior to replicating data to the off-site repository. This will allow you to identify and replicate only your critical data to the off-site location.

About tiering Gen1 Repositories to HCP

Caution: Data that is tiered from an encrypted repository will not be encrypted on HCP. The use of encrypted repositories for tiering is not recommended.
Note: Tiering file system data to HCP is backwards compatible with Protector 5.x. However, the following features are not yet supported in Protector:
  • Tiering from a live data store (only batch data stores are supported).
  • Stubbing and removing data from the source.
  • Setting retention on HCP objects.
Setting DPL (Data Protection Level) on HCP namespaces is no longer done through Protector. This is now done at the tenant level via HCP's Tenant Management Console.

File system data from any supported OS that is backed up to a Protector repository using a batch mover can be tiered to the HCP cloud storage platform. A repository store is tiered to an associated HCP namespace. Each tiered repository stream (a file can consist of one or more streams) is saved as an HCP object within its related namespace, along with metadata that enables it to be restored back to the original source node. If a previously tiered repository stream is modified, then the entire stream is tiered to HCP again. Once a stream is tiered to HCP it is removed from the repository.

Repository ingestion rate throttling helps to constrain repository data store size growth. If a repository's tiering queue gets too large, the repository will stop receiving data from the source until the tiering queue length reduces. High and low watermarks control the growth of a repository, allowing newly ingested files to occupy the space that tiered files previously occupied. By default up to 50 streams can be tiered concurrently to HCP, with another 50 queued awaiting tiering, before repository ingestion is paused. If performance outweighs repository growth in your environment, please contact Protector product support to adjust the throttling behaviour.
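
The sketch below shows the general shape of the watermark mechanism described above. The concurrent-tiering and queue limits mirror the stated defaults; the low watermark value and the code itself are illustrative assumptions, not Protector's implementation.

```python
# Conceptual high/low watermark throttle for repository ingestion. The limits of
# 50 concurrent tiering streams and 50 queued streams mirror the defaults noted
# above; the low watermark is an assumed value for illustration only.

TIERING_SLOTS = 50        # streams that may tier to HCP concurrently (informational)
HIGH_WATERMARK = 50       # queued streams at which ingestion is paused
LOW_WATERMARK = 25        # queued streams at which ingestion resumes (assumed)

class IngestionThrottle:
    def __init__(self) -> None:
        self.queued = 0
        self.ingestion_paused = False

    def stream_queued_for_tiering(self) -> None:
        self.queued += 1
        if self.queued >= HIGH_WATERMARK:
            self.ingestion_paused = True      # stop accepting data from sources

    def stream_tiered_to_hcp(self) -> None:
        self.queued = max(0, self.queued - 1)
        if self.queued <= LOW_WATERMARK:
            self.ingestion_paused = False     # newly ingested files reuse freed space
```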

Protector does not set a retention on HCP objects. When a tiered stream is no longer referenced by any repository snapshot, the corresponding object will be deleted from HCP. A repository must be present in an active data flow and configured to tier to HCP for objects to be deleted. When a repository store is deleted, the corresponding namespace for that store will be deleted from HCP.

Deleting a repository node will not remove the repository streams or the data tiered to HCP. To remove all Protector's data tiered to HCP after deleting a repository node, use the HCPDM utility provided by HCP.

Tip

Protector repository stores are mapped to HCP namespaces, with the UUID of the Protector store being used as the HCP namespace name.

NTFS files typically consist of at least two streams, containing data and security information respectively. Each repository stream is mapped to an HCP object, with an incremental hexadecimal number being assigned by Protector as the object name. Objects are stored in HCP in their native format, so it is possible to view them in HCP. Object naming is designed to distribute the load optimally across HCP cluster nodes.
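
Based on the mapping described in this tip, the sketch below shows how a store UUID and a per-stream counter could translate into namespace and object names. The exact formatting is an assumption for illustration.

```python
import itertools
import uuid

# Sketch of the naming scheme described above: the repository store's UUID is
# used as the HCP namespace name, and each tiered stream becomes an object named
# with an incrementing hexadecimal counter. Exact formatting is an assumption.

store_uuid = uuid.uuid4()              # Protector would use the store's real UUID
namespace = str(store_uuid)            # namespace name = store UUID
object_counter = itertools.count(1)

def next_object_name() -> str:
    return format(next(object_counter), "x")   # incremental hexadecimal name

print(f"namespace: {namespace}")
print("first objects:", [next_object_name() for _ in range(3)])   # ['1', '2', '3']
```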

Repository stream objects are described by an HCP content class to enable them to be indexed and subsequently searched using the HDIM Content Class and its associated properties in an HCP Structured Query. Notice the use of the legacy product name HDIM here for backward compatibility; Protector was once named HDIM.

Protector communicates with HCP using the native REST API over HTTPS by default. HTTP can also be used to increase tiering speed if performance needs outweigh security.
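
For orientation, the sketch below reads back one such object over the HCP namespace REST interface via HTTPS. The host name, namespace, tenant, object name and credentials are placeholders, and the authorization token format shown follows HCP's documented convention; verify both against the HCP documentation for your release before relying on them.

```python
import base64
import hashlib

import requests  # third-party HTTP client: pip install requests

# Illustrative read of a tiered object over HCP's native REST interface (HTTPS).
# Host, namespace, tenant, object name and credentials are placeholders; check
# the authentication scheme against your HCP release before relying on it.

HCP_HOST = "namespace.tenant.hcp.example.com"   # <namespace>.<tenant>.<hcp-domain>
USER, PASSWORD = "protector", "secret"          # placeholders only

token = (base64.b64encode(USER.encode()).decode()
         + ":" + hashlib.md5(PASSWORD.encode()).hexdigest())

response = requests.get(
    f"https://{HCP_HOST}/rest/1",               # object '1', as in the sketch above
    headers={"Authorization": f"HCP {token}"},
)
print(response.status_code)
```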

About Hitachi Block based backup technologies

Ops Center Protector supports the following Hitachi Block based snapshot and replication technologies, allowing data flows to be constructed graphically without the user needing to construct HORCM files:

  • Thin Image (TI)
  • ShadowImage (SI)
  • TrueCopy (TC)
  • Universal Replicator (UR)
  • Global-Active Device (GAD)

These technologies can be combined in numerous ways to create complex, block based, data protection and repurposing scenarios.

Ops Center Protector is used to create these data protection policies and manage them after replications and snapshots are created. It offers the ability to view the current state of replications and control their activities. It fully manages the lifecycle of snapshots, keeping an index of their existence and removing them from the system once the user-designated retention has been reached.

Tip: When Protector creates fully provisioned snapshots and replications, it does so in S-VOLs mapped into one or more host groups that it creates on a port, so that these S-VOLs can have LUNs assigned. For GAD replications, the user is additionally able to specify one or more host groups that will have a LUN assigned for the S-VOL.

Caution: Anything created by or imported into Protector is managed by Protector and should not be modified or deleted other than through Protector. If a replication is removed in Protector, it will also be removed from the hardware storage device.

About mover types used with Hitachi Block operations

When constructing a data flow containing Hitachi Block based storage devices, it is necessary to use the correct combination of mover type (Batch or Live/Continuous) in conjunction with a given snapshot or replication technology (Thin Image, Refreshed Thin Image, ShadowImage, TrueCopy, Universal Replicator or Global-Active Device).

The term Differential Snapshot means that a new target volume(s) is created each time the operation is triggered. The target volume(s) remain suspended after creation. The target records only the deltas, so the source is also required to reconstruct the full data set.

The term Refreshed Snapshot means that a new static target volume(s) is created the first time the operation is triggered. The same target volume(s) is refreshed with new deltas from the source on subsequent triggers. The target volume(s) remain suspended after creation. The target records only the deltas, so the source is also required to reconstruct the full data set.

The term Batch Replication means a static target volume(s) is created the first time the operation is triggered. After initial synchronization, the replication is suspended. Subsequent triggers result in a resynchronization from the source, after which the target volume(s) are suspended.

The term Live Replication means a static target volume(s) is created the first time the operation is triggered. The source and target volume(s) are continuously kept in sync (paired) by either copy-on-write (COW) or copy-after-write (CAW) data transfer mechanisms. The replication pairs can be paused and resumed as required.

The following table lists typical block based data flow scenarios along with the mover types that can be used.

Hitachi Block based scenarios and associated mover types
Scenario | Mover | Description
Thin Image Snapshot (TI) | Batch | About Thin Image differential and refreshed snapshots
ShadowImage Replication (SI) | Batch or Live | About ShadowImage replication
TrueCopy Replication (TC) | Live | About TrueCopy replication
Universal Replicator (UR) | Live | About Universal Replicator
Global-Active Device (GAD) | Live | About Global-Active Device replication
Three Data Centre Cascade (TC+UR) | Live TC + Live UR | About three datacentre cascade (3DC)
Three Data Centre Multi Target (TC+UR) | Live TC + Live UR | About three datacentre multi-target
Clones of a Clone (SI+SI) | Batch SI(1) + Batch SI | About static clones of a clone
Clone with Replication (SI+GAD) | Batch SI(1) + Live GAD | About clone with replication
Snapshot with Replication (TI+GAD) | Batch TI + Live GAD | About snapshot with replication
Snapshot of a Clone (SI+TI) | Batch SI(1) + Batch TI | About snapshot of a clone
Replication of a Clone (SI+TC or SI+UR) | Batch SI(2) + Batch TC or UR | About replication of a clone
Remote Snapshot (TC+TI or UR+TI) | Live TC or UR + Batch TI | About remote snapshot
Local and Remote Snapshots (TC+TI/TI, UR+TI/TI or GAD+TI/TI) | Live TC, UR or GAD + Batch TI | About local and remote snapshots
Remote Clone (TC+SI or UR+SI or GAD+SI) | Live TC, UR or GAD + Batch SI(1) | About remote clone
Local and Remote Clones (TC+SI/SI or UR+SI/SI or GAD+SI/SI) | Live TC, UR or GAD + Batch SI(1) | About local & remote clones
Local Snapshot and Remote Clones (TC+TI/SI or UR+TI/SI or GAD+TI/SI) | Live TC, UR or GAD + Batch TI + Batch SI(1) | About local snapshot and remote clones
Remote Snapshot and Local Clones (TC+SI/TI or UR+SI/TI or GAD+SI/TI) | Live TC, UR or GAD + Batch SI(1) + Batch TI | About local snapshot and remote clones (reversed)

(1) Continuous SI can also be used in these topologies to vary the use case.

(2) Continuous SI cannot be used in these topologies since it is not possible to chain a remote replication from Continuous SI.

About Thin Image differential and refreshed snapshots

Differential and Refreshed Thin Image Snapshot

Thin Image enables rapid creation of in-system, space efficient, read/write, volume-consistent snapshots and subsequent rollback of entire volumes.

Snapshots of a volume in the storage system are stored in a dedicated area called the Thin Image pool. For floating snapshots, where no LDEV is assigned until it is mounted, no data movement occurs and thus creation of a snapshot is near instantaneous. For non-floating snapshots, the auxiliary tasks of creating an LDEV and a LUN will take some time.

Once a snapshot is created, subsequent updates to the primary data cause the storage system to move the data blocks being updated to the TI pool. TI pool usage thus increases as the primary data changes and snapshots are retained.
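
As a rough illustration of how TI pool usage grows with change rate and retention, the sketch below works through a worst-case estimate. The figures are assumptions for illustration, not sizing guidance.

```python
# Rough worst-case Thin Image pool estimate: retained differentials grow with the
# primary volume's change rate and the snapshot retention period. All figures are
# illustrative assumptions, not sizing guidance.

primary_volume_tb = 10.0
daily_change_rate = 0.03     # 3% of the volume changes per day (assumed)
retention_days = 14          # snapshots kept for two weeks (assumed)

# Worst case: each day's changes touch previously untouched blocks, so every
# day's differentials stay referenced for the full retention period.
worst_case_pool_tb = primary_volume_tb * daily_change_rate * retention_days
print(f"Worst-case TI pool usage: ~{worst_case_pool_tb:.1f} TB")   # ~4.2 TB
```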

Caution

Filling a Thin Image pool to capacity will invalidate all snapshot data contained within that pool. All snapshots in the pool will have to be deleted before snapshotting can be resumed.

When accessing a snapshot, the storage system presents the virtualized contents by merging the primary volume with its differentials held in the TI pool.

When deleting a snapshot, the storage system releases the differentials held in the TI pool. When multiple snapshots are involved, the delete operation may take some time. The differentials are reference counted and will only be deleted when no remaining snapshot requires them.

Note: Once snapshots have been released back into the pool, that space can only be reused by the same primary volume. This hardware limitation means that this space cannot be used by a different volume. The only way to completely free the space to the pool for any volume is to delete all the snapshots on that primary volume.

When handling multiple primary volumes, the storage system takes a snapshot of each volume sequentially. This means that a slight difference can be seen across the snapshot timestamps. If exactly the same timestamp is required among the snapshot set (for crash-consistent backup), the snapshot set should be created using Consistency Groups (CTGs).

When taking a snapshot, there are two options for the target volumes:

  • Differential Snapshot: Creates a new snapshot for each backup and deletes it when the retention time expires. This simplifies the management of numerous snapshots and is suitable for backup operations. Data Retention Utility (DRU) protection can be applied to the snapshot's LDEV so that it can be used as a read-only volume or to protect it against both read and write operations.
  • Refreshed Snapshot: Creates a new snapshot for the first backup, and then resynchronizes it on the following backups. This enables static target volumes (Port, Host Group and LUN, although the LUN may not remain constant depending on the mount host OS) and is suitable for repurpose operations.

Floating Device is an improved snapshot capability, used in conjunction with Thin Image, that simplifies snapshot management. With this capability snapshots can be created without creating target volumes upfront. This means that the limit on the number of snapshots in the entire storage system is increased (the number of snapshots of a specific primary volume is 1024). To revert the snapshot it is only necessary to select the required timestamp. To mount the snapshot for re-purposing, it must be mapped to a specific LDEV/LUN. After the snapshot is un-mounted, the volumes will be deleted by Protector as part of the unmount process.

Note
  • When using a snapshot, intensive read/write access to the snapshot may impact the performance of the primary volume, due to the way the snapshot volume is virtualised. If this is of concern then consider using ShadowImage or ShadowImage-Thin Image in cascade, where the Thin Image primary volume becomes the ShadowImage secondary volume instead of the original source.
  • Long-term retention increases the number of differentials held in the TI pool, as over time writes become distributed across all data blocks. Thus, for long-term backups, it is recommended to use ShadowImage.
  • Thin Image requires the primary data in order to present a virtualised snapshot volume. This means that snapshots will be lost if a disk failure occurs on the primary volumes. To protect the data from such hardware failure use ShadowImage to create a clone and take snapshots of the clone instead.
About cascaded Thin Image snapshots

Thin Image snapshots of a P-VOL can be cascaded, the first layer being referred to as L1 S-VOLs. Cascading can be recursive so as to form a snapshot tree to a depth of up to 64 layers (L64 S-VOLs), consisting of the root P-VOL, intermediate node S-VOLs and terminal leaf S-VOLs. The total number of S-VOLs in a tree is limited to 1024.

To create cascadable snapshots, the L1 snapshot volumes must be created in cascade mode, either as a floating device or fully provisioned. Cascade mode snapshots must be provisioned, either at creation or at mount time, from a dynamic pool. From Protector 6.5 onwards, L1 snapshots are created in cascade mode by default and are dynamically provisioned, although standard mode can still be specified if the storage device does not support cascading.

Protector uses a number of different pools for cascade mode snapshots as follows:

  • Snapshot Pool - a Thin Image or hybrid pool where the P-VOL/L1 and L1/L2 snapshot pair data is held. If a hybrid pool is specified then Protector may also create the snapshot S-VOLs here.
  • Cascade Pool - a dynamic or hybrid pool where Protector creates snapshot S-VOLs if they are fully provisioned.
  • Mount Pool - a dynamic pool where Protector creates the snapshot S-VOLs if the Snapshot Pool is a Thin Image pool or if a floating device was specified for the snapshot operation. If a Mount Pool is specified as an option then it will be used in preference.

Both standard and cascade mode snapshots require a Snapshot Pool to be specified regardless of the mode. If fully provisioned cascade mode is selected then a Cascade Pool must be specified when configuring the operation.

When mounting a cascade mode snapshot, Protector provides the option to mount the original (L1) or a duplicate (L2) snapshot. The duplicate (L2) snapshot can be modified without changing the original (L1) snapshot's data. When the duplicate (L2) snapshot is unmounted, it is deleted and any changes made to it are lost. Original and duplicate mount modes for cascade mode snapshots may or may not require a Mount Pool to be specified, as per the following table:

Snapshot Pool Type | Provisioning Type | Mount Mode | Specify Mount Pool?
Thin Image | Floating Device | Original (L1) | Required
Thin Image | Floating Device | Duplicate (L2) | Required
Thin Image | Fully Provisioned | Original (L1) | N/A
Thin Image | Fully Provisioned | Duplicate (L2) | Optional
Hybrid | Floating Device | Original (L1) | Optional
Hybrid | Floating Device | Duplicate (L2) | Optional
Hybrid | Fully Provisioned | Original (L1) | N/A
Hybrid | Fully Provisioned | Duplicate (L2) | Optional

About ShadowImage replication

Full clone using batch mode ShadowImage

ShadowImage enables the creation of in-system, RAID-protected, read/write, volume-consistent, full clones.

As with TI snapshots, consistent clones can be created using Consistency Groups (CTGs).

Note: ShadowImage has a limitation on the maximum number of clones that can be created at one time. There can be up to three 1st level (L1) clones and then two L2 clones per L1 clone, giving a potential total of six L2 clones. Including the L1 clones, the potential total is nine clones. If more copies are required beyond this then use Refreshed Thin Image snapshots.

When taking a clone of a primary volume the storage system copies all of the data to the secondary volume. The point at which this is done depends on the split type selected:

  • Quick Split - copying from primary to secondary is performed in the background so that the secondary is immediately available for reading/writing. The performance of the primary may be affected if access to the secondary references data that has not yet been copied from the primary. In this case, on-demand copying of that data from the primary is required.
  • Steady Split - copying from primary to secondary is performed in the foreground before the secondary is made available for reading/writing. The creation of the secondary takes time depending on volume size.

If using Dynamic Provisioning (DP) volumes for both primary and secondary volumes, the copy is applied only for the allocated area; the unallocated area is ignored.

Once the clone is created, the storage system updates the bitmap for the primary, which records which blocks have been modified. Pair resynchronization can be performed in one of the following ways:

  • Quick Resync - resynchronization is performed in the background and on-demand. The secondary is briefly made read only (for less than 1 second), after which it becomes available for reading/writing (i.e. it enters the PAIR state in less than 1 second). The performance of the primary may be affected if access to the secondary requires on-demand resyncing from the primary.
  • Normal Copy - the secondary is made unavailable while the resynchronization is performed. The resync takes time depending on the size of differentials between the primary and secondary.

When the secondary is accessed, behaviour depends on the mode of operation as follows:

  • Steady Split and Normal Copy - the storage system presents the actual contents of the secondary volume. This is in contrast to TI snapshots, where a merging process is required between the primary and secondary volumes to reconstruct the data.
  • Quick Split and Quick Resync - the storage system presents the actual contents of the secondary volume. However a merging process may be required between the primary and secondary volumes to reconstruct the accessed block of data, if the background copy of that block has not yet been performed.

The following table shows how Quick Split and Quick Resync (indicated by the suffix q) are affected by upstream and downstream operations in an SI data flow:

Data Flow | Behaviour
SIq | The SI is performed using quick operations. The SI secondary is immediately available for manual mounting.
SIq with auto-mount of secondary | The SI is performed using quick operations. The SI secondary is auto-mounted immediately.
SIq with downstream replications/snapshots | SI is performed using quick operations. However, the downstream replications/snapshots will wait until the SI secondary has been completely copied, and the SI secondary will only be available for manual mounting once it is completely copied. See note below.
SIq with auto-mount and downstream replications/snapshots | SI is performed using quick operations. However, the downstream replication/snapshot will wait until the SI secondary has been completely copied, and the SI secondary will only be auto-mounted once it is completely copied. See note below.
Upstream replications with downstream SIq | SI is performed using quick operations. The SI secondary is immediately available for manual mounting. There is no impact on upstream replications.

Note: In cases where the SI replication must be fully evaluated, Quick Resync and Quick Split will take as long as Normal Copy and Steady Split. However, the use of Quick Resync will have a beneficial effect on when and for how long the production application is quiesced.

When the clone is deleted the storage system releases the bitmap.

Protector supports Steady Split/Normal Copy and Quick Split/Quick Resync.

Full clone using continuous ShadowImage with protected, isolated, multiple TI snapshots

Continuous ShadowImage can be used to:

  • Protect access to local TI snapshots if the production volumes fail.
  • Isolate production volumes from performance impacts caused by heavy I/O on local TI snapshots.
  • Allow multiple scheduled mount operations (beyond the limits imposed by SI mirror counts) without affecting the original backup, through the use of Refreshed TI.
Note

Continuous SI can be combined with all hardware operations (i.e. TI, RTI, SI, TC, UR or GAD), with the exception that a continuous SI S-VOL cannot also be the P-VOL of a remote replication (i.e. TC, Universal Replicator or GAD).

i.e. It is not possible to chain a remote replication from a continuous SI target.

The typical use cases for continuous ShadowImage include:

  • Repurpose on Demand - using continuous SI, keeps a close copy of the primary volume and allows pause and mount for repurposing.
  • Protected Backup - using continuous SI to TI snapshots, retains snapshots in the event that the primary volume fails.
  • DRU Protected Backup - using continuous SI to TI snapshots with DRU, retains snapshots with DRU lock in the event that the primary volume fails.
  • Repurposing (TI) - using continuous SI to RTI snapshots, provides multiple repurposing copies, possibly in excess of the SI limit.
  • Repurposing (SI) - using continuous SI to batch SI, provides a repurposing copy.
  • Repurposing (SI) with Backup - using continuous SI to batch SI to TI snapshots, provides a repurposing copy with snapshots for protection.
  • Repurposing (SI) with DRU Backup - using continuous SI to batch SI to TI snapshots with DRU, provides a repurposing copy with snapshots for protection with DRU lock.

About TrueCopy replication

TrueCopy Replication

TrueCopy provides remote, volume consistent, synchronous replication.

When establishing a replication between primary and secondary volumes, the storage system copies all the data from the primary to the secondary volume. Depending on the volume size, the creation of replicas takes time. As with ShadowImage, data copy can be optimized by using DP volumes.

After creation of the replicas, the storage system maintains the replica on the secondary volume by synchronously transferring each write made to the primary. In synchronous (copy on write) mode, the storage system signals write I/O completion only when it has been transferred to the secondary volume. The write order is completely guaranteed so that the secondary volume is crash-consistent at any point in time.
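
A rough, back-of-the-envelope view of why link distance matters for synchronous replication is sketched below; the figures are illustrative assumptions.

```python
# Back-of-the-envelope latency for a synchronous (copy-on-write) remote write:
# the host sees completion only after the secondary acknowledges, so the link
# round trip is added to every write. All figures are illustrative assumptions.

local_write_ms = 0.5       # latency of the local array write (assumed)
distance_km = 100          # one-way fibre distance to the remote site (assumed)
fibre_km_per_ms = 200      # approximate speed of light in fibre (~200 km/ms)

round_trip_ms = 2 * distance_km / fibre_km_per_ms
sync_write_ms = local_write_ms + round_trip_ms
print(f"Per-write latency seen by the host: ~{sync_write_ms:.1f} ms")   # ~1.5 ms
```

This growth with distance is one reason the note below recommends Universal Replicator for long distances or links with limited bandwidth.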

In the COPY state, no read/write operation is permitted to the secondary volume. To access the secondary volume, the replication must be paused (i.e. the pair is placed in the suspended state). As with ShadowImage, a bitmap is maintained for later pair synchronization. When the replication is deleted the storage system releases the bitmap.

Note
  • Fence Level determines behaviour when an update to the secondary volume fails. This option should be set based on the business priority (i.e. keeping replications consistent versus keeping production data available):
    • Data – prevents writes to the primary volume if updates to the secondary volume fail. This setting is appropriate for volumes that are critical to recovery.
    • Status – prevents writes to the primary volume if the secondary volume’s status cannot be set to ‘suspended’ in the event of a failure. This setting enables rapid resynchronisation once a failure is resolved.
    • Never – allows continued writes to the primary volume even if updates to the secondary volume fail. This setting is appropriate for volumes that must remain available.
  • To maintain synchronised data transfer, sufficient bandwidth must be provided for the remote link, otherwise performance problems may be encountered on the production volume. When a replication is required over remote links with poor bandwidth or over long distances, use Universal Replicator.

About Universal Replicator

Universal Replicator

Universal Replicator performs volume consistent, asynchronous replication.

When establishing a replication to a target secondary volume, the storage system copies all the data to the secondary volume. Depending on the volume size, the creation of the replica takes time. As with TrueCopy, the copying can be optimized using DP volumes.

After the replication is created, the storage system maintains the replica on the secondary volume, transferring each write to the secondary volume. In asynchronous mode, the storage system signals each write completion as soon as it is performed on the primary volume, and then transfers it to the secondary volume (copy after write). Journalling ensures that the write order is completely guaranteed, so that the secondary volume is crash-recoverable at any point in time.

In the replicating state, no read/write operations are permitted to the secondary volume. To access the secondary volume, the replication must be paused (placed in the suspended state). Universal Replicator maintains a journal of the primary volume for later pair synchronization. In contrast to TrueCopy, long duration pausing can be tolerated by configuring sufficiently large UR journals. When the replication is deleted the storage system releases the UR journals.

The use of Consistency Groups (CTGs) is mandatory for Universal Replicator.

Note: Long duration pausing, even with sufficiently large UR journals, may cause a service level violation due to increased RPO. To avoid this, RPOs must be monitored to ensure that they satisfy the SLA.

About Global-Active Device replication

Global-Active Device Replication

Global-Active Device allows volume consistent, remote, active-active replication.

When establishing a replication, the storage system copies all the data to the secondary volume. Depending on the volume size, the creation of the replica takes time. As with TrueCopy, you can optimize the process using DP volumes.

After creating the replication, the storage system maintains the replica on the secondary volume. The copy mechanism and data consistency is the same as for TrueCopy.

Unlike TrueCopy, read/write operations are permitted on both the primary and secondary volume even in the replicating state, hence both sides of the replication pair are said to be active. All updates to the secondary volume are also transferred back to the primary volume. When the replication is paused (i.e. placed in the suspended state), the storage system determines the owner volume using the quorum disk and prohibits any read/write access to the non-owner volume.

As with TrueCopy, a bitmap is maintained for pair re-synchronization. When the replication is deleted, the storage system releases the bitmap.

Note: To fully handle a failure scenario, it is recommended that the quorum disk is located at a secure third site.

Protector is not able to operate any data flow containing a GAD replication if the primary and secondary have been swapped externally, since path resolution will see the original secondaries as the new primaries. The swap must be reversed externally to restore the normal direction.

About Global-Active Device Cross Path
Fully Redundant GAD Cross-Path and Multi-Path Scenario

In a GAD cross-path and multi-path environment, the application servers may have one or more LUN paths to the PVOL, and also one or more LUN paths to the S-VOL. The GAD pair may also have one or more LUN paths between them.

Protector is capable of configuring and adopting hardware path, file system path and application replications for GAD cross-path and multi-path scenarios.

In situations where the Application Host or OS Host has a LUN path to a GAD P-VOL and also has a LUN path to the S-VOL of that replication, Protector will resolve application paths including that P-VOL when the host has one or more LUN paths to both volumes involved in the replication.

Protector can:

  • Adopt the replication
  • Re-evaluate the replication if it already exists in Protector
  • Create a snapshot of the replication's P-VOL
  • Swap the replication

This feature requires raidscan (version 01-41-03/03 or later), which, when issued against the secondary volumes of a remote replication, includes the serial number of the array hosting the primary volumes.

This feature only considers cross paths to the primary application. Failover for secondary applications is not supported.

About three datacentre cascade (3DC)

Three datacentre cascade

Three datacentre (3DC) cascade using TrueCopy and Universal Replicator provides the maximum level of data protection by combining synchronous replication between the primary and local secondary site, cascaded with asynchronous replication between the local secondary and remote tertiary site.

If a component failure or power failure occurs at the primary site, production can be handed over to the local site with no data loss.

If a site failure or localised natural disaster occurs, both the primary and local site may be lost. In this case production can be handed over to the remote site with minimal data loss.

Consider the following when using 3DC cascade:

  • Due to the cascading topology, a failure at the local site will prevent both the synchronous and asynchronous replications, leaving the primary site unprotected. To avoid this situation, it is recommended to use the 3DC multi-target configuration instead.

About three datacentre multi-target

Three datacentre multi-target

Three datacentre (3DC) multi-target using Global-Active Device, TrueCopy and Universal Replicator provides a level of data protection equivalent to 3DC cascade, but improves upon it by solving the issue of a local secondary site failure leaving the primary site unprotected. If the local secondary site fails, replication between the primary and remote secondary site can continue uninterrupted.

3DC multi-target may also be configured using Universal Replicator instead of TrueCopy. Symmetric configuration and operation simplifies component failure and site failure since both are handled with a single set of operations. Consider this option if simplicity of operation is the first priority.

About three datacentre multi-target with delta

Three datacentre multi-target with delta

Three datacentre (3DC) multi-target with delta is an improvement on 3DC multi-target that provides on-going protection even in the event of a failure at the primary site.

A replication from the production site to the local secondary site is configured using Universal Replicator. A replication from the production site to the remote secondary site is configured using Global-Active Device or TrueCopy. Additionally, a suspended, asynchronous Universal Replicator (delta-UR) replication is established between the local and remote secondary sites. The local and remote secondary sites will be near identical once pairing with the primary site is complete. Differences that appear between the secondary sites over time, due to a number of factors, are tracked by the suspended delta-UR replication.

Failure of the local or remote secondary site is handled in the same way as for 3DC multi-target, in that the primary site remains protected by the surviving secondary site.

In the event that the production site fails, the local secondary site can take over. The local secondary site is then rapidly brought under the protection of the remote secondary site by resuming the suspended delta-UR replication. Because only the deltas between the local and remote secondary sites need to be resynchronized, this pairing takes only a short time to achieve, meaning that the local site only remains unprotected for a brief period. Without the pre-existing, suspended delta UR between the local and remote secondary sites, it could take hours or even days to establish a replication between the secondary sites, leaving the local secondary site vulnerable for an extended period while this takes place.

About static clones of a clone

Static clones of a clone
GUID-CA299E42-D0A4-4F2C-83CC-928F1C0E3FBD-low.png

Static clones of a clone enables multiple repurposing with heavy workloads (read/write I/O).

A 1st level clone (golden image) is created based on a schedule. The golden image can then be manually replicated as 2nd level clones for repurposing. The 2nd level clones need to be static because the re-purpose servers require static access points (Port, Host Group and LUN, although the LUN may not remain constant depending on the mount host OS) for mounting the volumes.

If re-purposing beyond the ShadowImage limitation of two 2nd level clones is required, use Thin Image on the 2nd level.

About clone with replication

Clone with replication
GUID-BB6D0EAA-8836-443E-9C2A-F137E7679FC7-low.png

Clone with replication enables disaster recovery and local backup and/or local re-purposing.

A replica of the production data is maintained on the remote site using TrueCopy, Universal Replicator or Global-Active Device. The remote replication is used for disaster recovery.

Local clones are created based on a schedule using ShadowImage. These clones can be used for fast operational recovery or repurposing.

About snapshot with replication

Snapshot with replication
GUID-2861F3A5-49BC-4458-8B70-58E60F1E9B44-low.png

Snapshot with replication enables disaster recovery and local backup.

A replica of the production data is maintained on the remote site using TrueCopy, Universal Replicator or Global-Active Device. The remote replication is used for disaster recovery.

Local snapshots are created based on a schedule using Thin Image. These snapshots can be used for fast operational recovery.

About snapshot of a clone

Snapshot of a clone
GUID-720A79CF-BEE9-4DDE-9EFB-941D187A51D7-low.png

Snapshot of a clone enables ad-hoc backup for repurposing.

A clone is created using batch or continuous ShadowImage and is made available for repurposing.

A snapshot is taken so that the clone can be reverted quickly if required. Reversion may be needed when performing recurrent testing or configuration change/patching (that may fail) on the repurposed clone.

About replication of a clone

Replication of a clone
GUID-08726BCC-3EC6-45D0-91EA-77BD70656AC9-low.png

Replication of a clone enables remote backup without DR.

An in-system clone is created using ShadowImage based on a schedule. The cloned point-in-time image is then replicated using TrueCopy or Universal Replicator. Having an intermediate clone means that the replication process does not have any impact on the production volume. The replication is performed as a batch copy, so only a limited RPO is achievable (typically a few hours). For this reason, this technique is not common for high end storage systems.

About remote snapshot

Remote snapshot
GUID-426ACCA5-3CD3-47CE-97A1-2E3F1F8DBFB7-low.png

Remote snapshot enables disaster recovery with remote backup.

A replication is created using TrueCopy, Universal Replicator or Global-Active Device to maintain a replica image of production data on the remote site.

A snapshot is created from the remote replica based on a schedule.

This achieves a high level of disaster recovery, while remote site snapshots enable quick operational recovery even during disaster recovery.

About local and remote snapshots

Local and remote snapshots
GUID-6E8A1992-C39C-41D4-83B6-266CEE784B64-low.png

Local and remote snapshots enable disaster recovery with backups.

A replication is created using TrueCopy, Universal Replicator or Global-Active Device to keep a replica image of production data on the remote site.

Snapshots of the local production data and remote replica are created based on the same schedule.

This keeps the data consistent on both sites, simplifying the process of operational recovery during site failover and contributing to a better RTO.

About remote clone

Remote clone
GUID-E43F7FB1-15B2-4ADC-B1FC-0CDD807D0C56-low.png

Remote clone enables disaster recovery with remote backup and/or remote repurposing.

A replication is created using TrueCopy, Universal Replicator or Global-Active Device to maintain the replica of production data on the remote site.

Clones of the remote replica are created using ShadowImage based on a schedule.

Replication achieves the highest level of disaster recovery, while remote snapshots/clones enable quick operational recovery even during disaster recovery. Also, repurposing with remote snapshots/clones avoids any performance impact to the production site.

About local & remote clones

Local & remote clones
GUID-1E47B802-E488-4B79-907B-810248E5E105-low.png

Local and remote clones offer the same benefits as local and remote snapshots, with the additional benefit of protecting the data from physical failure of the production or replica volumes.

Note that local and remote clones do not allow the user to recover to a point in time. The clone is completely refreshed so that it is equal to the primary or secondary volume; operational recovery is therefore more limited.

About local snapshot and remote clones

Local snapshot and remote clones
GUID-274817FC-00A5-4A6D-B2D5-0BAD10F4C2C4-low.png

Local snapshot and remote clones enable disaster recovery with tiered backup.

A replication is created using TrueCopy, Universal Replicator or Global-Active Device to keep a replica image of production data on the remote site.

Local snapshots of the production data and remote clones of the replica are created using Thin Image and ShadowImage respectively, based on different schedules.

Keeping different data on each site enables quick recovery on the production site, while satisfying the long-term protection on the remote site.

This scenario can be flipped (i.e. local clone and remote snapshot) so that the clone is maintained at the local site and the snapshots at the remote site. Protector also supports snapshots and clones on the local and remote site concurrently.

About Hitachi Block replication adoption

Ops Center Protector can adopt and manage ShadowImage, TrueCopy, Universal Replicator and Global-Active Device replications that already exist on Hitachi Block storage hardware. An adopted replication can then be managed via the GUI.

To adopt a replication, the user must specify a replication policy that identifies the required source LDEV(s) or Host Group, draw an appropriate data flow that identifies the source and destination storage devices and the replication type (and mirror number for SI and UR), mark the policy for adoption, and then activate the rules.

Note Any classification can be used to specify the source LDEV(s), including Application and Filesystem Path. Protector will attempt to resolve and adopt them. This is subject to existing limitations.
Adopted replications can be augmented with user defined replication and snapshot operations.

A replication can be dissociated from Ops Center Protector without being removed from the storage hardware.

CautionBe aware of the difference in semantics between dissociating and removing replications from data flows:
  • Dissociating a replication will leave that replication intact on the hardware.
  • Removing a replication from a data flow and redistributing the rules will cause that replication to be torn down on the hardware.
NoteThe following apply when adopting replications:
  • Replications can be adopted from any supported block storage systems.
  • In-system, 2DC and 3DC replications are supported.
  • All valid replication data flows including cascades and multi target are supported.
  • The user must understand and create the data flow prior to adopting.
  • The user needs to know the type of replication that is to be adopted in addition to the Mirror Unit Number. The remaining properties will be discovered from storage.
  • At least one existing replication pair must exist on the selected mirror.
  • The replication being adopted must be in the same direction as the one defined in the data flow, i.e. it cannot be in the reversed flow state.
  • Refreshed Thin Image replications cannot be adopted.
  • The source and destination journals specified must match those of the Universal Replicator replication being adopted.
  • Primary volumes replicating on different mirror unit numbers are not supported.
  • Adopting by copy groups or device groups is not supported.
  • There is no check for attempting to adopt the same hardware pairs on multiple, active, coexisting data flows.
  • Limitations for other features still apply if they are relevant to the adopted replication.

When adopting replications Ops Center Protector will behave in the following ways depending on the policy and data flow attributes supplied by the user when attempting to perform the adoption process:

Replication Policy and Data flow Configuration Ops Center Protector's Behaviour
Any
  1. Primary volumes that are not being replicated on the specified mirror will have a secondary volume provisioned and that pair is added to the replication set while respecting the replication type, journal, CTG, and fence level options.
  2. Adopted replications are flagged as such in the hardware resource information along with their CTG ID, journals, mirror unit numbers, and fence levels.
Any
  • If the replication is found to be in PAIR and the Protector mover type is Batch then the replication will be suspended.
  • If the replication is found to be in PSUS/SSUS and the Protector mover type is Continuous then the replication will be resumed.
The user does not select a pool, but does select a mirror unit number
  1. If the mirror unit is assigned, adopt the replication pairs on the mirror.
  2. If the mirror unit is assigned but there are one or more P-VOLs not replicating on that mirror, create S-VOLs for those P-VOLs, in the pool used by the existing S-VOLs (or error if the S-VOLs exist in more than one pool).
  3. If the mirror unit is unassigned, log the error "Cannot provision, no pool selected".
The user selects a mirror number that is not supported for the selected replication type. Valid mirror numbers are:
  • ShadowImage: 0, 1 or 2
  • TrueCopy: 0 only
  • Universal Replicator: 0, h1, h2 or h3
  • Global-Active Device: 0 or h1
Log the error "Could not determine pool/journal/quorum to use. Ensure that there is at least one existing pair of matching replication type"
Note

When Protector attempts to adopt a TC or GAD replication, it will detect UR pairs on mirror number 0 and, if present, log the error:

Handler 'HitachiVirtualStoragePlatform' call failed: [TrueCopy|GAD] mirror for one or more adopted pairs already in use by UR.

See above for more limitations relating to mirror unit and replication type combinations.
The user changes the mirror number after initial data flow activation The user will be warned via the GUI that the following actions will be taken before they reactivate the rules:
  1. Volumes and relationships that have been adopted will be unadopted
  2. Volumes and relationships that have been created by the user will be destroyed
  3. Replications will be re-adopted and created based on the new mirror number
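
The adoption behaviour summarised above follows a small amount of decision logic: validate the mirror unit number against the chosen replication type, then decide where any missing S-VOLs are provisioned. The following is a minimal Python sketch of that logic for illustration only; it is not Protector's implementation, and the function name and inputs are assumptions.

    # Illustrative sketch only (not Protector code) of the adoption checks above.
    VALID_MIRROR_UNITS = {
        "ShadowImage": {0, 1, 2},
        "TrueCopy": {0},
        "Universal Replicator": {0, "h1", "h2", "h3"},
        "Global-Active Device": {0, "h1"},
    }

    def select_pool_for_adoption(replication_type, mirror_unit,
                                 existing_svol_pools, user_selected_pool=None):
        """Validate the mirror unit and decide where missing S-VOLs are provisioned."""
        if mirror_unit not in VALID_MIRROR_UNITS[replication_type]:
            raise ValueError("Could not determine pool/journal/quorum to use. Ensure that "
                             "there is at least one existing pair of matching replication type")
        if user_selected_pool is not None:
            return user_selected_pool
        if len(existing_svol_pools) == 1:
            # P-VOLs not yet replicating get S-VOLs in the pool used by the existing S-VOLs.
            return next(iter(existing_svol_pools))
        if not existing_svol_pools:
            raise ValueError("Cannot provision, no pool selected")
        raise ValueError("Existing S-VOLs exist in more than one pool")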

Data Flows UI Reference

This section describes the Data Flows UI, accessed via the Navigation Sidebar.

For further information, refer to:

Data Flows Inventory

This inventory lists all defined Data Flows whether they are active, inactive or under construction.

Data Flows Inventory
GUID-BA139924-E08F-426A-8271-4D4F7ACC1172-low.png
ControlDescription
GUID-2DB31664-7FB9-441F-8595-06A8E5A178EF-low.png EditEdits an existing data flow in the inventory. The Data Flow Wizard is launched to enable the data flow's attributes to be changed.
NoteIf an active data flow has been modified, but has not been reactivated, a warning triangle will be displayed in the Status field on the corresponding tile.
GUID-E5F1CBC8-471E-4699-9E6D-E16DF64C3EA3-low.pngTagModifies the tags of an existing object from either the inventory screen or the details screen of the object.
GUID-CF9E13BB-BA11-404F-AB2E-90527141B614-low.png ActivateEnabled only when one or more data flows are selected. Displays the Activate Data Flow Dialog and attempts to compile the rules for the selected data flows. If compilation is successful then the rules can be activated.
CautionActivate data flows in batches not exceeding 20 data flows at a time. Activating more than this simultaneously can result in longer activation times.
GUID-E40CF703-AA92-4BE5-89B4-0D7932D703A1-low.png DeactivateEnabled only when one or more data flows are selected. Deactivates the selected data flows.
CautionIf the deactivated data flows contain storage hardware based operations, replications will be placed in the eligible for tear down state.
GUID-C06C9D94-4B99-4317-AFE8-EF7DB67C63CB-low.png DeleteEnabled only when one or more data flows are selected. Deletes the data flow from the inventory. Active data flows cannot be deleted.
GUID-548F4350-6272-4AC7-AA5F-BEA9EF503E8F-low.png AddCreates a new Data Flow. Launches the Data Flow Wizard to guide you through the process.
GUID-C372D1CB-EFFE-48C3-B9CE-B4D8C812A1F2-low.png Existing Data FlowClick the data flow name to open the Data Flow Details to enable you to view and edit the data flow.
Filter on NameFilters the displayed results based on Data Flow Name.
Filter on User TagsFilters the displayed results based on Tags contained in the data flow.
Filter on NodeFilters the displayed results based on Node Name contained in the data flow.
Filter on PolicyFilters the displayed results based on Policy Name contained in the data flow.
Filter on Active StateFilters the displayed results based on the active state of the data flow.

Activate Data Flow Dialog

This dialog is displayed when one or more data flows are activated.

Rules files control the operation of the Ops Center Protector components on each node in the data flow definition. Rules files are generated by the Rules Compiler after policies have been created, the data flows have been defined and policies assigned to source and destination nodes.

CautionIf you modify a policy or data flow that is currently active, then the data flow must be reactivated before your changes will take effect.
Activate Data Flow(s) Dialog (Compilation succeeded)
GUID-757EE9D4-8276-4F49-9CFC-AAD5E714C7E8-low.png
Activate Data Flow(s) Dialog (Compilation succeeded with warnings)
GUID-1FB3615B-F6BD-45BC-9E61-93342657AAC0-low.png
Activate Data Flow(s) Dialog (Compilation failed)
GUID-0670B03D-7EF4-4738-BDC3-4AAE95128ABA-low.png
ControlDescription
Compilation Details

This lists the output from the rules compiler and shows error, warning and information messages.

If the compilation is successful, it includes a summary of what policies each node in the data flow will be implementing, forwarding or processing. The data flow can be activated.

If the compilation is successful but contains warnings, then the information contained here will assist you in resolving potential issues with the data flow and/or policies. The data flow can be activated.

If the rules fail to compile then the information contained here will assist you in resolving issues with the data flow and/or policies. You may need to iterate through several compilation cycles to remove all compilation errors. The data flow cannot be activated.

CautionAlways inspect the compiler output. A successful compilation may contain warning messages which you should review. Successful compilation only indicates that the rules are valid; it does not imply that they are verified against your data protection requirements. Please regularly inspect the Default Dashboard and related status screens to ensure that your data is being protected in the manner you intended.

Compiler errors are commonly caused by:

  • incorrect or incomplete policies
  • incomplete data flow definitions
  • incorrect or incomplete data flow item attributes
  • incomplete or ambiguous routes for policies
  • missing parameters on destination nodes
  • adding, editing or deleting policy classifications or operations without updating data flows
Compilation OutcomeIndicates one of the following:
  • Blue - the rules are being compiled. Please wait.
  • Green - the rules have been compiled successfully. The data flows and policies are valid (but not necessarily correct) and can be activated.
  • Amber - the rules have been compiled successfully with warnings. The data flows and policies are valid (but not necessarily correct) and can be activated. The data flows and/or policies contain warnings that may need to be rectified.
  • Red - the rules have not been compiled. The data flows and/or policies contain errors that must be rectified before the data flows can be activated.
ActivateThis button is only enabled once the rules have compiled successfully. Click to distribute the rules to the affected nodes and activate them. The policies will then take effect depending on the availability of affected nodes, operations, trigger conditions or schedules defined. The Logs Inventory can be used to watch and review the progress of data flow activation.
Note

Rules activation methods depend on the type of policy:

  • Host based:

    An initial 2 minute rules settlement period is applied after activation to allow rules to reach all participating nodes. If any operation is triggered within the settlement period it will be deferred until after it has expired.

  • Block based:

    If the current rules are null or they predate the new rules, the ISM will wait up to 2 minutes for a new rule set to arrive. An information level log will be generated if this delay is invoked:

    Storage proxy node does not have required rules, waiting up to <DELAY> seconds.

    If, after waiting, the rules are still out of date, the ISM will attempt to use the rules it currently has.

  • All:

    If a node in the data flow definition is unavailable, then the rules files cannot be sent to that node; activation continues for the available nodes. When the activation is complete, the new rules are activated on all available nodes. When the absent node becomes available and reconnects to the master, the new rules are sent automatically to that node and activated immediately after they are received.

    If any nodes are deleted from a previously activated data flow definition, then these nodes are deactivated when the new rules are activated. If any such node is not currently available, it is deactivated as soon as it reconnects to the master.
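
As a rough illustration of the host based rules settlement behaviour described in the note above, the following Python sketch defers any operation triggered inside the two-minute settlement window until the window expires. It is illustrative only and is not Protector code.

    # Illustrative sketch only: defer operations triggered during the settlement period.
    SETTLEMENT_SECONDS = 120  # the 2 minute settlement period described above

    def effective_start_time(trigger_time, activation_time, settlement=SETTLEMENT_SECONDS):
        """Return when a triggered operation should actually run (times in epoch seconds)."""
        settlement_end = activation_time + settlement
        if trigger_time < settlement_end:
            return settlement_end  # deferred until the settlement period expires
        return trigger_time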

Data Flow Wizard

This wizard is displayed when a new data flow is being created.

The Data Flow Wizard performs two principal functions:

  • Defines the routing of policies that move data from source nodes to destination nodes.
  • Assigns policies to nodes on your network, defining what data to back up and the methods of protection to employ.
Data Flow Wizard - Name
GUID-D1096BCC-8748-427D-B03B-35C2E3F2C237-low.png
ControlDescription
NameEnter a name for the Data Flow.
DescriptionOptional. Enter a short description of the Data Flow.
TagsAdd the tags to be associated with the object being created.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Add Tags - Edit User Tags
GUID-6FC23BD2-4900-4358-8126-A522EEE0BB28-low.png
ControlDescription
Edit TypeEnter the Edit Type.
TagsAdd the tags to be associated with the object being created.
CancelDiscards all changes and reverts to the previous page.
ApplyCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
Dataflow Wizard - Allocate Dataflow to Resource Group
GUID-FA6B0502-751B-4CCA-BA07-D4D512E68BC3-low.png
ControlDescription
Resource GroupsAllows the user to view the access permissions for those items granted to specific users and groups.
NoteA single Data Flow can be assigned to multiple resource groups.
Data Flow Wizard
GUID-F1B3D6F4-18E8-4E91-84E4-E8B1B555946C-low.png
Nodes Tab
ControlDescription
GUID-B5E256D3-F815-49A9-8EA5-DC99F840B208-low.png Filter on Node NameFilters the displayed results based on Node Name.
Filter on Node TypeFilters the displayed results based on Node Type.
Nodes ListLists the available source and destination nodes that can be dragged onto the Data Flow Workspace.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
Node Groups Tab
ControlDescription
GUID-B5E256D3-F815-49A9-8EA5-DC99F840B208-low.png Filter on Group NameFilters the displayed results based on Node Group Name.
Node Group ListLists the available node groups that can be dragged onto the Data Flow Workspace.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
Data Flow Workspace and Applied Policies
ControlDescription
GUID-419B679D-6360-4BA2-BF08-44CAE6B97C41-low.pngConnect ToEnabled only when a single node is selected on the diagram. Click to create a connection from the selected node to another node on the diagram. When the connector is displayed, click again on the destination node to complete the connection. By default the Mover Type is set to Batch; the properties associated with each connection can be edited via the Mover Settings controls (see 'Mover Settings' controls below). Only compatible elements can be connected.
TipConnections can also be created by dragging a destination node from the node list, over a source node to pick up the connector and then dropping it at the desired location. Alternatively you can drag a source node already on the diagram, over a destination node on the diagram and drop it there.
GUID-C06C9D94-4B99-4317-AFE8-EF7DB67C63CB-low.png DeleteEnabled only when one or more nodes or movers are selected on the diagram. Removes the selected node(s) or mover(s) from the diagram. You can also press the Delete key.
GUID-EC8C73D0-3181-4DF7-B9D8-563FB0D463B9-low.png Node or Node Group IconEach node type is represented by a different icon (see Node Type Icons). The currently selected node is enclosed in a box.

TipIf a warning condition is detected whilst constructing a data flow (e.g. a policy has only been assigned to a source node) then a red warning triangle GUID-533055E2-4720-4869-B9E3-E31DD8F336D0-low.png will appear next to the offending node.
Multiple SelectionClick and drag the cursor over nodes on the workspace to select multiple nodes and movers. Alternatively press and hold CTRL and click multiple nodes.
Select AllClick on the workspace then press CTRL+A to select all nodes and connectors.
Drag and DropSelect and drag one or more nodes or movers to reposition them on the workspace.

If the user moves a node, the icon will appear green if the node is dragged over another node it can be connected to, or red if not.

If the user moves multiple items, the icons will not change color as creating connections from/to multiple items at once is not supported.

If the user attempts to drop multiple items over the top of another item, the action will be canceled.

GUID-C8236C3B-AAB6-4F3F-AD17-2A4BC5E2CEEA-low.png Next NodeClick to move the focus to the next node on the data flow.
GUID-0DE6C1C8-A426-4DA3-97FE-55383D07A8E9-low.pngGUID-54D365E9-A9C2-4089-9004-22885F825538-low.png Zoom In/OutClick the buttons next to the workspace, press +/- on the keypad or hold down the CTRL key whilst using the mouse wheel to zoom in and out.

The current zoom level is displayed between the zoom buttons. Click the zoom level to reset to 100% zoom.

GUID-66D922BA-13C6-4198-94A2-80CDA05B726D-low.png Fit to ScreenClick the button next to the workspace or press the HOME key to select a zoom level that allows the entire data flow to fit within the bounds of the workspace.
PanRight click anywhere on the workspace and drag the cursor.
Applied Policies (all)If no nodes or movers are selected on the data flow workspace, then the area to the right of the workspace lists all the policies that have been applied in the data flow. Click the GUID-A4CD3CA5-3CA1-4C32-9963-8D3E008064C5-low.png button next to the policy name to open the Policy Details for that policy in a new browser tab.
Data Flow Wizard - Source Node Policies
GUID-A4D346E7-0F26-4891-8146-5BD931FECAA1-low.png
Policies
ControlDescription
GUID-DABCC987-1FB4-4099-BADF-625B731DEE4D-low.png Policies (source node selected)If a single source node is selected on the data flow canvas, then the area to the right of the canvas lists all the policies that can be or are applied to that node. Apply policies to the node by clicking the required policy names.
NoteIf the selected policy requires a destination node to complete the policy definition, the source node displays a warning symbol to indicate that the policy definition is incomplete.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
Data Flow Wizard - Mover Settings
GUID-68F0C826-904D-432C-82B1-CC73C95929BB-low.png

Movers are connectors that represent how data is transferred from source node to destination node.

The mover provides options on how to route data for policies. If a source node is configured to implement a policy, then any movers connected to it automatically route that policy. If a mover is not configured to route a policy, then the nodes further downstream are not able to either configure or implement the policy.
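
One way to picture this routing rule is shown in the following Python sketch. It is purely illustrative; the node and mover structures are assumptions, not Protector's data model.

    # Illustrative only: a policy reaches a downstream node only if it is
    # implemented upstream and every mover along the path routes it.
    def policies_reaching(node, implemented, movers):
        """implemented: {node: set(policies)}; movers: [(source, dest, routed_policies)]."""
        available = set(implemented.get(node, set()))
        for source, dest, routed in movers:
            if dest == node:
                upstream = policies_reaching(source, implemented, movers)
                available |= upstream & set(routed)
        return available

    # Example: the source host implements "Gold"; the mover routes it to the repository.
    print(policies_reaching("repository", {"host": {"Gold"}},
                            [("host", "repository", {"Gold"})]))  # -> {'Gold'}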

Mover Settings
ControlDescription
GUID-DABCC987-1FB4-4099-BADF-625B731DEE4D-low.png Routed Policies (mover selected)If a mover is selected on the data flow canvas, then the area to the right of the canvas lists all the policies that are routed by that mover.
Transfer TypeChanges the mover type to one of the following:
  • Batch (solid line) - data is moved in batches based on a schedule or trigger event.
  • Continuous (dotted line) - used only for Hardware operations.
  • Failover (dashed line) - used only for 3DC delta resync data flows. Defines a suspended replication path, between the secondary and tertiary site, that can be invoked if the primary site fails.

The arrowhead indicates the direction of data flow during normal backup operations.

NoteIt is important to use a mover type that is compatible with the operation type specified in the Policy. This requires an understanding of the snapshot and replication technologies being used. If an incompatible mover type is used, then you will only be notified about the error when you attempt to compile the rules. To change the mover type for a replication that is already on an active data flow:
  1. Remove the replication from the data flow.
  2. Reactivate the data flow.
  3. Replace the replication on the data flow using the required mover type.
LabelBy default the mover label displayed on the connector is empty. This can be replaced with a label describing the connection on the data flow. This feature can be useful in situations where there is more than one mover connected to a node.
Enable network data compressionFor Host based operations only (ignored for Hardware based operations).

Turns on data compression on the datalink between two nodes.

NoteCompression decreases network utilization, but increases CPU usage.
Bandwidth SettingsFor Host based operations only (ignored for Hardware based operations).

Opens Mover Bandwidth Settings Dialog to enable you to control the amount of bandwidth that data transfers use so that a network connection can be used simultaneously with other applications in the environment.

CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
Data Flow Wizard - Destination Node Policies
GUID-83944B36-D8E4-47AE-9F4E-BBE666FDF7E3-low.png
Policies
ControlDescription
GUID-DABCC987-1FB4-4099-BADF-625B731DEE4D-low.png Policies (destination node selected)If a single destination node is selected on the data flow canvas, then the area to the right of the canvas lists all the policies and their contained operations that can be or are applied to that node. Apply operations to the node by clicking the required operation names. A dialog opens, appropriate to the destination node type, that allows those properties to be specified (see Configure Operation Properties).

The policies listed here are restricted to those that have been applied to upstream nodes and those that the destination node has the appropriate capabilities to receive.

NoteA single policy can have several operations within it for a given data classification. The destination indicates which of those operations it can implement by displaying only the supported operation check boxes. The policy name check box cannot be selected by the user; the individual operations within the policy must be selected.
Configure Operation PropertiesWhen an operation is applied to a destination node you may need to specify properties for that operation. Click this button to open a dialog, appropriate to the destination node type, that allows those properties to be specified:

Once the properties are specified, the button is replaced with a summary of the properties. Click the Edit button next to the summary to reopen the dialog to change any properties. The operation type is displayed in italics next to the mover along with the user defined label.

NoteFor Hitachi Block Replication Operation Properties:

Once the operation properties have been configured and the data flow has been activated, the properties cannot be changed. Redistributing rules for the active data flow with edited properties will not change them.

To change the operation properties, the existing data flow must first be deactivated then reactivated with the new properties. Deactivating the data flow will mark the replication eligible for tear down.

CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
Mover Bandwidth Settings Dialog

This dialog is used to specify when and how much bandwidth can be used by Protector to transfer data across the datalink between two nodes.

You can assign specific bandwidth constraints for defined time windows so that a network connection can be used simultaneously with other applications in the environment. For instance, you can decide to decrease bandwidth during the workday when bandwidth is required for production and increase it at night to allow backups to utilize maximum bandwidth.

Note If the amount of data that must be replicated continuously exceeds the current bandwidth quota, then data is cached on the source until the bandwidth increases. Cache size on the Source Node can be adjusted to avoid any problems with non-transferred data reaching the cache limit. The configuration file distributor.cfg contains the values MaxDiskCache and MaxMemoryCache which can be increased if the cache limit becomes a problem.
Bandwidth Settings Dialog
GUID-27ECFAB6-1673-4DD7-830F-B78575FAEF93-low.png
ControlDescription
Week GridClick the cells in the grid, corresponding to the hour-long periods where you want to throttle network bandwidth used by Protector, to one of the following levels (cells cycle through the three states each time they are clicked):
  • High - (Dark Green) High Speed throttling at the setting defined below.
  • Low - (Light Green) Low Speed throttling at the setting defined below.
  • None - (White) Default Speed throttling at the setting defined below.
Unlimited/ThrottledSets the default speed to the maximum allowed (Unlimited) by the network or to a predefined level (Throttled).
Default SpeedSpecifies the default throttle level. This may be set to any value including 0.
High SpeedSpecifies the high throttle level. This must be set to a non-zero value and greater than the Low Speed level.
Low SpeedSpecifies the low throttle level. This must be set to a non-zero value and less than the High Speed level.
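
One possible way to think about the week grid and throttle levels described above is as a simple lookup of a 7x24 schedule. The following Python sketch is purely illustrative; the speeds and data structure are assumptions, not Protector's configuration format.

    # Illustrative only: a 7x24 grid of throttle levels and a lookup that resolves
    # the limit for a given hour ("none" falls back to the default speed).
    LEVEL_SPEEDS = {"high": 100, "low": 10}  # example Mbit/s values (assumptions)

    week_grid = [["none"] * 24 for _ in range(7)]   # days 0-6, hours 0-23
    for day in range(0, 5):                         # throttle working hours Mon-Fri
        for hour in range(9, 18):
            week_grid[day][hour] = "low"

    def bandwidth_limit(day, hour, default_speed=None):
        """Return the throttle in Mbit/s, or None for unlimited."""
        level = week_grid[day][hour]
        return LEVEL_SPEEDS.get(level, default_speed)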
Generation 1 Repository Backup Configuration Dialog

This dialog is displayed when you assign an operation to a Generation 1 Repository node on a data flow.

When using a repository as a destination for a data flow a number of configuration options are available. These destination options for a repository node are contained within Store Templates. Each repository node can be associated with multiple store templates. There are two default store templates (Standard and Deduplicated), however additional templates can be created.

Repository Backup Configuration Dialog
GUID-4F257CD8-668C-45A1-9806-78D7444D8533-low.png
ControlDescription
Select Destination TemplateEnter or select a Destination Template from the dropdown list. Once selected, the settings of the store template are displayed below.
Manage Store TemplatesClick this link to add to or edit the available destination templates. The Generation 1 Repository Destination Templates Inventory is opened in a new browser tab.
NoteOnce you have finished adding or editing templates in the new tab, simply close the tab and continue working with the Operation Properties Dialog. Any changes you have made to the templates will be applied, although they may not appear in the dialog unless you re-select the template.
SettingsLists the settings for the selected template.
Generation 1 Repository Destination Templates Inventory

This inventory is displayed when managing Destination Templates for a Generation 1 Repository node.

Destination Templates Inventory
GUID-197E9C0B-FC98-4FB3-BC8A-0D0A7DB0958C-low.png
ControlDescription
GUID-2DB31664-7FB9-441F-8595-06A8E5A178EF-low.png EditEdits an existing template in the inventory. The Destination Template Wizard is launched to enable the template's attributes to be changed.
GUID-6B363DCE-3699-4730-A0EE-E3237A04681E-low.png Edit PermissionsEdits an existing template's access permissions. The Access Control Permissions Inventory is launched to enable the template's access permissions to be changed.
GUID-C06C9D94-4B99-4317-AFE8-EF7DB67C63CB-low.png DeleteEnabled only when one or more Templates is selected. Deletes the selected item from the inventory.
GUID-548F4350-6272-4AC7-AA5F-BEA9EF503E8F-low.png AddCreates a new Template. The Destination Template Wizard is launched to guide you through the process.
GUID-E8633D6B-CD31-44C5-A31B-ADAEF2B90A86-low.png System Generated TemplatesAt least two system generated Templates are available when the product is installed. These Templates cannot be deleted since they provide basic functionality. System generated Templates are marked with a GUID-4A766277-67FD-48D2-AEF7-AF06E238CDCE-low.png icon to indicate that they cannot be modified. Click the template's name to open the Destination Template Details to enable the parameters to be viewed.
GUID-E8633D6B-CD31-44C5-A31B-ADAEF2B90A86-low.png User Defined Template(s)Any number of user defined Templates can be created. These are displayed in the inventory. Click the template's name to open the Destination Template Details to enable the parameters to be viewed.
Filter on Template NameFilters the displayed results based on the template name.
Filter on Template TypeFilters the displayed results based on the template type.
Destination Template Wizard

This wizard is displayed when a new Destination Template is being created.

Destination Template Wizard - Specify name
GUID-2C0C2D0D-EC1A-4530-B0D4-4CE4654D7226-low.png
ControlDescription
NameEnter a name for the template.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Destination Template Wizard - Allocate Destination Template to Resource Group
GUID-EF13BA9F-0AA2-4584-A0F6-B8130C710850-low.png
ControlDescription
Resource GroupsIt allows the user to view the access permissions for those items granted to specific users and groups.
NoteA single Destination Template can be assigned to multiple resource groups.
Destination Template Wizard - Configure destination template
GUID-A1874040-807D-495C-8765-8FB574D000A0-low.png
ControlDescription
Source side deduplicationWhere many machines have identical roles within the data flow and also contain very similar data (for instance, OS data and installed software on machines on a corporate network), a network speedup can be obtained by avoiding sending the same data multiple times. Check this option to make use of this.
Fast incremental based on file modification dateIf this is checked then Ops Center Protector decides what files need resynchronizing based on whether the modification date has changed. This reduces the time taken to resynchronize, but can be disabled if it is known that software is installed that will modify files without updating their size or modification date.
NoteIf only file metadata changes between batch backups (e.g. file ownership or file permissions), then the changes are not captured. These changes are only captured when the file data changes.
Fine change detectionReduces the amount of data transferred and stored during a resynchronization. An entire file is transferred if it has changed and is less than one block in size. (This option should be used sparingly as there is a processing overhead.)
Deduplicate snapshotsEnables the storage group to deduplicate data across snapshots so the storage group only stores a single copy of the data. This option has a processing overhead.
Preserve hard linksPerforms checks so that only one instance of the data is stored, regardless of the number of links pointing to it. This option increases the file system scan time during a resynchronization or batch backup.
NoteAs of Protector 6.5, this option is enabled by default. This is not retrospective and will only apply when new stores are created.
Files excluded from global deduplicationA semicolon-separated list of file extensions to be excluded from the store group’s duplication detection can be entered here.
Automatic ValidationAutomatically resynchronize source to the destination based on a schedule. One of the following options can be selected:
  • Only if required – Will trigger resynchronization only if the destination is out of sync with the source nodes.
  • Always – Will always perform a resynchronization with the source nodes based on the schedule.
  • Never – An automatic validation will never be performed. Checking this option will disable the remaining options.
Select a ScheduleSelect a schedule to trigger automatic validation.
NoteSchedules are created for automatic validation the same way as schedules for a policy. A schedule for automatic validation must use a Trigger; by default a pre-created schedule named Trigger at 4 AM every day is selected.
Check all files during validationIgnores file modification date and checks every file for changes.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new settings. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
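
As a rough illustration of the "Fast incremental based on file modification date" option described in the table above, the following Python sketch compares a file's current size and modification time against values recorded at the previous backup. It is illustrative only; the catalogue structure is an assumption, not Protector's internal format.

    # Illustrative sketch only of modification-date-based change detection.
    import os

    def needs_resync(path, catalogue):
        """catalogue: {path: (size, mtime)} recorded at the last backup."""
        current = os.stat(path)
        recorded = catalogue.get(path)
        if recorded is None:
            return True                       # new file: always copied
        size, mtime = recorded
        # Only size or modification-date changes trigger a copy, so metadata-only
        # changes (ownership, permissions) are not captured, as the note above states.
        return current.st_size != size or current.st_mtime != mtime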
Destination Template Details

This page displays the details of a Destination Template and enables you to launch the wizard to edit them.

Destination Template Details
GUID-E74D16AD-35FB-4A16-BC45-B5322C95C03D-low.png
ControlDescription
GUID-2DB31664-7FB9-441F-8595-06A8E5A178EF-low.png EditLaunches the Repository page of the Destination Template Wizard to enable you to edit the template.
GUID-6B363DCE-3699-4730-A0EE-E3237A04681E-low.png PermissionsDisplays the Access Control Permissions Inventory to enable you to view and edit the template's permissions.
Repository Destination SettingsThese are the settings entered via the Repository page of the Destination Template Wizard when the Template was created.
Automatic ValidationThese are the settings entered via the Repository page of the Destination Template Wizard when the Template was created.

Click on the Schedule name to open the Schedule Details in a separate browser tab.

Hitachi Block Snapshot Configuration Wizard

This dialog is displayed when you assign a snapshot operation to a node that hosts data stored on a Hitachi Block device.

CautionWhen a snapshot operation runs, Protector locks the meta_resource of the Block device until the operation completes. Block operations are queued waiting to get the resource lock; this can impact RPO.
Note

All replication and snapshot S-VOLs must be created using free LDEV IDs that are mapped to the meta_resource group, and have virtual LDEV IDs matching their corresponding physical LDEV IDs.

For fully provisioned snapshots and all replications, this applies to the operation that creates that snapshot or replication.

For floating device snapshots and snapshots mounted using cascade mode, this applies to the mount or restore operation.

For fully provisioned snapshots mounted using cascade mode, this applies both to the operation that creates that snapshot and to the mount or restore operation.

If an operation tries to create one or more LDEVs, that operation will fail if there are not enough free LDEV IDs that meet the above conditions.

Note
  • The classifications that are applicable to Hitachi Block Snapshot operations are Path, Application, Hypervisor and Hitachi Block.
  • Protector does not support any Hitachi Block storage LDEV with more than one partition or volume.
  • If Protector is subsequently uninstalled, the existing snapshots and mounted volumes are left in place.
Tip
  • The actions that are performed by Protector as it executes a backup policy are captured by the Log Manager. Detailed logs and their attachments can be exported as a text file. (See Export Logs Dialog for more information.)
  • Details of the hardware resources (LDEVs) used for a particular snapshot can be found in the Hitachi Block Device Details.
Snapshot Configuration Wizard - Differential snapshot (using Thin Image)
GUID-5B5A2141-27E2-4166-B057-0D3070214EBC-low.png
ControlDescription
Storage NodeSpecifies the target storage node where the P-VOL and snapshot S-VOLs are allocated.
Snapshot PoolSpecifies the target storage pool from which snapshot S-VOLs are allocated.

Select a Thin Image Pool or a Dynamic Provisioning Pool.

Caution

Filling a Thin Image pool to capacity will invalidate all snapshot data contained within that pool. All snapshots in the pool will have to be deleted before snapshotting can be resumed.

NoteThin Image and Dynamic Provisioning Pools must be created using Storage Navigator prior to selecting the Target Storage in Protector.
TipIf the P-VOL has an association with a VSM or you select a target storage pool associated with a VSM then Protector will attempt to make the virtual and the physical ID of the S-VOL holding the snapshot identical. For this to work, an LDEV with a physical ID within the defined virtual LDEV range for the selected VSM must be available for use.
Advanced ConfigurationClick to step through the advanced configuration option pages of the wizard, described below.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Snapshot Configuration Wizard - Configure Resource Group
GUID-728E2AC2-A03B-4B1C-A9E2-16985968447F-low.png
Specifies the resource group to be used for S-VOLs, in order to support snapshots and replications from VSM volumes (adding volumes to a VSM is performed by adding the volumes to the correct resource group).
ControlDescription
Automatically Selected Allows Protector to automatically select a resource group in the following order of priority:
  1. If there are existing S-VOLs, then the resource group used by those will be selected.
  2. The resource group used by the P-VOLs, if the replication is in-system and the P-VOLs are all in one resource group.
  3. Resource group 0.
NoteIf existing S-VOLs are in multiple resource groups, then the operation will fail with an error.
User Selected

Specify the Resource Group in the associated combo-box.

NoteIf there are existing S-VOLs, then the resource group used by those will be selected. If the existing S-VOLs are in multiple resource groups or in a resource group that contradicts the user specification, then the operation will fail with an error.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
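
The automatic resource group selection order listed above can be read as a simple priority rule. The following Python sketch is illustrative only; it is not Protector code and the parameter names are assumptions.

    # Illustrative sketch only of the automatic selection priority described above.
    def auto_select_resource_group(existing_svol_groups, pvol_groups, in_system):
        """Priority: existing S-VOL group, then P-VOL group (in-system only), then 0."""
        if existing_svol_groups:
            if len(existing_svol_groups) > 1:
                raise ValueError("Existing S-VOLs are in multiple resource groups")
            return next(iter(existing_svol_groups))
        if in_system and len(pvol_groups) == 1:
            return next(iter(pvol_groups))
        return 0    # fall back to resource group 0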
Snapshot Configuration Wizard - Provisioning Options
GUID-9BE21756-4ECD-46BE-B92E-19CBE739AF1A-low.png
Specifies how the snapshot is provisioned and its mode of operation. Snapshots are created as differential, in-system snapshots using Thin Image.
CautionBlock storage has a limit of 1024 snapshots per LDEV. Ensure that the RPO and Retention periods are set such that this limit is not exceeded (i.e. Retention / RPO is less than or equal to 1024).
ControlDescription
Consistency groupWhen performing a crash consistent snapshot (for example: using the hardware path classification), use this option to make it truly crash consistent. The hardware has a limited number of consistency groups available so they should be used sparingly.
Fully ProvisionedBy default, this option is deselected and the snapshot will be provisioned using floating devices. Where supported by the array, the storage device can store a larger number of snapshots using floating devices.
Cascade ModeThe snapshot can be cascaded. This allows Protector to mount either the original snapshot or a duplicate of the original. Mounting a duplicate enables modifications to be made without affecting the original snapshot.

Refer to About cascaded Thin Image snapshots for further guidance on configuring and mounting cascade mode snapshots.

Cascade PoolIf Fully Provisioned is selected above, it may be necessary to specify a dynamic or hybrid pool where Protector can create snapshot S-VOLs.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
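
As a worked example of the 1024 snapshots per LDEV caution above (the figures are illustrative only):

    # Retention / RPO must not exceed 1024 snapshots per LDEV.
    from datetime import timedelta

    retention = timedelta(days=7)       # keep snapshots for one week
    rpo = timedelta(minutes=15)         # take a snapshot every 15 minutes

    snapshots_retained = retention / rpo
    print(snapshots_retained)           # 672.0 - within the 1024 per-LDEV limit
    assert snapshots_retained <= 1024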
Snapshot Configuration Wizard - Naming Options
GUID-64FB4001-B9DE-4F1D-B0CB-321A3AFAE1BE-low.png
Specifies how secondary LDEVs and snapshot groups will be named.
ControlDescription
Secondary Logical Device NameSpecifies how S-VOLs will be named:
  • Match Origin - The S-VOL will be given the same name as that used for the origin P-VOL (i.e. the left-most volume in the data flow).
  • Custom - The S-VOL will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more substitution variables listed. Click Display variables which can be used for the secondary LDEVs' name to view the available substitution variables:
    • %ORIGIN_SERIAL% - S/N of leftmost array in data flow. E.g. output string: 210613
    • %ORIGIN_LDEV_ID% - ID of leftmost LDEV in data flow. E.g. output string: 00:3A:98
    • %ORIGIN_LDEV_NAME% - name of leftmost LDEV in data flow.
    • %PRIMARY_SERIAL% - S/N of primary array in this operation. E.g. output string: 442302
    • %PRIMARY_LDEV_ID% - ID of primary LDEV in this operation. E.g. output string: 00:4C:EB
    • %PRIMARY_LDEV_NAME% - name of primary LDEV in this operation.
    • %SECONDARY_SERIAL% - S/N of secondary array in this operation. E.g. output string: 356323
    • %SECONDARY_LDEV_ID% - ID of secondary LDEV in this operation. E.g. output string: 01:F4:35
    • %CREATION_DATE% - date secondary LDEV was created by this operation. E.g. output string: 20180427
    • %CREATION_TIME% - time secondary LDEV was created by this operation. E.g. output string: 1130
Snapshot Group NameSpecifies how the snapshot group will be named when the Provisioning Options - Consistency group option is not selected.
  • Automatically Generated - The snapshot group name is generated by Protector based on the rules context ID and policy name.
  • Custom - The snapshot group is named using the string provided (limited to 28 characters). An '@' separator followed by a unique ID is then automatically appended to this name. The unique ID is composed of 3 base 36 characters and is required to enable Protector to manage the groups.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
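
The following Python sketch illustrates how a custom secondary LDEV naming rule built from the substitution variables listed above might expand. The values are the example outputs from that list; the code is illustrative only and is not Protector's implementation.

    # Illustrative only: expanding a custom naming rule with substitution variables.
    values = {
        "%PRIMARY_SERIAL%": "442302",
        "%PRIMARY_LDEV_ID%": "00:4C:EB",
        "%CREATION_DATE%": "20180427",
        "%CREATION_TIME%": "1130",
    }

    def expand_naming_rule(rule, values):
        for variable, value in values.items():
            rule = rule.replace(variable, value)
        return rule

    print(expand_naming_rule("backup_%PRIMARY_LDEV_ID%_%CREATION_DATE%", values))
    # -> backup_00:4C:EB_20180427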
Snapshot Configuration Wizard - DRU Options
GUID-0E520A25-421A-4EF8-861A-D6E4E03F36A2-low.png
If Fully Provisioned is selected on the Provisioning Options page on this wizard, these options are enabled to specify Data Retention Utility (DRU) protection parameters.

CautionProtector cannot mount a DRU protected snapshot. However a cascaded snapshot can be created and mounted if the original has Cascade mode enabled.
ControlDescription
Protection TypeOne of the following can be selected:
  • None - DRU protection is not applied to the snapshot.
  • Host Read Only (wtd) - Prevents hosts writing to the snapshot LDEV.
  • Full (wtd + svd) - As for wtd, plus the array is prevented from changing the contents of a snapshot LDEV using resync or restore. It also prevents deletion, mapping and unmapping of the snapshot from its LDEV.
NoteDRU can prevent mounting if the OS rejects a read-only volume (mount via cascade snapshot is unaffected, and therefore recommended).
Duration Of Settings Lock (Days)
CautionDRU protection cannot be removed while the lock is active.
Specify a duration in days during which the applied DRU protection cannot be removed. Once this duration has expired the protection is not automatically removed; it must be removed manually.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Snapshot Configuration Wizard - Summary
GUID-54B505DE-ED0C-4237-991E-DE4390B2507F-low.png
Summarizes the configuration settings made by the user.
Hitachi Block Replication Configuration Wizard

This wizard is displayed when you assign a replication operation to a Hitachi Block Device node on a data flow.

CautionWhen a replication operation runs, Protector locks the meta_resource of the Block Device until the operation completes. Block operations are queued waiting to get the resource lock; this can impact RPO.
Note

All replication and snapshot S-VOLs must be created using free LDEV IDs that are mapped to the meta_resource group, and have virtual LDEV IDs matching their corresponding physical LDEV IDs.

For fully provisioned snapshots and all replications, this applies to the operation that creates that snapshot or replication.

For floating device snapshots and snapshots mounted using cascade mode, this applies to the mount or restore operation.

For fully provisioned snapshots mounted using cascade mode, this applies both to the operation that creates that snapshot and to the mount or restore operation.

If an operation tries to create one or more LDEVs, that operation will fail if there are not enough free LDEV IDs that meet the above conditions.

Note
  • The classifications that are applicable to Hitachi Block Replication operations are File Path, Application, Hypervisor and Hitachi Block LDEV.
  • Protector does not support any Hitachi Block storage LDEV with more than one partition or volume.
  • If Protector is subsequently uninstalled, the existing replications and mounted volumes are left in place.
NoteProtector will attempt to match destination LDEV IDs with those used by the source (except for in-system SI and RTI replications where this is impossible). The LDEV ID must be within the range configured for the target storage system and the LDEV ID must not be in use. If the LDEV ID cannot be matched on the target then the first available in-range LDEV ID will be selected.
Note

Once the operation properties have been configured and the data flow has been activated, the properties cannot be changed. Reactivating rules for the active data flow with edited properties will not change them.

To change the operation properties, the existing data flow must first be deactivated then reactivated with the new properties. Deactivating the data flow will mark the replication as eligible for teardown.

Tip
  • The actions that are performed by hardware orchestration as it executes a block backup policy are captured by the Log Manager. Detailed logs and their attachments can be exported as a text file. (See Export Logs Dialog for more information.)
  • Details of the hardware resources (LDEVs) used for a particular replication can be found in the Hitachi Block Device Details.
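
The note above about matching destination LDEV IDs amounts to a simple preference order: reuse the source LDEV ID on the target if it is in range and free, otherwise take the first available in-range ID. The following Python sketch is illustrative only; it is not Protector code and the inputs are assumptions.

    # Illustrative sketch only of the LDEV ID selection behaviour described above.
    def choose_destination_ldev_id(source_id, id_range, ids_in_use):
        """Prefer the source LDEV ID, otherwise the first free in-range ID."""
        low, high = id_range
        if low <= source_id <= high and source_id not in ids_in_use:
            return source_id
        for candidate in range(low, high + 1):
            if candidate not in ids_in_use:
                return candidate
        raise RuntimeError("No free LDEV IDs in the configured range")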
Replicate Configuration Wizard - Configure New or Adopt Existing Replication
GUID-6881956C-FA84-4988-A5BF-1F7E955D783C-low.png
ControlDescription
Configure new replicationCreates a new replication defined by the user defined parameters. The Configure New Replication page of the wizard is displayed next.
Adopt existing replicationAdopts the replication defined by the user defined parameters from the matching pre-existing one on the hardware. The Adopt Existing Replication page of the wizard is displayed next.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
NoteTo change the Create / Adopt configuration of an existing dataflow it is necessary to deactivate the dataflow, dissociate the replication record, edit the dataflow to create/adopt, and then activate the dataflow.
Replicate Configuration Wizard - Configure New Replication
GUID-D3B2A5CC-80FD-4071-9CC5-507CA10FBBBD-low.png
ControlDescription
Replication TypeSelect the type of replication to create:
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Adopt Existing Replication
GUID-76DEFF1F-5F70-467F-B022-E521512B8F3C-low.png
NoteAdoption requires the rules to be activated and the relevant policy to be triggered. Refer to About Hitachi Block replication adoption before using this feature.
ControlDescription
Replication TypeSelect the type of replication to adopt:
Mirror UnitIdentify the mirror unit number of the replication to be adopted. Adoption requires at least one existing pair on the selected mirror unit.
NoteIf the mirror unit of an active replication is changed after initial data flow activation then:
  1. S-VOLs and pairing relationships for the replication will be destroyed (or dissociated if previously adopted).
  2. The replication will then be recreated (or readopted if previously adopted) on data flow reactivation. A warning is issued by the rules compiler prior to activation.
Copy PaceDetermines how quickly the storage array will be told to copy data for the adopted replication. The array’s default is Slow (3); Protector defaults to Medium (8).
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Adoption Summary
GUID-E1EF7766-FD6C-42A7-AB6C-4AD44521A384-low.png

Shows a summary of the replication configuration specified by the user. All other parameters defining the replication are obtained from the hardware when the data flow is activated and triggered.

Replication Configuration Wizard - In-System Clone
Replicate Configuration Wizard - In System Clone (ShadowImage) - Configure Capacity Savings
GUID-ED8AE230-2AC5-42BB-A53E-DC898B5B2C0F-low.png
ControlDescription
Capacity Saving ModeOne of the following options:
  • Match Source Volumes – When provisioning S-VOLs, Capacity Saving will match the settings of the source volumes.
  • Compression - When provisioning S-VOLs, Compression will be enabled; the data compression function utilizes the LZ4 compression algorithm to compress the data.
  • Deduplication and Compression - When provisioning S-VOLs, Deduplication and Compression will be enabled. The data deduplication function deletes duplicate copies of data written to different addresses in the same pool and maintains only a single copy of the data at one address.
  • None - When provisioning S-VOLs, Capacity Saving will not be used.
Capacity Saving Process ModeOnly available when a Capacity Saving Mode other than None is selected. Can be one of the following:
  • Inline - When you apply capacity saving with the inline mode the compression and deduplication processing are performed synchronously for new write data. The inline mode minimizes the pool capacity required to store new write data but can impact I/O performance more than the post-process mode.
  • Post Process - When you apply capacity saving with the post-process mode the compression and deduplication processing are performed asynchronously for new write data.
  • Storage Default – match the default option set on the storage array.
  • Match Source Volume – match the settings of the source volume.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - In System Clone (ShadowImage) - Configure Pool etc.
GUID-A3230E54-AB20-46E8-9224-C6DF64B5083F-low.png
ControlDescription
PoolSpecifies the target storage pool from which replication LDEVs are allocated.

Provides a list of available pools giving name and available space.

NoteAll replication types have a pool selection except Asynchronous Remote Failover (Universal Replicator).

Select a Dynamic Provisioning Pool for all replication types.

NoteDynamic Provisioning Pools must be created using Storage Navigator prior to selecting the Target Storage/Pool in Protector.
Mirror UnitThe mirror unit number for the replication can be set to 0, 1 or 2. Select Allocate Automatically to allow Protector to choose one.
NoteIf the mirror unit of an active replication is changed after initial data flow activation then:
  1. S-VOLs and pairing relationships for the replication will be destroyed (or dissociated if previously adopted).
  2. The replication will then be recreated (or readopted if previously adopted) on data flow reactivation. A warning is issued by the rules compiler prior to activation.
Copy PaceDetermines how quickly the storage array copies data. The array’s default is Slow (3); Protector defaults to Medium (8).
Use Consistency GroupAll P-VOLs in a replication are, by default, placed in the same consistency group to ensure consistency of data across all volumes. This option allows this behavior to be disabled.
Quick Resync/SplitIf selected, then Quick Split and Quick Resync operations are performed by the storage hardware in the background, so that the secondary is available for reading/writing almost immediately after the replication is paused or resynchronized (depending on downstream data flow operations). If deselected then Steady Split and Normal Copy operations are performed in the foreground and the secondary is made available only once the operation is completed.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Configure Resource Group
GUID-1BDB502F-F51F-4A16-98A1-644DCDF759DC-low.png
ControlDescription
Configure Resource GroupSpecifies the resource group to be used for S-VOLs, in order to support snapshots and replications from VSM volumes (adding volumes to a VSM is performed by adding the volumes to the correct resource group).
NoteIf there are existing S-VOLs, then the resource group used by those will be selected. If the existing S-VOLs are in multiple resource groups or in a resource group that contradicts the user selection, then the operation will fail with an error. This setting should not be modified for existing replications.
  • Automatically Selected - Allows Protector to automatically select a resource group in the following order of priority (a sketch of this selection order follows the table):
    1. If there are existing S-VOLs, then the resource group used by those will be selected.
    2. The resource group used by the P-VOLs, if the replication is in-system and the P-VOLs are all in one resource group.
    3. Resource group 0.
    NoteIf existing S-VOLs are in multiple resource groups, then the operation will fail with an error.
  • User Selected - The user specifies the Resource Group.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
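The automatic selection order described above amounts to a simple fall-through check. The following is a minimal illustrative sketch (in Python); the function and parameter names are hypothetical and are not Protector code or its API.

# Illustrative sketch of the automatic resource group selection order.
# All names are hypothetical; this is not Protector code.
def select_resource_group(existing_svol_groups, pvol_groups, is_in_system):
    # 1. Existing S-VOLs take priority and must all share one resource group.
    if existing_svol_groups:
        if len(set(existing_svol_groups)) > 1:
            raise ValueError("Existing S-VOLs span multiple resource groups")
        return existing_svol_groups[0]
    # 2. In-system replications whose P-VOLs all share a single resource group
    #    reuse that group.
    if is_in_system and len(set(pvol_groups)) == 1:
        return pvol_groups[0]
    # 3. Otherwise fall back to resource group 0.
    return 0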
Replicate Configuration Wizard - Naming Options
GUID-BD211B00-E04F-445B-A503-16D5470BCDB8-low.png
ControlDescription
Secondary Logical Device NameSpecifies how S-VOLs will be named:
  • Match Origin - The S-VOL will be given the same name as that used for the origin P-VOL (i.e. the left-most volume in the data flow).
  • Custom - The S-VOL will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more of the substitution variables listed below (a sketch of this substitution follows the table). Click Display variables which can be used for the secondary LDEVs' name to view the available substitution variables:
    • %ORIGIN_SERIAL% - S/N of leftmost array in data flow. E.g. output string: 210613
    • %ORIGIN_LDEV_ID% - ID of leftmost LDEV in data flow. E.g. output string: 00:3A:98
    • %ORIGIN_LDEV_NAME% - name of leftmost LDEV in data flow.
    • %PRIMARY_SERIAL% - S/N of primary array in this operation. E.g. output string: 442302
    • %PRIMARY_LDEV_ID% - ID of primary LDEV in this operation. E.g. output string: 00:4C:EB
    • %PRIMARY_LDEV_NAME% - name of primary LDEV in this operation.
    • %SECONDARY_SERIAL% - S/N of secondary array in this operation. E.g. output string: 356323
    • %SECONDARY_LDEV_ID% - ID of secondary LDEV in this operation. E.g. output string: 01:F4:35
    • %CREATION_DATE% - date secondary LDEV was created by this operation. E.g. output string: 20180427
    • %CREATION_TIME% - time secondary LDEV was created by this operation. E.g. output string: 1130
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
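A custom naming rule is a template string in which the substitution variables above are expanded when the S-VOL is created. The following minimal sketch (in Python) illustrates that expansion; the function name and the sample rule are purely illustrative, and the variable values reuse the example outputs listed above.

# Illustrative expansion of a custom secondary LDEV naming rule.
# The function and sample values are examples only; this is not Protector code.
def expand_naming_rule(rule, values):
    for variable, value in values.items():
        rule = rule.replace(variable, value)
    return rule

example_values = {
    "%ORIGIN_LDEV_ID%": "00:3A:98",
    "%SECONDARY_SERIAL%": "356323",
    "%CREATION_DATE%": "20180427",
    "%CREATION_TIME%": "1130",
}

# e.g. "REP_%ORIGIN_LDEV_ID%_%CREATION_DATE%" expands to "REP_00:3A:98_20180427"
print(expand_naming_rule("REP_%ORIGIN_LDEV_ID%_%CREATION_DATE%", example_values))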
Replicate Configuration Wizard - In System Clone (ShadowImage) - Summary
GUID-C7B27B4F-2AB2-4775-895C-D58B6F9810FF-low.png

Shows a summary of the replication configuration specified by the user.

Replication Configuration Wizard - Refreshed Snapshot
NoteRefreshed Thin Image must be used with a Batch mover on the data flow.
The replication is created as a single, differential, in-system snapshot using Thin Image. The snapshot is refreshed on each batch resync rather than creating a new snapshot for each resync.

When an RTI data flow is deactivated, the refreshed snapshot is deleted.

Replicate Configuration Wizard - Refreshed Snapshot (Thin Image) - Configure Pool etc.
GUID-041B1B4E-1FA4-4970-BBB7-057B6D6A489C-low.png
ControlDescription
PoolSpecifies the target storage pool from which replication LDEVs are allocated.

Provides a list of available pools giving name and available space.

NoteAll replication types have a pool selection except Asynchronous Remote Failover (Universal Replicator).

Select a Thin Image Pool or a Dynamic Provisioning Pool.

NoteThin Image and Dynamic Provisioning Pools must be created using Storage Navigator prior to selecting the Target Storage/Pool in Protector.
Use Consistency GroupAll P-VOLs in a replication are, by default, placed in the same consistency group to ensure consistency of data across all volumes. This option allows this behavior to be disabled.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Configure Resource Group
GUID-1BDB502F-F51F-4A16-98A1-644DCDF759DC-low.png
ControlDescription
Configure Resource GroupSpecifies the resource group to be used for S-VOLs, in order to support snapshots and replications from VSM volumes (adding volumes to a VSM is performed by adding the volumes to the correct resource group).
NoteIf there are existing S-VOLs, then the resource group used by those will be selected. If the existing S-VOLs are in multiple resource groups or in a resource group that contradicts the user selection, then the operation will fail with an error. This setting should not be modified for existing replications.
  • Automatically Selected - Allows Protector to automatically select a resource group in the following order of priority:
    1. If there are existing S-VOLs, then the resource group used by those will be selected.
    2. The resource group used by the P-VOLs, if the replication is in-system and the P-VOLs are all in one resource group.
    3. Resource group 0.
    NoteIf existing S-VOLs are in multiple resource groups, then the operation will fail with an error.
  • User Selected - The user specifies the Resource Group.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Refreshed Snapshot (Thin Image) - Naming Options
GUID-EC5A7505-7BBD-4810-BFBA-D1C8FDC87241-low.png
ControlDescription
Secondary Logical Device NameSpecifies how S-VOLs will be named:
  • Match Origin - The S-VOL will be given the same name as that used for the origin P-VOL (i.e. the left-most volume in the data flow).
  • Custom - The S-VOL will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more substitution variables listed. Click Display variables which can be used for the secondary LDEVs' name to view the available substitution variables:
    • %ORIGIN_SERIAL% - S/N of leftmost array in data flow. E.g. output string: 210613
    • %ORIGIN_LDEV_ID% - ID of leftmost LDEV in data flow. E.g. output string: 00:3A:98
    • %ORIGIN_LDEV_NAME% - name of leftmost LDEV in data flow.
    • %PRIMARY_SERIAL% - S/N of primary array in this operation. E.g. output string: 442302
    • %PRIMARY_LDEV_ID% - ID of primary LDEV in this operation. E.g. output string: 00:4C:EB
    • %PRIMARY_LDEV_NAME% - name of primary LDEV in this operation.
    • %SECONDARY_SERIAL% - S/N of secondary array in this operation. E.g. output string: 356323
    • %SECONDARY_LDEV_ID% - ID of secondary LDEV in this operation. E.g. output string: 01:F4:35
    • %CREATION_DATE% - date secondary LDEV was created by this operation. E.g. output string: 20180427
    • %CREATION_TIME% - time secondary LDEV was created by this operation. E.g. output string: 1130
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
ControlDescription
Snapshot Group NameSpecifies how the snapshot group will be named.
  • Automatically Generated - The snapshot group name is generated by Protector based on the rules context ID and policy name.
  • Custom - The snapshot group is named using the string provided (limited to 28 characters). An '@' separator followed by a unique ID is then automatically appended to this name. The unique ID is composed of 3 base 36 characters and is required to enable Protector to manage the groups.
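With a custom name, the 28-character user string plus the '@' separator and the 3-character base-36 unique ID give a generated name of at most 32 characters. The sketch below (in Python) illustrates that composition only; how Protector actually derives the unique ID is not described here, so the random choice is purely an assumption for illustration.

# Illustrative composition of a custom snapshot group name:
# user string (truncated to 28 characters) + '@' + 3 base-36 characters.
# The random unique ID is an assumption for illustration only.
import random
import string

BASE36 = string.digits + string.ascii_uppercase

def snapshot_group_name(custom_name):
    prefix = custom_name[:28]
    unique_id = "".join(random.choice(BASE36) for _ in range(3))
    return prefix + "@" + unique_id

print(snapshot_group_name("FinanceDB-Daily"))  # e.g. FinanceDB-Daily@7K2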
Replicate Configuration Wizard - Refreshed Snapshot (Thin Image) - Summary
GUID-D270F1E9-DE41-45E9-9AC2-79F20A6596F1-low.png

Shows a summary of the replication configuration specified by the user.

Replication Configuration Wizard - Asynchronous Remote Clone
Replicate Configuration Wizard - Asynchronous Remote Clone (Universal Replicator) - Configure Capacity Savings
GUID-9D7DFE35-7D15-4DE2-9E86-107AB3AF960F-low.png
ControlDescription
Capacity Saving ModeOne of the following options:
  • Match Source Volumes – When provisioning S-VOLs, Capacity Saving will match the settings of the source volumes.
  • Compression - When provisioning S-VOLs, Compression will be enabled. The data compression function uses the LZ4 compression algorithm to compress the data.
  • Deduplication and Compression - When provisioning S-VOLs, Deduplication and Compression will be enabled. The data deduplication function deletes duplicate copies of data written to different addresses in the same pool and maintains only a single copy of the data at one address.
  • None - When provisioning S-VOLs, Capacity Saving will not be used.
Capacity Saving Process ModeOnly available when a Capacity Saving Mode other than None is selected. Can be one of the following:
  • Inline - When you apply capacity saving with the inline mode, compression and deduplication processing is performed synchronously for new write data. The inline mode minimizes the pool capacity required to store new write data but can impact I/O performance more than the post-process mode.
  • Post Process - When you apply capacity saving with the post-process mode, compression and deduplication processing is performed asynchronously for new write data.
  • Storage Default – match the default option set on the storage array.
  • Match Source Volume – match the settings of the source volume.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Clone (Universal Replicator) - Configure Pool and Mirror Unit
GUID-57C72FD2-702D-4E9F-BFF4-9A19AA60D7FD-low.png
ControlDescription
PoolSpecifies the target storage pool from which replication LDEVs are allocated.

Provides a list of available pools giving name and available space.

NoteAll replication types have a pool selection except Asynchronous Remote Failover (Universal Replicator).

Select a Dynamic Provisioning Pool.

NoteDynamic Provisioning Pools must be created using Storage Navigator prior to selecting the Target Storage/Pool in Protector.
Mirror Unit

The mirror unit number for the replication can be set to 0, h1, h2 or h3.

Select Allocate Automatically to allow Protector to choose one.
NoteIf the mirror unit of an active replication is changed after initial data flow activation then:
  1. S-VOLs and pairing relationships for the replication will be destroyed (or dissociated if previously adopted).
  2. The replication will then be recreated (or readopted if previously adopted) on data flow reactivation. A warning is issued by the rules compiler prior to activation.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Clone (Universal Replicator) - Select Journal Mode
GUID-1B87FF40-8971-444F-9078-2C4881772ACC-low.png
ControlDescription
Select existing journalsSelect this option to use journals that already exist on the source and destination storage arrays. The Select existing journals wizard page is displayed next.
Create new journalsSelect this option to have Protector configure new journals on the source and destination storage arrays. The Create journals wizard page is displayed next.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Clone (Universal Replicator) - Select existing journals
GUID-BF75C9CB-0F40-408D-A215-6B33D22CD471-low.png
NoteThe journals must be unique to each operation and policy in the data flow. The journal should also be used exclusively by Protector.
ControlDescription
Source JournalSpecifies the node and journal on the source side of the replication.
Destination JournalSpecifies the journal on the destination side of the replication.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Clone (Universal Replicator) - Create journals
GUID-250C9BDD-1D2E-48FC-88A9-C9EA7B456CAA-low.png
NoteThe journals must be unique to each operation and policy in the data flow. The journal should also be used exclusively by Protector.
ControlDescription
Source Journal PoolSpecifies the node and pool where the source side journal will be created.
Destination Journal PoolSpecifies the pool where the destination side journal will be created.
Journal SizesSpecifies the size of the source and destination journal.
NoteJournal sizing is based on data change rate and link bandwidth. Refer to your storage array documentation for details; an illustrative calculation follows this table.
Journal NamesThe journals will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more substitution variables listed. Click Display variables which can be used for the journal name to view the available substitution variables.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
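As a rough illustration of why change rate and link bandwidth matter when sizing journals: the journal must buffer writes that arrive faster than the replication link can drain them. The calculation below is an assumption for illustration only, not a Protector or array-vendor formula; the figures are invented, and your storage array documentation remains the authority.

# Rough, illustrative journal sizing calculation; figures are invented and
# this is not a Protector or array-vendor formula.
peak_write_rate_mb_s = 200    # host write rate during the busiest period
link_bandwidth_mb_s = 120     # usable replication link bandwidth
peak_duration_s = 2 * 3600    # how long the excess must be absorbed

excess_mb_s = max(peak_write_rate_mb_s - link_bandwidth_mb_s, 0)
journal_size_gb = excess_mb_s * peak_duration_s / 1024

print("Suggested minimum journal capacity: about %.0f GB" % journal_size_gb)
# prints approximately 562 GB for these example figures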
Replicate Configuration Wizard - Asynchronous Remote Clone - Select Remote Path Group
GUID-16568417-3F62-4ED5-B95B-A61F004E8EE6-low.png
ControlDescription
Select Remote Path Group

Specifies the Remote Path Group to be used for the replication.

  • Automatically Selected - Allows Protector to automatically select a Remote Path Group
    NoteFor GAD it is recommended that the user specifies a group to avoid sharing it with other replications.
  • User Selected - The user specifies the Remote Path Group.
WARNINGYou cannot specify the "User Selected" option and select a path group with an ID of 0. To use a path group with an ID of 0, specify the "Automatically Selected" option. The path group with the lowest ID will be selected (which will be ID 0, if a path group with that ID exists).

Remote path groups are listed in the format:

Path Group Id: 0x51 Port Mappings: 5E <-> 3E

The arrow depicts the direction of the path, either left to right or bidirectional. The arrows can also have a line through them, indicating that the path is currently broken.

Only remote path groups that are suitable for the replication are displayed; for example, for GAD only bidirectional path groups are listed.

CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Configure Resource Group
GUID-1BDB502F-F51F-4A16-98A1-644DCDF759DC-low.png
ControlDescription
Configure Resource GroupSpecifies the resource group to be used for S-VOLs, in order to support snapshots and replications from VSM volumes (adding volumes to a VSM is performed by adding the volumes to the correct resource group).
NoteIf there are existing S-VOLs, then the resource group used by those will be selected. If the existing S-VOLs are in multiple resource groups or in a resource group that contradicts the user selection, then the operation will fail with an error. This setting should not be modified for existing replications.
  • Automatically Selected - Allows Protector to automatically select a resource group in the following order of priority:
    1. If there are existing S-VOLs, then the resource group used by those will be selected.
    2. The resource group used by the P-VOLs, if the replication is in-system and the P-VOLs are all in one resource group.
    3. Resource group 0.
    NoteIf existing S-VOLs are in multiple resource groups, then the operation will fail with an error.
  • User Selected - The user specifies the Resource Group.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Secondary Host Groups (Async. RC)
GUID-0933EB1F-2EA3-4A9B-A1F7-3BD2E78642BB-low.png
CautionIf replication S-VOLs are exposed to a host, the user is responsible for ensuring they are not in use during replication resynchronization. Failing to do so may result in a critical failure of the host.
NoteProtector analyses the LUN IDs of the P-VOLs in all host group paths and, if consistent and available, uses these LUN IDs for all S-VOL host group paths either during initial set-up, mounting, or when adding additional LUNs on demand.

To provide a graceful fallback:

  • If the P-VOL LUN IDs are not consistent in all host group paths, Protector will still use a consistent ID for S-VOL mappings, but these will not necessarily match any of the P-VOL LUN IDs.
  • If a P-VOL has a consistent ID in all host group paths, but this LUN ID is not available in all S-VOL host group paths, then Protector will choose a different, consistent LUN ID for the S-VOL for all host group paths.
  • When Protector is not able to match LUN IDs with those used by the P-VOLs and/or the S-VOLs, it chooses an unused LUN ID. To keep LUN IDs compatible with VMware and Hyper-V, Protector will first attempt to select an ID at or below 255. If this is not possible it will then attempt at or below 1024, then at or below 2048, and finally at or below the array maximum. A warning is displayed when an ID cannot be found in a range (see the sketch following this note).

Some systems expect LUN ID 0 to be used only as a boot volume. Protector will therefore only use LUN ID 0 for the S-VOL host group paths if the P-VOL LUN ID is 0.
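The fallback described in this note amounts to searching successively larger ID ranges for a free LUN ID. The sketch below (in Python) is illustrative only; the function and its inputs are hypothetical and are not Protector internals.

# Illustrative sketch of the LUN ID fallback order described in the note above.
# Names and inputs are hypothetical; this is not Protector code.
def choose_lun_id(preferred_id, used_lun_ids, array_maximum):
    # Prefer the P-VOL's LUN ID when it is free in every S-VOL host group path.
    if preferred_id is not None and preferred_id not in used_lun_ids:
        return preferred_id
    # Otherwise search progressively larger ranges, keeping IDs small for
    # VMware / Hyper-V compatibility where possible. LUN ID 0 is skipped
    # because some systems reserve it for boot volumes.
    for upper_bound in (255, 1024, 2048, array_maximum):
        candidate = next((i for i in range(1, upper_bound + 1)
                          if i not in used_lun_ids), None)
        if candidate is not None:
            return candidate
        print("Warning: no free LUN ID found at or below", upper_bound)
    raise RuntimeError("No free LUN ID available on the array")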

ControlDescription
Use Protector Provisioned Host GroupProtector will create a LUN path from each S-VOL it provisions in a placeholder host group. If this option is not selected, at least one Secondary Host Group must be specified below.
Enforce LUN ID MatchingFor environments where LUN ID consistency is mandatory, selecting this option will cause the replication to fail if:
  • The P-VOL does not have a consistent LUN ID in all host group paths.
  • The P-VOL LUN ID is not available in all S-VOL host group paths.
Secondary Host GroupsSpecify zero or more host groups that Protector will configure to provide access to the S-VOL(s) when configuring replication scenarios. If no host groups are specified here then Protector will place the S-VOL(s) in its dummy host group.

Click the Add Host Group button to insert another Host Group selection control.

Click the Remove button next to a Host Group selection control to delete it.

Note

If a LUN path to be created already exists, Protector will not attempt to add it again, or to change its ID.

The specified host groups must be in the same resource group as the secondary volumes.

For GAD replications, if the host group names and port IDs match between primary and secondary storage nodes, Protector will attempt to match the LUN IDs used for the S-VOLs with those of the respective P-VOLs. If this cannot be achieved then a warning will be logged and the next available LUN ID will be used.

TipUse this option to specify host groups that enable access to the S-VOL(s) when configuring GAD cross-path, multi-path, and other replication scenarios where the S-VOL(s) will need to be accessed (e.g. during failover).
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Naming Options
GUID-BD211B00-E04F-445B-A503-16D5470BCDB8-low.png
ControlDescription
Secondary Logical Device NameSpecifies how S-VOLs will be named:
  • Match Origin - The S-VOL will be given the same name as that used for the origin P-VOL (i.e. the left-most volume in the data flow).
  • Custom - The S-VOL will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more substitution variables listed. Click Display variables which can be used for the secondary LDEVs' name to view the available substitution variables:
    • %ORIGIN_SERIAL% - S/N of leftmost array in data flow. E.g. output string: 210613
    • %ORIGIN_LDEV_ID% - ID of leftmost LDEV in data flow. E.g. output string: 00:3A:98
    • %ORIGIN_LDEV_NAME% - name of leftmost LDEV in data flow.
    • %PRIMARY_SERIAL% - S/N of primary array in this operation. E.g. output string: 442302
    • %PRIMARY_LDEV_ID% - ID of primary LDEV in this operation. E.g. output string: 00:4C:EB
    • %PRIMARY_LDEV_NAME% - name of primary LDEV in this operation.
    • %SECONDARY_SERIAL% - S/N of secondary array in this operation. E.g. output string: 356323
    • %SECONDARY_LDEV_ID% - ID of secondary LDEV in this operation. E.g. output string: 01:F4:35
    • %CREATION_DATE% - date secondary LDEV was created by this operation. E.g. output string: 20180427
    • %CREATION_TIME% - time secondary LDEV was created by this operation. E.g. output string: 1130
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Clone (Universal Replicator) - Summary
GUID-AE6EBEDF-3D4E-4204-ABBA-7179065AF7B7-low.png

Shows a summary of the replication configuration specified by the user.

Replication Configuration Wizard - Asynchronous Remote Failover
Replicate Configuration Wizard - Asynchronous Remote Failover (Universal Replicator) - Configure Capacity Savings
GUID-95358E64-AE06-49ED-A411-4FDA2A5D427A-low.png
ControlDescription
Capacity Saving ModeOne of the following options:
  • Match Source Volumes – When provisioning S-VOLs, Capacity Saving will match the settings of the source volumes.
  • Compression - When provisioning S-VOLs, Compression will be enabled. The data compression function uses the LZ4 compression algorithm to compress the data.
  • Deduplication and Compression - When provisioning S-VOLs, Deduplication and Compression will be enabled. The data deduplication function deletes duplicate copies of data written to different addresses in the same pool and maintains only a single copy of the data at one address.
  • None - When provisioning S-VOLs, Capacity Saving will not be used.
Capacity Saving Process ModeOnly available when a Capacity Saving Mode other than None is selected. Can be one of the following:
  • Inline - When you apply capacity saving with the inline mode, compression and deduplication processing is performed synchronously for new write data. The inline mode minimizes the pool capacity required to store new write data but can impact I/O performance more than the post-process mode.
  • Post Process - When you apply capacity saving with the post-process mode, compression and deduplication processing is performed asynchronously for new write data.
  • Storage Default – match the default option set on the storage array.
  • Match Source Volume – match the settings of the source volume.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Failover (Universal Replicator Failover) - Configure Mirror Unit
GUID-CB252D64-523C-4C59-AFB7-F8A155FDFB74-low.png
ControlDescription
Mirror Unit

The mirror unit number for the replication can be set to 0, h1, h2 or h3.

Select Allocate Automatically to allow Protector to choose one.
NoteIf the mirror unit of an active replication is changed after initial data flow activation then:
  1. S-VOLs and pairing relationships for the replication will be destroyed (or dissociated if previously adopted).
  2. The replication will then be recreated (or readopted if previously adopted) on data flow reactivation. A warning is issued by the rules compiler prior to activation.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Failover (Universal Replicator Failover) - Select Journal Mode
GUID-1B87FF40-8971-444F-9078-2C4881772ACC-low.png
ControlDescription
Select existing journalsSelect this option to use journals that already exist on the source and destination storage arrays. The Select existing journals wizard page is displayed next.
Create new journalsSelect this option to have Protector configure new journals on the source and destination storage arrays. The Create journals wizard page is displayed next.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Failover (Universal Replicator Failover) - Select existing journals
GUID-DF7DB497-EB7E-485D-B515-B0326FFB5468-low.png
NoteThe journal must be unique to each operation and policy in the data flow. The journal should also be used exclusively by Protector.
ControlDescription
Source JournalSpecifies the node and journal on the source side of the replication.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Failover (Universal Replicator Failover) - Create journals
GUID-A5889DC5-B685-4E2C-9804-E036E84197F3-low.png
NoteThe journal must be unique to each operation and policy in the data flow. The journal should also be used exclusively by Protector.
ControlDescription
Source Journal PoolSpecifies the node and pool where the source side journal will be created.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Data Flow Wizard Hitachi Block Replication Configuration - Select Remote Path Group
GUID-4036ADC9-96B5-4591-8E6F-84BB8679D195-low.png
ControlDescription
Select Remote Path Group

Specifies the Remote Path Group to be used for the replication.

  • Automatically Selected - Allows Protector to automatically select a Remote Path Group
    NoteFor GAD it is recommended that the user specifies a group to avoid sharing it with other replications.
  • User Selected - The user specifies the Remote Path Group.
WARNINGYou cannot specify the "User Selected" option and select a path group with an ID of 0. To use a path group with an ID of 0, specify the "Automatically Selected" option. The path group with the lowest ID will be selected (which will be ID 0, if a path group with that ID exists).

Remote path groups are listed in the format:

Path Group Id: 0x51 Port Mappings: 5E <-> 3E

The arrow depicts the direction of the path, either left to right or bidirectional. The arrows can also have a line through them, indicating that the path is currently broken.

Only remote path groups that are suitable for the replication are displayed; for example, for GAD only bidirectional path groups are listed.

CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Naming Options
GUID-BD211B00-E04F-445B-A503-16D5470BCDB8-low.png
ControlDescription
Secondary Logical Device NameSpecifies how S-VOLs will be named:
  • Match Origin - The S-VOL will be given the same name as that used for the origin P-VOL (i.e. the left-most volume in the data flow).
  • Custom - The S-VOL will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more substitution variables listed. Click Display variables which can be used for the secondary LDEVs' name to view the available substitution variables:
    • %ORIGIN_SERIAL% - S/N of leftmost array in data flow. E.g. output string: 210613
    • %ORIGIN_LDEV_ID% - ID of leftmost LDEV in data flow. E.g. output string: 00:3A:98
    • %ORIGIN_LDEV_NAME% - name of leftmost LDEV in data flow.
    • %PRIMARY_SERIAL% - S/N of primary array in this operation. E.g. output string: 442302
    • %PRIMARY_LDEV_ID% - ID of primary LDEV in this operation. E.g. output string: 00:4C:EB
    • %PRIMARY_LDEV_NAME% - name of primary LDEV in this operation.
    • %SECONDARY_SERIAL% - S/N of secondary array in this operation. E.g. output string: 356323
    • %SECONDARY_LDEV_ID% - ID of secondary LDEV in this operation. E.g. output string: 01:F4:35
    • %CREATION_DATE% - date secondary LDEV was created by this operation. E.g. output string: 20180427
    • %CREATION_TIME% - time secondary LDEV was created by this operation. E.g. output string: 1130
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Asynchronous Remote Failover (Universal Replicator Failover) - Summary
GUID-191D591E-8142-4D2C-A439-4B24A4B75EC0-low.png

Shows a summary of the replication configuration specified by the user.

Replication Configuration Wizard - Synchronous Remote Clone
Replicate Configuration Wizard - Synchronous Remote Clone (TrueCopy) - Configure Capacity Savings
GUID-2128B77F-93C5-4E52-A031-5B642249631F-low.png
ControlDescription
Capacity Saving ModeOne of the following options:
  • Match Source Volumes – When provisioning S-VOLs, Capacity Saving will match the settings of the source volumes.
  • Compression - When provisioning S-VOLs, Compression will be enabled. The data compression function uses the LZ4 compression algorithm to compress the data.
  • Deduplication and Compression - When provisioning S-VOLs, Deduplication and Compression will be enabled. The data deduplication function deletes duplicate copies of data written to different addresses in the same pool and maintains only a single copy of the data at one address.
  • None - When provisioning S-VOLs, Capacity Saving will not be used.
Capacity Saving Process ModeOnly available when a Capacity Saving Mode other than None is selected. Can be one of the following:
  • Inline - When you apply capacity saving with the inline mode, compression and deduplication processing is performed synchronously for new write data. The inline mode minimizes the pool capacity required to store new write data but can impact I/O performance more than the post-process mode.
  • Post Process - When you apply capacity saving with the post-process mode, compression and deduplication processing is performed asynchronously for new write data.
  • Storage Default – match the default option set on the storage array.
  • Match Source Volume – match the settings of the source volume.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Synchronous Remote Clone (TrueCopy) - Configure Pool etc.
GUID-5B3C838F-D587-4EE3-9C24-2B2F7C2EF8BD-low.png
ControlDescription
PoolSpecifies the target storage pool from which replication LDEVs are allocated.

Provides a list of available pools giving name and available space.

NoteAll replication types have a pool selection except Asynchronous Remote Failover (Universal Replicator).

Select a Dynamic Provisioning Pool.

NoteDynamic Provisioning Pools must be created using Storage Navigator prior to selecting the Target Storage/Pool in Protector.
IgnoreWhen writing to the primary volumes, confirm the write regardless of whether the data has been copied successfully to the secondary volumes (Fence Level Never).
Fail source writeOnly confirm primary volume writes if the data is successfully copied to the secondary volume; generate a write error if not (Fence Level Data).
Fail source write if not in error statusOnly generate a write error if the data is not successfully copied to the secondary volume and the replication has not been put into an error status, PSUE (Fence Level Status).
Copy PaceDetermines how quickly the storage array copies data. The array’s default is Slow (3); Protector defaults to Medium (8).
Use Consistency GroupAll P-VOLs in a replication are, by default, placed in the same consistency group to ensure consistency of data across all volumes. This option allows this behavior to be disabled.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Data Flow Wizard Hitachi Block Replication Configuration - Select Remote Path Group TC Pool
GUID-ECBCCC15-D724-426C-8ED7-DEFD165613E1-low.png
ControlDescription
Select Remote Path Group

Specifies the Remote Path Group to be used for the replication.

  • Automatically Selected - Allows Protector to automatically select a Remote Path Group
    NoteFor GAD it is recommended that the user specifies a group to avoid sharing it with other replications.
  • User Selected - The user specifies the Remote Path Group.
WARNINGYou cannot specify the "User Selected" option and select a path group with an ID of 0. To use a path group with an ID of 0, specify the "Automatically Selected" option. The path group with the lowest ID will be selected (which will be ID 0, if a path group with that ID exists).

Remote path groups are listed in the format:

Path Group Id: 0x51 Port Mappings: 5E <-> 3E

The arrow depicts the direction of the path, either left to right or bidirectional. The arrows can also have a line through them, indicating that the path is currently broken.

Only remote path groups that are suitable for the replication are displayed; for example, for GAD only bidirectional path groups are listed.

CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Configure Resource Group
GUID-1BDB502F-F51F-4A16-98A1-644DCDF759DC-low.png
ControlDescription
Configure Resource GroupSpecifies the resource group to be used for S-VOLs, in order to support snapshots and replications from VSM volumes (adding volumes to a VSM is performed by adding the volumes to the correct resource group).
NoteIf there are existing S-VOLs, then the resource group used by those will be selected. If the existing S-VOLs are in multiple resource groups or in a resource group that contradicts the user selection, then the operation will fail with an error. This setting should not be modified for existing replications.
  • Automatically Selected - Allows Protector to automatically select a resource group in the following order of priority:
    1. If there are existing S-VOLs, then the resource group used by those will be selected.
    2. The resource group used by the P-VOLs, if the replication is in-system and the P-VOLs are all in one resource group.
    3. Resource group 0.
    NoteIf existing S-VOLs are in multiple resource groups, then the operation will fail with an error.
  • User Selected - The user specifies the Resource Group.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Secondary Host Groups (Sync. RC)
GUID-8D42A8B5-3E7B-46BA-9F9F-34D7B7D0E913-low.png
CautionIf replication S-VOLs are exposed to a host, the user is responsible for ensuring they are not in use during replication resynchronization. Failing to do so may result in a critical failure of the host.
NoteProtector analyses the LUN IDs of the P-VOLs in all host group paths and, if consistent and available, uses these LUN IDs for all S-VOL host group paths either during initial set-up, mounting, or when adding additional LUNs on demand.

To provide a graceful fallback:

  • If the P-VOL LUN IDs are not consistent in all host group paths, Protector will still use a consistent ID for S-VOL mappings, but these will not necessarily match any of the P-VOL LUN IDs.
  • If a P-VOL has a consistent ID in all host group paths, but this LUN ID is not available in all S-VOL host group paths, then Protector will choose a different, consistent LUN ID for the S-VOL for all host group paths.
  • When Protector is not able to match LUN IDs with those used by the P-VOLs and/or the S-VOLs, it chooses an unused LUN ID. To keep LUN IDs compatible with VMware and Hyper-V, Protector will first attempt to select an ID at or below 255. If this is not possible it will then attempt at or below 1024, then at or below 2048, and finally at or below the array maximum. A warning is displayed when an ID cannot be found in a range.

Some systems expect LUN ID 0 to be used only as a boot volume. Protector will therefore only use LUN ID 0 for the S-VOL host group paths if the P-VOL LUN ID is 0.

ControlDescription
Use Protector Provisioned Host GroupProtector will create a LUN path from each S-VOL it provisions in a placeholder host group. If this option is not selected, at least one Secondary Host Group must be specified below.
Enforce LUN ID MatchingFor environments where LUN ID consistency is mandatory, selecting this option will cause the replication to fail if:
  • The P-VOL does not have a consistent LUN ID in all host group paths.
  • The P-VOL LUN ID is not available in all S-VOL host group paths.
Secondary Host GroupsSpecify zero or more host groups that Protector will configure to provide access to the S-VOL(s) when configuring replication scenarios. If no host groups are specified here then Protector will place the S-VOL(s) in its dummy host group.

Click the Add Host Group button to insert another Host Group selection control.

Click the Remove button next to a Host Group selection control to delete it.

Note

If a LUN path to be created already exists, Protector will not attempt to add it again, or to change its ID.

The specified host groups must be in the same resource group as the secondary volumes.

For GAD replications, if the host group names and port IDs match between primary and secondary storage nodes, Protector will attempt to match the LUN IDs used for the S-VOLs with those of the respective P-VOLs. If this cannot be achieved then a warning will be logged and the next available LUN ID will be used.

TipUse this option to specify host groups that enable access to the S-VOL(s) when configuring GAD cross-path, multi-path, and other replication scenarios where the S-VOL(s) will need to be accessed (e.g. during failover).
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Naming Options
GUID-BD211B00-E04F-445B-A503-16D5470BCDB8-low.png
ControlDescription
Secondary Logical Device NameSpecifies how S-VOLs will be named:
  • Match Origin - The S-VOL will be given the same name as that used for the origin P-VOL (i.e. the left-most volume in the data flow).
  • Custom - The S-VOL will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more substitution variables listed. Click Display variables which can be used for the secondary LDEVs' name to view the available substitution variables:
    • %ORIGIN_SERIAL% - S/N of leftmost array in data flow. E.g. output string: 210613
    • %ORIGIN_LDEV_ID% - ID of leftmost LDEV in data flow. E.g. output string: 00:3A:98
    • %ORIGIN_LDEV_NAME% - name of leftmost LDEV in data flow.
    • %PRIMARY_SERIAL% - S/N of primary array in this operation. E.g. output string: 442302
    • %PRIMARY_LDEV_ID% - ID of primary LDEV in this operation. E.g. output string: 00:4C:EB
    • %PRIMARY_LDEV_NAME% - name of primary LDEV in this operation.
    • %SECONDARY_SERIAL% - S/N of secondary array in this operation. E.g. output string: 356323
    • %SECONDARY_LDEV_ID% - ID of secondary LDEV in this operation. E.g. output string: 01:F4:35
    • %CREATION_DATE% - date secondary LDEV was created by this operation. E.g. output string: 20180427
    • %CREATION_TIME% - time secondary LDEV was created by this operation. E.g. output string: 1130
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Synchronous Remote Clone (TrueCopy) - Summary
GUID-98E0B86C-5776-42B9-98A2-C5A495E8C949-low.png

Shows a summary of the replication configuration specified by the user.

Replication Configuration Wizard - Active-Active Remote Clone
Note
  • GAD is only available on VSP G series because virtualized LDEVs are required.
  • Configuration or adoption of GAD cross-path scenarios requires CCI version 01-41-03/03 or greater to be installed on the ISM node.
  • Primary volumes must be set up within a host group prior to configuring a GAD replication using Protector.
  • GAD replications require the P-VOL and S-VOL to have matching virtual serial numbers and virtual LDEV IDs. Selecting Automatically Selected in the Configure Resource Group page of the wizard ensures this is done.
  • Any LUN path to the secondary volume will become inaccessible if the GAD replication is deleted from Protector (i.e. torn down). This occurs because the virtual LDEV ID is automatically deleted from the S-VOL, causing host I/Os to be rejected. To recover from this, either recreate the GAD pair (if the pair was deleted unintentionally) or assign a new virtual ID to the clone S-VOL.
Replicate Configuration Wizard - Active-Active Remote Clone (Global-Active Device) - Configure Capacity Savings
GUID-536D6F65-8DFD-48E7-8578-110B999ADFF9-low.png
ControlDescription
Capacity Saving ModeOne of the following options:
  • Match Source Volumes – When provisioning S-VOLs, Capacity Saving will match the settings of the source volumes.
  • Compression - When provisioning S-VOLs, Compression will be enabled. The data compression function uses the LZ4 compression algorithm to compress the data.
  • Deduplication and Compression - When provisioning S-VOLs, Deduplication and Compression will be enabled. The data deduplication function deletes duplicate copies of data written to different addresses in the same pool and maintains only a single copy of the data at one address.
  • None - When provisioning S-VOLs, Capacity Saving will not be used.
Capacity Saving Process ModeOnly available when a Capacity Saving Mode other than None is selected. Can be one of the following:
  • Inline - When you apply capacity saving with the inline mode, compression and deduplication processing is performed synchronously for new write data. The inline mode minimizes the pool capacity required to store new write data but can impact I/O performance more than the post-process mode.
  • Post Process - When you apply capacity saving with the post-process mode, compression and deduplication processing is performed asynchronously for new write data.
  • Storage Default – match the default option set on the storage array.
  • Match Source Volume – match the settings of the source volume.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Active-Active Remote Clone (Global-Active Device) - Configure Pool etc.
GUID-7EB5036E-C1A6-4BDF-A4CF-9EE20C442139-low.png
ControlDescription
PoolSpecifies the target storage pool from which replication LDEVs are allocated.

Provides a list of available pools giving name and available space.

NoteAll replication types have a pool selection except Asynchronous Remote Failover (Universal Replicator).

Select a Dynamic Provisioning Pool.

NoteDynamic Provisioning Pools must be created using Storage Navigator prior to selecting the Target Storage/Pool in Protector.
Target QuorumSelects the volume to use as the quorum disk.
Note
  • A Quorum disk is required to manage each GAD replication pair. It is best practice to allocate a separate Quorum disk for each pair.
  • Both Quorum and Quorum-less disks are available for selection.
Mirror Unit

The mirror unit number for the replication can be set to 0 or h1.

Select Allocate Automatically to allow Protector to choose one.
Copy PaceDetermines how quickly the storage array copies data. The array’s default is Slow (3); Protector defaults to Medium (8).
Use Consistency GroupAll P-VOLs in a replication are, by default, placed in the same consistency group to ensure consistency of data across all volumes. This option allows this behavior to be disabled.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Data Flow Wizard Hitachi Block Replication Configuration - Select Remote Path Group
GUID-FB64BB32-1CC8-42F8-A4E9-F2D46DFA3FC7-low.png
ControlDescription
Select Remote Path Group

Specifies the Remote Path Group to be used for the replication.

  • Automatically Selected - Allows Protector to automatically select a Remote Path Group
    NoteFor GAD it is recommended that the user specifies a group to avoid sharing it with other replications.
  • User Selected - The user specifies the Remote Path Group.
WARNINGYou cannot specify the "User Selected" option and select a path group with an ID of 0. To use a path group with an ID of 0, specify the "Automatically Selected" option. The path group with the lowest ID will be selected (which will be ID 0, if a path group with that ID exists).

Remote path groups are listed in the format:

Path Group Id: 0x51 Port Mappings: 5E <-> 3E

The arrow depicts the direction of the path, either left to right or bidirectional. The arrows can also have a line through them, indicating that the path is currently broken.

Only remote path groups that are suitable for the replication are displayed; for example, for GAD only bidirectional path groups are listed.

CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Configure Resource Group
GUID-1BDB502F-F51F-4A16-98A1-644DCDF759DC-low.png
ControlDescription
Configure Resource GroupSpecifies the resource group to be used for S-VOLs, in order to support snapshots and replications from VSM volumes (adding volumes to a VSM is performed by adding the volumes to the correct resource group).
NoteIf there are existing S-VOLs, then the resource group used by those will be selected. If the existing S-VOLs are in multiple resource groups or in a resource group that contradicts the user selection, then the operation will fail with an error. This setting should not be modified for existing replications.
  • Automatically Selected - Allows Protector to automatically select a resource group in the following order of priority:
    1. If there are existing S-VOLs, then the resource group used by those will be selected.
    2. The resource group used by the P-VOLs, if the replication is in-system and the P-VOLs are all in one resource group.
    3. Resource group 0.
    NoteIf existing S-VOLs are in multiple resource groups, then the operation will fail with an error.
  • User Selected - The user specifies the Resource Group.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Secondary Host Groups (GAD)
GUID-C8483ABE-4AB8-469B-81F9-8E9967EFD684-low.png
CautionIf replication S-VOLs are exposed to a host, the user is responsible for ensuring they are not in use during replication resynchronization. Failing to do so may result in a critical failure of the host.
NoteProtector analyses the LUN IDs of the P-VOLs in all host group paths and, if consistent and available, uses these LUN IDs for all S-VOL host group paths either during initial set-up, mounting, or when adding additional LUNs on demand.

To provide a graceful fallback:

  • If the P-VOL LUN IDs are not consistent in all host group paths, Protector will still use a consistent ID for S-VOL mappings, but these will not necessarily match any of the P-VOL LUN IDs.
  • If a P-VOL has a consistent ID in all host group paths, but this LUN ID is not available in all S-VOL host group paths, then Protector will choose a different, consistent LUN ID for the S-VOL for all host group paths.
  • When Protector is not able to match LUN IDs with those used by the P-VOLs and/or the S-VOLs, it chooses an unused LUN ID. To keep LUN IDs compatible with VMware and Hyper-V, Protector will first attempt to select an ID at or below 255. If this is not possible it will then attempt at or below 1024, then at or below 2048, and finally at or below the array maximum. A warning is displayed when an ID cannot be found in a range.

Some systems expect LUN ID 0 to be used only as a boot volume. Protector will therefore only use LUN ID 0 for the S-VOL host group paths if the P-VOL LUN ID is 0.

ControlDescription
Use Protector Provisioned Host GroupProtector will create a LUN path from each S-VOL it provisions in a placeholder host group. If this option is not selected, at least one Secondary Host Group must be specified below.
Enforce LUN ID MatchingFor environments where LUN ID consistency is mandatory, selecting this option will cause the replication to fail if:
  • The P-VOL does not have a consistent LUN ID in all host group paths.
  • The P-VOL LUN ID is not available in all S-VOL host group paths.
Secondary Host GroupsSpecify zero or more host groups that Protector will configure to provide access to the S-VOL(s) when configuring replication scenarios. If no host groups are specified here then Protector will place the S-VOL(s) in its dummy host group.

Click the Add Host Group button to insert another Host Group selection control.

Click the Remove button next to a Host Group selection control to delete it.

Note

If a LUN path to be created already exists, Protector will not attempt to add it again, or to change its ID.

The specified host groups must be in the same resource group as the secondary volumes.

For GAD replications, if the host group names and port IDs match between primary and secondary storage nodes, Protector will attempt to match the LUN IDs used for the S-VOLs with those of the respective P-VOLs. If this cannot be achieved then a warning will be logged and the next available LUN ID will be used.

TipUse this option to specify host groups that enable access to the S-VOL(s) when configuring GAD cross-path, multi-path, and other replication scenarios where the S-VOL(s) will need to be accessed (e.g. during failover).
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Naming Options
GUID-BD211B00-E04F-445B-A503-16D5470BCDB8-low.png
ControlDescription
Secondary Logical Device NameSpecifies how S-VOLs will be named:
  • Match Origin - The S-VOL will be given the same name as that used for the origin P-VOL (i.e. the left-most volume in the data flow).
  • Custom - The S-VOL will be named using the naming rule provided. The naming rule can consist of literal strings and/or one or more substitution variables listed. Click Display variables which can be used for the secondary LDEVs' name to view the available substitution variables:
    • %ORIGIN_SERIAL% - S/N of leftmost array in data flow. E.g. output string: 210613
    • %ORIGIN_LDEV_ID% - ID of leftmost LDEV in data flow. E.g. output string: 00:3A:98
    • %ORIGIN_LDEV_NAME% - name of leftmost LDEV in data flow.
    • %PRIMARY_SERIAL% - S/N of primary array in this operation. E.g. output string: 442302
    • %PRIMARY_LDEV_ID% - ID of primary LDEV in this operation. E.g. output string: 00:4C:EB
    • %PRIMARY_LDEV_NAME% - name of primary LDEV in this operation.
    • %SECONDARY_SERIAL% - S/N of secondary array in this operation. E.g. output string: 356323
    • %SECONDARY_LDEV_ID% - ID of secondary LDEV in this operation. E.g. output string: 01:F4:35
    • %CREATION_DATE% - date secondary LDEV was created by this operation. E.g. output string: 20180427
    • %CREATION_TIME% - time secondary LDEV was created by this operation. E.g. output string: 1130
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Replicate Configuration Wizard - Active-Active Remote Clone (Global-Active Device) - Summary
GUID-AD642A0F-D8BF-42E5-92D2-11A350732118-low.png

Shows a summary of the replication configuration specified by the user.

Hitachi Block Mount Configuration Wizard

This wizard is displayed when you assign a mount operation to a Hitachi Block storage node on a data flow.

Note
  • When mounting a snapshot that contains a mounted subdirectory, the subdirectory will be mounted as expected. However, the volume referenced by the subdirectory will also be mounted as a separate drive. Unmount will unmount both the expected and additional mounts.
  • The automated mount operation is not suitable for Oracle ASM. The disks are presented to the OS but must be mounted manually.
NoteOperating system specific behaviour:
OSNote
LinuxWhen mounting a Linux snapshot on a different Linux machine, the users and groups must have the same IDs as on the source machine in order for user and group names to be displayed correctly (see the sketch after this table).
SUSE LinuxSUSE Linux is not able to perform automated mount operations if hosted on VMware. (RHEL and OEL Linux work as expected).
AIX The system command importvg is invoked by Protector to mount snapshots at the user-specified location. importvg creates a directory for the user-specified location plus an empty directory corresponding to the original mount point. Neither of these directories is removed by Protector when the snapshot is eventually unmounted, although neither will contain any data.
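The Linux note above follows from the fact that file ownership is stored as numeric IDs; names are resolved by whichever host performs the lookup. The short Python sketch below (standard library only, illustrative rather than part of Protector) shows how a numeric UID or GID on a mounted snapshot resolves to whatever name that ID maps to on the mount host.

  # Illustrative only: shows why displayed owner names depend on the mount host's ID mappings.
  import os, pwd, grp

  def describe_owner(path):
      """Print a file's numeric UID/GID and the names the local host maps them to."""
      st = os.stat(path)
      try:
          user = pwd.getpwuid(st.st_uid).pw_name    # resolved from the local passwd database
      except KeyError:
          user = "<unknown uid %d>" % st.st_uid
      try:
          group = grp.getgrgid(st.st_gid).gr_name   # resolved from the local group database
      except KeyError:
          group = "<unknown gid %d>" % st.st_gid
      print("%s: uid=%d (%s), gid=%d (%s)" % (path, st.st_uid, user, st.st_gid, group))

  # If the snapshot was written on a host where UID 1001 was 'oracle' but the mount host
  # has no such UID, the name printed here will differ or be reported as unknown.
  describe_owner(".")   # replace "." with a path on the mounted snapshot
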
Mount Configuration Wizard - Mount Operation Type
GUID-E6840BD5-FFD9-4010-9A2D-F03874B0B221-low.png
ControlDescription
RepurposePerform the mount sequence for the repurposing scenario (refer to About the repurposing mount sequence for details).
NoteRepurpose is not valid for continuous replication; the rules compiler will issue an error if a continuous mover is used on the data flow in conjunction with this type of mount operation.
Proxy BackupPerform the mount sequence for the proxy backup scenario (refer to About the proxy backup mount sequence for details).
NoteA proxy backup of a live replication will generate a warning in the compiler to tell the user that their replication will be paused until the proxy backup is complete.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Mount Configuration Wizard - Mount Level
GUID-C0C701FD-5B25-4BA8-AFBB-0440A8586F5D-low.png
ControlDescription
SANAdds the replication to a Host Group.
HostAdds the replication to a Host Group and confirms that it is available from the specified Host.
OSAdds the replication to a Host Group and mounts it on the specified Host's operating system.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Mount Configuration Wizard - Select Host Group (SAN level mount only)
GUID-7D6C44FA-4C16-4A0F-BB83-22705B93EF03-low.png
ControlDescription
Host GroupManually specify or select a host group to use to expose a snapshot or replication.
NoteWhen exposing an LDEV, the host group specified must be in the same resource group as the secondary volumes.
Add Host GroupClick this button to add host groups when specifying a multi-path mount operation.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
Mount Configuration Wizard - Host Group (Host and OS level mount only)
GUID-E27544E3-EF1D-45C5-8FFA-8A76EF7C7619-low.png
ControlDescription
Automatically discoverProtector will automatically select a host group to use to expose the snapshot or replication.
SelectedThe user must specify one or more host groups to use to expose the snapshot or replication.
Select a Host GroupManually specify or select a host group to use to expose a snapshot or replication.
NoteWhen exposing an LDEV, the host group specified must be in the same resource group as the secondary volumes.
Add Host GroupClick this button to add host groups when specifying a multi-path mount operation.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Mount Configuration Wizard - Select Host (Host and OS level mount only)
GUID-2E382533-EC1F-48C2-B163-BC5D80639A3E-low.png
ControlDescription
OS HostSpecify the machine to mount to or expose to.
NoteUnless the user selects a host group, the machine where the volume is to be mounted must have an existing volume on the same storage device. If there is no connection between the mount host and the block storage device then Protector will fail the mount operation after a timeout of 30 minutes.
VMware hostExpose the volumes to the specified VMware host to enable them to be mounted to the VM.
NoteExposing using a VMware host requires that a VMware ESXi/vCenter node be configured.
DatastoreSpecifies a destination datastore when mounting to a VMware virtual machine which is part of a cluster, in which case the default datastore may not be a suitable place to save the RDM mount information. If the datastore field is left blank then mount information is saved alongside the VM.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
NextTakes the user to the next screen in the wizard.
Mount Configuration Wizard - Specify mount location (OS level mount only)
GUID-BE8C28AF-C274-4F5D-8494-BA5DAB8F41E3-low.png
ControlDescription
OriginalThe replication is mounted at its original location.
NoteMounting at the original location will fail if there is already a volume mounted at that location.
Drive starting at letterWhen mounting a replication that contains multiple volumes, the first volume is mounted at the specified drive letter and subsequent drive letters are used for each additional volume (see the sketch after this table).
DirectoryWhen mounting a replication that contains multiple volumes, each volume will be assigned a separate subdirectory. Click Browse to view the drives and directories on the selected host. To create a new directory, type in the required path.
NoteProtector does not check to make sure the directory selected as the mount point is empty. This means it is possible to mount a snapshot inside or even over the top of another mounted volume. This should be avoided.
CancelDiscards all changes and reverts to the previous page.
PreviousTakes the user to the previous screen in the wizard.
FinishCommits the new changes. Pages currently open in other tabs and windows will need to be reloaded before the changes are seen by the user.
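To make the drive letter behaviour concrete, the sketch below assigns consecutive drive letters to a multi-volume replication starting at a chosen letter. It is illustrative only and does not model how Protector behaves when a letter in the sequence is already in use.

  # Illustrative only: consecutive drive-letter assignment starting at a chosen letter.
  import string

  def assign_drive_letters(volumes, start_letter="E"):
      """Map each volume to consecutive drive letters beginning at start_letter."""
      start = string.ascii_uppercase.index(start_letter.upper())
      letters = string.ascii_uppercase[start:start + len(volumes)]
      if len(letters) < len(volumes):
          raise ValueError("Not enough drive letters after %s:" % start_letter.upper())
      return {vol: letter + ":" for vol, letter in zip(volumes, letters)}

  # A replication containing three volumes, mounted starting at drive E:.
  print(assign_drive_letters(["vol0", "vol1", "vol2"], start_letter="E"))
  # {'vol0': 'E:', 'vol1': 'F:', 'vol2': 'G:'}
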

Data Flow Details

This page displays the details of a Data Flow and enables you to launch the wizard to edit, activate, deactivate, and change access permissions.

Data Flow Details
GUID-4059377A-8A55-4B8A-BCDB-5C54C08E06AC-low.png
ControlDescription
GUID-2DB31664-7FB9-441F-8595-06A8E5A178EF-low.png EditLaunches the Data Flow Wizard to enable you to edit the data flow.
GUID-E5F1CBC8-471E-4699-9E6D-E16DF64C3EA3-low.png TagModifies the tags of an existing object from either the inventory screen or the details screen of the object.
GUID-CF9E13BB-BA11-404F-AB2E-90527141B614-low.png CompileDisplays the Activate Data Flow Dialog and attempts to compile the rules for the data flow. If compilation is successful then the rules can be activated.
GUID-E40CF703-AA92-4BE5-89B4-0D7932D703A1-low.png DeactivateDeactivates the selected data flow and removes its rules.
CautionIf the deactivated data flow contains storage hardware based operations, this will remove the pairing relationships.
GUID-6B363DCE-3699-4730-A0EE-E3237A04681E-low.png Edit PermissionsDisplays the Access Control Permissions Inventory to enable you to view and edit the data flow's permissions.
Data Flow CanvasShows the Data Flow diagram in read-only mode.
Applied PoliciesThe area to the right of the workspace lists all the policies that have been applied in the data flow. Click the node or mover of interest to view the applied policies and mover settings.
SummaryThe area to the right of the workspace lists the Status of the data flow.
StatusThe Status of the data flow can be Active or Inactive.
Activated If the data flow is Active then Activated shows the date and time the data flow was activated.
Modified Since Activation If the data flow is Active then Modified Since Activation shows whether the data flow has been modified since it was activated; this can be either Yes or No. If the value is Yes, the currently distributed rules differ from those shown in the data flow.
MonitorDisplays the Monitor Details screen for this data flow.

Data Flow Tasks

This section describes data flow configuration tasks that users will perform with Ops Center Protector.

Refer to Data Protection Workflows for detailed descriptions of specific Repository and Hitachi Block data protection scenarios.

For further information, refer to:

How to create a data flow

Before you begin

Ensure that the policies that you want to assign have been defined, see How to create a policy.

The following procedure describes how to create a simple one-to-one data flow. More complex data flows involving one-to-many and cascaded topologies can be constructed by following the same general approach:

Procedure

  1. Click the Data Flows link on the Navigation Sidebar to open the Data Flows Inventory.

  2. Click the Create new item tile to open the Data Flow Wizard.

  3. Enter a Name and Description for the data flow, then click Next.

    The next wizard page is displayed with a blank workspace.
  4. Drag a source node from the Nodes or Node Groups list onto the data flow workspace.

    The node is displayed on the workspace with a grey box around it showing it is selected. The available Policies appear next to the workspace. If a policy contains operations that can be performed locally to the node, without the need for a separate destination node (e.g. local snapshot operations), then these will be displayed directly below the policy.
  5. Select the policies and/or local operations that you want to assign to the source node by checking all those that apply in the Policies listed to the right of the workspace.

    • If a policy is selected that requires a destination node and corresponding operation assignment to complete it, then a warning triangle icon GUID-533055E2-4720-4869-B9E3-E31DD8F336D0-low.png will appear next to the source node, indicating that the node has an incomplete policy assigned to it. Completing the policy assignment is described in the steps that follow.
    • If an operation is selected, then an operation properties dialog will be displayed. Enter the required operation properties in the dialog and click OK.

      If you choose not to define the operation properties now (by clicking Cancel), they can be configured later by clicking the Configure Operation Properties button displayed below the respective operation in the Policies area to the right of the data flow workspace.

      After the operation properties have been applied, they can be edited by clicking the Edit Operation button in the operation summary box next to the operation name.

      If a node has a snapshot operation assigned to it, a snapshot icon GUID-0BC0CC66-7F27-4E79-BE6D-B1796138FF4E-low.png will appear in the bottom right corner of the node.

  6. Now, place and connect the destination node. There are two methods for connecting nodes:

    • Drag the destination node from the Nodes list, passing over the source node that you want it to connect to, then drop the destination node where you want to place it. A connection is created between the destination and source node. Now select the destination node.
    • Place the destination node on the workspace then refer to How to connect nodes on a data flow.
    The destination node is displayed on the workspace with a grey box around it showing it is selected. The available Policies appear to the right of the workspace. If a policy contains operations that can be performed by the destination node (e.g. remote replication operations), then these will be displayed below the policy.
  7. With the destination node selected, choose the operations that you want to assign to it by checking all those that apply in the Policies area to the right of the workspace. Note that the policy checkbox cannot be selected by the user. When an operation is selected, an operation properties dialog will be displayed. Enter the required properties in the dialog and click OK.

    If an operation is selected that completes a policy previously selected on the source node, then the warning triangle icon will be removed from the source node, indicating that the node now has a completed policy assigned to it. The image below shows a source node with a remote policy assigned (myReplication) that is completely specified (the operation Mirror (Replicate) has been assigned to the destination node). A local operation (mySnapshot) has also been assigned to the source node.

    Source node with a remote policy assigned (completed assignment)
    GUID-2EC087E5-80B3-464B-8ADB-4BD4242BC03B-low.png

  8. Select the connection between the source and destination nodes to display the Routed Policies and Mover Settings to the right of the workspace.

    1. The Routed Policies area lists the policies being routed along the selected connector.

    2. Select the Type of mover to be used in the Mover Settings.

      NoteThe Data Flow Wizard prevents only some incorrect mover and operation combinations from being constructed; the Rules Compiler will, however, generate warnings or errors for incorrect combinations. Ensure the correct mover type is used with a given operation when creating data flows.
    3. Optionally enter a Label for the connection.

    4. Turn network compression on or off with Enable network data compression.

    5. For Host Based policies only, click Bandwidth Settings to open the Mover Bandwidth Settings Dialog, then set the times and days for Default Speed, High Speed and Low Speed network utilization by clicking the required cells (an illustrative schedule sketch follows this procedure).

  9. When you have finished drawing the data flow and assigning policies, click Finish.
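As an illustration of the kind of weekly schedule the Mover Bandwidth Settings Dialog describes, the sketch below models a grid of day and hour cells, each set to the Default Speed, High Speed or Low Speed tier. The data structure and the chosen hours are hypothetical; they are not Protector's internal representation of the schedule.

  # Illustrative only: a hypothetical weekly bandwidth throttling schedule.
  DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

  def build_schedule(default="Default Speed"):
      """Create a 7-day x 24-hour grid with every cell set to the default tier."""
      return {day: [default] * 24 for day in DAYS}

  def set_tier(schedule, days, hours, tier):
      """Set the given tier for the selected days and hours (like clicking cells in the dialog)."""
      for day in days:
          for hour in hours:
              schedule[day][hour] = tier

  schedule = build_schedule()
  # Throttle to Low Speed during weekday business hours so backups do not degrade the network...
  set_tier(schedule, DAYS[:5], range(8, 18), "Low Speed")
  # ...and allow High Speed overnight every day.
  set_tier(schedule, DAYS, list(range(0, 6)) + [22, 23], "High Speed")

  print(schedule["Mon"][9])    # Low Speed
  print(schedule["Sat"][23])   # High Speed
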

How to connect nodes on a data flow

Before you begin

Create a data flow as described in How to create a data flow.

Nodes can be connected on a data flow as follows:

Procedure

  1. Drop the two nodes that are to be connected on the data flow canvas.

  2. Select the node where data will flow from.

  3. Click the Connect To button in the top left of the canvas.

    A dashed line will appear connected to the selected node at one end and the mouse cursor at the other.
  4. Move the mouse cursor to the node where data will flow to and click to connect the two nodes.

    A line will be drawn from the first node to the second node, with an arrowhead indicating the direction of data flow.
  5. If the mover is not already selected, click on it to view and set the Routed Policies and Mover Settings.

How to apply a policy to nodes on a data flow

Before you begin

Create a data flow as described in How to create a data flow.

Policies are applied to nodes on a data flow as follows:

Procedure

  1. Select the source node on the data flow canvas.

  2. In the Policies area to the right of the canvas select each policy that needs to be applied to the source node.

  3. Click on the mover that routes the policy to the destination node to view the Routed Policies.

  4. Select the destination node on the data flow canvas.

  5. In the Policies area to the right of the canvas select each operation that needs to be applied to the destination node.

    Only the individual operations can be applied, not the policy itself. An Operation Properties dialog appropriate to the destination node and operation type will be displayed. For example, the Hitachi Block Replication Configuration Wizard is displayed when applying a Replication operation to a Hitachi Block node.
  6. Configure the operation properties as required then click OK.

  7. Finally click Finish to progress to the next page of the Data Flow Wizard.

How to activate a data flow

Before you begin

Ensure the data flows that you want to compile have been correctly defined (see How to create a data flow), that the required polices have been assigned and that no significant warning icons1 are displayed on nodes in the data flow diagrams.

1. Not all policy operations need to be applied in every case. Warning icons may therefore be present, but they may indicate a warning rather than an error.

To compile a data flow and activate the resulting rules:

Procedure

  1. Click the Data Flows link on the Navigation Sidebar to open the Data Flows Inventory.

  2. Select the data flows that are to be compiled by clicking the selection icon in the top left corner of the corresponding tiles.

    Although it is possible to compile multiple data flows in one go, it may be easier to initially compile one at a time and rectify any compilation errors, before compiling all data flows and distributing rules in one operation.
    NoteActivate data flows in batches not exceeding 20 data flows at a time. Activating more than this simultaneously can result in longer activation times.
  3. Click Activate above the inventory.

    The Activate Data Flow Dialog is displayed and the selected data flow(s) start compiling. After a short time the results of the compilation process are displayed with a message indicating that the compilation process succeeded or failed.
  4. If the compilation succeeds then click Activate to update the rules on the affected nodes.

  5. If the compilation fails then the Activate button will remain disabled. Examine the compiler output to locate the cause of the failure, rectify the data flow and/or policy then recompile.

 
