
Cloud data migration and replication considerations

The following sections describe important data migration and replication considerations.

Amazon and file-based replication

You may decide to deploy a replicated environment to protect primary and archived data against site-wide failures. When using file replication in conjunction with HCP replication, special configuration is required; the configuration depends on the HNAS and HCP replication scenario.

Note: To take advantage of the new enhancements to HCP as a target, you must recall all the data and then set up your schedules and policies again using the new Data Migrator to Cloud.

Consider the following three scenarios when using Data Migrator to Cloud to HCP along with file replication and HCP replication:

[Figure: Data Migrator to Cloud to HCP replication scenarios]

Caution: Care should be taken when configuring systems with a single migration destination for both replication source and target (known as a triangular arrangement). Such arrangements should not be considered a valid solution in any disaster recovery (DR) or backup scenario, as there is only a single copy of the user data pointed to by XVLs at each end of the replication policy.
Scenario 1: Illustrates replicating file systems between clusters, both of which point to a single HCP system, presumably hosted elsewhere; however, it is possible that the primary system and the HCP system are in the same location.
Caution: In this scenario, both clusters/entities map to the same HCP system. With file replication it is possible to access the secondary file system(s) at any time. It is strongly recommended to keep the destination file system syslocked to avoid unintentional deletion of data on the HCP system.
Scenario 2: Illustrates replicating file systems between clusters, where each cluster points to a local HCP system. The HCP systems replicate migrated data and also perform a DNS failover so that the secondary HCP maintains the same name resolution as the primary system.
Note: In this scenario, HCP uses a DNS failover capability. Due to the way the HCP failover functionality operates, the secondary will also point to the primary HCP. With file replication it is possible to access the secondary file system(s) at any time. It is strongly recommended to keep the destination file system syslocked to avoid unintentional deletion of data on the HCP system.
Scenario 3: Illustrates replicating file systems between clusters, where each cluster points to a local HCP system. The HCP systems replicate migrated data and maintain their own unique name resolution.

Scenario 3

For scenario 3, the cloud account must be configured as follows:

  1. Create a "dummy" namespace on the secondary HCP system with the same namespace name and tenant name as on the primary system. The HCP system and the domain will then be different.
  2. Create a namespace data access user with read-write permissions on the "dummy" namespace.
  3. Configure a cloud account to this namespace, which will confirm the read-write permissions.
  4. Remove the "dummy" namespace and then configure replication in HCP to create a replica namespace on the secondary system. Because a replica is read-only until a failover, the read-write permissions check performed by the cloud account creation command will fail unless this "dummy" is created first.

Scenarios 1 and 2

For scenarios 1 and 2, the cloud account creation command must specify the namespace and data access account of the primary HCP system.

All Scenarios

For all scenarios, the cloud destination must be configured as follows:

  1. The destination path and UUID must be the same at the secondary and the primary because the stub contents will be replicated between clusters and the stub contains the path and UUID of the destination. If the path or UUID changes between clusters, Data Migrator to Cloud cannot locate migrated files after a failover.
  2. Identify the UUID of the cloud destination object in the primary file system. This can be performed using the BOS CLI with the following command:
    • migration-cloud-destination-list <destination-name>
      • "Destination ID" is the UUID of this destination
      • "Path at destination" is the path
  3. On the secondary file system, configure the cloud destination object using the BOS CLI (not the SMU), specifying the UUID with the -u option. For example:
    • migration-cloud-destination-create <destination_name> -a <account_name> -p <path at destination> -t yes -u <UUID obtained above>
    • The -p option should specify the path that was created at the primary.
    • The -u option is the UUID of the destination at the primary.
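Putting steps 2 and 3 together, a minimal sketch of the sequence follows. The destination name, account name, path, and UUID are hypothetical, and the listing output format may differ by release:

      # On the primary: look up the destination's UUID and path
      migration-cloud-destination-list hcp-dest
      #   Destination ID:       12345678-90ab-cdef-1234-567890abcdef
      #   Path at destination:  /migrated

      # On the secondary: re-create the destination with the same path and UUID
      migration-cloud-destination-create hcp-dest -a hcp-account -p /migrated -t yes -u 12345678-90ab-cdef-1234-567890abcdef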

Cloud Objects

All other cloud objects (Data Migration paths, rules, policies, and schedules) are configured the same as in a non-replicated environment.

  • Data migration paths are not copied by file-based replication. As with Data Migrator, the XVLs will work correctly only if the cloud path exists on the replication target. The path must be created prior to the start of replication.
  • Data Migrator policies and schedules are not copied with file-based replication. You must manually re-create them on the replication target to support continuing migration to the cloud.
  • For the cloud, you must create the replication rule (navigate to Home > Data Protection > File Replication Rules), using the values below instead of the default settings. This ensures that replication copies the migration links and allows access to the migrated data. Make sure the replication rule is correctly specified in the replication policy.
    • Migrated File Remigration = Enabled
    • External Migration Links = Re-create link

      See the Replication and Disaster Recovery Administration Guide for more information.

Finally, to preserve bandwidth when replicating data between HNAS systems, instruct file replication to replicate only the stubs and not the actual data, which will be replicated by HCP itself. To do this, perform the following steps:

  • When creating a file system replication rule, set the "External Migration Links" setting to "re-create links." On the BOS CLI, run the following commands:
    • evssel <evs number of the file system>
    • migration-recreate-links-mode always-recreate-links
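For example, assuming the file system is hosted on EVS 2 (a hypothetical value):

      evssel 2
      migration-recreate-links-mode always-recreate-links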

Multi-site HCP and file-based replication

  • The same considerations described in the Amazon and file-based replication section apply to multi-site HCP and file-based replication.
  • The replication of the migrated data from HCP to HCP must be performed by HCP. It is recommended that the server name and credentials be the same for both the source and the target. If this is not possible, it can be done at the cloud account and destination level.

The path as replicated will point to the original cloud destination, and can be redefined if a different destination is desired. Data migration to the cloud will not begin until after disaster recovery occurs.

Caution: If both the source and destination point to the same HCP, the destination file system should be syslocked to prevent unintentional deletion of data.

Object-based replication

When using object replication, the default behavior is to 'rehydrate' data that has been migrated using either external migration or the Data Migrator to Cloud (DM2C) feature. Files that have been converted to External Volume Links (XVLs) are copied in full to the replication target.

The NAS server is also able to copy XVLs as links without having to re-inflate them at the destination. The configuration of this functionality is discussed in detail in the Replication and Disaster Recovery Administration Guide section "Transferring XVLs as links during object replication".

NDMP backup

Hitachi NAS NDMP offers several variables to control how migrated (tiered) data is handled during backup and restore. These variables can typically be controlled through the backup application, and the way in which they are called is specific to each backup platform. For example, in NetBackup, environment variables can be set within the backup selections list by specifying one or more SET directives in a stanza. Consult the documentation of the backup application for specific guidance.

There are two main NDMP variables that control behavior of migrated files:

  • NDMP_BLUEARC_EXCLUDE_MIGRATED: Controls how an NDMP backup interacts with CVLs (files that have been migrated internally, for example, from SAS to NL-SAS). Valid values are y and n. If set to y, the backup or copy will not include files whose data has been migrated to another volume. The default setting is n, meaning that migrated files and their data will be backed up as normal files. The backup/copy retains the information that these files had originally been migrated.
  • NDMP_BLUEARC_EXTERNAL_LINKS: Controls how an NDMP backup interacts with XVLs (files that have been migrated to an external storage tier or cloud provider). Valid values are remigrate, ignore, and recreate_link.
    • If set to remigrate, externally migrated files and their data will be backed up as normal files. On recovery the file will be restored and then an attempt will be made to remigrate the file to external storage again.
    • If set to ignore, the backup or copy will not include files whose data has been migrated externally.
    • If set to recreate_link, the backup or copy will include details of the link but none of the data contents. On recovery an attempt will be made to recreate the link to an existing file on the external storage system.
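As an illustration, a NetBackup backup selections stanza that preserves migration links rather than rehydrating the data might look like the following; the backup path is hypothetical, and directive placement varies by backup platform:

      SET NDMP_BLUEARC_EXTERNAL_LINKS=recreate_link
      SET NDMP_BLUEARC_EXCLUDE_MIGRATED=n
      /fs1/data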

For platforms such as TSM that cannot directly manipulate NDMP variables, the ndmp-option CLI command provides the backup_ignore_external_links option to allow the backup platform to ignore files migrated to external storage tiers.
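A minimal sketch of enabling this option on the BOS CLI follows. The argument form shown is an assumption; consult the NDMP Backup Administrator Guide for the exact syntax in your release:

      # Assumed invocation; verify against the CLI reference for your release
      ndmp-option backup_ignore_external_links on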

For further details please consult the NDMP Backup Administrator Guide.

Note: If the xvl-auto-recall-on-read environment variable is enabled, an NDMP job will not cause the migrated files to be recalled.

Virtual Server Security

The Virtual Secure Servers feature is compatible with Data Migrator to Cloud, provided the following requirements are met:

  • The cloud target can be resolved by a DNS server configured in the Global Context.
  • A route from the aggregate ports to the cloud provider server (HCP, HCP S3, Amazon S3, S3 Cloud Object Storage, or Azure) exists on all nodes.

Multi-tenancy

Multi-tenancy is not supported with Data Migrator to Cloud.

Other configurations

Other configurations may be possible. If your environment differs from the scenarios described above, contact customer support or your Global Solutions and Services representative.

 
