
About cloud accounts and destinations

To use Data Migrator to Cloud, you must first configure at least one account that contains the following information:

  • The cloud provider, currently one of: Hitachi Content Platform (HCP), Hitachi Content Platform S3 (HCP S3), Amazon S3, S3 Cloud Object Storage, or Microsoft Azure.
  • The credentials of a user with read/write permissions to the target.
  • A destination, which is a location on the cloud where migrated files are stored. The destination must exist before you use Data Migrator to Cloud; configuration fails if the destination cannot be validated.

    Multiple accounts are supported, and multiple file system migration policies can use the same account.

Cloud providers

Data Migrator to Cloud supports multiple cloud providers.

The following list shows, for each cloud provider, the information required when adding a cloud account and destination.

HCP

  • Server Name: Fully qualified domain name of the HCP namespace for the account credentials.
  • Destination Location: The folder path. This field can be empty.
  • User Credentials: User name of the Data Access Account.
  • Server Credentials: Password of the Data Access Account, which must have read/write/delete/purge/search permissions.

HCP (S3)

  • Server Name: Fully qualified domain name of the HCP namespace for the account credentials.
  • Destination Location: The folder path. This field can be empty.
  • User Credentials: User name of the Data Access Account.
  • Server Credentials: Password of the Data Access Account, which must have read/write/delete/purge/search permissions.

Amazon S3

  • Server Name: Auto-populates with aws-amazon.com.
  • Destination Location: The bucket, with or without a subfolder.
  • User Credentials: Access key.
  • Server Credentials: Security Credential Key.
  • References: https://console.aws.amazon.com/iam/

Microsoft Azure

  • Server Name: Auto-populates with azure.microsoft.com.
  • Destination Location: The bucket, with or without a subfolder.
  • User Credentials: Name of storage account.
  • Server Credentials: Primary or Secondary Access Key.
  • References: https://azure.microsoft.com

S3 Cloud Object Storage

  • Server Name: User-provided endpoint name (for IBM, see https://ibm-public-cos.github.io/crs-docs/endpoints).
  • Destination Location: The bucket, with or without a subfolder.
  • User Credentials: Access key.
  • Server Credentials: Security Credential Key.
  • References: For IBM, see https://control.softlayer.com/

Using the Hitachi Content Platform cloud providers

The NAS server provides two types of Hitachi cloud provider.

The options are:

  • Hitachi Content Platform
  • Hitachi Content Platform S3

To ensure optimal configuration, check that:

  • the account contains the fully qualified domain name of the HCP namespace. For HCP S3, the namespace must also have an assigned owner.
  • the user permissions are sufficient. The required Data Access Permissions for the tenant-level user include Read, Write, Delete, Purge, and Search for the given namespace. Tenant or system administrator privileges are not needed.
  • the HTTPS protocol is enabled.
  • the HTTP protocol is enabled if encryption in transit is not desired (HTTP gives better performance).
  • the default retention class is disabled.

In addition to user permissions and retention class, there are extra attributes to set on an HCP S3 server:

  • the tenant-level user needs the additional ‘Privileged’ Data Access Permission.
  • the tenant-level user must be the owner of the namespace.
  • ACLs must be enabled.
  • HCP S3 authenticated access requires HCP client certificates to be installed and the HS3 API to be enabled.
  • the ‘optimization for cloud protocols only’ setting must be enabled.
  • MAPI (the Management API) must be enabled for the tenant.

Selecting a Hitachi Content Platform cloud provider

Use HCP S3 if:

  • You are using S Series storage, especially if you are using encryption or compression.
  • You are using HCP version 8 or higher.

Use HCP if:

  • HCP is not configured to store data on S Series storage.
  • The HTTP protocol is used, to leverage the zero copy feature.
  • You are using HCP versions earlier than 8.

The main difference between the two providers is the method used for file uploads and downloads:

A Hitachi Content Platform server can store data on S Series storage in both encrypted and unencrypted formats. When the NAS server requests an encrypted (or compressed) file from S Series storage through HCP, it makes HTTP ranged GET requests in 500 KB chunks. To decompress or decrypt a large file in order to serve these ranged requests, HCP has to read the entire file multiple times, which can impact performance.
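
For illustration only, the following minimal Python sketch shows what such a ranged GET looks like. The namespace URL and object path are hypothetical placeholders, not part of the product:

    import requests  # third-party HTTP client, used here purely for illustration

    # Hypothetical HCP namespace REST URL and object path.
    URL = "https://namespace.tenant.hcp.example.com/rest/archive/file.bin"
    CHUNK = 500 * 1024  # the 500 KB range size described above

    def ranged_get(url, offset, size):
        # Ask for a single byte range; a server that supports ranges
        # answers with 206 Partial Content.
        headers = {"Range": "bytes=%d-%d" % (offset, offset + size - 1)}
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        return resp.content

    first_chunk = ranged_get(URL, 0, CHUNK)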

The HCP S3 cloud provider uses multi-part upload functionality. This means that each chunk of data is encrypted and stored separately on the S Series storage. When the NAS server requests an encrypted or compressed file, HCP only needs to retrieve the relevant chunks. This option increases performance when using encryption or compression on HCP S Series storage with Data Migrator to Cloud.
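
As a hedged sketch of the client-side equivalent, a multi-part upload against an S3-compatible endpoint can be driven with boto3. The endpoint, bucket, and credentials below are placeholders:

    import boto3
    from boto3.s3.transfer import TransferConfig

    # Hypothetical HCP S3 endpoint and credentials.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://tenant.hcp.example.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Upload anything above 5 MB as separate 5 MB parts, so the server
    # can store and later serve each part as an independent chunk.
    config = TransferConfig(multipart_threshold=5 * 1024 * 1024,
                            multipart_chunksize=5 * 1024 * 1024)

    s3.upload_file("bigfile.bin", "my-bucket", "bigfile.bin", Config=config)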

You can select a provider when creating a new cloud account in the NAS Manager. Alternatively, if you already have a Hitachi Content Platform cloud provider configured, you can use the relevant Cloud Account Details page to switch between the two providers. Note that files uploaded with the HCP provider and downloaded with the HCP S3 provider cannot leverage the key benefits of the S3 feature.

Establishing credentials for Amazon S3

Before adding an Amazon S3 cloud account on the NAS server, you must create an Identity and Access Management (IAM) account and generate an access key and a secret key.

Procedure

  1. Navigate to https://console.aws.amazon.com/iam/ and log in with your user name and password.

  2. When creating a user, generate the access and secret keys. Refer to http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html for more information.

  3. Save the access keys to your local machine. You will need this information when you create a cloud account on the NAS server.

  4. Open the page for the newly added IAM user account.

  5. Attach the user policy and select Amazon S3 Full Access (you may have to scroll down the page).

  6. Apply the policy.

  7. When you create an Amazon cloud account on the NAS server, provide the new account details and access and secret keys.
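
Optionally, before entering the keys in NAS Manager, you can sanity-check them with any S3 client. The following boto3 sketch (the bucket name is hypothetical) is one way to do it:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",         # access key from step 2
        aws_secret_access_key="SECRET_KEY",  # secret key from step 2
    )

    try:
        # head_bucket succeeds only if the keys can reach the target bucket.
        s3.head_bucket(Bucket="my-migration-bucket")  # hypothetical bucket
        print("Credentials and bucket look OK")
    except ClientError as err:
        print("Check the keys or the attached policy:", err)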

Establishing a Microsoft Azure cloud account

Before adding a Microsoft Azure cloud account on the NAS server, you must create a Microsoft Azure storage account and obtain its Primary or Secondary Access Key.

Procedure

  1. Navigate to https://azure.microsoft.com.

  2. Log in with your user name and password.

  3. Create a new storage account.

  4. Obtain the Primary Access Key and Secondary Access Key for the account. See the Microsoft Azure documentation for details.

  5. When you create a Microsoft Azure cloud account on the NAS server, provide the storage account details and the Primary or Secondary Access Key.
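
If you want to confirm the account name and key before entering them, the following sketch uses the azure-storage-blob Python SDK (not part of the product; all values are placeholders):

    from azure.storage.blob import BlobServiceClient

    ACCOUNT = "mystorageaccount"      # storage account from step 3
    KEY = "PRIMARY_OR_SECONDARY_KEY"  # access key from step 4

    service = BlobServiceClient(
        account_url="https://%s.blob.core.windows.net" % ACCOUNT,
        credential=KEY,
    )

    # Listing containers fails immediately if the name or key is wrong.
    for container in service.list_containers():
        print(container.name)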

Establishing an S3 Cloud Object Storage account

Before adding an S3 Cloud Object Storage account to the NAS server, you must create an S3 storage account and add access and secret keys. This information is required as part of the NAS server cloud account process.

Note: The following procedure applies only to IBM Cloud Object Storage.

Procedure

  1. Navigate to https://cloud.ibm.com/ and log in with your username and password.

  2. Create a new storage account and ensure that you create access and secret keys for the user. See the S3 Cloud Object Storage help and documentation for details.

  3. Create a new bucket to use as a cloud destination. See the S3 Cloud Object Storage help and documentation for details.

  4. Store the user, key and bucket details for configuring the NAS server.
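
As with the other providers, a quick hedged check of the endpoint, keys, and bucket is possible with any S3-compatible client, for example boto3. All values below are placeholders; use the endpoint for your region:

    import boto3

    cos = boto3.client(
        "s3",
        endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # A successful head_bucket confirms that the endpoint and keys can
    # reach the bucket created as the cloud destination.
    cos.head_bucket(Bucket="my-cos-bucket")
    print("Endpoint, keys and bucket verified")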

Importing a web server certificate

The NAS server provides some industry standard Certificate Authority certificates in its certificate store. You can upload a custom certificate if you have your own Certificate Authority or if you use self-signed server certificates.

The NAS server provides the following commands to manage custom certificates:

  • ca-certificate-show - displays all custom certificates currently installed on the NAS server.
  • ca-certificate-import - adds a custom X.509 certificate (contained in a PEM formatted file) to the NAS server certificate store.
  • ca-certificate-delete - removes a custom certificate from the NAS server certificate store.
  • ca-certificate-delete-all - removes all custom certificates from the NAS server certificate store.

See the command man pages for further details.

HCP certificates can be downloaded from the HCP System Management Console. See the HCP documentation for details.
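
If you only need the certificate a server actually presents (for example, a self-signed certificate), one alternative sketch, using only the Python standard library and a placeholder host name, is to fetch it over TLS and save it as PEM for ca-certificate-import:

    import ssl

    HOST = "tenant.hcp.example.com"  # hypothetical HCP front-end name

    # get_server_certificate returns the server's leaf certificate as PEM.
    pem = ssl.get_server_certificate((HOST, 443))
    with open("hcp-server-cert.pem", "w") as f:
        f.write(pem)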

Importing a certificate

Procedure

  1. Save the certificate to your local machine.

  2. Open a command prompt.

  3. Enter the following command:

    ssc <NAS server IP address> ca-certificate-import --path <path to certificate on local machine>

    If the path name contains a character that has special meaning to the CLI (for example, an embedded space), enclose the path in quotes (").

 
