
Storage for HCP systems


An HCP system includes multiple nodes that are networked together, where each node is either an individual server, a blade in a blade server, or a virtual machine. Each physical node can have multiple internal drives and/or can connect to SAN storage. Each virtual node emulates a server that has only internal drives.

The physical storage that’s managed by the nodes in the HCP system is called primary storage. By default, primary storage consists entirely of running storage, which is storage on continuously spinning disks. However, an HCP SAIN system can be configured to use SAN storage that includes both running storage and spindown storage, which is storage on disks that can be spun up or spun down as needed. If primary spindown storage is enabled on an HCP SAIN system, you can configure HCP to use that storage for tiering purposes.

You can also add S Series Nodes to your HCP system. These nodes can be used as an alternative to primary running storage or for tiering purposes. The number of objects you can write to an S Series Node is limited to the total storage capacity of the node. If you want more storage capacity, you need to purchase more S Series Nodes. The HCP system communicates with the S Series Nodes through the S3 compatible and management APIs.

You can also configure any HCP system to use extended storage, which is additional storage that’s managed by devices outside of the HCP system, for tiering purposes.

You can configure HCP to access and store object data on these different types of extended storage:

NFS — Volumes that are accessed on physical storage devices using NFS mount points

Amazon S3 — Cloud storage that’s accessed using an Amazon Web Services user account

Google Cloud — Cloud storage that’s accessed using a Google Cloud Platform user account

Microsoft Azure — Cloud storage that’s accessed using a Microsoft Azure user account

S3 compatible — Any physical storage device or cloud storage service that’s accessed using a protocol that’s compatible with the Amazon S3 access protocol


Important: Extended storage is intended to increase the amount of storage that’s available to HCP. Extended storage does not function as backup storage. You should secure, back up, and monitor the health and availability of each extended storage device and cloud storage service that you use to store data in an HCP repository.

Unless you are writing directly to S Series storage (see Choose the ingest tier), HCP initially stores each object in a repository on primary running storage. By default, throughout the lifecycle of an object, HCP continues to store all copies of the data and metadata for that object only on primary running storage. However, you can configure HCP to offload object content from primary running storage and store that content on primary spindown storage (if it’s available), on S Series storage, or on any of the supported types of extended storage that you have configured HCP to access and use.

Using primary spindown storage to store object content that’s accessed infrequently saves energy, thereby reducing the cost of storage.


Notes: 

While all copies of the data, custom metadata, ACL, and secondary metadata for an object can be moved onto primary spindown storage, all copies of the primary metadata for an object must always remain on primary running storage.

For information about how HCP creates, manages, and uses copies of the data, primary metadata, secondary metadata, and ACL for each object in an HCP repository, see Metadata storage.

While all the data for an object can be written directly to S Series storage or later moved off primary running storage and stored on S Series or extended storage, at least one copy of the system metadata, custom metadata, and ACL for that object must always remain on primary running storage.

HCP moves object content from primary running storage onto one or more other types of storage according to rules specified in service plans.

Each namespace has a service plan that defines one or more tiers of storage that can be used to store objects in that namespace. For each object in a given namespace, at any given point in the object lifecycle, the service plan specifies the criteria that determine which storage tiers must be used to store copies of that object and the number of copies of that object that must be stored on each tier.

You can set the initial HCP storage tier, called the ingest tier, to either primary running storage or S Series storage. When a service plan is first created, it defines only the ingest tier, so HCP stores all objects in a given namespace on the designated ingest tier throughout the entire object lifecycle.

Primary running storage is designed to provide both high data availability and high performance for object data storage and retrieval operations. To optimize data storage price/performance for the objects in a namespace, you can configure the service plan for that namespace to define a storage tiering strategy that specifies multiple storage tiers.

© 2015, 2019 Hitachi Vantara Corporation. All rights reserved.

About storage components


HCP uses storage components to represent the storage available to it for storing objects. A storage component is a set of one or more access points (that is, buckets, containers, or mount points) that share a common endpoint. An endpoint is either a specific type of HCP storage (that is, primary running or primary spindown) or a specific externally addressable storage device or storage service such as an S Series Node or Amazon S3. Each storage component has a specific set of data availability, price, and performance characteristics.

HCP uses storage components to provide you with an interface to:

Configure HCP with the information that it needs to access specific extended storage devices and cloud storage service endpoints.

Configure HCP to monitor, manage, and use all of the storage that’s represented by one or more storage components of the same type as a single storage pool.

Monitor the health, availability, capacity, and usage of the storage that’s represented by each component (primary running storage, primary spindown storage, S Series storage, and each extended storage device and cloud service endpoint), and appropriately provision storage.

Retire S Series or extended storage that is represented by a single storage component or retire all of the storage that’s represented by the components in a single storage pool.


Primary storage components


Every HCP system has a pre-configured storage component for each type of primary storage that the system is configured to use. The primary running storage component represents all continuously spinning disks that are currently managed by the HCP nodes. For SAIN systems with spindown storage, the primary spindown storage component represents all spindown-capable disks that are currently managed by the HCP nodes.

In the System Management Console, you can use the Storage page to view information about the current hardware configuration, health status, availability, capacity, and usage of the storage that’s represented by each primary storage component.

You can also use the Storage page to view current and historical storage usage statistics for each individual primary storage component, and you can view a comparison of the current and historical storage capacity usage statistics for all components that are defined on the HCP system, including the two primary storage components.

You can use the information that you can view on the Storage page to monitor the health and usage of each type of primary storage and determine when you need to add primary storage to an HCP system or replace storage devices that are used for primary running storage or primary spindown storage.


Note:  You cannot modify a primary storage component in order to add, retire, or upgrade primary running storage or primary spindown storage. To add primary storage to an HCP system either to increase primary storage capacity or to replace one or more retired storage devices, contact your authorized HCP service provider for help.

On a RAIN or SAIN system, you can use either the Storage page or the Migration page in the System Management Console to retire one or more primary storage devices. When you retire primary storage, HCP automatically updates each primary storage component as necessary to reflect the changes in the total storage capacity and in the total number of disks represented by each primary storage component.

For more information on retiring primary storage using the Storage page, see Retiring primary storage devices. For more information on retiring primary storage using the Migration page, see Migration service.


HCP S Series storage components


HCP can use HCP S Series Nodes as an alternative to primary storage or for tiering purposes. To connect S Series Nodes to the HCP system, you need to add them on the Hardware page of the System Management Console. For more information on adding S Series Nodes, see Creating an S Series storage component.

HCP can be configured to write directly to S Series Nodes by changing the ingest tier of a service plan to S Series storage. This does not remove the need for primary storage: primary running storage still keeps at least one copy of the system metadata, custom metadata, and ACL for objects tiered to S Series storage. For more information on writing directly to S Series storage, see Choose the ingest tier.

Once an S Series storage component has been added, you can use the Storage page to view information about the storage component’s health status, availability, and capacity. You can also use the Storage page to view current and historical storage usage statistics for each individual S Series storage component.

S Series Nodes have a storage capacity limit. You can use the Storage page to monitor the health and usage of S Series storage components and determine when you need to add more S Series Nodes to the HCP system.


Extended storage components


HCP supports the use of the following types of extended storage: Amazon S3, Google Cloud, Microsoft Azure, S3 compatible, and NFS storage. To enable HCP to use a specific type of extended storage, you need to create and configure one or more storage components of the applicable type.

For each storage component you create, you need to specify the name of the component, the type of extended storage that’s represented by the component, and the information that HCP needs to use to access that storage.


Amazon S3 storage components


Each Amazon S3 component represents a single endpoint that’s used to access cloud storage using one or more Amazon S3 Web Services user accounts.

To enable HCP to access the storage that’s represented by an Amazon S3 storage component, when you create that component, you specify the following information:

The component name.

Optionally, a description of the component.

Optionally, the network you want HCP to use for communication with the storage component. This field is only visible if Virtual network management is enabled. For more information on selecting a network, see Isolating networks for storage tiering.

Whether you want HCP to use the default endpoint, s3.amazonaws.com, to connect to Amazon S3 Web Services, and if not, the fully qualified domain name (FQDN) of the endpoint that you want HCP to use instead of the default.

Optionally, any of these advanced configuration settings:

oWhether you want HCP to use HTTPS to access the endpoint, and if so, the HTTPS port you want to use to connect to the endpoint (default is 443)

oThe HTTP port you want to use to connect to the endpoint (default is 80)

oWhether you want to use a proxy server to connect to the endpoint, and if so, the following information about the proxy server:

The hostname or IP address of the proxy server

The port number you want to use to connect to the proxy server (default is 0)

The username, password, and AD domain of the user account that HCP needs to use to access the proxy server

oWhether you want HCP to use path-style URLs to access the storage that’s represented by the storage component, and if so, the region that includes the Amazon S3 Web Services datacenter that hosts the storage that’s represented by this component


Note: If you select this option, you need to specify a region-specific endpoint instead of using the default endpoint.

oThe region that includes the Amazon S3 Web Services datacenter that hosts the storage that’s represented by this component (default is us-east-1)


Note: For faster access to storage located in a particular region, you should specify a region-specific endpoint instead of using the default endpoint.

oWhether the extended storage component supports S3 metadata on objects. Contact your service provider if you are unsure whether S3 metadata is supported.

oIn the Max metadata size field, type the maximum size (in bytes) of the S3 metadata that will be attached to objects tiered to the storage component. Each extended storage service provider permits a different maximum size. Please contact your service provider to learn the maximum size.

Whether the storage that’s represented by this component is considered to be compliant.

The account label that you want to associate with the initial Amazon S3 Web Services user account that you want HCP to use to access the storage that’s represented by the component. In the System Management Console, HCP uses the account label to represent the user account with the specified credentials.

The authentication type you want to use to authenticate all requests sent from HCP to the storage component.

The access key and secret key for the Amazon S3 Web Services user account that you want HCP to use to access the storage that’s represented by the component.


Note: Once you create an Amazon S3 storage component, you can modify it to specify credentials for one or more additional user accounts. For details on this, see Configuring a new user account for access to an extended storage endpoint.

If you are using AWS STS or CAP authentication, the authentication endpoint text field appears. This is the endpoint to which you send your credentials in order to generate an AWS STS authentication token.

If you are using CAP authentication, the authentication port field appears. Enter the port of your CAP endpoint.

If you are using CAP authentication, the authentication certificate dropdown menu appears. This lets you select the account certificate that connects HCP to the CAP authentication endpoint. To see an account certificate in the dropdown field, it must already exist in the HCP system. To upload an account certificate, see Uploading an account certificate for CAP authentication.


Optionally, any custom request headers that you want HCP to include in the access request URLs that are sent to Amazon S3 Web Services to request read or write access to the storage associated with the specified user account.

Whether you want to access existing buckets associated with the specified user account, and if so, the name of each existing bucket you want to access.


Notes: 

At any given time, a bucket can be associated with only one storage component.

You can add an existing bucket to an Amazon S3 storage component only if that bucket is empty or has only HCP data in it.

Whether you want to create any new buckets for the specified user account, and if so, the name of each new bucket you want to create.


Note: By default, the Add Component wizard displays a list of the existing buckets that HCP is able to access using the specified user account credentials, but the wizard does not display the controls required to create a new bucket. To create a new bucket, you need to click on Bucket Actions, then select Create new from the dropdown list, then specify the name of the bucket you want to create.
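Several of the settings above concern path-style URLs. As a rough illustration (the bucket, key, and region names below are hypothetical examples, not values HCP requires), the default virtual-hosted-style URL and the path-style URL address the same object differently:

```shell
# Hypothetical bucket, key, and region illustrating the two S3 URL forms.
bucket=my-bucket
key=docs/obj1
region=us-east-1

# Virtual-hosted-style (default): the bucket name is part of the hostname.
echo "https://${bucket}.s3.amazonaws.com/${key}"

# Path-style: the bucket name is the first path segment, which is why a
# region-specific endpoint must be specified when this option is selected.
echo "https://s3.${region}.amazonaws.com/${bucket}/${key}"
```

This is why the note above requires a region-specific endpoint with path-style URLs: the bucket no longer appears in the hostname, so the hostname alone must identify the region.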


Google Cloud storage components


Each Google Cloud component represents a single endpoint that’s used to access cloud storage using one or more Google Cloud Platform user accounts.

To enable HCP to access the storage that’s represented by a Google Cloud storage component, when you create that component, you specify the following information:

The component name.

Optionally, a description of the component.

Optionally, the network you want HCP to use for communication with the storage component. This field is only visible if Virtual network management is enabled. For more information on selecting a network, see Isolating networks for storage tiering.

Whether you want HCP to use the default endpoint, storage.googleapis.com, to connect to Google Cloud Platform, and if not, the fully qualified domain name (FQDN) of the endpoint that you want HCP to use instead of the default.

Optionally, any of these advanced configuration settings:

oWhether you want HCP to use HTTPS to access the endpoint, and if so, the HTTPS port you want to use to connect to the endpoint (default is 443)

oThe HTTP port you want to use to connect to the endpoint (default is 80)

oWhether you want to use a proxy server to connect to the endpoint, and if so, the following information about the proxy server:

The hostname or IP address of the proxy server

The port number you want to use to connect to the proxy server (default is 0)

The username, password, and AD domain of the user account that HCP needs to use to access the proxy server

oWhether you want HCP to use path-style URLs to access the storage that’s represented by the storage component

oWhether the extended storage component supports S3 metadata on objects. Contact your service provider if you are unsure whether S3 metadata is supported.

oIn the Max metadata size field, type the maximum size (in bytes) of the S3 metadata that will be attached to objects tiered to the storage component. Each extended storage service provider permits a different maximum size. Please contact your service provider to learn the maximum size.

Whether the storage that’s represented by this component is considered to be compliant.

The account label that you want to associate with the initial Google Cloud Platform user account that you want HCP to use to access the storage that’s represented by the component. In the System Management Console, HCP uses the account label to represent the user account with the specified credentials.

The access key and secret key for the Google Cloud Platform user account that you want HCP to use to access the storage that’s represented by the component.


Note: Once you create a Google Cloud storage component, you can modify it to specify credentials for one or more additional user accounts. For details on this, see Configuring a new user account for access to an extended storage endpoint.

Optionally, any custom request headers that you want HCP to include in the access request URLs that are sent to Google Cloud Platform to request read or write access to the storage associated with the specified user account.

Whether you want to access existing buckets associated with the specified user account, and if so, the name of each existing bucket you want to access.


Notes: 

At any given time, a bucket can be associated with only one storage component.

You can add an existing bucket to a Google Cloud storage component only if that bucket is empty or has only HCP data in it.

Whether you want to create any new buckets for the specified user account, and if so, the name of each new bucket you want to create.


Note: By default, the Add Component wizard displays a list of the existing buckets that HCP is able to access using the specified user account credentials, but the wizard does not display the controls required to create a new bucket. To create a new bucket, you need to click on Bucket Actions, then select Create new from the dropdown list, then specify the name of the bucket you want to create.


Microsoft Azure storage components


Each Microsoft Azure component represents a single endpoint that’s used to access cloud storage using one or more Microsoft Azure user accounts.

To enable HCP to access the storage that’s represented by a Microsoft Azure storage component, when you create that component, you specify the following information:

The component name.

Optionally, a description of the component.

Optionally, the network you want HCP to use for communication with the storage component. This field is only visible if Virtual network management is enabled. For more information on selecting a network, see Isolating networks for storage tiering.

Whether you want HCP to use the default endpoint, blob.core.windows.net, to connect to Microsoft Azure, and if not, the fully qualified domain name (FQDN) of the endpoint that you want HCP to use instead of the default.

Optionally, any of these advanced configuration settings:

oWhether you want HCP to use HTTPS to connect to the endpoint. This option is enabled by default.

oWhether you want to use a proxy server to connect to the endpoint, and if so, the following information about the proxy server:

The hostname or IP address of the proxy server

The port number you want to use to connect to the proxy server (default is 0)

oWhether the extended storage component supports S3 metadata on objects. Contact your service provider if you are unsure whether S3 metadata is supported.

oIn the Max metadata size field, type the maximum size (in bytes) of the S3 metadata that will be attached to objects tiered to the storage component. Each extended storage service provider permits a different maximum size. Please contact your service provider to learn the maximum size.

Whether the storage that’s represented by this component is considered to be compliant.

The account label that you want to associate with the initial Microsoft Azure user account that you want HCP to use to access the storage that’s represented by the component. In the System Management Console, HCP uses the account label to represent the user account with the specified credentials.

The access key and secret key for the Microsoft Azure user account that you want HCP to use to access the storage that’s represented by the component.


Note: Once you create a Microsoft Azure storage component, you can modify it to specify credentials for one or more additional user accounts. For details on this, see Configuring a new user account for access to an extended storage endpoint.

Optionally, any custom request headers that you want HCP to include in the access request URLs that are sent to Microsoft Azure to request read or write access to the storage associated with the specified user account.

Whether you want to access existing containers associated with the specified user account, and if so, the name of each existing container you want to access.


Notes: 

At any given time, a container can be associated with only one storage component.

You can add an existing container to a Microsoft Azure storage component only if that container is empty or has only HCP data in it.

Whether you want to create any new containers for the specified user account, and if so, the name of each new container you want to create.


Note: By default, the Add Component wizard displays a list of the existing containers that HCP is able to access using the specified user account credentials, but the wizard does not display the controls required to create a new container. To create a new container, you need to click on Container Actions, then select Create new from the dropdown list, then specify the name of the container you want to create.


S3 compatible storage components


Each S3 compatible component represents a single physical storage device or cloud storage service that’s used to access storage using a protocol that’s compatible with the Amazon S3 access protocol.

To enable HCP to access the storage that’s represented by an S3 compatible storage component, when you create that component, you specify the following information:

Optionally, the network you want HCP to use for communication with the storage component. This field is only visible if Virtual network management is enabled. For more information on selecting a network, see Isolating networks for storage tiering.

The endpoint that HCP needs to use to access the physical device or cloud storage service that manages the storage that’s represented by this component.

Optionally, any of these advanced configuration settings:

oWhether you want HCP to use HTTPS to access the endpoint, and if so, the HTTPS port you want to use to connect to the endpoint (default is 443)

oThe HTTP port you want to use to connect to the endpoint (default is 80)

oWhether you want to use a proxy server to connect to the endpoint, and if so, the following information about the proxy server:

The hostname or IP address of the proxy server

The port number you want to use to connect to the proxy server (default is 0)

The username, password, and AD domain of the user account that HCP needs to use to access the proxy server

oWhether you want HCP to use path-style URLs to access the storage that’s represented by the storage component

oWhether the extended storage component supports S3 metadata on objects. Contact your service provider if you are unsure whether S3 metadata is supported.

oIn the Max metadata size field, type the maximum size (in bytes) of the S3 metadata that will be attached to objects tiered to the storage component. Each extended storage service provider permits a different maximum size. Please contact your service provider to learn the maximum size.

Whether the storage that’s represented by this component is considered to be compliant.

The account label that you want to associate with the initial user account that you want HCP to use to access the storage that’s represented by the component. In the System Management Console, HCP uses the account label to represent the user account with the specified credentials.

The authentication type you want to use to authenticate all requests sent from HCP to the storage component.

The access key and secret key for the user account that you want HCP to use to access the storage that’s represented by the component.


Note: Once you create an S3 compatible storage component, you can modify it to specify credentials for one or more additional user accounts. For details on this, see Configuring a new user account for access to an extended storage endpoint.

If you are using AWS STS or AWS STS V4 authentication, the authentication endpoint text field appears. This is the endpoint to which you send your credentials in order to generate an authentication token.

Optionally, any custom request headers that you want HCP to include in the access request URLs that are sent to the target storage device or cloud service to request read or write access to the storage associated with the specified user account.

Whether you want to access existing buckets associated with the specified user account, and if so, the name of each existing bucket you want to access.


Notes: 

At any given time, a bucket can be associated with only one storage component.

You can add an existing bucket to an S3 compatible storage component only if that bucket is empty or has only HCP data in it.

Whether you want to create any new buckets for the specified user account, and if so, the name of each new bucket you want to create.


Note: By default, the Add Component wizard displays a list of the existing buckets that HCP is able to access using the specified user account credentials, but the wizard does not display the controls required to create a new bucket. To create a new bucket, you need to click on Bucket Actions, then select Create new from the dropdown list, then specify the name of the bucket you want to create.


NFS storage components


Each NFS storage component represents a single physical storage device on which one or more volumes are accessed using NFS mount points.


Notes: 

When you create an NFS storage component, you provide HCP with the information that it needs to create an NFS mount point for each volume that you want to access on the device that’s represented by the NFS storage component. However, HCP creates an NFS mount point that’s associated with a given storage component only when that mount point is added to an NFS storage pool. For information on adding an NFS mount point to a storage pool, see Adding access points to an extended storage pool.

When an HCP system is upgraded from release 6.x to release 7.0 or later, HCP automatically creates an NFS storage component and an NFS storage pool (see NFS storage pools) for each external volume that was configured on the HCP system before it was upgraded, and defines each NFS storage pool as a storage tier. For each namespace that was configured to use NFS storage before the upgrade, HCP automatically configures the service plan for that namespace to define the applicable NFS storage pool as a storage tier.

On the Hardware page of the System Management Console, HCP uses an external volume (also called an NFS volume) to represent the storage that’s accessed using a single NFS mount point that’s contained in an NFS storage pool (see NFS storage pools). You can use the Storage page to view information about all NFS volumes stored on a single physical storage device that’s represented by an NFS storage component.

Before you can create an NFS storage component, you need to create and configure the NFS shares for the volumes you want to access on the physical storage device that’s represented by the component.

The main steps for creating NFS shares on a physical storage device for which you want to create an NFS storage component are:

1. On the physical storage device, create the directories you want to share (see "Directories for export" below).

2. Export each directory as an NFS share (see "Exported shares" below).

Directories for export

For each storage volume you want to access on the physical storage device that’s represented by an NFS storage component, you need to create a directory on that physical storage device. For each directory, you need to set the permissions to allow read, write, and execute access to all users.

For example, on Linux systems, each directory you want to share must have its permissions set to 777.
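As a concrete sketch on a Linux device (the share path below is a hypothetical example), creating and permissioning one such directory looks like this:

```shell
# Create a directory to be exported as an NFS share and grant read, write,
# and execute access to all users (permission mode 777).
mkdir -p /tmp/hcp_shares/share1
chmod 777 /tmp/hcp_shares/share1

# Verify the resulting permission mode.
stat -c '%a' /tmp/hcp_shares/share1   # prints 777
```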

Exported shares

Each directory that you want to mount as an NFS volume on an HCP node must be exported as an NFS share on the physical storage device that’s represented by the NFS storage component you want to create. To ensure that other systems and applications cannot mount the same storage, you should export each share exclusively to the HCP system. You can identify the HCP system in one of three ways:

Using the fully-qualified domain name (FQDN) of the domain that’s associated with the [hcp_system] network, preceded by admin (for example, admin.hcp.example.com). This option is available only if the HCP system is using DNS.

By the CIDR notation for any IPv4 or IPv6 gateway that’s defined for the [hcp_system] network.

By the node IP addresses that the extended storage device needs to use to communicate with the [hcp_system] network. In this case, you need to export the share to the applicable IPv4 or IPv6 addresses for all the HCP nodes.

You need to export each share to all nodes because you cannot predict with which node HCP will associate the NFS storage volume you create for a share. If you omit a node and HCP associates a volume with that node, HCP has no access to the share for that volume.
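As an illustrative sketch, a single export line that makes one share available to every HCP node by IPv4 address could be built like this (the share path and node addresses are invented examples; on a real device you would append the line to /etc/exports as root and then run exportfs -a):

```shell
# Example node IPs; substitute the [hcp_system] addresses of ALL your nodes.
NODES="10.0.0.101 10.0.0.102 10.0.0.103 10.0.0.104"
# Demo file; on the real storage device this is /etc/exports.
EXPORTS_FILE="${EXPORTS_FILE:-/tmp/exports.demo}"
line="/hcp_shares/share1"
for ip in $NODES; do
  line="$line $ip(rw,sync,no_wdelay)"   # required options, repeated per node
done
printf '%s\n' "$line" >> "$EXPORTS_FILE"
cat "$EXPORTS_FILE"
# exportfs -a   # run on the device (as root) to export the share
```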


Note: If you use node IP addresses to identify the HCP system and you subsequently change any of those IP addresses in the [hcp_system] network, you need to update the export specification for the share with the new addresses. Then you need to export the share again.

The method you use to export the shares and the export options you specify depend on the type of storage device for which you want to create an NFS storage component. Minimally, the exported share must allow read and write access by the HCP system.


Note: The following information on exporting shares on Linux systems is included for explanatory purposes only. The extended storage devices that are represented by NFS storage components should be enterprise-class, purpose-built appliances. The storage volumes that are accessed using NFS mount points should be on storage that’s RAID-protected, secure, and monitored closely for its health.

The extended storage devices that are represented by NFS storage components must support the Linux file naming scheme. If a device runs a non-Linux operating system, you need to configure name mapping on that device accordingly.

On Linux systems, you specify the shares to be exported in the /etc/exports file. To ensure that HCP correctly uses the NFS volumes that you make available to it, the specification of each exported share must minimally include these options:

rw,sync,no_wdelay

For example, to export the share named /hcp_shares/share1 to the HCP system with the domain name hcp.example.com, you would add this line to the /etc/exports file:

/hcp_shares/share1 admin.hcp.example.com(rw,sync,no_wdelay)

The export options in each line in the /etc/exports file must directly follow the system identifier with no space between them.

Once you’ve specified the shares to be exported, you use this command to export them:

exportfs -a

For information on how to export shares on non-Linux storage devices, see the device-specific documentation.

Required NFS storage component configuration settings

To enable HCP to access the storage that’s represented by an NFS storage component, when you create that component, you specify the following information:

The IP address or hostname that HCP needs to use to connect to the physical storage device on which you want to access storage volumes using NFS mount points

The mount command options that you want HCP to use when it creates NFS mount points to access NFS shares on the device that’s represented by the component

To ensure that NFS volumes are mounted correctly, HCP always uses these options to the mount command:

rw,sync,soft,nodev,nfsvers=3

HCP uses the options that you specify in addition to the above options. The additional options that you can specify are:

lookupcache=none
noatime
nodiratime
nosuid
port=n
retrans=n
rsize=n
tcp
proto=tcp6
timeo=n
wsize=n

Other mount command options are not supported.
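To see how the required and additional options combine, here is a sketch of the resulting option string. The rsize/wsize values, server name, and paths are assumptions for illustration only; HCP builds and issues the actual mount command itself, so the command is shown with echo rather than executed:

```shell
# Options HCP always uses, plus example additional options you might specify.
REQUIRED="rw,sync,soft,nodev,nfsvers=3"
EXTRA="rsize=65536,wsize=65536,noatime"   # illustrative additions
# The effective command HCP would issue (printed, not executed):
echo mount -t nfs -o "$REQUIRED,$EXTRA" \
  storage.example.com:/hcp_shares/share1 /mnt/nfs_volume
```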


Note: If the [hcp_system] network is currently configured to use both IPv4 and IPv6 addresses, you need to specify tcp or proto=tcp6 to indicate which type of IP address you want HCP to use to connect to the NFS storage component.

The full pathname of each directory that you want to access using an NFS mount point


Notes: 

At any given time, a mount point can be associated with only one NFS storage component.

By default, the Add Component wizard displays a list of the existing mount points that HCP can access using the specified user account credentials, but it does not display the controls required to specify the pathname for an existing NFS share. To specify a directory that does not appear in the list, click on Mount Point Actions, and then specify the full pathname of the directory for which you want to create an NFS mount point.

© 2015, 2019 Hitachi Vantara Corporation. All rights reserved.

About storage pools


HCP uses storage pools to represent logical groups of storage components that can be used as storage tiers. Each storage pool consists of one or more storage components that are used to access the same type of storage.

Each storage tier typically consists of only one storage pool, but a tier can be configured to use multiple storage pools. To store objects on a given tier, HCP uses all of the storage that’s accessed using the storage components that are contained in the storage pools that are configured for the storage tier. Therefore, the capacity of a given storage pool is the total amount of space that’s associated with all the physical storage devices or all of the cloud storage service endpoints represented by the storage components in the pool. You can add storage components to a pool at any time, thereby increasing the capacity of the pool.

You should size extended storage pools to accommodate the amount of data you expect to be written to them. In making this calculation, you need to account for multiple namespaces using the same service plan as well as for multiple service plans specifying the same target storage pool.
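As a rough sizing sketch, you can total the data you expect from every namespace whose service plan tiers to the pool, across all service plans that target it (the per-namespace figures here are invented):

```shell
# Expected TB written by each namespace that tiers to this pool,
# summed across all service plans targeting the pool. Example figures.
ns_tb="12 8 30"
total=0
for t in $ns_tb; do
  total=$((total + t))
done
echo "Provision at least ${total} TB in the pool"   # prints: Provision at least 50 TB in the pool
```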


Primary storage pools


Every HCP system has a pre-configured storage pool for each type of primary storage that the system is configured to use. The primary running storage pool contains the pre-configured primary running storage component, which represents all continuously spinning disks that are currently managed by the HCP nodes. For SAIN systems with spindown storage, the primary spindown storage pool contains the pre-configured primary spindown storage component, which represents all spindown-capable disks that are currently managed by the HCP nodes.

In the System Management Console, you can use the Storage page to view information about the current hardware configuration, health status, availability, capacity, and usage of the storage that’s represented by each primary storage pool.

You can also use the Storage page to view current and historical storage usage statistics for each individual primary storage pool, and you can view a comparison of the current and historical storage capacity usage statistics for all storage pools that are defined on the HCP system, including the two primary storage pools.

You can use the information that you can view on the Storage page to monitor the health and usage of each type of primary storage and determine when you need to add primary storage to an HCP system or replace storage devices that are used for primary running storage or primary spindown storage.


Note: You cannot modify a primary storage pool to add, retire, or upgrade primary running storage or primary spindown storage.

To add primary storage to an HCP system either to increase primary storage capacity or to replace one or more retired storage devices, contact your authorized HCP service provider for help.

On a RAIN or SAIN system, you can use either the Storage page or the Migration page in the System Management Console to retire one or more primary storage devices. When you retire primary storage, HCP automatically updates each primary storage pool as necessary to reflect the changes in the total storage capacity and in the total number of disks represented by each primary storage pool.

For more information on retiring primary storage using the Storage page, see Retiring primary storage devices. For more information on retiring primary storage using the Migration page, see Migration service.


S Series storage pools


HCP S Series Nodes use storage pools to group buckets together. A storage pool can contain multiple buckets from different S Series Nodes, but a bucket cannot belong to multiple storage pools.

When you add an S Series Node to the HCP system, a storage pool for it must already exist or be created. To create a storage pool, you specify the following information:

The storage pool name.

Whether you want HCP to compress object data that’s stored on the storage that’s allocated to the buckets in the storage pool.

Whether you want HCP to encrypt object data that’s stored on the storage that’s allocated to the buckets in the storage pool. If encryption is disabled for the system, this option is not visible.

For each bucket you want to include in the storage pool:

o The account to which the bucket is assigned

o The name of the bucket


Notes: 

At any given time, a bucket can be included in only one storage pool.

Each bucket you add to a new storage pool must be empty or have only HCP data in it.

A storage pool is compliant only if all of the buckets in the pool are associated with compliant storage components.


Extended storage pools


HCP supports the use of five different types of extended storage. To enable HCP to use a specific type of extended storage, you need to create and configure one or more storage pools of the applicable type.

For each storage pool you create, you need to specify the name of the pool, the type of extended storage that’s represented by the pool, and the storage components that are contained in the pool.

The next sections describe each type of extended storage pool and the information you need to specify to enable HCP to access the storage that’s represented by each type of storage pool.


Amazon S3 storage pools


Each Amazon S3 storage pool contains one or more buckets that are associated with specific Amazon S3 storage components. Each Amazon S3 storage pool includes all of the storage that’s allocated to all of the buckets in the pool.

To enable HCP to access the storage that’s represented by an Amazon S3 storage pool, when you create that pool, you specify the following information:

The storage pool name.

Optionally, a description of the pool.

Whether you want HCP to compress object data that’s stored on the storage that’s allocated to the buckets in the storage pool.

Whether you want HCP to encrypt object data that’s stored on the storage that’s allocated to the buckets in the storage pool. If encryption is disabled for the system, this option is not visible.

For each bucket you want to include in the storage pool:

o The name of the Amazon S3 storage component that represents the Amazon S3 Web Services endpoint that’s used to access the bucket

o The account label used to identify the Amazon S3 Web Services user account that’s used to access the storage associated with the bucket

o The name of the bucket


Notes: 

At any given time, a bucket can be included in only one storage pool.

Each bucket you add to a new Amazon S3 storage pool must be empty or have only HCP data in it.

A storage pool is compliant only if all of the buckets in the pool are associated with compliant Amazon S3 storage components.


Google Cloud storage pools


Each Google Cloud storage pool contains one or more buckets that are associated with specific Google Cloud storage components. Each Google Cloud storage pool includes all of the storage that’s allocated to all of the buckets in the pool.

To enable HCP to access the storage that’s represented by a Google Cloud storage pool, when you create that pool, you specify the following information:

The storage pool name.

Optionally, a description of the pool.

Whether you want HCP to compress object data that’s stored on the storage that’s allocated to the buckets in the storage pool.

Whether you want HCP to encrypt object data that’s stored on the storage that’s allocated to the buckets in the storage pool. If encryption is disabled for the system, this option is not visible.

For each bucket you want to include in the storage pool:

o The name of the Google Cloud storage component that represents the Google Cloud Platform endpoint that’s used to access the bucket

o The account label used to identify the Google Cloud Platform user account that’s used to access the storage associated with the bucket

o The name of the bucket


Notes: 

At any given time, a bucket can be included in only one storage pool.

Each bucket you add to a new Google Cloud storage pool must be empty or have only HCP data in it.

A storage pool is compliant only if all of the buckets in the pool are associated with compliant Google Cloud storage components.


Microsoft Azure storage pools


Each Microsoft Azure storage pool contains one or more containers that are associated with specific Microsoft Azure storage components. Each Microsoft Azure storage pool includes all of the storage that’s allocated to all of the containers in the pool.

To enable HCP to access the storage that’s represented by a Microsoft Azure storage pool, when you create that pool, you specify the following information:

The storage pool name.

Optionally, a description of the pool.

Whether you want HCP to compress object data that’s stored on the storage that’s allocated to the containers in the storage pool.

Whether you want HCP to encrypt object data that’s stored on the storage that’s allocated to the containers in the storage pool. If encryption is disabled for the system, this option is not visible.

For each container you want to include in the storage pool:

o The name of the Microsoft Azure storage component that represents the Microsoft Azure endpoint that’s used to access the container

o The account label used to identify the Microsoft Azure user account that’s used to access the storage associated with the container

o The name of the container


Notes: 

At any given time, a container can be included in only one storage pool.

Each container you add to a new Microsoft Azure storage pool must be empty or have only HCP data in it.

A storage pool is compliant only if all of the containers in the pool are associated with compliant Microsoft Azure storage components.


S3 compatible storage pools


Each S3 compatible storage pool contains one or more buckets that are associated with specific S3 compatible storage components. Each S3 compatible storage pool includes all of the storage that’s allocated to all of the buckets in the pool.

To enable HCP to access the storage that’s represented by an S3 compatible storage pool, when you create that pool, you specify the following information:

The storage pool name.

Optionally, a description of the pool.

Whether you want HCP to compress object data that’s stored on the storage that’s allocated to the buckets in the storage pool.

Whether you want HCP to encrypt object data that’s stored on the storage that’s allocated to the buckets in the storage pool. If encryption is disabled for the system, this option is not visible.

For each bucket you want to include in the storage pool:

o The name of the S3 compatible storage component that represents the endpoint that’s used to access the bucket

o The account label used to identify the user account that’s used to access the storage associated with the bucket

o The name of the bucket


Notes: 

At any given time, a bucket can be included in only one storage pool.

Each bucket you add to a new S3 compatible storage pool must be empty or have only HCP data in it.

A storage pool is compliant only if all of the buckets in the pool are associated with compliant S3 compatible storage components.


NFS storage pools


Each NFS storage pool contains one or more mount points that are associated with specific NFS storage components. Each NFS storage pool includes all of the storage that’s accessed using the NFS mount points included in the pool.

To enable HCP to access the storage that’s represented by an NFS storage pool, when you create that pool, you specify the following information:

The storage pool name.

Optionally, a description of the pool.

Whether you want HCP to compress object data that’s stored on the storage that’s accessed using the NFS mount points in the storage pool.

Whether you want HCP to encrypt object data that’s stored on the storage that’s accessed using the NFS mount points in the storage pool. If encryption is disabled for the system, this option is not visible.

For each NFS mount point you want to include in the storage pool:

o The name of the NFS storage component that represents the physical storage device that’s accessed using the NFS mount point

o The full pathname of the directory that you want to access using the NFS mount point


Notes: 

At any given time, an NFS mount point can be included in only one storage pool.

An NFS storage pool is compliant only if all of the NFS mount points in the pool are associated with compliant NFS storage components.

When you add an NFS mount point to a new or existing NFS storage pool, HCP creates that mount point and mounts the applicable storage volume (called an NFS volume or an external storage volume) on a node in the HCP system. HCP then adds that NFS volume to the NFS storage pool.

HCP uses a round-robin algorithm to determine which node to associate with each new NFS volume that’s added to an NFS storage pool. This method of assigning NFS volumes to the nodes in the HCP system ensures that the volumes are distributed evenly among the nodes.
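The round-robin idea can be illustrated as follows. HCP's actual algorithm is internal; the node and volume names below are invented for the sketch:

```shell
# Illustration only: assign each new volume to the next node in rotation.
set -- node1 node2 node3          # example HCP nodes as positional parameters
n=$#                              # number of nodes
i=0
for vol in vol1 vol2 vol3 vol4 vol5; do
  idx=$(( (i % n) + 1 ))          # 1-based index of the next node in rotation
  eval "node=\${$idx}"            # pick the idx-th positional parameter
  echo "$vol -> $node"            # vol1->node1, vol2->node2, vol3->node3, vol4->node1, ...
  i=$((i + 1))
done
```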

If the node with which an NFS storage volume is associated becomes unavailable, that volume also becomes unavailable. HCP does not reassign the volume to a different node. When the node returns to service, the volume becomes available again.

In the HCP System Management Console, you can use the Hardware and Storage Node pages to view information about the NFS storage volumes (called external storage volumes) that are associated with each node in the HCP system. For information on these pages, see Hardware administration.

Considerations for using NFS volumes

These considerations apply to using NFS volumes with HCP:

HCP can use multiple NFS shares from a single device that’s represented by an NFS storage component. Keep in mind, however, that the larger the number of shares HCP uses, the greater the I/O load on the device.

Typically, you specify export options for a share according to the standards for your site. However, if HCP is unable to mount the extended storage volume that you created as an NFS share, you may need to change the export options. After changing the export options, you need to export the NFS share again.

For each NFS mount point that’s associated with an NFS storage component, you can specify more mount options than the required ones. You might do this, for example, to set the network block size for read or write requests to the optimal size for the storage device that’s represented by the NFS storage component. However, if HCP is unable to mount the extended storage volume that you created as an NFS share, you may need to change the additional mount options that you specified.

If the share for an NFS volume becomes unavailable (for example, because the extended storage device that’s hosting the share is inaccessible), HCP tries periodically to remount the volume. If, after the share is available again, the remount fails, you can try to manually remount the NFS volume:

1. On the left side of the Storage page, click on Components.

2. In the components list, click on the name of the NFS storage component that’s associated with the NFS volume that you want to remount.

3. At the top of the panel that opens, click on the Mount Points tab.

4. On the Mount Points panel, in the table row that contains the NFS mount point that corresponds to the NFS volume you want to remount, click on the remount control ( RemountControl.png ).

HCP attempts to remount the NFS volume. If the remount fails, contact your authorized HCP service provider for help.

To see which node the NFS volume is associated with, hover over the status icon for the mount point on the Mount Points panel.

You cannot move an NFS volume from one NFS storage pool to another.

You cannot control which NFS volume HCP writes data to within an NFS storage pool.

When HCP creates a mount point for a specific NFS volume, HCP stores a file named .__hcp_uuid__ in the shared directory on the device that’s represented by the NFS storage component associated with that mount point. This file uniquely associates the NFS shared directory with the NFS volume. As a result:

o HCP creates only one NFS storage volume for any given exported share.

o If you delete an NFS mount point from an NFS storage component, the associated exported share cannot be reused as is. This means that any data remaining in the NFS volume associated with the mount point becomes permanently inaccessible to HCP.

For more information on deleting NFS mount points, see Deleting access points from an extended storage pool.

o To reuse an exported share after the associated NFS mount point is deleted from HCP, you first need to delete any remaining files from the shared directory, including the .__hcp_uuid__ file.

o If you inadvertently delete the .__hcp_uuid__ file from an NFS shared directory that contains other HCP data, HCP can no longer use the exported share. Contact your authorized HCP service provider for help in recreating the file.

o When you back up an NFS shared directory that’s associated with an NFS volume, you need to ensure that the .__hcp_uuid__ file is included in the backup operation. This ensures that the file still exists in the directory after a restore operation.
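A minimal backup sketch that keeps the .__hcp_uuid__ file is shown below. The paths and file names (other than .__hcp_uuid__ itself) are demo values; a real backup would use your site's backup tooling:

```shell
# Demo share directory with a stand-in marker file and object data.
SHARE="${SHARE:-/tmp/hcp_share_demo}"
mkdir -p "$SHARE"
touch "$SHARE/.__hcp_uuid__" "$SHARE/object.data"
# Archiving the directory itself (rather than SHARE/*) includes hidden
# files such as .__hcp_uuid__ in the archive.
tar -C "$(dirname "$SHARE")" -cf /tmp/share_backup.tar "$(basename "$SHARE")"
tar -tf /tmp/share_backup.tar    # listing shows the .__hcp_uuid__ entry
```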

A situation can occur in which HCP can access an exported share but cannot mount the associated NFS volume. In this case, if the .__hcp_uuid__ file is the only file in the shared directory on the extended storage device on which the data in the NFS volume is stored, you can reuse the exported share. To do this:

1. Delete the mount point that’s associated with the NFS volume from the NFS storage component that represents the device on which the NFS volume is stored.

2. Delete the .__hcp_uuid__ file from the shared directory.

3. Create a new NFS mount point for the share on the same NFS storage component from which you deleted the mount point in step 1.
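The safety check behind step 2 above can be sketched as follows (the share path is an example; the marker file is created here only to make the sketch self-contained):

```shell
# Demo: reuse is safe only if .__hcp_uuid__ is the only remaining file.
SHARE="${SHARE:-/tmp/hcp_share_reuse}"
mkdir -p "$SHARE"
touch "$SHARE/.__hcp_uuid__"                 # simulate the leftover marker
count=$(ls -A "$SHARE" | wc -l)              # -A counts hidden files too
if [ "$count" -eq 1 ] && [ -f "$SHARE/.__hcp_uuid__" ]; then
  rm "$SHARE/.__hcp_uuid__"
  echo "share ready for reuse"
else
  echo "directory not empty; do not reuse" >&2
fi
```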

If an NFS volume becomes inaccessible due to a disk failure on the extended storage device on which the data in the NFS volume is stored, you need to replace the disk, restore the data from backup, and then export the NFS share again.

In this case, the NFS volume needs to be remounted. If HCP doesn’t remount the volume automatically, you can try to manually remount the NFS volume:

1. On the left side of the Storage page, click on Components.

2. In the components list, click on the name of the NFS storage component that’s associated with the NFS volume that you want to remount.

3. At the top of the panel that opens, click on the Mount Points tab.

4. On the Mount Points panel, in the table row that contains the NFS mount point that corresponds to the NFS volume you want to remount, click on the remount control ( RemountControl.png ).

If the manual remount fails, try restarting the node with which the NFS volume is associated. To see which node the NFS volume is associated with, hover over the status icon for the mount point on the Mount Points panel.

You can restore an NFS shared directory to a different location from its original one. If you do, you need to modify the configuration of the associated NFS mount point to point to the new location. For information on modifying an NFS mount point, see Modifying an extended storage component.

If HCP cannot create, mount, or use an NFS volume and you’ve already determined that the permissions for the shared directory, the export options for the share, the mount point configuration on the associated NFS storage component in HCP, and the mount options for the mount point are all correct, the problem may exist on the extended storage device that’s represented by the NFS storage component on which you configured the mount point. To resolve such problems:

oEnsure that the NFS share has been exported on the device.

oEnsure that the NFS server is running on the device.

oEnsure that any NFS security software on the device is not blocking access by any of the HCP nodes.

oCheck the system log file on the device for messages indicating device errors. Then correct those errors.

If HCP still cannot create, mount, or use the volume, contact your authorized HCP service provider for help.
