Using HDP storage

When used with HNAS systems, the HDP software supports up to two levels of tiered storage (Tier 0 and Tier 1).

See the Hitachi NAS Platform HDP Best Practices (MK-92HNAS063) for recommendations.

Considerations when using HDP pools

Consider the following when using HDP pools:

As with storage pools based on parity groups:

  • Deleting a file system is not always permanent. Sometimes file systems are recoverable from the recycle bin by issuing the filesystem-undelete command.
  • Recycling a file system is permanent.

Unlike storage pools based on parity groups, on HDP-based storage pools:

  • Freed chunks move to the vacated-chunks list, which is stored in Cod.
  • Vacated chunks are reused when you create or expand file systems in the same storage pool.
  • Reusing chunks from recycled file systems prevents the server from exhausting space prematurely and from wasting real disk space on deleted data: the chunks are simply reallocated to other file systems. (A brief command sketch follows this list.)
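
For illustration only, a minimal command sequence is sketched below. The file system label 'fs1' is a hypothetical placeholder, and the exact argument forms are described in the filesystem-undelete and filesystem-recycle man pages.

    filesystem-undelete fs1    # recover a deleted file system while it is still in the recycle bin
    filesystem-recycle fs1     # permanent: the freed chunks move to the vacated-chunks list for reuse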

Creating an HDP pool with untiered storage

Create the HDP pool and DP volumes for the NAS server.

With untiered storage, tiers are not used: the metadata and the user data reside on the same tier, and the server has no notion of tiering.

Creating HDP pools with tiered storage

Most storage pools reside on a single DP pool.

Important: The HNAS systems support up to two levels of tiered storage (Tier 0 and Tier 1).

With tiered storage, the metadata is kept on Tier 0 and the user data on Tier 1. Tier 0 should be smaller than Tier 1, but should consist of faster storage. In general, the amount of real storage allocated to Tier 0 should be about 10% of the size of Tier 1. The metadata that Tier 0 contains is more compact than user data but is accessed more often. A single DP pool can support both the metadata and user-data tiers.

Storage pool naming

Storage pool naming rules and conventions.

A storage pool label may be one of the following types:

  • A base name, which is the name that the storage pool or file system was given when it was created. All copies of a storage pool have the same base name. All copies of a file system have the same base name unless you decide to rename one or more copies. Storage pool labels are not case sensitive, but they do preserve case (labels will be kept as entered, in any combination of upper and lower case characters). Also, storage pool labels may not contain spaces or any of the following special characters:
    • Double quote (")
    • Single quote (')
    • Ampersand (&)
    • Asterisk (*)
    • Slash (/)
    • Backslash (\)
    • Colon (:)
    • Semicolon (;)
    • Greater than (>)
    • Less than (<)
    • Question mark (?)
    • Vertical bar/pipe (|)

    Guidelines for choosing a good storage pool label include:

    • The label should reflect the contents of the storage pool. Reasonably short, but distinctive and descriptive labels will help to guard against mistakes.
    • Storage pool labels should be unique across the entire site, not just on the local cluster. If you move storage between servers or clusters, duplicate names will cause needless difficulty. Also, generic labels such as 'SAS_POOL_0' are best avoided, because they are not mnemonic and they are more likely to be duplicated among the clusters at a site. See the man page or the Command Line Reference for the storage-based-snapshot-label command for an explanation of rules and the various name types and interactions.
    • A storage pool label should not be the same as a file system label.
    • The storage pool label should not resemble a device ID (it should not be just a sequence of 1-4 digits).
  • A snapshot name (or snap name), which identifies a single copy of the data. When you place a storage pool into snapshot mode, and every time you add a new snapshot, you specify a new snap name. Every snapshot of a given storage pool must have a unique snap name, although snapshots of different storage pools may have the same snapshot name.
    Note: A snapshot name follows the same rules for special characters as a storage pool or file system base name, but a snap name also cannot contain a dash (-).
  • An instance name, which is automatically constructed from the base name and the snapshot name, if snapshots are used. The instance name identifies a single copy of a storage pool or file system.

    For example, if a storage pool labelled 'Accounts' has a snapshot called 'Wednesday', the instance name of this snapshot is 'Accounts-Wednesday'. If the storage pool has a file system called 'External' then the instance name of this snapshot of the filesystem is 'External-Wednesday'. Most storage pool commands and file system commands expect instance names.

Note: When the server creates or loads a file system, the file system details are stored in the server's registry, where they can be displayed by the filesystem-list-stored command (an example follows this note). The server compares labels as follows:
  • Case-insensitively: a storage pool called 'AAA' cannot be loaded at the same time as a storage pool or file system called 'aaa'.
  • Between objects of the same type: for example, you cannot have two file systems with the same name, even if they are in different storage pools.
  • Between objects of different types: for example, if you have snapshotted storage pools with instance names 'Accounts-Main' and 'Accounts-DataMine', you cannot then create a new unsnapshotted storage pool or a file system with a label of 'Accounts'.
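
A minimal illustration of the note above, assuming the command can be run without arguments to list everything in the registry (see its man page for the actual options and output):

    filesystem-list-stored    # display the file system details stored in the server's registry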

When labelling a storage pool, no storage pool may have the same base name or instance name as the base or instance name of any other loaded storage pool or file system.

Creating a storage pool using the CLI

You can use the CLI to create storage pools.

Note: For detailed information about the span-create command, see the CLI man pages. To create smaller file systems, use the CLI instead of the GUI, because the CLI allows smaller chunk sizes.

Procedure

  1. On the HNAS system, use the span-create command to create a storage pool using the SDs from the DP-Vols (on storage). For more information about the span-create command, refer to the Command Line Reference.

    Note

    If you are using HDP:

    • To avoid server timeouts when creating a new NAS server storage pool, wait for the HDP pool to finish formatting before creating the NAS server storage pool.

      If the HDP pool has finished formatting, but the NAS server does not detect the new DP-Vols, run the scsi-refresh command so the NAS server will detect the new DP-Vols.

    • If you are using HDP thin provisioning, list all the SDs in the initial span-create command. Do not run a single span-create command on a subset of the SDs and then a series of span-expand commands. (A brief sketch follows this note.)
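
    For illustration, a hedged sketch of this step is shown below. The storage pool label 'SPool1' and the SD device IDs 0-3 are hypothetical placeholders, and the assumed argument form (a label followed by a list of SD IDs) should be checked against the span-create man page.

        scsi-refresh                  # confirm that the server detects the new DP-Vols
        span-create SPool1 0 1 2 3    # create the storage pool from all of the DP-Vol SDs in one command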

Creating a storage pool using the GUI

With available SDs, administrators can create a storage pool at any time. After being created, a storage pool can be expanded until it contains up to 256 SDs.

When creating a tiered storage pool, to attain optimal performance, make sure that the SDs of the metadata tier (Tier 0) are on the highest performance storage type.

Procedure

  1. Navigate to Home > Storage Management > Storage Pools, and click create to launch the Storage Pool Wizard.

  2. Select either Untiered Storage Pool or Tiered Storage Pool.

    Note: If you are creating a tiered storage pool, this creates the user-data tier (Tier 1) of the tiered storage pool.
  3. From the list of available SDs, select the SDs for the storage pool/tier.

    Select at least four SDs for use in building the new storage pool/tier. To select an SD, select the check box next to the ID (Label).

    An untiered storage pool cannot contain SDs on RAID arrays with different disk types or RAID levels. Any attempt to create a storage pool from such dissimilar SDs will be refused.

    A tiered storage pool can contain SDs on RAID arrays with different disk types, as long as they are in different tiers. A tiered storage pool cannot, however, contain SDs with different RAID levels. Any attempt to create a storage pool with SDs that have different RAID levels will be refused.

    For the most efficient use of storage capacity in an untiered storage pool or in a tier of a tiered storage pool, best practice is for all SDs to be of the same capacity, width, stripe size, and disk size. However, after first acknowledging a warning prompt, you can create a storage pool with SDs that are not identically configured.

  4. Specify the storage pool label.

  5. Verify your settings, and click next to display a summary page.

    The summary page displays the settings that will be used to create the storage pool/tier.

    If you have already set up mirrored SDs for disaster preparedness or replication purposes, and you want the server to be aware of the mirror relationship, select the Look For Synchronously Mirrored System Drives check box.

    Note: Before selecting the Look For Synchronously Mirrored System Drives check box, you must have finished configuring the mirrored SDs using the RAID tools appropriate for the array hosting the mirrored SDs. For example, for Hitachi storage arrays, you would use TrueCopy to create the mirrored SDs.

    Note: The Look For Synchronously Mirrored System Drives check box is used only when setting up mirrored SD relationships using Hitachi TrueCopy. It is not used with Hitachi Universal Replicator (HUR) or global-active device (GAD) software.
  6. After you have reviewed the information, click create to create the storage pool/tier.

  7. If you are creating an untiered storage pool, you can now either:

    • Click yes to create file systems (refer to the File Services Administration Guide for information on creating file systems).
    • Click no to return to the Storage Pools page without creating file systems.
  8. If you are creating a tiered storage pool, you can now either:

    • Click no to return to the Storage Pools page if you do not want to create the metadata tier (Tier 0) of a tiered storage pool.
    • Click yes to display the next page of the wizard, which you use to create the metadata tier (Tier 0) of a tiered storage pool.
      1. Specify which SDs to use in the tier by selecting the check box next to the SD label.
      2. Click next to display the next page of the wizard, which is a summary page.
      3. If you have mirrored SDs, for disaster preparedness or replication purposes, and you want the server to be aware of the mirror relationship, select the Look For Synchronously Mirrored System Drives check box.
      4. After you have reviewed the information, click add to create the metadata tier (Tier 0) of the storage pool. A confirmation dialog appears, and you can now choose to create file systems in the storage pool or return to the Storage Pools page.
        • Click yes to create file systems (refer to the File Services Administration Guide for information on creating file systems).
        • Click no to return to the Storage Pools page.
    Note: After the storage pool has been created, it can be filled with file systems. For more information, see the File Services Administration Guide.

Adding the metadata tier

If you created a tiered storage pool, but only defined the SDs for the user data tier (Tier 1), you must now create the metadata tier (Tier 0).
Note: You can convert an untiered storage pool to a tiered storage pool using the span-tier command. For more information about this command, refer to the Command Line Reference.

To add a tier to a storage pool:

Procedure

  1. Navigate to Home > Storage Management > Storage Pools.

  2. Select the storage pool to which you want to add the tier. Click details to display the Storage Pool Details page.

  3. Click the Add a Tier link to display the Storage Pool Wizard page.

  4. Select the SDs to make up the metadata tier.

    Using the Storage Pool Wizard page, select the SDs for the tier from the list of available SDs on the page. To select an SD for the tier, select the check box next to the SD ID Label in the first column. Verify your settings, and then click next to display a summary page.
  5. Review and apply settings.

    The summary page displays the settings that will be used to create the storage pool/tier.

    If you have already created mirrored SDs for disaster preparedness or replication purposes, and you want the server to be aware of the mirror relationship, select the Look For Mirrored System Drives check box.

    Note: Before selecting the Look For Mirrored System Drives check box, you must have finished configuring the mirrored SDs using the RAID tools appropriate for the array hosting the mirrored SDs. For example, for Hitachi Vantara storage arrays, you would use TrueCopy to create the mirrored SDs.

    Note: The Look For Mirrored System Drives check box is used only when setting up mirrored SD relationships using Hitachi TrueCopy. It is not used with Hitachi Universal Replicator (HUR) or global-active device (GAD) software.

    Once you have reviewed the information, click add to create the second tier of the storage pool.

    Note: After the storage pool has been created, it can be filled with file systems.
  6. Complete the creation of the storage pool or tier.

    After clicking add (in the last step), you will see a confirmation dialog.

    You can now click yes to create a file system, or click no to return to the Storage Pools page. If you click yes to create a file system, the Create File System page will appear.

Allowing access to a storage pool

This procedure restores server access to an existing storage pool. It can also be used when a storage array previously owned by one server is physically relocated for use by another server. The process restores access to the SDs that belong to the storage pool, and then restores access to the pool itself.

Note: Before moving the storage pool from one NAS server or cluster to another, refer to the Command Line Reference for the span-assign-to-cluster command, or view the span-assign-to-cluster man page for information on migrating a storage pool safely.

To allow access to a storage pool:

Procedure

  1. Navigate to Home > Storage Management > System Drives.

  2. Select one of the SDs belonging to the storage pool, and click Allow Access.

  3. Select a pool, and click details. In the Details page for that storage pool, click Allow Access; then, in the Confirmation page, click OK.

    Note: To become mountable, each file system in the storage pool must be associated with an EVS. To do this, navigate to the Details page for each file system in the storage pool and assign it to an EVS.

Creating storage pools with DP pools from HDP storage

After you have created an HDP pool with tiered or untiered storage, you can use DP-Vols to create storage pools.

See the CLI man pages for detailed information about commands.

Procedure

  1. Use the command span-create or the NAS Manager equivalent to create the storage pool on the first HDP pool’s DP-Vols.

  2. Use the command span-expand to expand the storage pool on to the second HDP pool’s DP-Vols.

    Expanding the storage pool at the outset avoids the disadvantages of expanding it later, once it is a mature span. This is the only recommended exception to the rule of one DP pool per storage pool and one storage pool per DP pool. (A sketch of these two steps follows this procedure.)
  3. When necessary, add new pool volumes to whichever pool needs them. Use the following steps:

    1. Add parity groups or pool volumes.

    2. If the amount of storage in the affected pool exceeds the total size of its DP-Vols, add more DP-Vols and use span-expand.
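
As a sketch of steps 1 and 2 only: the storage pool label 'SPool1' and the SD device IDs are hypothetical placeholders, and the assumed argument form (a label followed by a list of SD IDs) should be checked against the span-create and span-expand man pages.

    span-create SPool1 0 1 2 3    # create the storage pool on the first HDP pool's DP-Vols
    span-expand SPool1 4 5 6 7    # immediately expand it onto the second HDP pool's DP-Vols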

Moving free space between storage pools

You can move free space between storage pools that reside on the same HDP pool; however, you should first thoughtfully consider the implications because of the strong performance impacts.

The span-list -s command shows:

  • The amount of vacated space on each HNAS stripeset
  • The HDP pool that hosts the stripeset
  • Other spans that share the HDP pool

Use this command to determine how much space can be moved to other spans on the same HDP pool. If you recently deleted a file system and you see less vacated space than you expect, issue the span-list-recycle-bin command to identify any file systems that have recently been deleted and still occupy space. To recycle these file systems, issue the filesystem-recycle command. This command makes it impossible to undelete the file systems you specify on the command line.
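
A hedged illustration of that workflow is shown below. The commands are run without options, 'fs1' is a hypothetical file system label, and the man pages describe the exact syntax and output.

    span-list -s                # show vacated space per stripeset and the hosting HDP pool
    span-list-recycle-bin       # list deleted file systems that still occupy space
    filesystem-recycle fs1      # permanently recycle one of them (it can no longer be undeleted)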

The span-unmap-vacated-chunks command launches a background thread that may run for seconds, minutes, hours, days or even months. See the man page for commands to monitor and manage its progress.

The free space on the DP pool will continue increasing while this background thread runs.

In configurations where the storage has to zero-initialize (overwrite with zeros) HDP pages before they can be reused, the free space on the pool may continue to increase even after the unmapping thread terminates.

The performance of all DP pools on the affected array will be lower than usual until free space has finished increasing, but DP pools on other storage arrays will be unaffected.

Important unmapper considerations

Although this configuration is not recommended, if multiple storage pools (spans) exist on a single DP pool, you can consider using the unmapper feature to move space between the storage pools on that DP pool.

Important: Using the unmapper commands can have serious consequences. It is strongly recommended that you read the CLI man page for each command.

Considerations:

  • Unmapping vacated chunks does free space, but the performance of the storage will be reduced until the storage has zero-initialized all the space that you unmap. Never unmap chunks just to affect the appearance of available storage.
  • You can unmap space on any number of spans at one time.
  • The server has no commands for monitoring or managing the HDP zero-init process. Once the process starts, you have to wait until it finishes. The time can exceed many hours, even weeks in some cases.

A further reason to avoid using the unmapper:

  • In most storage configurations, an HDP page cannot be reused immediately after being unmapped. For security reasons, the page must first be zero-initialized, overwriting its previous contents with zeros. This process occurs inside the storage, and it cannot be monitored or managed by commands on the server.

The unmapper feature uses the following commands:

  • span-vacated-chunks displays the number of vacated chunks in a storage pool and the progress of any unmapper.
  • span-stop-unmapping cancels an unmapper without losing completed work.
  • span-throttle-unmapping helps you avoid long queues of pages waiting to be zero-initialized.
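
A minimal sketch of launching and monitoring the unmapper follows. The storage pool label 'SPool1' is a hypothetical placeholder, the assumption that each command takes a storage pool label is based on the descriptions above, and any additional options are omitted; check each man page before running these commands.

    span-unmap-vacated-chunks SPool1    # launch the background unmapping thread
    span-vacated-chunks SPool1          # check the remaining vacated chunks and unmapper progress
    span-stop-unmapping SPool1          # if necessary, cancel the unmapper without losing completed work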

Using the unmapper to move space between storage pools

If, after thoughtfully considering the consequences associated with use of the unmapper, you decide it is worth the significant performance impact, you can use the following steps to move space between storage pools.

Unmapping chunks (using the unmapper) does not increase the amount of free space in a storage pool; instead, the unmapper:

  1. Takes the chunks out of the vacated-chunks list.
  2. Removes the capacity from those chunks by unmapping the space.
  3. The storage then zero-initializes the unmapped space.
  4. The freed capacity is returned to the underlying DP pool.

After being added to the DP pool, the freed space can be used by other storage pools that reside on the same DP pool.

Note: See the CLI man pages or the Command Line Reference for detailed information about the commands mentioned below.

The following procedure describes how to move space from storage pool S to storage pool T. It assumes a configuration in which both storage pools are built on DP-Vols from the same DP pool, and that DP pool is thinly provisioned.

Procedure

  1. Delete and recycle a file system from storage pool S (Span S).

  2. Run span-unmap-vacated-chunks on storage pool S.

  3. Run the span-list --sds T command, and look at the amount of free space in the DP pool.

    When the output from the span-list --sds T command shows that the DP pool has enough free space, create a new file system in storage pool T and/or expand one or more file systems in storage pool T.
    Note: It may take a significant amount of time for the zero-initialization process inside the storage to complete and for the free space in the DP pool to increase. How long this takes depends on the type of storage, the amount of space being initialized, and the utilization of the storage subsystem.
    If it takes too long to add the freed space to the DP pool, you can expand storage pool T onto a different DP pool (one that has available space).
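
The procedure above might look like the following sketch, using the storage pool labels S and T from the text. The file system label 'fs1' is a hypothetical placeholder, and the exact argument forms are described in the man pages.

    filesystem-recycle fs1          # permanently recycle a deleted file system in storage pool S
    span-unmap-vacated-chunks S     # return S's vacated chunks to the shared DP pool
    span-list --sds T               # repeat until the DP pool reports enough free space,
                                    # then create or expand file systems in storage pool T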