Storage management components

The storage server architecture includes system drives, storage pools, file systems, and virtual servers (EVSs), supplemented by a flexible quota management system for managing utilization, and Data Migrator to Cloud (DM2C) along with classic Data Migrator, both of which optimize available storage by moving cold data to less expensive storage. This section describes each of these storage components and functions in detail.

System drives

System drives (SDs) are the basic logical storage element used by the server. Storage systems use RAID controllers to aggregate multiple physical disks into SDs (also known as LUs). An SD is a logical unit made up of a group of physical disks or flash drives. The size of the SD depends on factors such as the RAID level, the number of drives, and their capacity.

Hitachi Enterprise RAID storage systems have a limit of 3 TiB for standard LUs or 4 TiB for virtualized LUs (HUVM). When using legacy storage systems, it is a common practice for system administrators to build large RAID groups (often called parity groups or volume groups) and then divide them into SDs (LUs) of 2 TiB or less. With today's large physical disks, RAID groups must be considerably larger than 2 TiB to make efficient use of space.

When you create SDs:

  • Use the Hitachi storage management application appropriate for your storage system. You cannot create SDs using NAS Manager or the NAS server command line.
  • You may need to specify storage system-specific settings in the storage management application.
For more information about the settings required and the firmware that is installed for each type of storage system, contact customer support.

Creating system drives

When creating SDs, use the Hitachi storage management application appropriate for your storage subsystem. You cannot create SDs using NAS Manager or the NAS server command line.

When creating SDs, you may need to specify array-specific settings in the storage management application. Also, depending on the firmware version of the array, there may be device-specific configuration settings. For example, on HUS 110, HUS 130, and HUS 150 arrays, if the firmware is base 0935A or greater, you should enable the HNAS Option Mode on the Options tab of the Edit Host Groups page.

For more information about what settings are required for each type of array, and for the firmware installed on the array, contact customer support.

Storage pools

A NAS server storage pool (known as a "span" in the command line interface) is the logical container for a collection of four or more system drives (SDs). There are two types of NAS server storage pools:

  • An untiered storage pool is made up of system drives (SDs) created on one or more storage systems within the same tier of storage (storage systems with comparable performance characteristics). To create an untiered storage pool, there must be at least four available and unused system drives on the storage system from which the SDs in the storage pool are taken.
  • A tiered storage pool is made up of system drives (SDs) created on storage systems with different performance characteristics. Typically, a tiered storage pool is made up of SDs from high-performance storage such as flash memory, and SDs from lower-performance storage such as SAS (preferably) or NL SAS (near line SAS). You can, however, create a tiered storage pool from SDs on storage systems using any storage technology, and you can create both tiers on the same storage system.

NAS server storage pools:

  • Can be expanded as additional SDs are created in the storage system, and a storage pool can grow to a maximum of 1 PiB or 256 SDs. Expanding a NAS server storage pool does not interrupt network client access to storage resources. SDs may be based on parity groups, or on HDP DP-Vols (preferably).
  • Support two types of thin provisioning:
    • NAS server storage pools can be thinly provisioned when created using SDs based on HDP DP-Vols.
    • File system thin provisioning, through the use of the NAS server filesystem-thin command together with file system confinement and auto-expansion. This type of thin provisioning allows you to create a small file system that automatically expands when necessary, which avoids the overhead of sustaining storage that is not yet needed.

      When file system thin provisioning is enabled, the server reports to protocol clients (though not at the CLI or in the GUI) that the file system is larger than it really is: either the capacity to which you have confined it or the maximum capacity to which it can ever grow, whichever is smaller (see the sketch after this list).

      Refer to the Command Line Reference for more information on the filesystem-thin command.

  • Contain a single stripeset on initial creation. Each time the storage pool is expanded, another stripeset is added, up to a maximum of 64 stripesets (after creation, a storage pool can be expanded a maximum of 63 times). Because HDP DP-Vols are the preferred way to provision storage to the NAS server, Hitachi recommends thin provisioning the DP pool at 200-300 percent, which reduces the likelihood of stripeset expansions.
  • Contain the file systems and enable the user to manage the file system settings that are common to all file systems in the storage pool. For example, the settings applied to a storage pool can either allow or constrain the expansion of all file systems in the storage pool.
    Note: By default, there is a limit of 32 file systems per storage pool. Recently deleted file systems that are still in the recycle bin do not count towards this number. It is possible to increase this limit using the filesystem-create CLI command with the --exceed-safe-count option. See the command man page for details.
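
To make the reported-capacity rule concrete, here is a minimal Python sketch. It is illustrative only: the reported_capacity function and the capacities in the example are hypothetical, not NAS server code or output.

    # Illustrative sketch only: the capacity a thinly provisioned file system
    # reports to protocol clients (not to the CLI or GUI). Values are made up.

    TIB = 1024 ** 4  # bytes in one tebibyte

    def reported_capacity(confinement_limit, max_growth_capacity):
        """Return the smaller of the confinement limit and the maximum
        capacity to which the file system can ever grow."""
        return min(confinement_limit, max_growth_capacity)

    # A file system confined to 10 TiB whose storage pool would let it grow to
    # at most 8 TiB is reported to clients as 8 TiB, however little is allocated.
    print(reported_capacity(10 * TIB, 8 * TIB) / TIB)   # -> 8.0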

Tiered storage pools

Currently, a tiered storage pool must have two tiers:

  • Tier 0 is used for metadata and should be made up of the best-performing storage.
  • Tier 1 is used for user data.

When creating a tiered storage pool, at least four unused SDs must be available for each tier. When you create a tiered storage pool, you first create the user data tier (Tier 1), then you create the metadata tier (Tier 0).

During normal operation, one tier of a tiered storage pool might become filled before the other tier. In such a case, you can expand one tier of the storage pool without expanding the other tier. When expanding a tier, you must:

  • Make sure that the SDs being added to the tier have the same performance characteristics as the SDs already in the tier. For example, do not add NL SAS (near line SAS) based SDs to a tier already made up of flash drives.
  • Add SDs to the tier. See the span-create man page for more information about minimum SD counts and creating storage pools.

Dynamically provisioned volumes

A dynamically provisioned volume (DP-Vol) is a virtualized logical unit (LU) that is used with Hitachi Dynamic Provisioning (HDP). You create DP-Vols in a dynamically provisioned pool (a DP pool), which is an expandable collection of physical storage. The maximum capacity of an SD is 64 TiB.

The total capacity of the DP-Vols can exceed that of the underlying parity groups or pool volumes (this is called thin provisioning). Every DP-Vol can draw space from any of the underlying parity groups or pool volumes, so the system performs well even if the load on the SDs is unbalanced.

Hitachi Dynamic Provisioning (HDP) thin provisioning enables granular span-expansion without loss of performance, which is impossible without dynamic provisioning. The NAS server is aware of thin provisioning, and does not use more space than actually exists, making thin provisioning safe with a NAS server. With this server, unlike other server platforms, it is not necessary to expand a thickly provisioned HDP pool as soon as it becomes 70 percent full.
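
As a rough illustration of this overcommitment, the following Python sketch computes a DP pool's thin provisioning percentage, the figure that the storage pool guidance earlier in this section suggests keeping at roughly 200-300 percent. The overcommit_percent function and all capacities are hypothetical examples, not values from a storage system.

    # Illustrative sketch: overcommit (thin provisioning) percentage of a DP pool.
    # All capacities are hypothetical examples.

    TIB = 1024 ** 4

    def overcommit_percent(dp_vol_capacities, physical_pool_capacity):
        """Total DP-Vol (virtual) capacity as a percentage of the physical
        capacity of the parity groups or pool volumes behind the DP pool."""
        return 100 * sum(dp_vol_capacities) / physical_pool_capacity

    # Four 64 TiB DP-Vols carved from a DP pool with 100 TiB of physical space:
    print(overcommit_percent([64 * TIB] * 4, 100 * TIB))   # -> 256.0, within 200-300%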

Dynamically provisioned pools

A dynamically provisioned pool (DP pool) is an expandable collection of parity groups or pool volumes containing the dynamically provisioned volumes (DP-Vols). DP pools are also sometimes referred to as HDP pools.

On enterprise storage, a DP pool resides on the pool volumes. On modular storage, a DP pool resides on the parity groups (PGs), rather than on logical units (LUs).

Note: Real (non-virtual) LUs are referred to as pool volumes in enterprise storage. In modular storage, real LUs are referred to as parity groups.

File system types

A file system typically consists of files and directories. Data about the files and directories (as well as many other attributes) is the metadata. The data within the file system (both user data and metadata) is stored in a storage pool.

Like storage pools, file system data (metadata and user data) may be stored in a single tier, or in multiple tiers.

  • When file system metadata and user data are stored on storage systems of a single storage tier, the file system is called an untiered file system. An untiered file system must be created in an untiered storage pool; it cannot be created in a tiered storage pool.

  • When file system metadata and user data are stored on storage systems of different storage tiers, the file system is called a tiered file system.

    In a tiered file system, metadata is stored on the highest performance tier of storage, and user data is stored on a lower-performance tier. Storing metadata on the higher-performance tier provides performance and cost benefits.

    A tiered file system must be created in a tiered storage pool; it cannot be created in an untiered storage pool.

Fibre Channel connections

Note
  • The number and operational speed of Fibre Channel (FC) ports on a NAS server are dependent on the server model. Refer to the hardware manual for your server model for more information on the number, operational speed, and location of FC ports on your NAS server.
  • The ports on the HNAS 5000 series are in reverse order from prior gateways.
    • Left to right: FC 4-3-2-1
    • Earlier generations were FC 1-2-3-4

Hitachi NAS Platform servers

Each HNAS server supports up to four independently configurable FC ports. Independent configuration allows you to connect to a range of storage systems, which allows you to choose the configuration that best meets the application requirements. The server manages all back-end storage as a single system, through an integrated network management interface.

Hitachi NAS Platform server model     Supported FC port operational speeds
3080, 3090, 3100, and 4040            1, 2, or 4 Gbps
4060, 4080, and 4100                  2, 4, or 8 Gbps
5200 and 5300                         4, 8, or 16 Gbps

The server supports connecting to storage systems either through direct-attached FC connections to the storage system (also called DAS connections) or FC switches connected to the storage system (also called SAN configurations):

  • In direct-attached (DAS) configurations, you can connect up to two storage systems directly to a server or a two-node cluster. Clusters of more than two nodes must use a FC switch configuration.
  • In configurations using FC switches (SAN configurations), the server must be configured for N_Port operation (when not using the 5000 series).
    • N_Port is the only operational mode on the 5000 series.
    • Contact customer support for more information on supported FC switch interoperability.

You can manage the FC interface on the server/cluster through the command line interface (CLI), using the following commands:

  • fc-link to enable or disable the FC link.
  • fc-link-type to change the FC link type. (This command is not available on the 5000 series because only N_Port is supported).
  • fc-link-speed to change the FC interface speed. (Only the HNAS 5000 series supports Auto Negotiation, which is the default setting.)

For more information about these commands, refer to the Command Line Reference.

VSP F/G/Nx00 servers with NAS modules

Depending on the model, your NAS module server may contain 8 Gbps and/or 16 Gbps FC ports for block connectivity. NAS module connectivity uses the FC protocol over PCIe. Refer to the Hardware Reference Guide for your VSP Gx00 or Fx00 model server for more information about the FC ports for block connectivity, and contact Hitachi Vantara Customer Support for information about using the ports.

About FC paths

The NAS server accesses the storage system through a minimum of two Fibre Channel (FC) paths (at least one from each of the FC switches). Unless otherwise stated, the recommended number of paths per SD should be limited to 16 (2 HNAS host FC ports per switch x 2 switches x 4 target ports per switch). An FC path is made up of the server’s host port ID, the storage system port WWN (worldwide name), and the SD identifier (ID).
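
The 16-path figure is simple arithmetic; the short Python sketch below spells it out. The port counts are the example values from the preceding paragraph, not a query of a live system.

    # Sketch of the path-count arithmetic for a single SD. A path pairs a server
    # host port with a storage target port reachable through the same switch.

    host_ports_per_switch = 2    # HNAS host FC ports cabled to each FC switch
    target_ports_per_switch = 4  # storage system target ports on each FC switch
    switches = 2

    paths_per_sd = host_ports_per_switch * target_ports_per_switch * switches
    print(paths_per_sd)  # -> 16, the recommended per-SD limit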

The following illustration shows a complete path from the server to each of the SDs on the storage system:

[Figure: FC interface]

You can display information about the FC paths on the server/cluster through the command line interface (CLI), using the fc-host-port-load, fc-target-port-load, and the sdpath commands.

Fibre Channel interface

The ports are in reverse order from prior gateways (due to mounting the HBA within the chassis):

  • Left to right: FC 4-3-2-1 (earlier generations were FC 1-2-3-4)
  • 2-node maximum for DAS storage (which is the same for prior gateways)
[Figure: FC interface ports]

Load balancing and failure recovery

Load balancing on a storage server is a matter of balancing the loads to the system drives (SDs) on the storage systems to which the storage server is connected. A logical unit (LU), known to the server as an SD, is a piece of disk space managed by the block storage, spread across several physical disks.

The server routes FC traffic to individual SDs over a single FC path, distributing the load across two FC switches and, when possible, across dual active/active or multi-port RAID controllers.

Following the failure of a preferred path, disk I/O is redistributed among other (non-optimal) paths. When the server detects reactivation of the preferred FC path, it once again redistributes disk I/O to use the preferred FC path.

Default load balancing (the load balancing that the storage server performs automatically) is based on the following criteria (their precedence is illustrated in the sketch after this list):

  • “Load” is defined as the number of open SDs, regardless of the level of I/O on each SD. An SD is open if it is in a span that has a file system that is mounted or is being formatted, checked or fixed. SDs count towards load at the target if they are open on at least one cluster node; the number of nodes (normally all nodes in a cluster, after boot) is not considered.
  • Balancing load on RAID controller target ports takes precedence over balancing load on server FC host ports.
  • Balancing load among a system’s RAID controllers takes precedence over balancing among ports on those controllers.
  • In a cluster, choice of RAID controller target port is coordinated between cluster nodes, so that I/O requests for a given SD do not simultaneously go to multiple target ports on the same RAID controller.
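
The precedence of these criteria can be summarized in a short Python sketch. It is not the server's actual algorithm, only an illustration of the ordering described above: the Path class, the pick_path function, and the load dictionaries are hypothetical, and "load" counts open SDs rather than I/O.

    # Illustrative sketch only (not the server's real algorithm): choosing a path
    # for an SD by comparing load at the RAID controller first, then at the
    # controller target port, then at the server FC host port. "Load" is the
    # number of open SDs, regardless of the level of I/O.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Path:
        controller: str    # RAID controller
        target_port: str   # RAID controller target port
        host_port: str     # server FC host port

    def pick_path(candidates, open_sds_per_controller,
                  open_sds_per_target_port, open_sds_per_host_port):
        """Return the candidate path with the lightest load, applying the
        precedence controller > target port > host port."""
        return min(
            candidates,
            key=lambda p: (
                open_sds_per_controller.get(p.controller, 0),
                open_sds_per_target_port.get(p.target_port, 0),
                open_sds_per_host_port.get(p.host_port, 0),
            ),
        )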

You can manually configure load distribution from the CLI (overriding the default load balancing performed by the server), using the sdpath command. When manually configuring load balancing using the sdpath command:

  • You can configure a preferred server host port and/or a RAID controller target port for an SD. If both are set, the RAID controller target port preference takes precedence over the server host port preference. When a specified port preference cannot be satisfied, port selection falls back to automatic selection.
  • For the SDs visible on the same target port of a RAID controller, you should either set a preferred RAID controller target port for all SDs or for none of the SDs. Setting the preferred RAID controller target port for only some of the SDs visible on any given RAID controller target port may create a situation where load distribution is suboptimal.
Note: Manually setting a preferred path is not necessary or recommended.

The sdpath command can also be used to query the current FC path being used to communicate with each SD. For more information on the sdpath command, run man sdpath.

To see information about the preferred path, navigate to Home > Storage Management > System Drives, then select the SD and click details to display the System Drive Details page. If available, the FC Path section provides information about the path, port, and controller.

Fibre Channel statistics

The server provides per-port and overall statistics, in real time, at 10-second intervals. Historical statistics cover the period since previous server start or statistics reset. The Fibre Channel Statistics page of the NAS Manager displays the number of bytes/second received and transmitted during the past few minutes.

RAID controllers (HNAS Gateway only)

The RAID controllers operate as an Active/Active (A/A) pair within the same storage system. Both RAID controllers can actively process disk I/O requests. If one of the two RAID controllers fails, the storage server reroutes the I/O transparently to the other controller, which starts processing disk I/O requests for both controllers.

 
