Using clusters

NAS Platform can form clusters under the following conditions:

  • The cluster to which a node is being added must have a license for at least the currently existing number of nodes.
  • All nodes in the cluster must have the same hardware configuration. You cannot form a cluster from a variety of hardware models.
  • The node joining the cluster must be of a compatible software level (within one minor revision level). For example, a server running version 11.0 software can be added to a cluster running version 11.1 software, but not to a cluster running version 11.2 software.
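The "within one minor revision level" rule can be pictured as a simple comparison of major and minor version numbers. The following Python sketch is purely illustrative (the function name and version strings are assumptions, not part of the product); the server performs this compatibility check itself when a node joins.

    def compatible_software_levels(node_version: str, cluster_version: str) -> bool:
        # Illustrative check: same major version, minor versions differ by at most one.
        node_major, node_minor = (int(p) for p in node_version.split(".")[:2])
        cluster_major, cluster_minor = (int(p) for p in cluster_version.split(".")[:2])
        return node_major == cluster_major and abs(node_minor - cluster_minor) <= 1

    # Example from the text: an 11.0 node can join an 11.1 cluster, but not an 11.2 cluster.
    assert compatible_software_levels("11.0", "11.1")
    assert not compatible_software_levels("11.0", "11.2")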

After the first server has been set to cluster mode, you can:

  • Add nodes by "joining" servers to the cluster.
  • Add EVSs to the cluster and distribute them among the cluster nodes.
    Note: To maximize cluster performance, distribute EVSs across nodes to level the network client load among them.

Cluster name space (CNS)

A Cluster Name Space (CNS) allows multiple separate file systems on a server to appear as subdirectories of a single logical file system (that is, as one unified file system). A CNS can also make multiple storage elements on that server available to network clients through a single CIFS share or NFS export.

The root directory and subdirectories in the CNS tree are virtual directories. As in a file system, the root occupies the highest position in the CNS tree and subdirectories reside under the root. Access to these virtual directories is read-only. Only the server's physical file systems support read-write access. Physical file systems can be made accessible under any directory in the CNS tree by creating a file system link. File system links associate the virtual directory in the CNS tree with actual physical file systems.

Any or all of the subdirectories in the CNS can be exported or shared, making them (and the underlying physical file systems) accessible to network clients. Creation and configuration of a CNS can be performed through the NAS Manager or the CLI.

After being shared or exported, a CNS becomes accessible through any EVS on its server or cluster. Therefore, it is not necessary to access a file system through the IP address of its host EVS; in fact, file systems linked into the CNS can be relocated between EVSs on the server or cluster transparently, without requiring the client to update its network configuration. This accessibility can be useful in distributing load across cluster nodes.

The simplest CNS configuration is also the most common. After creating the root directory of the CNS, create a single CIFS share and NFS export on the CNS root; then, add a file system link for each physical file system under the root directory. Through this configuration, all of the server's storage resources are accessible to network clients through a single share or export, and each file system is accessible through its own subdirectory.
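As an informal illustration of that arrangement, the following Python sketch models a CNS root with one file system link per physical file system. The class and the file system names are hypothetical and do not correspond to any NAS Manager or CLI interface.

    class ClusterNameSpace:
        # Toy model of a CNS: the virtual directories are read-only, and physical
        # file systems become visible only where a file system link is created.
        def __init__(self):
            self.links = {}  # virtual subdirectory under the CNS root -> physical file system

        def add_filesystem_link(self, virtual_path, physical_fs):
            self.links[virtual_path] = physical_fs

        def list_root(self):
            # Network clients see one subdirectory per linked file system
            # through the single share or export created on the CNS root.
            return sorted(self.links)

    cns = ClusterNameSpace()
    cns.add_filesystem_link("/projects", "fs_projects")  # hypothetical file systems
    cns.add_filesystem_link("/archive", "fs_archive")
    print(cns.list_root())  # ['/archive', '/projects']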

Windows and UNIX clients can take full advantage of the storage virtualization provided by CNS, because directories in the virtual name space can be shared and exported directly.

Tip: For best results, add FTP mount points and iSCSI logical units (LUs) to file systems that are not part of a CNS, because CNS does not support FTP mount points or iSCSI LUs. Because FTP clients and iSCSI Initiators communicate directly with individual EVSs and their associated file systems, connectivity for any file system containing FTP mount points or iSCSI LUs must be reestablished through a new EVS after relocation.
Note: CNS is a licensed feature. To create a Cluster Name Space, a CNS license must be installed. To purchase a CNS license, contact customer support.

EVS name spaces

An EVS name space allows separate file systems within a virtual server (EVS) to appear as subdirectories of a single logical file system (that is, as one unified file system). An EVS name space can also make multiple storage elements on the virtual server available to network clients through a single CIFS share or NFS export.

The EVS name space functions in the same way as the cluster name space (CNS), except that its context is that of the EVS instead of the cluster.

To create an EVS name space, you must have installed a CNS license and an EVS Security license, and you must have set the EVS to use an individual security context.

Linking to and from an EVS name space has the following constraints:

  • Links within an EVS name space. In an EVS name space tree, you can add links from the EVS name space to file systems hosted by the same secure EVS.
  • Links between the CNS and the EVS name spaces. The contexts of the Cluster Name Space and the EVS name space are mutually exclusive: links from one to the other are not allowed.
  • Links outside the EVS name space. Links from the individual EVS name space to file systems in other EVSs are not supported.
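These constraints reduce to a single rule: a link target must be a file system hosted by the same secure EVS that owns the name space, and the CNS is excluded. The helper below is a hypothetical sketch of that rule, not an actual NAS API.

    def link_allowed(namespace_evs, target_fs_evs, target_is_cns=False):
        # Hypothetical check of the EVS name space linking rules:
        # - links to file systems hosted by the same secure EVS are allowed;
        # - links between the CNS and an EVS name space are not allowed;
        # - links to file systems hosted by other EVSs are not allowed.
        if target_is_cns:
            return False
        return namespace_evs == target_fs_evs

    assert link_allowed("evs1", "evs1")                          # same secure EVS
    assert not link_allowed("evs1", "evs2")                      # different EVS
    assert not link_allowed("evs1", "evs1", target_is_cns=True)  # CNS is excluded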

About cluster licensing

The maximum number of nodes for a cluster is controlled by several factors, including hardware version, software version, and cluster licenses.

Note: The maximum licensed number of nodes in a cluster will never exceed the maximum number of nodes supported by the hardware and software of the nodes making up the cluster. Note, however, that the maximum number of nodes available in the VSP N series or VSP Gx00 with NAS modules is two. The NAS module hardware introduced in version 12.6 contains two nodes that are automatically clustered, and no license is required for their use.

A cluster license can be for a single node or for multiple nodes.

  • A single node license allows the server/node on which the license is installed to become the first node in a cluster or to join an existing cluster. Using single node cluster licenses, you can form clusters of up to the maximum number of nodes supported by the hardware and software being used.

    Single node cluster licenses can also be used to increase the maximum number of nodes in an already-formed cluster, up to the supported maximum.

  • A multi-node license allows the cluster on which the license is installed to form a cluster containing up to the licensed number of nodes, or the supported maximum number of nodes, whichever is lower.

    If a server/node containing a multi-node cluster license joins an existing cluster, the cluster’s total licensed number of nodes increases to the higher of the following:

    • The maximum number of nodes licensed by the existing cluster.
    • The number of nodes currently in the existing cluster, plus one.

      This happens when the total size of the cluster is already greater than or equal to the licensed maximum number of nodes in the existing cluster.

Note: The only difference between a single-node and a multi-node cluster license is the maximum number of nodes that the license permits. After installing the license key, you can see the number of nodes allowed by the license on the License Keys page.
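In practice, the license caps the cluster at the lower of the licensed node count and the hardware/software maximum. A minimal sketch of that calculation (the function name and example values are illustrative):

    def effective_max_nodes(licensed_nodes, supported_max):
        # The cluster can never grow beyond what the hardware and software support,
        # regardless of how many nodes the installed cluster license allows.
        return min(licensed_nodes, supported_max)

    print(effective_max_nodes(4, 8))   # 4-node license on an 8-node platform -> 4
    print(effective_max_nodes(10, 8))  # 10-node license on an 8-node platform -> 8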

Maximum cluster size can be determined in either of the following ways:

  • A multi-node cluster license installed on the cluster, allowing up to "X" nodes.

    This method is typically used for new larger-scale installations, where a multi-node cluster is being set up as a new installation and the node containing the multi-node license becomes the first cluster node.

  • An additive process that combines an existing cluster with a node containing a single-node cluster license.

    This method is typically used for installations that are expected to grow over time. The key advantage provided by this additive method is that maximum cluster size need not be determined in advance.

    For example, you can start with a single server without a cluster license. Later, you install a cluster license, configure the server as the first node of the cluster, and then add nodes. In this situation, you could begin with:

    • A multi-node cluster license and then add nodes that don’t have cluster licenses into the cluster.
    • A single-node cluster license and then install additional nodes (each with its own single-node cluster license) into the cluster.

The additive process is also used when you start with a small cluster and later add nodes to make a larger cluster. For example, if you start with a two-node cluster that has a four-node license, you can later add two servers (that don’t have cluster licenses) to create a four-node cluster. If necessary, you could later grow the cluster by adding individual nodes (each having a single-node cluster license), up to the supported maximum number of nodes.

Assuming that the cluster has fewer nodes than the maximum size supported by the hardware and software, the rules governing the addition of a node to an existing cluster are fairly simple:

  • A node may be added if the licensed maximum number of nodes is greater than or equal to the number of existing nodes, plus one.
  • A node may be added if the licensed maximum number of nodes is equal to the number of existing nodes, and the joining node has a cluster license.

    When joining an existing cluster, if the joining node has a cluster license, that license is transferred to the existing cluster and the cluster's maximum number of nodes increases by one (1). The increase is always one, regardless of the maximum number of nodes allowed by the joining node's cluster license, even if the joining node has a multi-node cluster license. For this reason, the order in which nodes join a cluster is important.
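The joining rules can be summarised in a short sketch. The function names and the loop are assumptions made for illustration; the example mirrors the two-node cluster with a four-node license described above.

    def can_join(existing_nodes, licensed_max, joining_node_has_license):
        # A node may join if there is licensed headroom for one more node, or if
        # the licensed maximum is already reached but the joining node brings a license.
        if licensed_max >= existing_nodes + 1:
            return True
        return licensed_max == existing_nodes and joining_node_has_license

    def licensed_max_after_join(existing_nodes, licensed_max, joining_node_has_license):
        # A transferred license raises the licensed maximum by at most one,
        # even if the joining node carried a multi-node cluster license.
        if joining_node_has_license:
            return max(licensed_max, existing_nodes + 1)
        return licensed_max

    # Two-node cluster with a four-node license: absorb two unlicensed nodes,
    # then grow further only with nodes that carry their own cluster licenses.
    nodes, lic = 2, 4
    for node_has_license in (False, False, True):
        assert can_join(nodes, lic, node_has_license)
        lic = licensed_max_after_join(nodes, lic, node_has_license)
        nodes += 1
    print(nodes, lic)  # 5 5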

When a server becomes a cluster node, all of its licenses are transferred to the cluster; different license types are transferred in different ways.

Creating a new cluster using NAS Manager

When creating a new cluster, you can use the Cluster Wizard to configure a server as the first cluster node. Then, you can use the Join Cluster Wizard to add a new node to the cluster. The Join Cluster Wizard allows you to add a managed server (a server that is already managed by the NAS Manager) to an existing cluster as the new cluster node.

Note: The maximum number of nodes available in the VSP Gx00 with NAS modules and VSP N series is two. The NAS module hardware contains two nodes that are automatically clustered, and no license is required for their use.

Configuring the first cluster node

If any of the nodes that you are going to use to form the cluster contain a multi-node cluster license, that node is the one that should be configured as the first cluster node.
Note: The maximum number of nodes available in the VSP Gx00 with NAS modules and VSP N series is two. The NAS module hardware contains two nodes that are automatically clustered, and no license is required for their use. Quorum device management is also automatic.

Procedure

  1. Navigate to Home > Server Settings > Cluster Wizard to display the Cluster Wizard page.

  2. Enter a new cluster name, the associated cluster node IP address, and the cluster subnet mask, and then select a quorum device.

    Note: Whether creating a new cluster or joining a cluster node, a cluster node IP address must be defined. This IP address maintains heartbeat communication among the cluster nodes, and between the cluster nodes and the quorum device (QD), which is typically the NAS Manager. Because of the importance of the heartbeat communication, the cluster node IP address should be assigned to the eth1 management port connected to the private management network, keeping the heartbeats isolated from normal network congestion.
  3. Click OK to save the configuration.

    The server reboots automatically. On restart, the node joins the cluster.

Adding a node to an existing cluster using NAS Manager

The server generates the node names using a combination of the cluster-name and the node ID. For example, the first node in the cluster could be named NASCluster-1. When a new node is added, it is important to check that the new name does not conflict with any existing node names in the cluster. For further information, see the cluster-node-rename and cluster-join man pages.
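The naming scheme can be pictured with a short sketch; the format string and the conflict check below are assumptions made for illustration, and the exact format used by the server may differ.

    def generate_node_name(cluster_name, node_id):
        # Node names combine the cluster name and the node ID, e.g. NASCluster-1.
        return f"{cluster_name}-{node_id}"

    existing_names = {"NASCluster-1", "NASCluster-2"}  # hypothetical current nodes
    candidate = generate_node_name("NASCluster", 3)
    if candidate in existing_names:
        raise ValueError(f"{candidate} conflicts with an existing cluster node name")
    print(candidate)  # NASCluster-3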

Procedure

  1. Navigate to Home > Server Settings > Join Cluster Wizard to display the Join Cluster Wizard page.

  2. Select a server, check the suggested IP address for the node (you can change it, if necessary), enter a user name and password, and then click Next.

    Note: When adding a node to an existing cluster, the node being added must be the same model as the nodes already in the cluster.
  3. Allow the system to reboot.

    The selected server will automatically reboot and join the cluster during the boot process.

Configuring the cluster

  1. Navigate to Home > Server Settings > Cluster Configuration to display the Cluster Configuration page.

  2. As needed, modify the quorum device assignment:

    • Click add to assign a QD to the cluster, if a QD is not specified.
    • Click remove to remove the specified QD.

      If a QD is removed from the cluster, it is released back to the NAS Manager's pool of available QDs.

  3. As needed, modify the cluster node assignment:

    Note: Services hosted by a cluster node must be migrated to a different cluster node before the node can be removed.
    • To remove a cluster node, click its details button to display the corresponding Cluster Node Details page. Click Remove From Cluster, and then click OK in the confirmation dialog (or Cancel to decline).

      Upon node removal, any hosted EVSs will automatically be migrated to another cluster node, with details provided in the confirmation dialog.

    • To add a node to the cluster, navigate to Home > Server Settings > Cluster Configuration, and select Cluster Join Wizard to display the Cluster Wizard.

Displaying cluster node details

The Cluster Node Details page displays information about a selected cluster node and allows removal of that node from the cluster.

Procedure

  1. Navigate to Home > Server Settings > Cluster Configuration, select a node, and click details to display its Cluster Node Details page.


    Field/Item: Description
    Cluster Node Name: The cluster node name (label).
    Cluster Node ID: The ID assigned to the node.
    Status: The status of the SMU.
    Network & Storage
    File Systems: The overall status of the file systems:
    • OK. All file systems up and operational.
    • Failed. One or more file systems has failed.

    Click the status link to display the File Systems page, which lists all file systems assigned to the EVS in that cluster node.

    Ethernet Aggregations: The status of the Ethernet aggregations in the cluster node:
    • OK. All aggregated ports are up and linked.
    • Degraded. One or more ports in an aggregation has failed.
    • Failed. All ports in an aggregation have failed.

    Click the status link to display the Link Aggregation page, which lists all aggregations in the cluster node.

    Management Network (HNAS server only): The overall status of the management network:
    • OK. Links are up and heartbeats are being received.
    • Failed. No heartbeats are being received, and the links may be up or down.

    Click the status link to display the Ethernet Statistics page, which lists information about the management port and the aggregated Ethernet ports in the cluster node.

    Fibre Channel Connections (HNAS server only): The status of the Fibre Channel ports in the cluster node:
    • OK. All ports up and operational.
    • Degraded. Some ports up and operational, but one or more has failed.
    • Failed. All ports have failed.

    Click the status link to display the Fibre Channel Statistics Per Port page, which lists all Fibre Channel ports in use in the cluster node.

    Cluster Communication: The status of communications within the cluster node.

    Cluster Interconnect:

    • OK. Link is up and heartbeats are being received.
    • Standby port down. The primary link is up and heartbeats are being received, but the secondary link is down.
    • Link up, no heartbeating. At least one link is up, but no heartbeats are being received.
    • Link down. All links are down (and therefore no heartbeats are being received).

    Management Network:

    • OK. Both links are up and heartbeats are being received.
    • Link up, no heartbeating. Both links are up, but no heartbeats are being received.
    • Link down. Both links are down (and therefore no heartbeats are being received).

    Quorum Device (HNAS server only):

    • OK. The Quorum Device is communicating with the cluster node.
    • Link up, no quorum communication. The link to the Quorum Device is up, but the Quorum Device is not communicating with the cluster node.
    • Link down. There is no communication with the Quorum Device.
    Note: The Quorum Device is internal on the NAS module, so quorum management is automatic.
    Chassis (HNAS server only)
    Power Supply Status: The status of the cluster power supply units (PSUs):
    • OK. Both PSUs are installed and operating normally.
    • Not Fitted. One PSU is not responding to queries, which may mean that it has been removed from the chassis or is not properly installed in the chassis.
    • Fault or Switched Off. One PSU is not responding to queries because it has failed, has been switched off, or is not plugged in to mains power.
    • Unknown. One PSU is not responding to queries, and the exact cause cannot be determined.
    Temperature: The status of the temperature in the cluster node chassis:
    • OK. Within the normal operating range.
    • Degraded. Above normal, but not yet critical.
    • Failed. Critical.

    When available, the temperature in the chassis is also displayed. The displayed temperature is the highest reported temperature of any of the boards in the chassis.

    Chassis Disks: The status of the server's internal hard disks, and the percentage of the server's internal disk space that has been used:
    • OK. Operating normally.
    • Degraded. A non-critical problem has been discovered with one or both of the server's internal hard disks.
    • Failed. A critical problem has been discovered with one or both of the server's internal hard disks.
    Chassis Battery Status (not applicable to Series 5000): The status of the server's battery pack.

    When the indicator is green:

    • OK. Capacity and voltage within the normal operating range.
    • Initialising. PSU battery is initializing after initial installation.
    • Normal Charging. PSU battery is being charged.
    • Cell-Testing. PSU battery is being tested.

    When the indicator is amber:

    • Discharged. Capacity and/or voltage below normal. This status should be considered a warning; if it continues, the PSU battery should be replaced.
    • Low. Capacity or voltage below normal operating level. This status should be considered a warning; if it continues, the PSU battery should be replaced.
    • Not Responding. PSU battery is not responding to queries.

    When the indicator is red:

    • Fault. PSU battery is not holding a charge, has the wrong voltage, or some other fault, and the PSU battery should be replaced.
    • Not Fitted. PSU battery is not detected. Contact your technical support representative for more information.
    • Failed. Capacity and voltage consistently below acceptable minimum, or the PSU battery is not charging, or is not responding to queries. This status indicates a failure; the PSU battery should be replaced.
    • Very Low. Capacity and voltage below acceptable minimum. If this status continues for more than a few hours, it indicates a failure; the PSU battery should be replaced.

    When available, the level of the battery charge is also displayed.

    Fan Speed: The status of the fans in the cluster node chassis:
    • OK. All fans operating normally.
    • Degraded. One or more fans spinning below normal range.
    • Failed. At least one fan has stopped completely, or is not reporting status.

    When available, the chassis fan speed is also displayed. The displayed fan speed is the slowest reported speed of any of the three fans. An error message may be displayed, even if it does not correspond with the slowest fan.

    System Uptime: The duration since the last reboot of the cluster node.
    System (NAS module only)
    System LUs: The status of the server's logical units and the percentage of the internal disk space that has been used. The Maximum Used value refers to the partition that is using the most space.
    System Uptime: The duration since the last reboot of the cluster node.
    EVS: The names (labels) and status of the EVSs assigned to the node:
    • Green. Online and operational.
    • Amber. Offline, but listed here because it is hosting the administrative EVS.
    • Red. Failed.

    Click the EVS name to display the EVS Details page for that EVS.

    remove (HNAS server only): Removes the node from the cluster.

Quorum device management (external NAS Manager only)

An external NAS Manager hosts a pool of eight quorum devices (QDs). The NAS Manager provides quorum services for up to eight clusters from its pool of QDs by assigning a QD to a cluster during cluster configuration. After being assigned to a cluster, the QD is “owned” by that cluster and is no longer available for assignment to another cluster. Removing a QD from a cluster releases ownership of the QD and returns the QD to the NAS Manager’s pool of available QDs.
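The assign/release life cycle can be pictured as a small pool with exclusive ownership. The class below is a simplified, hypothetical model, not the NAS Manager implementation.

    class QuorumDevicePool:
        # Toy model of an external NAS Manager's pool of eight quorum devices (QDs).
        # A QD assigned to a cluster is owned by that cluster until it is removed.
        def __init__(self, size=8):
            self.available = {f"qd{i}" for i in range(1, size + 1)}
            self.assigned = {}  # cluster name -> QD

        def assign(self, cluster):
            if cluster in self.assigned:
                return self.assigned[cluster]  # a cluster owns at most one QD
            if not self.available:
                raise RuntimeError("All QDs in the pool are already assigned")
            qd = self.available.pop()
            self.assigned[cluster] = qd
            return qd

        def release(self, cluster):
            # Removing the QD from a cluster returns it to the pool of available QDs.
            self.available.add(self.assigned.pop(cluster))

    pool = QuorumDevicePool()
    qd = pool.assign("cluster-a")  # hypothetical cluster name
    pool.release("cluster-a")      # the QD becomes available again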

Beginning in NAS Manager software version 10.0, an updated quorum service is available. Depending on the version of the NAS server firmware in the clusters managed by the NAS Manager, one or both of the following quorum service versions may be required:

  • Quorum Services (also known as legacy Quorum Services) is required by clusters running firmware versions prior to version 10.0.
  • Quorum Services v2 is used by clusters running firmware versions 10.0 and newer.
Note: Unified VSP Gx00 with NAS modules and VSP N series models can have a maximum of two nodes. All other details described here apply to the HNAS server models.

NAS Managers running software version 10.0 and later can simultaneously manage clusters that require legacy Quorum Services and other clusters that require Quorum Services v2. Non-managed servers may also use the quorum services on the NAS Manager.

Note: During cluster configuration, the two quorum service versions work together to ensure that one cluster cannot be served by QDs of both quorum services at the same time. When a request is made to assign a QD to a cluster, the quorum service receiving the request first checks whether the other quorum service has already assigned a QD to the cluster. If so, the previously assigned QD is removed from the cluster before the quorum service receiving the request assigns a QD to the cluster. This ensures that the old and new quorum services cannot both service the same cluster.
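Reusing the QuorumDevicePool sketch above, the cross-check described in this note can be modelled as follows. This is an illustrative assumption about the behaviour, not the actual implementation.

    def assign_quorum_device(cluster, receiving_service, other_service):
        # Before one quorum service assigns a QD, any QD the other service has
        # already assigned to that cluster is removed, so a cluster is never
        # served by QDs from both quorum services at the same time.
        if cluster in other_service.assigned:
            other_service.release(cluster)
        return receiving_service.assign(cluster)

    legacy = QuorumDevicePool()  # legacy Quorum Services (firmware before 10.0)
    v2 = QuorumDevicePool()      # Quorum Services v2 (firmware 10.0 and newer)
    assign_quorum_device("cluster-a", legacy, v2)
    assign_quorum_device("cluster-a", v2, legacy)  # the legacy QD is released first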

 
