The system contains several servers that operate as a cluster. Clusters of more than two servers include two 10 Gbps Ethernet (10 GbE) switches; Hitachi Vantara supports two switches for redundancy.
|System management unit (SMU)||The SMU is the management component for the other components in a system. An SMU provides administration and monitoring tools. It supports data migration and replication, and acts as a quorum device in a cluster configuration. Although integral to the system, the SMU does not move data between the network client and the servers. In clustered systems, an external SMU provides the management functionality. In some configurations, multiple SMUs are advisable.|
|Storage systems||A Hitachi NAS Platform system or a Hitachi Unified Storage File Module system can control several storage enclosures. The maximum number of storage enclosures in a rack depends on the model of storage enclosures being installed. Refer to the Storage Subsystem Administration Guide for more information on supported storage systems.|
|Fibre Channel (FC) switches||The server supports FC switches that connect multiple servers and storage systems. See Hitachi Vantara Support Connect for information about which FC switches are supported.|
|External 10 Gigabit Ethernet (10 GbE) switches||All cluster configurations require an external Ethernet switch. See Hitachi Vantara Support Connect for information about the 10 GbE switches that have been qualified for use with the system, and about the availability of those switches.|
|10 GbE switches||The server connects to a 10 GbE switch for connection with the public data network (customer data network). A 10 GbE switch is also required for internal cluster communications in clusters of three or more nodes. See Hitachi Vantara Support Connect for information about the 10 GbE switches that have been qualified for use with the server, and about the availability of those switches. Hitachi Vantara requires dual 10 GbE switches for redundancy: in a dual-switch configuration, if one switch fails, the cluster nodes remain connected through the second switch.|