
Monitoring Fibre Channel switches (HNAS server only)

HNAS servers allow you to add Fibre Channel (FC) switches to the System Monitor, so you can easily check FC switch connectivity status, which indicates whether the NAS Manager received a response to an Ethernet ping of the switch's last-known IP address. The connectivity status does not indicate whether the FC switch has connectivity with the storage subsystem.

When adding an FC switch to the System Monitor, you can associate it with one or more servers. After an FC switch has been associated with a server, you can monitor switch connectivity status, display log events and SNMP traps, download FC switch diagnostic information, and configure emailing of switch-related diagnostic information.

Displaying Fibre Channel switch connectivity status

The System Monitor displays FC switch connectivity status at a glance, and also lists FC switches, which can be selected to display detailed switch information.

Using System Monitor to display switch connectivity status

  1. Navigate to Home > Status & Monitoring > System Monitor.

    The status indicator next to the FC switch indicates its connectivity status.

Using NAS Manager to display switch connectivity status

  1. Navigate to Home > Storage Management > FC Switches.


    Field/Item: Description
    Name – The name of the switch, defined when the switch was added. This name should be descriptive enough to identify the switch.
    Address – The IP address or DNS name of the switch, defined when the switch was added.
    Switch Status – An indicator of the connectivity status of the switch (see the sketch after this table). Connectivity status indicators are:
    • Green – OK. A response was received from a ping of the last-known IP address of the switch.
    • Gray – Determining state. An FC switch appears as gray for up to 60 seconds immediately after it is added. After a ping of the switch IP address, the status changes to green (OK) or red (Severe), depending on whether the ping received a response.
    • Red – Severe. No response was received from a ping of the IP address of the switch.
    details – Displays the FC Switch Details page for the switch. From the FC Switch Details page, you can open the embedded management interface for the switch (if available), and change the switch name or address.
    add – Opens the Add FC Switch page.
    delete – Deletes one or more selected FC switches.
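
    The status logic amounts to a simple ping check: green if the last-known address answers, red if it does not, and gray while the first result for a newly added switch is still pending. The following Python sketch is illustrative only and is not part of the NAS Manager; the ping_host helper and the way the 60-second window is applied are assumptions based on the description above.

    import subprocess
    import time

    GRACE_PERIOD = 60  # seconds a newly added switch can show as gray

    def ping_host(address):
        # Send one ICMP echo request; True means a reply was received.
        # Assumes a Unix-like 'ping' with -c (count) and -W (timeout).
        result = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                                capture_output=True)
        return result.returncode == 0

    def switch_status(address, added_at):
        # Mirrors the table: green = ping answered, red = no answer,
        # gray = still determining state shortly after the switch was added.
        if ping_host(address):
            return "green (OK)"
        if time.time() - added_at < GRACE_PERIOD:
            return "gray (determining state)"
        return "red (Severe)"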

Adding FC switches

After you add an FC switch, the NAS Manager displays it in the System Monitor, with its connectivity status. Because multiple servers or clusters might use the storage connected to an FC switch, a switch can be associated with multiple servers or clusters managed by a NAS Manager, and it then appears in the System Monitor of every server and cluster with which it is associated.

Procedure

  1. Navigate to Home > Storage Management > FC Switches, and click add to display the Add FC Switch page.

  2. Enter the requested information.

    Field/Item: Description
    Associate Existing Switch with name (currently managed server) – Select an existing switch to associate with the named server or cluster. When you associate a switch with a managed server or a cluster, the switch is added to the System Monitor of that server/cluster.
    Monitor Switch – Use the list to select the switch you want to associate with the named server/cluster.
    Add New Switch – Select to add a new FC switch. After the switch has been added, you can associate it with a managed server or a cluster.
    Name – The name you want to use to refer to the switch. This name should be descriptive enough to identify the switch.
    Host Name/IP Address – A Fibre Channel switch can be specified by an IPv4 or IPv6 address, or by a host name. If an IPv6 address is specified, the SMU can monitor the switch only if the SMU itself is configured with an IPv6 address. Additionally, if the switch is specified by host name, and that host name resolves to an IPv6 address, monitoring is possible only if an IPv6 DNS server is provided. (See the address-resolution sketch after this procedure.)
    Username – Enter the user login name for the embedded management interface of the FC switch.
    Password – Enter the password associated with the user name for the embedded management interface of the FC switch.
    Use http/https/Telnet/other on port... – From the list, select the protocol and port for connecting with the embedded management interface of the FC switch. The defaults are the http protocol and port 80. The port number must be in the range 1 to 65535.
    Note: If the protocol is http, https, or Telnet, clicking the switch in the System Monitor displays the embedded management interface. If the protocol is other, the FC Switch Details page is displayed instead of the management interface.
  3. Verify your settings, and click OK to save, or cancel to decline.
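
    As a rough illustration of the host-name caveat above, the following Python sketch classifies an address before a switch is added. socket.getaddrinfo is a standard-library call; the classification rules are assumptions drawn from the table, and the host name in the example is hypothetical.

    import socket

    def classify_switch_address(address):
        # Resolve the literal address or host name and report which IP
        # families it yields. An IPv6-only result means the SMU must have
        # an IPv6 address (and an IPv6 DNS server, if a host name was
        # used) for monitoring to work.
        infos = socket.getaddrinfo(address, None)
        families = {info[0] for info in infos}
        if families == {socket.AF_INET6}:
            return "IPv6 only: SMU needs an IPv6 address to monitor"
        if socket.AF_INET in families:
            return "IPv4 reachable: standard monitoring applies"
        return "unresolvable or unsupported address family"

    # Example (hypothetical host name):
    # print(classify_switch_address("fc-switch-01.example.com"))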

Displaying or changing details for an FC switch

On the FC Switches page, you can display a list of the FC switches that have been added to the System Monitor of any server or cluster managed by a NAS Manager. From this list, you can display and change details for a switch.

Procedure

  1. Navigate to Home > Storage Management > FC Switches to display the FC Switches page, which lists all FC switches that have been added to the System Monitor of the server/cluster, and click details for a switch to display its FC Switch Details page.

  2. As needed, display or modify the switch information.

    Field/Item: Description
    Management Links – This area provides links to the embedded management interfaces for the FC switch. Click a link to open the interface.
    Note: The FC switch management interface might or might not support multiple concurrent logins. Refer to the documentation for the switch regarding use of the embedded management interface.
    Name – Name of the switch, specified when the switch was added. This name should be descriptive enough to identify the switch.
    Name/IP Address – The IP address or DNS name of the switch, specified when the switch was added.
    Username – User login name for the embedded management interface of the FC switch.
    Password – Password associated with the user name for the embedded management interface of the FC switch.
    Use http/https/Telnet/other on port... – Protocol and port for connecting with the embedded management interface of the FC switch. The defaults are the http protocol and port 80.
    Note: If the protocol is http, https, or Telnet, clicking the switch in the System Monitor displays the embedded management interface. If the protocol is other, the FC Switch Details page is displayed instead of the management interface.
    OK – Saves configuration changes, and closes the page.
    cancel – Closes the page without saving configuration changes.
  3. Verify your settings, and click OK to save, or cancel to decline.

Optimizing performance with Performance Accelerator

The Performance Accelerator feature optimizes throughput and IOPS capacity in the NAS Platform system by enabling very-large-scale integration (VLSI) features in the NAS server, significantly increasing both throughput and IOPS capacity. To maximize throughput, the PCIe connection between the SI FPGA and the Tachyon Fibre Channel controller is increased from four to eight lanes, doubling the available bandwidth of the connection and providing greater throughput and speed. Performance Accelerator enhances the IOPS component by increasing the number of cache controllers within the SI FPGA from one to two, maximizing the available cache controller processing power. If a bottleneck previously existed in the PCIe connection to the Tachyon Fibre Channel controller, or in the SI cache controller, Performance Accelerator might reduce or eliminate it.

Note: Performance Accelerator is available only on the NAS Platform 3090 G1 and NAS Platform 3090 G2 servers. Installing Performance Accelerator on other servers has no effect.

Determining if Performance Accelerator will increase system performance

To evaluate the throughput component, measure the current system throughput. If the current throughput is close to the "standard" throughput limits, it is likely that the four-lane PCIe connection to the Tachyon Fibre Channel controller is the limiting factor, and Performance Accelerator might bring a performance improvement. On newer systems equipped with QE4+ Tachyon controllers, the standard read speed is 880 MB/sec and the standard write speed is 800 MB/sec. On older systems equipped with QX4 Tachyon controllers, the standard read speed is 880 MB/sec and the standard write speed is 640 MB/sec.
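
As a rough worked example of that comparison, the sketch below checks measured throughput against the standard limits quoted above. The 90% threshold used to decide that a system is "close" to its limit is an assumption, not a documented value.

    # Standard throughput limits (MB/sec) quoted in the text above.
    STANDARD_LIMITS = {
        "QE4+": {"read": 880, "write": 800},
        "QX4": {"read": 880, "write": 640},
    }

    CLOSE_FRACTION = 0.9  # assumed: within 90% of standard counts as "close"

    def near_standard_limit(controller, measured_read, measured_write):
        # True when either measured figure approaches its standard limit,
        # suggesting the PCIe connection may be the bottleneck.
        limits = STANDARD_LIMITS[controller]
        return (measured_read >= CLOSE_FRACTION * limits["read"]
                or measured_write >= CLOSE_FRACTION * limits["write"])

    # Example: a QX4 system measuring 850 MB/sec reads and 600 MB/sec
    # writes is near its standard limits, so the license might help.
    print(near_standard_limit("QX4", 850, 600))  # True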

For the IOPS component, collect a PIR while the system is under maximum load. Examine the SI utilization by looking at the si_busy_clocks_last_second_percentage statistic in the logged-statistics.csv file. If this statistic shows that the SI FPGA is very busy (90 to 100 percent active, with the standard capacity being 72,000 ops/sec), then the SI cache controller is likely the bottleneck, and Performance Accelerator might significantly improve performance.
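
A minimal sketch of that inspection, assuming logged-statistics.csv is an ordinary CSV file with a si_busy_clocks_last_second_percentage column (the exact layout of the PIR output is not documented here):

    import csv

    def si_busy_samples(path, threshold=90.0):
        # Collect the samples where the SI FPGA was at or above the given
        # busy percentage; sustained 90-100% readings suggest the cache
        # controller is the bottleneck, per the guidance above.
        samples = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                busy = float(row["si_busy_clocks_last_second_percentage"])
                if busy >= threshold:
                    samples.append(busy)
        return samples

    busy = si_busy_samples("logged-statistics.csv")
    print(f"{len(busy)} samples at or above 90% SI utilization")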

Installing Performance Accelerator

Performance Accelerator is enabled by installing its license.

Testing the Performance Accelerator installation

Performance Accelerator enables additional PCIe lanes in the VLSI to connect to the Tachyon Fibre Channel controller. If these lanes have not been previously tested, the server performs a full power-on self test (POST) to ensure that the lanes are working. If the POST passes, both components of Performance Accelerator are enabled when the server boots. If the POST fails, only the IOPS (dual cache controller) component of Performance Accelerator is enabled, and an error event is generated.

Note: A full POST is possible only if there is no stale data in NVRAM left over from deleted file systems that had associated NVRAM content. Stale data is cleared from NVRAM by unmounting file systems cleanly; use the nvpages list command to inspect for any that remains.

Uninstalling Performance Accelerator

Procedure

  1. Remove the Performance Accelerator license.

  2. Reboot the server, using the supervisor-level reboot-app command. In a cluster, reboot one node at a time.

Troubleshooting Performance Accelerator

At boot time, Performance Accelerator writes the following line to the server dblog:

Performance Accelerator: licensed 1, tptelc 1, mtds_passed 0, tpcurrent 0, tpprevious 0, dcmode unset

The following table defines the meaning of each field in the line:

Field: Description
licensed – 1 if Performance Accelerator is licensed, 0 otherwise.
tptelc – 1 as long as the throughput component of Performance Accelerator is not disabled by the fci4 telc (see below), 0 otherwise.
mtds_passed – 1 if the full POST has run and passed, 0 otherwise.
tpcurrent – 1 if licensed=1, tptelc=1, and mtds_passed=1; 0 otherwise.
tpprevious – The value of tpcurrent on the previous boot.
dcmode – The value of the telc used to force dual cache mode behavior. If "unset", the default behavior ("striped") is used, as long as Performance Accelerator is licensed.
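
A minimal sketch of how such a dblog line could be parsed and sanity-checked (the split logic is an assumption based on the example line above):

    LINE = ("Performance Accelerator: licensed 1, tptelc 1, "
            "mtds_passed 0, tpcurrent 0, tpprevious 0, dcmode unset")

    def parse_pa_status(line):
        # Split the comma-separated "name value" pairs after the prefix.
        body = line.split(":", 1)[1]
        return dict(pair.strip().split() for pair in body.split(","))

    status = parse_pa_status(LINE)
    # Per the table, tpcurrent should be 1 exactly when licensed, tptelc
    # and mtds_passed are all 1.
    expected = "1" if (status["licensed"] == status["tptelc"]
                       == status["mtds_passed"] == "1") else "0"
    assert status["tpcurrent"] == expected
    print(status)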

Verifying that the throughput component of Performance Accelerator is enabled

Use the dev-level fci-info pciex_status command; for example:

mercuryc4(MMB):$ fci-info pciex_status
fc
fc    pciex_status = 0x27f00006 670040070
fc    pciex_num_active_lanes : 0x8 8

The last line shows that pciex_num_active_lanes is "0x8 8", indicating that the PCIe connection between the SI FPGA and the Tachyon Fibre Channel controller has successfully been increased from four to eight lanes. The value is "4" if Performance Accelerator is disabled.

Verifying that the IOPS component of Performance Accelerator is enabled

The IOPS component can be verified using the dev-level si-chip config command; for example:

mercuryc4(MMB):$ si-chip config
config = 0x29980 170368
dual_cache_mode : 0x3 3

The "dual_cache_mode" shows "3" if Performance Accelerator is enabled ("0" if disabled).

Disabling the throughput component

Procedure

  1. Use the dev-level telcset fci4 true command. In a cluster, set the telc on all nodes.

  2. Reboot the server, using the supervisor-level reboot-app command. In a cluster, reboot one node at a time.

Next steps

To reenable the throughput component, delete the fci4 telc, and reboot.

Disabling the IOPS component

Procedure

  1. Use the dev-level telcset dual_cache_mode primary command. In a cluster, set the telc on all nodes.

  2. Reboot the server, using the supervisor-level reboot-app command. In a cluster, reboot one node at a time.

Next steps

To reenable the IOPS component, delete the dual_cache_mode telc, and reboot.

If the throughput component is not enabled when the license is installed

If a Performance Accelerator license is installed but the throughput component is not enabled, the most likely reason is that the eight-lane connection to the Tachyon Fibre Channel controller has not been successfully tested. For the eight-lane connection to be tested, the server must undergo a full system reboot, and there must be no stale data in NVRAM. If these conditions are met, the full POST runs on boot (assuming it has not previously passed).

If the full POST has not previously passed, and the test is still not running on boot, check that the license is installed, that a full system reboot is being performed, and that there is no stale data in NVRAM. Stale data is cleared from NVRAM by unmounting file systems cleanly; use the nvpages list command to inspect for any that remains.

If the full POST test is running and failing, it might indicate a fault in the server.

The following events are logged by Performance Accelerator:

Event: Description
Performance Accelerator throughput enabled – Logged when Performance Accelerator throughput is enabled after previously being disabled.
Performance Accelerator throughput disabled – Logged when Performance Accelerator throughput was previously enabled but now is not.
Cannot enable Performance Accelerator throughput – Logged when Performance Accelerator is licensed, but POST could not run or did not pass.

 
