
Performance Data Collection - Open Systems Requirements

Objective

  • Enterprise open systems performance data collection

Environment

  • Virtual Storage Platform (VSP)
  • Hitachi Unified Storage VM (HUS VM)
  • Universal Storage Platform V (USP V)
  • Universal Storage Platform VM (USP VM)
  • Universal Storage Platform (USP)
  • Hitachi Network Storage Controller 55 (NSC55)
  • Hitachi Lightning 9900 V Series enterprise storage systems (9900 V)

Steps

Please collect the following information for performance analysis by the Hitachi Vantara (Hitachi) Global Support Center (GSC). If you need help, ask your local Hitachi Hardware Engineer (CE) and Hitachi Systems Engineer (SE). Collect all the information in the order shown. Ideally, you should collect a Mode 31 detailed dump and performance export data.

  • Note that short range performance data is typically available for only up to 24 hours. It is therefore very important that data is collected during, or as soon as possible after, any performance impact.
  1. Brief description and timeline of the performance problem
  2. Data collection from Hitachi Performance Monitor
  3. Time difference between storage and server(s)
  4. Detail dump from storage array
  5. External storage information (if applicable)
  6. Remote copy information (if applicable)
  7. Operating system (OS) performance information (useful)
  8. SAN configuration information (useful)
  9. Hitachi Dynamic Tiering (if applicable)
  10. Additional data may be required

Once collected, upload the data to TUF.

Brief Description and Timeline of the Performance Problem

Description. This is very important. Please remember that we do not know your server naming conventions or which servers are connected to which ports. Please answer all these questions:

  1. What are the customer's concerns?
  2. What server(s) are affected (single or multiple)?
  3. What operating systems are in use?
  4. If the OS is Microsoft® Windows®, specify the LUN-to-drive-letter relationship (see the example after this list).
  5. Are you in a cluster configuration?
  6. Which ports, host storage domains, LUNs, array groups, logical devices (LDEV), and/or pools are having performance issues?
  7. What types of applications are affected?
  8. Is replication used, such as Hitachi TrueCopy, TrueCopy Asynchronous (TCA), Hitachi Universal Replicator (HUR), Hitachi Copy-on-Write Snapshot, or Hitachi ShadowImage?
  9. Is there Hitachi external storage that could be affected by this problem? If yes, provide the model and serial number, and see the External Storage Information section below.
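
For example, a useful answer to question 4 might look like the following (the LUNs, drive letters, and applications are hypothetical):

    LUN 00:10 -> E: (SQL Server data files)
    LUN 00:11 -> F: (SQL Server transaction logs)
    LUN 00:12 -> G: (backup staging area)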

Timeline. Sometimes performance is good at certain times of the day, and bad at other times. We need to know the exact times at which it was good, as well as the exact times at which it was bad. Please answer all these questions:

  1. Does the problem only occur at certain times or days of the week/month?
  2. At what time(s) does the problem start?
  3. At what time(s) does the problem go away?
  4. Detail the exact timeline of all events before, during, and after the incident.

Please be as specific as possible.
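
As an illustration only, a timeline with this level of detail is what we are looking for (all times, server names, and events below are hypothetical):

    08:00  Nightly batch completes; response times are normal
    09:15  Users report slow logins on servers SRV-A and SRV-B
    09:30  Storage administrator observes elevated write pending on the array
    11:00  Response times return to normal without intervention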

Data Collection from Performance Monitor

Supply the Performance Monitor data using the instructions in How to Run the Export Tool for Enterprise Performance Data Collection.

Time Difference Between Storage and Server(s)

Supply both of the following:

  1. The log from the export tool. The log contains the time difference between the server collecting the Performance Monitor data and the SVP when the export tool is run on a customer PC/server. The log also shows us what data was requested and any errors in the export process.
  2. The time difference between the SVP and the host with the performance problem.

Ideally, the export tool is run on a customer PC/server rather than on the SVP. If the CE runs the export tool on the SVP, the CE must supply the time difference as well as the log from the export tool.

In almost all cases, the "SVP time" is not the same as the "server time". We have seen many cases where the SVP is set to GMT while the server is correctly set to the local time zone.
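
Because these offsets matter when host logs are aligned with Performance Monitor data, it helps to record them explicitly at collection time. The following is a minimal Python sketch, not a Hitachi tool; the prompt format is an assumption made for illustration:

    # record_time_offset.py - minimal sketch (not a Hitachi tool) for recording
    # the host clock alongside a manually read SVP clock, so that the offset is
    # documented at collection time.
    from datetime import datetime, timezone

    # The SVP clock must be read manually (for example, from the SVP desktop).
    svp = input("SVP time as displayed (YYYY-MM-DD HH:MM:SS): ")
    svp_time = datetime.strptime(svp, "%Y-%m-%d %H:%M:%S")
    host_time = datetime.now()

    print(f"Host local time: {host_time:%Y-%m-%d %H:%M:%S}")
    print(f"Host UTC time  : {datetime.now(timezone.utc):%Y-%m-%d %H:%M:%S}")
    print(f"Host minus SVP : {host_time - svp_time}")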

We suggest that you disable Performance Monitor data collection 1-2 hours after the USP dumps so that data is not lost in the event of problems running the export tool.

Detail Dump from Storage Array

Supply the following:

  • Detail dump of the subsystem. The dump must be taken within 24 hours of the problem.

You collect the dump in different ways, depending on the current situation:

  • If the problem is still happening, turn on MODE 31 and after 5 minutes take a detailed dump of the USP system. Turn MODE 31 off when the dump has been taken.
  • If the problem has gone away, turn on MODE 31 and take a detailed dump a few minutes later. This needs to happen as soon as possible after the incident. Turn MODE 31 off when the dump has been taken.
  • If the problem can be recreated, turn on MODE 31, turn on Performance Monitor, turn on other available performance tools if applicable, and take a detailed dump about 1 hour after the problem starts to happen again, preferably while the problem is happening (see the illustration after this list). Turn MODE 31 off when the dump has been taken.
  • If Remote Copy is suspected, turn on MODE 31 on the RCU and take a detailed dump of the RCU a few minutes later. If Performance Monitor is available on the RCU, then capture its performance data as well. Turn MODE 31 off when the dump has been taken.
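
As an illustration of the recreate scenario (all times hypothetical): enable MODE 31 and Performance Monitor at 09:00, the problem recurs at 09:30, take the detailed dump at about 10:30 while the symptoms persist, and turn MODE 31 off as soon as the dump completes.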

Only a Hitachi hardware engineer can take a dump from a RAID subsystem. Please ask your Hitachi hardware engineer to do this for you.

NOTE: If MODE 31 has been turned on, remember to turn it off after the detailed dump completes. MODE 31 can have a performance impact on some workload types and should not be left on unless specifically requested by GSC.

External Storage Information (If Applicable)

When external storage is attached to a USP, USP V, VSP, or HUS VM, we need:

  • Configuration diagram or description of the connectivity between the USP and external storage
  • Detailed description of the usage/purpose of the external storage subsystem
  • The time difference between the USP SVP clock and the external subsystem

If the external storage is a 9500 V, AMS, WMS, AMS 2000, or HUS, we need:

  • 60-120 collections of DF performance statistics using one-minute intervals and covering the time of the RAID detailed dump
  • Constitute files for RAID group, system parameters and host group
  • Simple trace synchronized with USP/VSP dump
  • For more information, refer to the separate instructions for DF performance data collection.

If external storage is a 9900 V, we need:

  • MODE 31 detail auto dump synchronized with the USP/VSP dump
  • Performance Monitor data

Remote Copy Information (If Applicable)

If the performance problems involve remote copy, provide the data as specified below.

  • Remote Copy Performance Data Collection

NOTE: It is important to synchronize the data collection procedures between the main control unit (MCU) and the remote control unit (RCU); the collections should be within 6 hours of each other. Performance data should be collected within 23 hours of the problem.

Other Useful Data

The data above is mandatory. In addition, the following items should be supplied as soon as possible.

SAN Configuration

For SAN configuration information, provide the following:

  • SAN diagram
  • Switch log output (if connection is through switches)
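
As an example of what switch log output typically means, Brocade switches provide a supportsave collection and Cisco MDS switches provide show tech-support output; these are mentioned only as common examples, so check your switch vendor's documentation for the exact procedure.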

Dynamic Tiering

Downloading the tier relocation log file:

  1. In the Storage Systems tree on the left pane of the top window, select Pool. The Pool window appears.
  2. Click Tier Relocation Log. The progress dialog box appears.
  3. Click OK. A dialog box opens that lets you select where to download the file.
  4. Specify the download folder and click Save. If you change the file name from the default, the name may lose its extension; confirm that the file name ends in .tsv before saving.
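
If you want to sanity-check the downloaded file before uploading it, the following minimal Python sketch (not a Hitachi tool; the default file name is hypothetical) warns about a missing .tsv extension and prints the first few rows, assuming the file is tab-separated as the extension implies:

    # peek_relocation_log.py - minimal sketch for sanity-checking a downloaded
    # tier relocation log before upload. Assumes a tab-separated file, as the
    # .tsv extension implies; the default file name below is hypothetical.
    import csv
    import sys

    path = sys.argv[1] if len(sys.argv) > 1 else "tier_relocation_log.tsv"
    if not path.lower().endswith(".tsv"):
        print(f"Warning: {path} does not end in .tsv; confirm the extension.")

    with open(path, newline="") as f:
        for i, row in enumerate(csv.reader(f, delimiter="\t")):
            print(row)
            if i >= 4:  # show only the first five rows
                break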