
About Performance Monitor

Overview of Hitachi Performance Monitor

Hitachi Performance Monitor enables you to monitor your storage system and collect detailed usage and performance statistics. You can view the data in lists and on graphs to identify changes in usage rates and workloads, analyze trends in disk I/O, and detect peak I/O times. For example, if there is a decrease in performance, such as delayed host response times, you can use Performance Monitor to discover the reason for the decrease and determine the actions to take to improve performance.

Performance Monitor collects data about storage system resources such as drives, volumes, and microprocessors as well as statistics about front-end (host I/O) and back-end (drive I/O) workloads. You can perform the following types of monitoring depending on the storage system:

  • You can perform both short-range monitoring and long-range monitoring. For both ranges, data is collected only while Monitoring Switch is set to Enabled; no data is collected while it is set to Disabled. You specify when and how often the data is collected.
  • (VSP E series) Data is collected while Monitoring Switch is set to Enabled, and you can specify when and how often the data is collected.

Using the Performance Monitor data, you can manage and fine-tune the performance of your storage system using the performance management software products.

Requirements for using performance functions

The following system requirements and permissions apply to the performance management functions.
  • License keys for performance management: The license keys for the following software products must be installed on the storage system:
    • Performance Monitor
    • Server Priority Manager
    • Virtual Partition Manager

    For details about installing license keys, see the System Administrator Guide.

  • Access privileges for Device Manager - Storage Navigator: Administrator access for Device Manager - Storage Navigator or write access for the performance management software products is required to perform operations. Users without Administrator access or write access can only view the performance management information and settings. You need specific administrator roles to use the following functions:
    • Performance Monitor: Storage Administrator (Performance Management)
    • Server Priority Manager, Virtual Partition Manager: Storage Administrator (System Resource Management)
  • Java: Java is required to use Server Priority Manager on the Device Manager - Storage Navigator computer. For details about installing Java and configuring Device Manager - Storage Navigator, see the System Administrator Guide.
  • Secondary windows on Device Manager - Storage Navigator:

    You must enable secondary windows if you plan to use any of the following functions in Device Manager - Storage Navigator (HDvM - SN):

    • Login Message function
    • Data Retention Utility
    • Server Priority Manager
    • Compatible PAV
    • Compatible XRC
    • Volume Retention Manager
    Java and some settings of Device Manager - Storage Navigator are required for the secondary windows. For details about enabling and using the secondary windows, see the System Administrator Guide.
  • Cache memory for Virtual Partition Manager: Use of Virtual Partition Manager might require additional cache memory in your storage system.

Cautions and restrictions for monitoring

  • Performance monitoring switch

    When the performance monitoring switch is set to disabled, monitoring data is not collected.

  • Changing the SVP time setting

    If the SVP time setting is changed while the monitoring switch is enabled, the following monitoring errors can occur:

    • Invalid monitoring data appears.
    • No monitoring data is collected.

    If you have changed the SVP time setting, disable the monitoring switch, and then re-enable the monitoring switch. Next, obtain the monitoring data. For details about the monitoring switch, see Starting monitoring.

    When time synchronization with the SNTP server or auto summer time is enabled on the SVP, the time is automatically adjusted. If the adjusted time difference is large, the invalid value (-1) might be output as the monitoring data because correct monitoring data cannot be obtained.

  • WWN monitoring

    You must configure some settings before traffic between host bus adapters and storage system ports can be monitored. For details, see Adding new WWNs to monitor, Adding WWNs to ports, and Connecting WWNs to ports.

    Note: When you are using Server Priority Manager in Command Control Interface, you cannot configure the settings required for WWN monitoring.
  • Parity group monitoring

    To correctly display the performance statistics of a parity group, all volumes belonging to the parity group must be specified as monitoring targets.

  • Storage system maintenance

    If the storage system is undergoing the following maintenance operations during monitoring, the monitoring data might not be valid, or the invalid value (-1) might be output because monitoring data cannot be obtained normally:

    • Adding, replacing, or removing data drives
    • Changing the storage system configuration
    • Replacing the firmware
    • Formatting or quick-formatting logical devices
    • Adding, replacing, or removing an MP unit
    • Replacing controllers
  • Storage system power-off

    If the storage system is powered off during monitoring, monitoring stops and then resumes when the storage system is powered on again. However, Performance Monitor cannot display information about the period during which the storage system was powered off, so the monitoring data collected immediately after power-on might contain extremely large values.

  • Firmware replacement

    After the firmware is replaced, monitoring data is not stored until the service engineer releases the SVP from Modify mode. Therefore, inaccurate data might be temporarily displayed.

Cautions and restrictions for usage statistics

  • Retention of short-range and long-range usage statistics

    Long-range monitoring displays usage statistics for the last six months (186 days), and short-range monitoring displays usage statistics for up to the last 15 days. Usage statistics outside of these ranges are deleted from the SVP. In short-range monitoring, results are retained for the last 1 to 15 days, depending on the specified sampling interval. After the retention period for a monitoring result has passed, the result is deleted from the SVP and cannot be displayed.

  • (VSP E series) Retention of usage statistics

    Usage statistics for up to the last 15 days are displayed in monitoring. Usage statistics outside of this range are deleted from the SVP.

  • Statistics for periods of high I/O workload

    If the host I/O workload is high, the storage system gives higher priority to I/O processing than to monitoring. If this occurs, some monitoring data might be missing. If monitoring data is missing frequently, use the Edit Monitoring Switch window to lengthen the sampling interval. For details, see Starting monitoring.

  • Volumes and CU ranges

    The volumes to be monitored by Performance Monitor are specified by control unit (CU). If the range of used CUs does not match the range of CUs monitored by Performance Monitor, usage statistics might not be collected for some volumes.

  • Reverse resync operations

    When you run the CCI horctakeover command, the pairresync-swaps command for a UR pair, or the BCM YKRESYNC REVERSE command for a URz pair, the primary and secondary volumes are swapped. Immediately after you run any of these commands, the collected information still reflects the pre-swap configuration. Invalid monitoring data is generated for a short time but is corrected automatically when the monitoring data is updated. Invalid data is also generated temporarily when a volume that was used as a secondary volume is used as a primary volume after a UR pair or URz pair is deleted.

  • When the SVP High Reliability Kit is installed and the SVP is duplexed, switching the master SVP and the standby SVP keeps the long-range monitoring data but deletes the short-range monitoring data. If you ask maintenance personnel to switch the master and standby SVPs for a microcode upgrade or other maintenance purposes, run the Export Tool beforehand as necessary to acquire the short-range monitoring data.

  • Display of monitoring data immediately after monitoring starts or immediately after the sampling interval is changed

    Monitoring data cannot be displayed within the first two sampling intervals after the monitoring starts or the sampling interval is changed because no monitoring data has accumulated. For instance, if the sampling interval is set or changed to 15 minutes, monitoring data is not accumulated for up to 29 minutes after this setting is made.
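This two-interval rule can be sketched as a small calculation (the function name is illustrative, not part of the product):

```python
def minutes_until_data(sampling_interval_min: int) -> int:
    """Worst-case minutes before monitoring data can be displayed:
    two full sampling intervals must elapse after the setting is made,
    so data can be missing for up to (2 * interval) - 1 minutes."""
    return 2 * sampling_interval_min - 1

print(minutes_until_data(15))  # 29, matching the 15-minute example
```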

  • Display of monitoring data during high SVP workload

    If the SVP is overloaded, the system might require more time than the sampling interval allows to update the display of monitoring data. If this occurs, a portion of monitoring data is not displayed. For example, suppose that the sampling interval is 1 minute, and the display in the Performance Management window is updated at 9:00 and the next update occurs at 9:02. In this case, the window (including the graph) does not display the monitoring result for the period of 9:00 to 9:01. This situation can occur when the following maintenance operations are performed on the storage system or on the Device Manager - Storage Navigator PC:

    • Adding, replacing, or removing cache memory.
    • Adding, replacing, or removing data drives.
    • Changing the storage system configuration.
    • Replacing the firmware.
  • Pool-VOLs

    Pool-VOLs of Thin Image, Dynamic Provisioning, and Dynamic Provisioning for Mainframe are not monitored.

  • Margin of error

    The monitoring data might have a margin of error.

Data collected by Hitachi Performance Monitor

Hitachi Performance Monitor collects and displays monitoring data.

Monitoring data

The following table lists the objects that can be monitored and the data that is collected for each monitoring object. You can specify the objects to display in graphs in the Performance Objects fields in the Monitor Performance window. When the resource group feature is installed, you can specify an object to be displayed in graphs only when the resources shown in the Necessary resources column of the following table are allocated.

The monitoring data for each sampling interval is the average value of the data over the data sampling interval. The sampling interval is as follows:

  • The sampling interval is 1 to 15 minutes for Short Range and 15 minutes for Long Range.
  • (VSP E series) The sampling interval is 1 to 15 minutes.

Monitoring data is shown by resource ID, not by virtual ID, even when the volume is in a virtual storage machine. For instructions on viewing the monitoring data, see Using the Performance Monitor data graphs.

| Object of monitoring | Monitoring data | Necessary resources |
| --- | --- | --- |
| Controller | Usage rates of MPs (%); usage rates of DRR (%) | None |
| Cache | Usage rates of cache (%); write pending rates (%) | None |
| Access path | Usage rates of access path between HIEs and ISWs (%); usage rates of access path between MP units and HIEs (%) | None |
| Fibre port (Target) | Throughput (IOPS); data transfer (MB/s); response time (ms) | Port |
| Fibre port (Initiator) | Throughput (IOPS); data transfer (MB/s); response time (ms) | Port |
| Mainframe fibre port | Throughput (IOPS); data transfer (MB/s); response time (ms); CMR delay time (ms); disconnected time (ms); connected time (ms); HTP port open exchange (count/sec) | Port |
| iSCSI port (Target) | Throughput (IOPS); data transfer (MB/s); response time (ms) | Port |
| iSCSI port (Initiator) | Throughput (IOPS); data transfer (MB/s); response time (ms) | Port |
| WWN | Throughput of WWN (IOPS); data transfer of WWN (MB/s); response time of WWN (ms); throughput of port (IOPS); data transfer of port (MB/s); response time of port (ms) | Port |
| LDEV (base) | Total throughput (IOPS); read throughput (IOPS); write throughput (IOPS); cache hit (%); data transfer (MB/s); response time (ms); back transfer (count/sec); drive usage rate (%)¹; drive access rate (%)¹; ShadowImage usage rates (%)¹,² | LDEV |
| LDEV (UR/URz) | Write host I/O throughput (IOPS); write host I/O data transfer (MB/s); initial copy cache hit (%); initial copy data transfer (MB/s) | LDEV |
| LDEV (TC/TCz/GAD) | RIO (count); pair synchronized (%); differential track (count); initial copy throughput (count); initial copy data transfer (MB/s); initial copy response time (ms); update copy throughput (count); update copy data transfer (MB/s); update copy response time (ms) | LDEV |
| Parity group | Total throughput (IOPS); read throughput (IOPS); write throughput (IOPS); cache hit (%); data transfer (MB/s); response time (ms); back transfer (count/sec); drive usage rate (%)¹ | Parity group |
| LUN (base)⁴ | Total throughput (IOPS); read throughput (IOPS); write throughput (IOPS); cache hit (%); data transfer (MB/s); response time (ms); back transfer (count/sec) | Host group; LDEV |
| LUN (UR)³ | Write host I/O throughput (IOPS); write host I/O data transfer (MB/s); initial copy cache hit (%); initial copy data transfer (MB/s) | Host group; LDEV |
| LUN (TC/GAD)³ | RIO (count); pair synchronized (%); differential track (count); initial copy throughput (count); initial copy data transfer (MB/s); initial copy response time (ms); update copy throughput (count); update copy data transfer (MB/s); update copy response time (ms) | Host group; LDEV |
| External storage | Data transfer between the storage system and external storage per logical device (MB/s); response time between the storage system and external storage per logical device (ms) | LDEV |
| External storage | Data transfer between the storage system and external storage per external volume group (MB/s); response time between the storage system and external storage per external volume group (ms) | Parity group |
| Entire storage system (TC/TCz/GAD) | RIO (count); pair synchronized (%); differential track (count); initial copy throughput (count); initial copy data transfer (MB/s); initial copy response time (ms); update copy throughput (count); update copy data transfer (MB/s); update copy response time (ms) | None |
| Journal (UR/URz) | Write host I/O throughput (IOPS); write host I/O data transfer (MB/s); initial copy cache hit (%); initial copy data transfer (MB/s); master journal throughput (IOPS); master journal journal (count/sec); master journal data transfer (MB/s); master journal response time (ms); master journal usage data (%); master journal metadata usage rate (%); restore journal throughput (IOPS); restore journal journal (count/sec); restore journal data transfer (MB/s); restore journal response time (ms); restore journal usage data (%); restore journal metadata usage rate (%) | None |
| Entire storage system (UR/URz) | Write host I/O throughput (IOPS); write host I/O data transfer (MB/s); initial copy cache hit (%); initial copy data transfer (MB/s); master journal throughput (IOPS); master journal journal (count/sec); master journal data transfer (MB/s); master journal response time (ms); restore journal throughput (IOPS); restore journal journal (count/sec); restore journal data transfer (MB/s); restore journal response time (ms) | None |

Note:

  1. Only information on internal volumes is displayed. Information on external volumes and FICON® DM volumes is not displayed.
  2. Includes usage rates for ShadowImage for Mainframe.
  3. The same value is output to all LUNs mapped to the LDEV.
  4. The monitoring data can be collected only for a LUN on an open system. The monitoring data cannot be collected for a LUN with an NVMe connection or a mainframe connection.

Usage rates of MPs

Function

The usage rate of the MP shows the usage rate of the MP assigned to a logical device. If the usage rate of an MP is high, I/Os are concentrated on that MP. Consider distributing the I/Os to other MP units.

Storing period

Short-Range (from 1 to 15 minutes) or Long-Range (fixed at 15 minutes) can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Controller | MP | Usage Rate (%) | None |

Usage rate of DRRs

Function

A data recovery and reconstruction processor (DRR) is a microprocessor (located on the DKBs and CHBs) that is used to generate parity data for RAID 5 or RAID 6 parity groups. The DRR uses the formula "old data + new data + old parity" to generate new parity.
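As a minimal sketch of this parity-update formula, interpreting "+" as the bitwise XOR used for RAID 5 parity (block values and variable names are illustrative):

```python
def new_parity(old_data: int, new_data: int, old_parity: int) -> int:
    # "old data + new data + old parity", where "+" is bitwise XOR
    return old_data ^ new_data ^ old_parity

# A 3-data-block stripe: parity is the XOR of all data blocks.
d0, d1, d2 = 0b1010, 0b0110, 0b1111
parity = d0 ^ d1 ^ d2

# Rewriting d1 updates parity without rereading the whole stripe,
# which is why the DRR needs only old data, new data, and old parity.
d1_new = 0b0001
parity = new_parity(d1, d1_new, parity)
assert parity == d0 ^ d1_new ^ d2  # stripe parity is still consistent
```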

If the monitor data shows high DRR usage overall, perform either of the following operations to distribute the workload for the system:

  • Move a volume with a high write usage rate (especially a high sequential write usage rate) from a RAID-5 (or RAID-6) parity group to a RAID-1 parity group.
  • Move the data to another storage system.

Use Volume Migration to move a volume. For details on Volume Migration, contact customer support.

If the monitoring data shows relatively high usage for all DRRs, system performance might not improve even after a volume is moved using Volume Migration.

Storing period

Short-Range or Long-Range can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Controller | DRR | Usage Rate (%) | None |

Usage rate of cache memory

Function

When you display monitoring results in a short-range sampling interval (VSP E series), the window displays the usage rates of the cache memory for the specified period of time.

Storing period

Short-Range can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Cache | None | Usage Rate (%) | None |

Write pending rates

Function

The write pending rate indicates the ratio of write pending data to the cache memory capacity, expressed as a percentage of the cache memory capacity used for write pending data. The Monitor Performance window displays the average and maximum write pending rates for the specified period of time.
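The ratio reduces to a simple percentage; a small sketch (the capacity figures are illustrative, not product limits):

```python
def write_pending_rate(pending_bytes: int, cache_bytes: int) -> float:
    """Write pending data as a percentage of cache memory capacity."""
    return 100.0 * pending_bytes / cache_bytes

# e.g. 8 GiB of write pending data in a 256 GiB cache
print(f"{write_pending_rate(8 * 2**30, 256 * 2**30):.1f}%")  # 3.1%
```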

Storing period

Short-Range or Long-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Cache | None | Write Pending Rate (%) | None |

Storage system throughput

Function

Total throughput is the total number of I/Os per second. Read throughput is the number of read I/Os to the disk per second when file read processing is performed. Write throughput is the number of write I/Os to the disk per second when file write processing is performed.

Throughput in the following modes can be displayed:

  • Sequential access mode
  • Random access mode
  • Cache fast write (CFW) mode
  • Total of the above modes
Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Fibre port¹ | Target; Initiator | Throughput (IOPS) | None |
| Mainframe fibre port¹ | None | Throughput (IOPS) | None |
| iSCSI Port¹ | Target; Initiator | Throughput (IOPS) | None |
| WWN¹ | WWN | Throughput (IOPS) | None |
| WWN¹ | Port | Throughput (IOPS) | None |
| Logical device¹ | Base | Total Throughput (IOPS) | Total; Sequential; Random; CFW |
| Logical device¹ | Base | Read Throughput (IOPS) | Total; Sequential; Random; CFW |
| Logical device¹ | Base | Write Throughput (IOPS) | Total; Sequential; Random; CFW |
| Logical device¹ | TC/TCz/GAD | Initial copy | Throughput (count)² |
| Logical device¹ | TC/TCz/GAD | Update copy | Throughput (count)² |
| Logical device¹ | UR/URz | Write host I/O | Throughput (IOPS) |
| Parity group¹ | None | Total Throughput (IOPS) | Total; Sequential; Random; CFW |
| Parity group¹ | None | Read Throughput (IOPS) | Total; Sequential; Random; CFW |
| Parity group¹ | None | Write Throughput (IOPS) | Total; Sequential; Random; CFW |
| LUN³ | Base | Total Throughput (IOPS) | Total; Sequential; Random; CFW |
| LUN³ | Base | Read Throughput (IOPS) | Total; Sequential; Random; CFW |
| LUN³ | Base | Write Throughput (IOPS) | Total; Sequential; Random; CFW |
| LUN³ | TC/GAD | Initial copy | Throughput (count)² |
| LUN³ | TC/GAD | Update copy | Throughput (count)² |
| LUN³ | UR | Write host I/O | Throughput (IOPS) |
| Journal | UR/URz | Write host I/O | Throughput (IOPS) |
| Journal | UR/URz | Master journal | Throughput (IOPS) |
| Journal | UR/URz | Restore journal | Throughput (IOPS) |
| Entire storage system | TC/TCz/GAD | Initial copy | Throughput (count)² |
| Entire storage system | TC/TCz/GAD | Update copy | Throughput (count)² |
| Entire storage system | UR/URz | Write host I/O | Throughput (IOPS) |
| Entire storage system | UR/URz | Master journal | Throughput (IOPS) |
| Entire storage system | UR/URz | Restore journal | Throughput (IOPS) |

Note:

  1. Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.
  2. The total number of accesses is displayed.
  3. The same value is output to all LUNs mapped to the LDEV.

Data transfer rate

Function

The data transfer rate is the amount of data transferred from the host server per second. The data transfer rate can be monitored for both read data and write data.

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Fibre port* | Target; Initiator | Data Trans. (MB/s) | None |
| Mainframe fibre port* | None | Data Trans. (MB/s) | Total; Read; Write |
| iSCSI Port* | Target; Initiator | Data Trans. (MB/s) | None |
| WWN* | WWN | Data Trans. (MB/s) | None |
| WWN* | Port | Data Trans. (MB/s) | None |
| Logical device* | Base | Data Trans. (MB/s) | Total; Read; Write |
| Logical device* | TC/TCz/GAD | Initial Copy | Data Trans. (MB/s) |
| Logical device* | TC/TCz/GAD | Update Copy | Data Trans. (MB/s) |
| Logical device* | UR/URz | Write Host I/O | Data Trans. (MB/s) |
| Logical device* | UR/URz | Initial Copy | Data Trans. (MB/s) |
| Parity group* | None | Data Trans. (MB/s) | Total; Read; Write |
| LUN* | Base | Data Trans. (MB/s) | Total; Read; Write |
| LUN* | TC/GAD | Initial Copy | Data Trans. (MB/s) |
| LUN* | TC/GAD | Update Copy | Data Trans. (MB/s) |
| LUN* | UR | Write Host I/O | Data Trans. (MB/s) |
| LUN* | UR | Initial Copy | Data Trans. (MB/s) |
| External storage | Parity Group | Data Trans. (MB/s) | Total; Read; Write |
| External storage | Logical Device | Data Trans. (MB/s) | Total; Read; Write |
| Journal | UR/URz | Write host I/O | Data Trans. (MB/s) |
| Journal | UR/URz | Initial copy | Data Trans. (MB/s) |
| Journal | UR/URz | Master journal | Data Trans. (MB/s) |
| Journal | UR/URz | Restore journal | Data Trans. (MB/s) |
| Entire storage system | TC/TCz/GAD | Initial copy | Data Trans. (MB/s) |
| Entire storage system | TC/TCz/GAD | Update copy | Data Trans. (MB/s) |
| Entire storage system | UR/URz | Write host I/O | Data Trans. (MB/s) |
| Entire storage system | UR/URz | Initial copy | Data Trans. (MB/s) |
| Entire storage system | UR/URz | Master journal | Data Trans. (MB/s) |
| Entire storage system | UR/URz | Restore journal | Data Trans. (MB/s) |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

Usage rates of access paths

Function

The access paths are the paths through which data and commands are transferred within a storage system. As shown in the following figure, data is transferred between controllers through the HIE packages in the storage system.

Performance Monitor tracks and displays the usage rates for the following access paths so you can determine whether an internal transfer route has become a bottleneck.

  • Access paths between the MP unit and the HIE package (MPU-HIE)
  • Access paths between the HIE package and the Interconnect Switch (HIE-ISW)
[Figure: access paths between MP units, HIE packages, and interconnect switches (ISWs)]
Storing period

Short-Range or Long-Range can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Access path | HIE-ISW | Usage Rate (%) | None |
| Access path | MP unit-HIE | Usage Rate (%) | None |

Response times

Function

The response time is the time (in milliseconds) for an external volume group to reply when I/O accesses are made from your storage system to the external volume group. The average response time over the period specified in Monitoring Term is displayed.

Response times can be monitored for ports, WWNs, LDEVs, parity groups, LUNs, and external storage (parity groups and LDEVs).

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Fibre port* | None; Target and Initiator (VSP E series) | Response Time (ms) | None |
| Mainframe fibre port* | None | Response Time (ms) | None |
| iSCSI Port* | None; Target and Initiator (VSP E series) | Response Time (ms) | None |
| WWN* | WWN | Response Time (ms) | None |
| WWN* | Port | Response Time (ms) | None |
| Logical device* | Base | Response Time (ms) | Total; Read; Write |
| Logical device* | TC/TCz/GAD | Initial Copy | Response Time (ms) |
| Logical device* | TC/TCz/GAD | Update Copy | Response Time (ms) |
| Parity group* | None | Response Time (ms) | Total; Read; Write |
| LUN* | Base | Response Time (ms) | Total; Read; Write |
| LUN* | TC/GAD | Initial Copy | Response Time (ms) |
| LUN* | TC/GAD | Update Copy | Response Time (ms) |
| External storage | Parity Group | Response Time (ms) | Total; Read; Write |
| External storage | Logical Device | Response Time (ms) | Total; Read; Write |
| Journal | UR/URz | Master Journal | Response Time (ms) |
| Journal | UR/URz | Restore Journal | Response Time (ms) |
| Entire storage system | TC/TCz/GAD | Initial Copy | Response Time (ms) |
| Entire storage system | TC/TCz/GAD | Update Copy | Response Time (ms) |
| Entire storage system | UR/URz | Master Journal | Response Time (ms) |
| Entire storage system | UR/URz | Restore Journal | Response Time (ms) |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

CMR delay time

Function

When I/O access from the storage system is made to the monitoring object port, command response (CMR) delay time shows the time (in milliseconds) from the I/O access to the return of a command response from the port.

Storing period

Short-Range can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Mainframe fibre port* | None | CMR Delay Time (ms) | None |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

Disconnected time

Function

When I/O access is made from the storage system to the monitoring object port, Disconnected time shows the time (in milliseconds) during which processing is interrupted because of I/O processing to the data drives.

Storing period

Short-Range can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Mainframe fibre port* | None | Disconnected Time (ms) | None |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

Connected time

Function

Connected time shows the time (in milliseconds) obtained by subtracting the CMR delay time and the disconnected time from the response time.
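The subtraction can be stated directly as a small sketch (the millisecond values are illustrative):

```python
def connected_time(response_ms: float, cmr_delay_ms: float,
                   disconnected_ms: float) -> float:
    """Connected time = response time - CMR delay time - disconnected time."""
    return response_ms - cmr_delay_ms - disconnected_ms

print(connected_time(12.0, 2.5, 4.5))  # 5.0
```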

Storing period

Short-Range can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Mainframe fibre port* | None | Connected Time (ms) | None |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

HTP port open exchanges

Function

HTP port open exchanges shows the number of open exchanges for the monitoring object port. The number of open exchanges is the average number of active I/O accesses at the monitoring object port.

Storing period

Short-Range can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Mainframe fibre port* | None | HTP Port Open Exchanges (count/sec) | None |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

Cache hit rates

Function

The cache hit rate is the rate at which the input or output data of the disk is found in the cache. The cache hit rate is displayed for sequential access mode, random access mode, cache fast write (CFW) mode, and the total of these modes.

  • Read hit ratio

    For a read I/O, when the requested data is already in cache, the operation is classified as a read hit. For example, if ten read requests have been made from hosts to devices in a given time period and the read data was already on the cache memory three times out of ten, the read hit ratio for that time period is 30 percent. A higher read hit ratio implies higher processing speed because fewer data transfers are made between devices and the cache memory.

  • Write hit ratio

    For a write I/O, when the requested data is already in cache, the operation is classified as a write hit. For example, if ten write requests were made from hosts to devices in a given time period and the write data was already on the cache memory three cases out of ten, the write hit ratio for that time period is 30 percent. A higher write hit ratio implies higher processing speed because fewer data transfers are made between devices and the cache memory.
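Both ratios reduce to the same computation; a minimal sketch using the 3-out-of-10 example from the text (the function name is illustrative):

```python
def hit_ratio_percent(hits: int, requests: int) -> float:
    """Percentage of I/O requests whose data was already in cache."""
    return 100.0 * hits / requests

assert hit_ratio_percent(3, 10) == 30.0  # the 30 percent example above
```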

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device* | Base | Cache Hit (%) | Read (Total); Read (Sequential); Read (Random); Read (CFW); Write (Total); Write (Sequential); Write (Random); Write (CFW) |
| Logical Device* | UR/URz | Initial Copy | Cache Hit (%) |
| Parity Group* | None | Cache Hit (%) | Read (Total); Read (Sequential); Read (Random); Read (CFW); Write (Total); Write (Sequential); Write (Random); Write (CFW) |
| LUN* | Base | Cache Hit (%) | Read (Total); Read (Sequential); Read (Random); Read (CFW); Write (Total); Write (Sequential); Write (Random); Write (CFW) |
| LUN* | UR | Initial Copy | Cache Hit (%) |
| Entire Storage System | UR/URz | Initial Copy | Cache Hit (%) |
| Journal | UR/URz | Initial Copy | Cache Hit (%) |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

Back-end performance

Function

The back-end transfer can be monitored. The back-end transfer is the number of data transfers between cache memory and the data drives. The graph contains the following information:

  • Cache to Drive

    The number of data transfers from cache memory to data drives.

  • Drive to Cache Sequential

    The number of data transfers from data drives to cache memory in sequential access mode.

  • Drive to Cache Random

    The number of data transfers from data drives to cache memory in random access mode.
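
Because Back Trans. is reported in count/sec, each counter is divided by the length of the sample interval. A sketch of that conversion (names and data shape are assumptions for illustration):

```python
def back_transfer_rates(cache_to_drive: int,
                        drive_to_cache_seq: int,
                        drive_to_cache_rand: int,
                        interval_sec: int) -> dict:
    """Convert per-interval back-end transfer counts into count/sec figures."""
    total = cache_to_drive + drive_to_cache_seq + drive_to_cache_rand
    return {
        "Total": total / interval_sec,
        "Cache to Drive": cache_to_drive / interval_sec,
        "Drive to Cache (Sequential)": drive_to_cache_seq / interval_sec,
        "Drive to Cache (Random)": drive_to_cache_rand / interval_sec,
    }

# A 60-second sample interval with 1200/600/300 transfers.
rates = back_transfer_rates(1200, 600, 300, 60)
print(rates["Total"])  # 35.0
```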

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device* | Base | Back Trans. (count/sec) | Total, Cache to Drive, Drive to Cache (Sequential), Drive to Cache (Random) |
| Parity Group* | None | Back Trans. (count/sec) | Total, Cache to Drive, Drive to Cache (Sequential), Drive to Cache (Random) |
| LUN* | Base | Back Trans. (count/sec) | Total, Cache to Drive, Drive to Cache (Sequential), Drive to Cache (Random) |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

Drive usage rates

Function

The usage rate of the data drives can be displayed for each LDEV or parity group.

Storing period

Short-Range or Long-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device* | Base | Drive Usage Rate (%) | None |
| Parity Group* | None | Drive Usage Rate (%) | None |

*Only information on internal volumes is displayed. Information about external volumes, FICON® DM volumes, and virtual volumes such as DP-VOL and Thin Image V-VOLs is not displayed.

Data drive access rates

Function

The data drive access rate shows the access rate of each data drive.

The rates of read (Read (Sequential)) and write (Write (Sequential)) processing of the data drive in sequential access mode are displayed.

The rates of read (Read (Random)) and write (Write (Random)) processing of the data drive in random access mode are displayed.

Storing period

Long-Range or Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device* | Base | Drive Access Rate (%) | Read (Sequential), Read (Random), Write (Sequential), Write (Random) |

*Only information on internal volumes is displayed. Information about external volumes, FICON® DM volumes, and virtual volumes such as DP-VOL and Thin Image V-VOLs is not displayed.

ShadowImage usage statistics

Function

The access rate of volumes by ShadowImage and ShadowImage for Mainframe can be displayed for each volume as the percentage of physical drive processing performed by the program out of all processing of the physical drives. This value is calculated by dividing the physical drive access time used by the program by the total access time to the physical drives.
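
That division can be sketched as follows (hypothetical names; the monitor derives these access times internally):

```python
def shadowimage_usage_pct(si_access_time: float, total_access_time: float) -> float:
    """Share of all physical-drive access time spent on ShadowImage processing."""
    if total_access_time == 0:
        return 0.0
    return 100.0 * si_access_time / total_access_time

# 15 ms of ShadowImage drive access out of 60 ms total drive access.
print(shadowimage_usage_pct(15.0, 60.0))  # 25.0
```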

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device¹ | Base | ShadowImage (%)² | None |

Note:

  1. Only information on internal volumes is displayed. Information about external volumes, FICON® DM volumes, and virtual volumes such as DP-VOL and Thin Image V-VOLs is not displayed.
  2. Information for ShadowImage and ShadowImage for Mainframe is displayed.

Remote I/O (RIO)

Function

Information about LDEV performance is shown through the total number of remote I/Os from P-VOL to S-VOL for TrueCopy, TrueCopy for Mainframe, and global-active device pairs.

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device* | TC/TCz/GAD | RIO (count) | Total, Write, Error |
| LUN* | TC/GAD | RIO (count) | Total, Write, Error |
| Entire Storage System | TC/TCz/GAD | RIO (count) | Total, Write, Error |

* Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.

Pair Synchronized

Function

The synchronization rate between P-VOL and S-VOL is shown as (%) for TrueCopy, TrueCopy for Mainframe, and global-active device pairs.

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device¹,² | TC/TCz/GAD | Pair Synchronized (%) | None |
| LUN¹,² | TC/GAD | Pair Synchronized (%) | None |
| Entire Storage System | TC/TCz/GAD | Pair Synchronized (%) | None |

Note:
  1. Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.
  2. When two mirrors exist for each LDEV or LUN in a GAD configuration, the mirror information on the P-VOL side is output. If both the mirrors are P-VOLs, the information about the mirror in the COPY or PSUS/PSUE status is output. If both the mirrors are in the PSUS/PSUE status, the information about the mirror with the smaller mirror ID is output.

Differential Track

Function

The synchronization rate between P-VOL and S-VOL is shown through the number of differential tracks (the number of tracks not transmitted from P-VOL to S-VOL) for TrueCopy, TrueCopy for Mainframe, and global-active device pairs.
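
Assuming the usual relationship between differential tracks and the synchronization rate shown in the previous section, a hypothetical sketch (names and the total-track parameter are illustrative):

```python
def pair_synchronized_pct(differential_tracks: int, total_tracks: int) -> float:
    """Pair Synchronized (%) implied by the count of tracks not yet copied
    from the P-VOL to the S-VOL."""
    if total_tracks == 0:
        return 100.0
    return 100.0 * (total_tracks - differential_tracks) / total_tracks

# 250 of 1000 tracks still to be transmitted -> 75% synchronized.
print(pair_synchronized_pct(250, 1000))  # 75.0
```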

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Logical Device¹,² | TC/TCz/GAD | Differential track (count) | None |
| LUN¹,² | TC/GAD | Differential track (count) | None |
| Entire Storage System | TC/TCz/GAD | Differential track (count) | None |

Note:
  1. Volumes that do not accept I/O from the host, such as pool-VOLs, are not monitored.
  2. When two mirrors exist for each LDEV or LUN in a GAD configuration, the mirror information on the P-VOL side is output. If both the mirrors are P-VOLs, the information about the mirror in the COPY or PSUS/PSUE status is output. If both the mirrors are in the PSUS/PSUE status, the information about the mirror with the smaller mirror ID is output.

Number of Journals

Function

The total number of journals transferred from the master journal volume to the restore journal volume is shown.

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Journal | UR/URz | Master Journal | Journal (count/sec) |
| Journal | UR/URz | Restore Journal | Journal (count/sec) |
| Entire Storage System | UR/URz | Master Journal | Journal (count/sec) |
| Entire Storage System | UR/URz | Restore Journal | Journal (count/sec) |

Data Usage Rate

Function

The current journal data usage rate (%) is shown, with the journal volume data space assumed to be 100%.
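
As a worked example of that percentage (illustrative names; the capacities would come from the journal volume configuration):

```python
def journal_usage_pct(used_gb: float, data_space_gb: float) -> float:
    """Journal data usage with the journal volume data space taken as 100%."""
    if data_space_gb == 0:
        return 0.0
    return 100.0 * used_gb / data_space_gb

# 12 GB of journal data held in a 48 GB data space.
print(journal_usage_pct(12, 48))  # 25.0
```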

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Journal | UR/URz | Master Journal | Data Usage Rate (%) |
| Journal | UR/URz | Restore Journal | Data Usage Rate (%) |

Metadata Usage Rate

Function

The metadata usage rate of the current journal is shown, with journal volume metadata space assumed to be 100%.

Storing period

Short-Range can be specified.

(VSP E series) Sample Interval can be specified from 1 to 15 minutes.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Journal | UR/URz | Master Journal | Metadata Usage Rate (%) |
| Journal | UR/URz | Restore Journal | Metadata Usage Rate (%) |

Detailed information of resources on top 20 usage rates

Function

You can view the resources of the 20 most-used MP units. The system ranks the MP units by the usage rates collected during the most recent monitoring period. You cannot specify a particular period.
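
The selection amounts to sorting by usage rate and keeping the first 20 entries. A sketch under that assumption (the data shape and MP unit names are illustrative):

```python
def top_mp_units(usage_by_mp: dict, n: int = 20) -> list:
    """Return the n MP units with the highest usage rates, highest first."""
    return sorted(usage_by_mp.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical usage rates (%) keyed by MP unit name.
sample = {"MP10-00": 72.5, "MP10-01": 18.0, "MP11-00": 55.2}
print(top_mp_units(sample, n=2))  # [('MP10-00', 72.5), ('MP11-00', 55.2)]
```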

Storing period

Only Short-Range real-time monitoring data is supported.

Selection of monitoring objects

Select the desired monitoring objects in the Performance Objects field.

| Object field (left) | Object field (right) | Monitor Data field (left) | Monitor Data field (right) |
| --- | --- | --- | --- |
| Controller | MP | Usage Rate (%) | None |

Viewing MP unit resource details

To view the resources assigned to an individual MP unit, click the link to the name of the MP unit in the right panel of the Monitor window. The MP Properties window lists the 20 most-used resources by blade name.