
About Dynamic Tiering

 

When you use the multi-tier pool feature of Dynamic Tiering, usage is monitored and data is relocated to different storage tiers to optimize performance.

Dynamic Tiering

Hitachi Dynamic Tiering (HDT) simplifies storage administration by automatically optimizing data placement in 1, 2 or 3 tiers of storage that can be defined and used within a single virtual volume. Tiers of storage can be made up of internal or external (virtualized) storage, and use of HDT can lower capital costs. Simplified and unified management of HDT allows for lower operational costs and reduces the challenges of ensuring applications are placed on the appropriate classes of storage.

When you use Dynamic Provisioning to implement a thin provisioning strategy, the array has all the elements in place to offer the automatic self-optimizing storage tiers provided by Hitachi Dynamic Tiering (HDT). Dynamic Tiering enables you to configure a storage system with multiple storage tiers consisting of different types of data drives (for example, SSD and SAS) to balance performance and cost. Dynamic Tiering extends and improves the functionality and value of the Dynamic Provisioning feature. Both features use pools of physical storage against which virtual disk capacity, or V-VOLs, is defined. Each thin provisioning pool can be configured to operate either as a DP pool or as a Dynamic Tiering pool. Dynamic Tiering is supported only on VSP Gx00 models.

Automated tiering of physical storage is the ability of the array to dynamically monitor usage and relocate data to the appropriate storage tier based on performance requirements. Data relocation focuses on data segments rather than on entire volumes. The Dynamic Tiering functionality is entirely within the array and does not require any host level involvement.

Dynamic Tiering enables you to:

  • Configure physical storage into tiers based on drive performance. Host volumes are configured as usual from a common pool, but the pool consists of multiple types of drives that offer different levels of performance (for example, high-speed SSDs and lower-speed SAS).
  • Automatically migrate data to the most suitable tier according to access frequency. Data that is accessed frequently is placed on the high-performance drives, while data that is accessed infrequently is placed on the lower-performance drives.

Dynamic Tiering simplifies storage administration by automating and eliminating the complexities of efficiently using tiered storage. It automatically moves data on pages in Dynamic Provisioning virtual volumes to the most appropriate storage media, according to workload, to maximize service levels and minimize total cost of storage.

Dynamic Tiering gives you:

  • Improved storage resource usage
  • Improved return on high-cost storage tiers
  • Reduced storage management effort
  • More automation
  • Nondisruptive storage management
  • Reduced costs
  • Improved overall performance

About tiered storage

In a tiered storage environment, storage tiers can be configured to accommodate different categories of data. A tier is a group of storage media (pool volumes) in a DP pool, and each tier consists of a single storage media type: one type of data drive, such as SSD, SAS, or external volumes. High-speed media make up the upper tiers, and low-speed media make up the lower tiers. Up to three tiers can coexist in each Dynamic Tiering pool.

Categories of data may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Using different types of storage tiers helps reduce storage costs and improve performance.

Because assigning data to particular media may be an ongoing and complex activity, Dynamic Tiering software automatically manages the process based on user-defined policies.

As an example of a tiered storage implementation, tier 1 data (such as mission-critical or recently accessed data) might be stored on expensive, high-quality media such as double-parity RAID (redundant array of independent disks) groups. Tier 2 data (such as financial or seldom-used data) might be stored on less expensive storage media.

Multi-tier pool

With Dynamic Tiering, you can enable the Multi-Tier pool option for an existing pool. By default, tier relocation is enabled for each DP-VOL. Only the DP-VOLs for which tier relocation is enabled are subject to calculation of the tier range value, and tier relocation is performed on them. If tier relocation is disabled for all DP-VOLs in a pool, tier relocation is not performed.

The following figure illustrates the relationship between multi-tier pool and tier relocation.

(Figure: Relationship between multi-tier pool and tier relocation)
Example of adding a tier

If an added pool-VOL is of a media type that is not already in the pool, a new tier is created in the pool. The tier is added at the appropriate position according to its performance. The following figure illustrates the process of adding a tier.

(Figure: Adding a tier)
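As a sketch of this ordering rule, the snippet below (hypothetical names; Python used only for illustration) inserts a tier for a new media type at the position its performance dictates, using the drive-priority order given later in this document:

```python
# Drive types ordered by performance priority (from the drive-priority
# table later in this document; highest performance first).
PERFORMANCE_RANK = [
    "SSD",
    "SAS 15K rpm",
    "SAS 10K rpm",
    "SAS 7.2K rpm",
    "External (High)",
    "External (Middle)",
    "External (Low)",
]

def add_pool_vol(tiers, media_type):
    """Return the pool's tier list after adding a pool-VOL of media_type.

    If the media type already has a tier, the list is unchanged; otherwise
    a new tier is inserted where its performance rank places it. A
    Dynamic Tiering pool holds at most three tiers.
    """
    if media_type in tiers:
        return tiers
    if len(tiers) >= 3:
        raise ValueError("a Dynamic Tiering pool supports at most three tiers")
    return sorted(tiers + [media_type], key=PERFORMANCE_RANK.index)

# Adding a SAS tier to an SSD + external pool slots it into the middle.
print(add_pool_vol(["SSD", "External (Low)"], "SAS 15K rpm"))
```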

Example of deleting a tier

When you delete pool-VOLs, if a tier no longer contains any pool-VOLs, the tier is deleted from the pool. The following figure illustrates deleting a tier.

(Figure: Deleting a tier)

How the tier relocation process works

The term tier relocation refers to the process of determining the appropriate storage tier and migrating the pages to that tier. The following figure shows the tier relocation process.

(Figure: Tier relocation process)

Explanation of the tier relocation process:

  1. Allocate pages and map them to DP-VOLs

    Pages are allocated and mapped to DP-VOLs on demand. Page allocation occurs when a write is performed to an area of any DP-VOL that does not already have a page mapped to that location. Normally, a free page is selected for allocation from the highest tier that has free pages. If the capacity of that tier is insufficient for the allocation, the pages are allocated to the nearest lower tier. A DP-VOL set to a tier policy is assigned a new page based on the tier policy setting. The relative tier for new page allocations can be specified during operations to create and edit LDEVs. If the capacity of all the tiers is insufficient, an error message is sent to the host.

  2. Gather I/O load information of each page

    Performance monitoring gathers monitoring information of each page in a pool to determine the physical I/O load per page in a pool. I/Os associated with page relocation, however, are not counted.

  3. Create frequency distribution graph

    The frequency distribution graph, which shows the relationship between I/O counts (I/O load) and capacity (total number of pages), is created.

    You can use the View Tier Properties window to view this graph. The vertical scale of the graph indicates ranges of I/Os per hour and the horizontal scale indicates a capacity that received the I/O level. Note that the horizontal scale is accumulative.

    Caution: When the number of I/Os is counted, I/Os satisfied by cache hits are not counted. Therefore, the number of I/Os counted by performance monitoring differs from the number of I/Os issued by the host. The graph shows the number of I/Os per hour. If the monitoring time is less than an hour, the number of I/Os shown in the graph might be higher than the actual number.

    The monitoring mode setting, Period or Continuous, influences the values shown on the performance graph. Period mode reports the I/O data of the most recent completed monitoring cycle. Continuous mode reports a weighted average of I/O data that uses recent monitoring cycle data along with historical data.

  4. Determine the tier range values

    The page is allocated to the appropriate tier according to performance monitoring information. The tier is determined as follows.

    1. Determine the tier boundary

      The tier range value of a tier is calculated using the frequency distribution graph. This acts as a boundary value that separates tiers.

      The pages with the highest I/O load are allocated to the upper tier in sequence. The tier range is defined as the lowest I/Os-per-hour (IOPH) value at which either the total number of stored pages matches the capacity of the target tier (less a buffer percentage) or the I/O load reaches the maximum that the tier should process. The maximum I/O load that should be targeted to a tier is its limit performance value, and the ratio of actual I/O to the limit performance value of a tier is called the performance utilization percent. A performance utilization of 100% indicates that the I/O load targeted at a tier is beyond the forecasted limit performance value.

      Caution: The limit performance value is proportional to the capacity of the pool volumes used in the tier. To further improve the limit performance, use the total capacity of the parity group for the pool.
    2. Determine the tier delta values

      The tier range values are set as the lower limit boundary of each tier. The delta values are set above and below the tier boundaries (+10 to 20%) to prevent pages from being migrated unnecessarily. If all pages subject to tier relocation can be contained in the upper tier, both the tier range value (lower limit) and the delta value will be zero.

      (Figure: Tier range and delta values)
    3. Determine the target tier of a page for relocation.

      The IOPH recorded for the page is compared against the tier range value to determine the tier to which the page moves.

  5. Migrate the pages

    The pages are moved to the appropriate tier. After migration, the page usage rates are averaged out in all tiers. I/Os that occur in the page migration are not monitored.
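The boundary and target-tier steps above can be sketched as follows. This is a simplified illustration with hypothetical names: it fills tiers from the highest I/O load down, derives each tier's range (lower-limit) value, and compares a page's IOPH against those values. The real algorithm also applies delta values and limit performance caps, which are omitted here.

```python
def tier_range_values(page_iophs, tier_capacities, buffer_pct=0.02):
    """Derive a lower-boundary IOPH value for each tier (step 4a).

    Pages are sorted by I/O load; each tier is filled from the top until
    its capacity (less a buffer percentage) is reached, and the IOPH of
    the last page that fits becomes that tier's range (lower-limit) value.
    """
    loads = sorted(page_iophs, reverse=True)
    ranges, idx = [], 0
    for cap in tier_capacities[:-1]:            # the lowest tier takes the rest
        usable = cap - int(cap * buffer_pct)    # leave the relocation buffer free
        idx = min(idx + usable, len(loads))
        ranges.append(loads[idx - 1] if idx else 0)
    ranges.append(0)                            # bottom tier accepts any load
    return ranges

def target_tier(ioph, ranges):
    """Compare a page's IOPH against the tier range values (step 4c)."""
    for tier, lower in enumerate(ranges):
        if ioph >= lower:
            return tier
    return len(ranges) - 1

# 300 pages with loads 300 .. 1 IOPH; tier 0 holds 100 pages
# (98 usable after the 2% buffer), tier 1 holds the rest.
iophs = list(range(300, 0, -1))
ranges = tier_range_values(iophs, [100, 200])
print(ranges)
print(target_tier(250, ranges), target_tier(10, ranges))
```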

Tier monitoring and relocation cycles

Performance monitoring and tier relocation can be set to execute in one of two execution modes: Auto and Manual. You can set up execution modes, or switch between modes by using either Hitachi Device Manager - Storage Navigator or Command Control Interface.

In Auto execution mode, monitoring and relocation are continuous and automatically scheduled. In Manual execution mode, the following operations are initiated manually.

  • Start monitoring
  • Stop monitoring and recalculate tier range values
  • Start relocation
  • Stop relocation

In both execution modes, relocation of data is automatically determined based on monitoring results. The settings for these execution modes can be changed nondisruptively while the pool is in use.

Auto execution mode

Auto execution mode performs monitoring and tier relocation based on information collected by monitoring at a specified constant frequency: every 0.5, 1, 2, 4, or 8 hours. All auto execution mode cycle frequencies have a starting point at midnight (00:00). For example, if you select a 1 hour monitoring period, the starting times would be 00:00, 01:00, 02:00, 03:00, and so on.

As shown in the following table, the 24-hour monitoring cycle allows you to specify the times of day to start and stop performance monitoring. The 24-hour monitoring cycle does not have to start at midnight. Tier relocation begins at the end of each cycle.

Monitoring cycle (hours)                     | Start times                                                    | Finish times
0.5                                          | Every 0.5 hours from 00:00 (for example, 00:00, 00:30, 01:00)  | 0.5 hours after the start time
1                                            | Every hour from 00:00 (for example, 00:00, 01:00, 02:00)       | 1 hour after the start time
2                                            | Every 2 hours from 00:00 (for example, 00:00, 02:00, 04:00)    | 2 hours after the start time
4                                            | Every 4 hours from 00:00 (for example, 00:00, 04:00, 08:00)    | 4 hours after the start time
8                                            | Every 8 hours from 00:00 (for example, 00:00, 08:00, 16:00)    | 8 hours after the start time
24 (monitoring time period can be specified) | Specified time                                                 | Specified time

If the setting of the monitoring cycle is changed, performance monitoring begins at the new start time. The collection of monitoring information and tier relocation operations already in progress are not interrupted when the setting is changed.

Example 1: If the monitoring cycle is changed from 1 hour to 4 hours at 01:30 AM, the collection of monitoring information and tier relocation in progress at 01:30 AM continues. At 02:00 AM and 03:00 AM, however, monitoring information is not collected and tier relocation is not performed. From 04:00 AM, the collection of monitoring information and tier relocation operations are started again. These operations are then performed at 4-hour intervals.

Example 2: If the monitoring cycle is changed from 4 hours to 1 hour at 01:30 AM, the collection of monitoring information and tier relocation in progress at 01:30 AM continues. From 04:00 AM, the collection of monitoring information and tier relocation operations are started again. These operations are then performed at 1-hour intervals.

(Figure: Changing the monitoring cycle)
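The midnight-anchored cycle grid in the examples above can be sketched with a hypothetical helper (times expressed in minutes after midnight):

```python
def next_cycle_start(now_minutes, cycle_hours):
    """Next monitoring-cycle start time, in minutes after midnight.

    Auto execution mode anchors every cycle at 00:00, so the next start
    is the next multiple of the cycle length after the current time.
    """
    cycle = int(cycle_hours * 60)
    return ((now_minutes // cycle) + 1) * cycle

# Example 1 from the text: the cycle changes from 1 hour to 4 hours at
# 01:30 (90 minutes); the next collection starts at 04:00 (240 minutes).
print(next_cycle_start(90, 4))
```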

In auto execution mode, the collection of monitoring data and tier relocation operations are performed in parallel in the next cycle. Data from these parallel processes are stored in two separate fields.

  • Data while monitoring is in progress in the next cycle.
  • Fixed monitoring information used in the tier relocation.

Manual execution mode

You can start and stop performance monitoring and tier relocation at any time. You should keep the duration of performance monitoring to less than 7 days (168 hours). If performance monitoring exceeds 7 days, then monitoring stops automatically.

Manual execution mode starts and ends monitoring and relocation at the time the command is issued. You can use scripts, which provide flexibility to control monitoring and relocation tasks based on a schedule for each day of the week.

In manual execution mode, the next monitoring cycle can be started with the collection of monitoring data and tier relocation operations performed in parallel. Data from these parallel processes are stored in two separate fields.

  • Data while monitoring is in progress in the next cycle.
  • Fixed monitoring information used in the tier relocation.

The following figure illustrates the collection of monitoring data to tier relocation workflow in manual execution mode.

(Figure: Monitoring and relocation workflow in manual execution mode)

Case 1: If the second collection of the monitoring information is finished during the first tier relocation, the latest monitoring information is the second collection. In that case, the first collection of monitoring information is referenced only after the first tier relocation has completed.

(Figure: Case 1)

Case 2: While tier relocation is performed with the first collection of monitoring information, the second collection of monitoring information can be performed. However, the third collection cannot be started: because only two fields are available to store collected monitoring information, the third collection would have to overwrite data that is still in use.

In that case, the third collection of monitoring information starts after the first tier relocation is stopped or has completed.

The collection of monitoring information also cannot start under these conditions:

  • When the second tier relocation is performed, the fourth collection of monitoring information cannot be started.
  • When the third tier relocation is performed, the fifth collection of monitoring information cannot be started.

If such conditions exist, two cycles of monitoring information cannot be collected continuously while tier relocation is performed.

The following figure illustrates the third collection of monitoring information while tier relocation is performed.

(Figure: Third collection of monitoring information during tier relocation)
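The two-field constraint illustrated in the cases above can be modeled as follows (a simplified sketch with hypothetical names):

```python
class MonitorFields:
    """Two-field store for monitoring data, as described above.

    One field may be pinned by a tier relocation in progress while the
    other holds the latest completed collection; a new collection can
    start only when a field is free.
    """
    def __init__(self):
        self.relocating = None   # collection a relocation is consuming
        self.latest = None       # completed collection awaiting relocation
        self.collecting = None   # collection currently in progress

    def start_collection(self, cycle):
        in_use = sum(x is not None
                     for x in (self.relocating, self.latest, self.collecting))
        if in_use >= 2:          # both fields are occupied
            return False
        self.collecting = cycle
        return True

    def finish_collection(self):
        self.latest, self.collecting = self.collecting, None

    def start_relocation(self):
        self.relocating, self.latest = self.latest, None

    def finish_relocation(self):
        self.relocating = None   # frees that field for a new collection

# Case 2 from the text: while relocation still consumes collection 1 and
# collection 2 is already stored, collection 3 cannot start.
f = MonitorFields()
f.start_collection(1); f.finish_collection(); f.start_relocation()
f.start_collection(2); f.finish_collection()
print(f.start_collection(3))     # blocked until relocation 1 ends
f.finish_relocation()
print(f.start_collection(3))
```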

Execution modes when using Hitachi Device Manager - Storage Navigator

Dynamic Tiering performs tier relocations using one of two execution modes: Auto and Manual. You can switch between modes by using Hitachi Device Manager - Storage Navigator.

Auto execution mode

In Auto execution mode, the system automatically and periodically collects monitoring data and performs tier relocation. You can select an Auto execution cycle of 0.5, 1, 2, 4, or 8 hours, or a specified time.

The following illustrates tier relocation processing in a 2-hour Auto execution mode:

(Figure: Tier relocation in a 2-hour Auto execution mode)
Manual execution mode

In Manual execution mode, you can manually collect monitoring data and relocate a tier. You can issue commands to manually:

  1. Start monitoring.
  2. Stop monitoring.
  3. Perform tier relocation.

The following illustrates tier relocation processing in Manual execution mode:

(Figure: Tier relocation in Manual execution mode)
Notes on performing monitoring
  • You can collect the monitoring data even while performing the relocation.
  • After stopping the monitoring, the tier range is automatically calculated.
  • The latest available monitoring information, which is collected just before the relocation is performed, is used for the relocation processing.
  • When the relocation is performed, the status of the monitor information must be valid.

Execution modes when using Command Control Interface

 
Manual execution mode

In Manual execution mode, you can manually collect monitoring data and relocate a tier. You can execute commands to do the following:

  1. Start monitoring.
  2. Stop monitoring.
  3. Perform tier relocation.

The following illustrates tier relocation processing when in Manual execution mode:

(Figure: Tier relocation in Manual execution mode)
Notes on performing monitoring
  • You can collect the monitoring data even while performing the relocation.
  • After stopping the monitoring, the tier range is automatically calculated.
  • The latest available monitoring information, which is collected just before the relocation is performed, is used for the relocation processing.
  • When the relocation is performed, the status of the monitor information must be valid.

Buffer area of a tier

Dynamic Tiering uses buffer percentages to reserve pages for new page assignments and allow the tier relocation process. Areas necessary for processing these operations are distributed corresponding to settings used by Dynamic Tiering. The following describes how processing takes place to handle the buffer percentages.

Buffer space: The following table shows the default rates (rate to capacity of a tier) of buffer space used for tier relocation and new page assignments, listed by drive type.

Drive type | Buffer area for tier relocation | Buffer area for new page assignment | Total
SSD        | 2%                              | 0%                                  | 2%
Non-SSD    | 2%                              | 8%                                  | 10%

New page assignment: New pages are assigned based on a number of optional settings. Pages are assigned from the upper tier down, leaving a buffer area (2% per tier by default) for tier relocation; when a tier reaches this limit, pages are assigned to the next lower tier. After 98% of the capacity of all tiers is assigned, the remaining 2% of buffer space is assigned, starting from the upper tier. The buffer space for tier relocation is 2% in all tiers.

The following illustrates the workflow of a new page assignment.

(Figure: New page assignment workflow, 1 of 2)

For a pool comprised of pool volumes from parity groups with accelerated compression enabled, parity group capacity equivalent to 20% of the FMD tier is used as the compression buffer area. When no free space is available outside the FMD tier, pages are assigned to this buffer area just before the capacity is depleted.

(Figure: New page assignment workflow, 2 of 2)
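The assignment order described above (upper tiers first, with the 2% relocation buffer held back, then the buffer itself from the top) can be sketched as follows; names are hypothetical and tier sizes are in pages:

```python
def assign_tier(used, capacity, buffer_pct=0.02):
    """Pick the tier for a newly allocated page (a simplified sketch).

    Tiers are tried from the top down, leaving the relocation buffer
    (2% per tier by default) free; once every tier is 98% full, the
    buffer space itself is assigned, again starting from the upper tier.
    """
    # First pass: honor the tier-relocation buffer.
    for tier, (u, c) in enumerate(zip(used, capacity)):
        if u < c - int(c * buffer_pct):
            return tier
    # Second pass: all tiers are at 98%, so dip into the buffer space.
    for tier, (u, c) in enumerate(zip(used, capacity)):
        if u < c:
            return tier
    raise RuntimeError("pool is full: no page can be allocated")

# Tier 0 is at its 98% line, so the page goes to tier 1; once both
# tiers reach 98%, the upper tier's buffer is used first.
print(assign_tier([98, 50], [100, 100]))
print(assign_tier([98, 98], [100, 100]))
```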

Relocation speed

Relocation speed: The page relocation speed can be set to 1 (Slowest), 2 (Slower), 3 (Standard), 4 (Faster), or 5 (Fastest). The default is 3 (Standard). To perform tier relocation at high speed, use the 5 (Fastest) setting. If you set a speed slower than 3 (Standard), the load on the data drives is lower while tier relocation is performed.

Based on the number of parity groups that constitute a pool, this function adjusts the number of V-VOLs on which tier relocation can be performed at one time. Tier relocation can be performed on up to 32 V-VOLs in a storage system at once.

After changing the setting, the relocation speed and the data drive load might not change in the following cases:

  • The pool contains very few parity groups.
  • Very few V-VOLs are associated with the pool.
  • Tier relocations are being performed on multiple pools.

Setting external volumes for each tier

If you use external volumes as pool-VOLs, you can place the external volumes in tiers by setting the External LDEV Tier Rank for them. The External LDEV Tier Rank has three settings: High, Middle, and Low. The following examples describe how tiers may be configured:

Example 1: Configuring tiers by using external volumes only

Tier 1: External volumes (High)

Tier 2: External volumes (Middle)

Tier 3: External volumes (Low)

Example 2: Configuring tiers by combining internal volumes and external volumes

Tier 1: Internal volumes (SSD)

Tier 2: External volumes (High)

Tier 3: External volumes (Low)

You can set the External LDEV Tier Rank when creating the pool, when changing the pool capacity, or in the Edit External LDEV Tier Rank window. The following table lists data drive types in order of performance priority (highest first).

Priority | Data drive type
1        | SSD
2        | SAS 15K rpm
3        | SAS 10K rpm
4        | SAS 7.2K rpm
5        | External volume* (High)
6        | External volume* (Middle)
7        | External volume* (Low)

*Displayed as External Storage in the Drive Type/RPM field.

Reserved pages for relocation operation: A small percentage of pages (normally 2%) is reserved per tier to allow relocation to operate. These are the buffer spaces for tier relocation.

Tier relocation workflow: Tier relocation is performed taking advantage of the buffer space allocated for tier relocation, as mentioned previously. Tier relocation is also performed to secure the space reserved in each tier for new page assignment. The area is called the buffer space for new page assignments. When tier relocation is performed, Dynamic Tiering reserves buffer spaces for relocation and new page assignment.

During relocation, a tier may temporarily be assigned more than 98% of its capacity, encroaching on the allowance for the buffer areas.

Rebalancing the usage level among parity groups

If multiple parity groups that contain LDEVs used as pool volumes exist, rebalancing can improve biased usage rates in parity groups. Rebalancing is performed as if each parity group were a single pool volume. After rebalancing, the usage rates of LDEVs in a parity group may not be balanced, but the usage rate in the entire pool is balanced.

The usage level among parity groups is automatically rebalanced when these operations are in progress:

  • Expanding pool capacity
  • Shrinking pool capacity
  • Reclaiming zero pages
  • Reclaiming zero pages in a page release request issued by the host (for example, with the Write Same command)
  • Performing tier relocations

Note: In pools comprised of pool volumes assigned from parity groups with accelerated compression enabled, the rebalancing operation takes the parity group's used capacity into account. Therefore, after rebalancing, the capacity of the pool volume may not be reduced.

If you expand the pool capacity, Dynamic Provisioning moves data to the added space on a per-page basis. When the data is moved, the usage rate among parity groups of the pool volumes is rebalanced.
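The proportional rebalancing described above can be sketched as follows (hypothetical names; each parity group is treated as a single pool volume, and page counts stand in for capacity):

```python
def rebalance(group_capacity, group_used):
    """Target used-page counts that even out parity-group usage rates.

    The pool's overall usage rate is applied to every group's capacity,
    so after pages are moved each group carries its proportional share.
    """
    pool_rate = sum(group_used) / sum(group_capacity)
    return [round(cap * pool_rate) for cap in group_capacity]

# Two 1000-page groups and one newly added 2000-page group: the pool is
# 50% used, so the balanced targets are 500, 500, and 1000 pages.
print(rebalance([1000, 1000, 2000], [1400, 600, 0]))
```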

Host I/O performance may decrease when data is moved. If you do not want the usage level of parity groups to be balanced automatically, contact customer support.

You can see the rebalancing progress of the usage level among parity groups in the View Pool Management Status window. Dynamic Provisioning automatically stops balancing the usage levels among parity groups if the cache memory is not redundant or if the pool usage rate reaches the threshold.

Functions overview for active flash and Dynamic Tiering

Tier management is performed by both active flash and Dynamic Tiering. The differences in supported functionality are included in the table below.

Category                  | Functions                                                                                                   | active flash | Dynamic Tiering
Initial page allocation   | Assigning new pages to the write data of the host                                                           | Supported    | Supported
Monitoring of performance | Monitoring tiers based on the specified cycle time                                                          | Supported    | N/A
Tier relocation           | Promoting pages to the tier determined by the scheduled performance monitoring                              | Supported    | Supported
                          | Promoting pages whose latest access frequency is suddenly high from tier 2 or 3 to tier 1                   | Supported    | N/A
                          | Demoting pages whose latest access frequency is low from tier 1 to tier 2 or 3, to maintain capacity in tier 1 | Supported | N/A

Dynamic Tiering workflow

The following illustration shows the workflow for setting up Dynamic Tiering on the storage system.

As shown in the illustration, Hitachi Device Manager - Storage Navigator and Command Control Interface (CCI) have different workflows. This document describes how to set up Dynamic Tiering using Hitachi Device Manager - Storage Navigator. For details about how to set up Dynamic Tiering using CCI, see the Command Control Interface Command Reference and the Command Control Interface User and Reference Guide. Use Hitachi Device Manager - Storage Navigator to create pools and DP-VOLs.

(Figure: Dynamic Tiering setup workflow)

Notes:

  1. When you create a pool using CCI, you cannot enable the multi-tier pool option or register multiple media as pool-VOLs. Before making tiers, enable the multi-tier pool option.
  2. Enabling the multi-tier pool option from CCI automatically sets Tier Management to Manual. You must use Hitachi Device Manager - Storage Navigator to change Tier Management to Auto.
Caution: When you delete a pool, its pool-VOLs (LDEVs) are blocked. You must format the blocked LDEVs before using them.

User interface specifications for Dynamic Tiering tasks

The following tables list the Dynamic Tiering tasks and indicate whether the tasks can be performed using Device Manager - Storage Navigator or CCI or both.

Table 1: Tasks and parameter settings

(Indented rows are the setting items for the task above them.)

Task                                        | GUI          | CCI
DP pool: Create                             | Yes          | Yes
    Pool Name                               | Yes          | Yes
    Threshold                               | Yes          | Yes
    Multi-Tier Pool: Enable/Disable         | Yes          | No (Note 1)
    active flash: Enable/Disable            | Yes          | No (Note 1)
    Tier Management: Auto mode              | Yes          | No
    Tier Management: Manual mode            | Yes          | No
    Rate of space for new page assignment   | Yes (Note 3) | No
    Buffer Space for Tier relocation        | Yes          | No
    Cycle Time                              | Yes          | No
    Monitoring Period                       | Yes          | No
    Monitoring Mode                         | Yes          | No
    External LDEV Tier Rank                 | Yes          | No
    Relocation speed                        | Yes          | No
DP pool: Delete                             | Yes          | Yes
DP pool: Change Settings                    | Yes          | Yes
    Pool Name                               | Yes          | Yes (Note 2)
    Threshold                               | Yes          | Yes
    Multi-Tier Pool: Enable/Disable         | Yes          | Yes
    active flash: Enable/Disable            | Yes          | Yes
    Tier Management: Auto to Manual         | Yes          | Yes
    Tier Management: Manual to Auto         | Yes          | No
    Buffer Space for New page assignment    | Yes (Note 3) | Yes (Note 3)
    Buffer Space for Tier relocation        | Yes          | Yes
    Cycle Time                              | Yes          | No
    Monitoring Period                       | Yes          | No
    Monitoring Mode                         | Yes          | Yes
    External LDEV Tier Rank                 | Yes          | No
    Relocation speed                        | Yes          | No
DP pool: Add pool-VOLs                      | Yes          | Yes
DP pool: Delete pool-VOLs                   | Yes          | Yes
DP pool: Restore Pools                      | Yes          | Yes
DP pool: Monitoring start/end               | Yes          | Yes
DP pool: Tier relocation start/stop         | Yes          | Yes
DP-VOL: Create                              | Yes          | Yes
    DP-VOL Name                             | Yes          | Yes
    Multi-Tier Pool relocation: Disable     | No           | No
    Tiering Policy                          | Yes          | No
    New page assignment tier                | Yes          | No
    Relocation priority                     | Yes          | No
DP-VOL: Expand                              | Yes          | Yes
DP-VOL: Reclaim zero pages                  | Yes          | Yes
DP-VOL: Delete                              | Yes          | Yes
DP-VOL: Change Settings                     | Yes          | Yes
    Tier relocation: Enable/Disable         | Yes          | Yes
    Tiering Policy                          | Yes          | Yes
    New page assignment tier                | Yes          | Yes
    Relocation priority                     | Yes          | No
Relocation log: Download relocation log     | Yes          | No

Notes:

  1. Set to Disable if the pool is created by Command Control Interface.

    Command Control Interface cannot be used to create Dynamic Tiering pools initially. You can use the raidcom modify pool command to modify Dynamic Provisioning pools for use as Dynamic Tiering or active flash pools.

  2. You can rename a pool when adding pool-VOLs to it.
  3. The recommended values are 0% for SSD and 8% for other drive types.
Table 2: Display items: Setting parameters

No. | Category | Output information                    | GUI  | CCI
1   | DP pool  | Multi-Tier Pool: Disable              | Yes  | Yes
2   |          | active flash: Enable/Disable          | Yes  | Yes
3   |          | Tier Management mode: Auto/Manual     | Yes  | Yes
4   |          | Rate of space for new page assignment | Yes  | Yes
5   |          | Cycle Time                            | Yes* | No
6   |          | Monitoring Period                     | Yes* | No
7   |          | Monitoring Mode                       | Yes  | Yes
8   |          | External LDEV Tier Rank               | Yes  | No
9   |          | Relocation speed                      | Yes  | No
10  | DP-VOL   | Tier relocation: Enable/Disable       | Yes  | Yes
11  |          | Tiering Policy                        | Yes  | Yes
12  |          | New page assignment tier              | Yes  | Yes
13  |          | Relocation priority                   | Yes  | No

*You can view this item only in the Auto execution mode.

Table 3: Display items: Capacity usage for each tier

No. | Category | Output information             | GUI | CCI
1   | DP pool  | Capacity for each tier (Total) | Yes | Yes
2   |          | Capacity for each tier (Usage) | Yes | Yes
3   | DP-VOL   | Capacity for each tier (Usage) | Yes | Yes

Table 4: Display items: Performance monitor statistics

No. | Category | Output information              | GUI          | CCI
1   | DP pool  | Frequency distribution          | Yes (Note 1) | No
2   |          | Tier range                      | Yes (Note 1) | Yes (Note 2)
3   |          | Performance utilization         | Yes          | Yes
4   |          | Monitoring Period starting time | Yes          | No
5   |          | Monitoring Period ending time   | Yes          | No
6   | DP-VOL   | Frequency distribution          | Yes          | No
7   |          | Tier range                      | Yes          | No
8   |          | Monitoring Period starting time | Yes          | No
9   |          | Monitoring Period ending time   | Yes          | No

Notes:

  1. You can select either each level of the tiering policy or the entire pool. If a tiering policy other than All(0) is set, the tier range is not displayed when you select the entire pool.
  2. The tier range when the tiering policy All(0) is selected is displayed.
Table 5: Display items: Operation status of performance monitor/relocation

No. | Category | Output information                                        | GUI | CCI
1   | DP pool  | Monitor operation status: Stopped/Operating               | Yes | Yes
2   |          | Performance monitor information: Valid/Invalid/Calculating | Yes | Yes
3   |          | Relocation status: Relocating/Stopped                     | Yes | Yes
4   |          | Relocation progress: 0 to 100%                            | Yes | Yes