
Requirements for Dynamic Tiering

System requirements for provisioning

The system requirements for provisioning include basic hardware and licensing requirements as well as additional requirements for shared memory and cache management devices.

  • The storage system hardware and firmware must be configured and ready for use.
  • The parity groups in the storage system must be configured and ready for use.
  • Hitachi Device Manager - Storage Navigator must be configured and ready for use. For details and instructions, see the System Administrator Guide for your storage system.
  • The license keys for the provisioning software products must be enabled. For details and instructions, see the System Administrator Guide for your storage system.
  • The required amount of shared memory for your operational environment must be installed in the storage system.
  • The required number of cache management devices must be available.
  • The desired system option modes (SOMs) must be enabled on your storage system before you begin operations. For information about SOMs, contact customer support.

License requirements

Before you use Dynamic Provisioning, the Dynamic Provisioning software must be installed on the storage system.

Before you use the capacity saving function, the Dynamic Provisioning and dedupe and compression software must be installed on the storage system.

Before you use Dynamic Tiering, Dynamic Provisioning and Dynamic Tiering must be installed on the storage system.

You need the Dynamic Tiering license to access the total capacity of the pool with the tier function enabled.

For Dynamic Provisioning, Dynamic Tiering, and active flash, the same license capacity as the DP-VOLs is required.

For Dynamic Tiering and active flash, the same license capacity as the pool capacity is required.

For active flash, the same license capacity as the pool capacity is required.

Before you use active flash, the Dynamic Provisioning and Dynamic Tiering software must be installed on the storage system.

If the DP-VOLs of Dynamic Provisioning or Dynamic Tiering are used for the primary volumes and secondary volumes of ShadowImage, TrueCopy, Universal Replicator, Volume Migration, global-active device, or Thin Image, you will need the ShadowImage, TrueCopy, Universal Replicator, Volume Migration, global-active device, and Thin Image licenses for the total pool capacity in use.

If you expand a Dynamic Provisioning pool that contains Thin Image pairs and snapshot data, the licensed capacity for both Dynamic Provisioning and Thin Image is required.

If you exceed the licensed capacity, you will be able to use the additional unlicensed capacity for 30 days. For more information about temporary license capacity, see the System Administrator Guide.

Shared memory requirements

The amount of additional shared memory needed depends on the size of the Dynamic Provisioning, Thin Image, and Dynamic Tiering pools.

Shared memory is installed and removed by your service representative. For details about the installation and removal of shared memory, see the hardware reference guide for your storage system.

Caution: Before shared memory is removed, all Dynamic Provisioning, Dynamic Tiering, and active flash pools must be deleted.

Cache management device requirements

Cache management devices are used to manage the cache associated with volumes (LDEVs). Each volume (LDEV) requires at least one cache management device. An LDEV that is not a DP-VOL requires one cache management device. For an LDEV that is a DP-VOL, you need to calculate the number of cache management devices required.

The storage system can manage up to 65,280 cache management devices.

The View Management Resource Usage window in Device Manager - Storage Navigator displays the number of cache management devices in use and the maximum number of cache management devices. To open the View Management Resource Usage window, click Actions and then select View Management Resource Usage.

Calculating the number of cache management devices required for DP-VOLs

A volume that is not a DP-VOL requires one cache management device. The number of cache management devices that a DP-VOL requires depends on the capacity of the V-VOL (capacity of the user area) and the maximum capacity of the cache management device.

The following shows the relationship between the pool volume attribute of the V-VOL and the maximum capacity of the cache management device.

Maximum capacity of cache management device, by pool attribute of V-VOL:

  • Internal volume: 3,145,548 MB (2.99 TB), or 6,442,082,304 blocks
  • External volume: 3,145,548 MB (2.99 TB), or 6,442,082,304 blocks

Use the following formula to calculate the number of cache management devices that a DP-VOL requires. In this formula, the user-specified capacity is the user area capacity of a V-VOL.

ceiling(user-specified-capacity / max-capacity-of-cache-management-device)

where

  • ceiling( ): Round the value enclosed in ceiling( ) up to the nearest whole number.

Note: For a DP-VOL with the deduplication or compression function enabled, use twice the number of cache management devices calculated by this formula.
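The formula above can be sketched in Python. The function name and the worked capacities are illustrative only; the constant is the maximum cache management device capacity from the table above.

```python
import math

# Maximum capacity of one cache management device (from the table above), in MB.
MAX_CMD_CAPACITY_MB = 3_145_548

def cache_management_devices(user_capacity_mb, capacity_saving=False):
    """Estimate the number of cache management devices a DP-VOL requires.

    user_capacity_mb: user area capacity of the V-VOL, in MB.
    capacity_saving: True if deduplication or compression is enabled,
    which doubles the requirement per the note above.
    """
    devices = math.ceil(user_capacity_mb / MAX_CMD_CAPACITY_MB)
    return devices * 2 if capacity_saving else devices

# A 5 TB (5,242,880 MB) DP-VOL spans two cache management devices.
print(cache_management_devices(5_242_880))        # -> 2
# With deduplication and compression enabled, the count doubles.
print(cache_management_devices(5_242_880, True))  # -> 4
```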

Pool specifications and requirements

A pool is a set of volumes reserved for storing Dynamic Provisioning write data.

Items

Requirements

Pool capacity

Calculate pool capacity using the following formula:

Capacity of the pool (MB) = total-number-of-pages * 42 - 4200.

4200 in the formula is the management area size of the pool-VOL with System Area.

Total number of pages = Σ(floor(floor(pool-VOL number of blocks / 512) / 168)) for each pool-VOL.

floor( ): Truncates the value calculated from the formula in parentheses after the decimal point (that is, round down to nearest whole number).
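The pool capacity formula can be sketched in Python (the function name is illustrative; block counts are 512-byte blocks, as in the formula):

```python
def pool_capacity_mb(pool_vol_blocks):
    """Pool capacity in MB from a list of pool-VOL sizes in 512-byte blocks.

    Implements: total-number-of-pages * 42 - 4200, where
    total pages = sum of floor(floor(blocks / 512) / 168) per pool-VOL,
    and 4,200 MB is the management area of the pool-VOL with System Area.
    """
    total_pages = sum((blocks // 512) // 168 for blocks in pool_vol_blocks)
    return total_pages * 42 - 4200

# One 8-GB pool-VOL (16,777,216 blocks) yields about 3.9 GB of pool capacity,
# which matches the documented minimum pool size.
print(pool_capacity_mb([16_777_216]))  # -> 3990
```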

Following are minimum and maximum capacity sizes for one pool:

  • VSP G350 and VSP F350: From 3.9GB to 4.0PB
  • VSP G370 and VSP F370: From 3.9GB to 4.0PB
  • VSP G700 and VSP F700: From 3.9GB to 4.0PB
  • VSP G900 and VSP F900: From 3.9GB to 4.0PB

Following are maximum capacity sizes of all pools in a storage system:

  • VSP G350 and VSP F350: 4.4PB
  • VSP G370 and VSP F370: 8.0PB
  • VSP G700 and VSP F700: 12.5PB
  • VSP G900 and VSP F900: 16.6PB

If you operate a pool without monitoring its free space, ensure that the total DP-VOL capacity remains smaller than the pool capacity.

Max number of pool-VOLs

From 1 to 1,024 volumes (per pool).

A volume can be registered as a pool-VOL to one pool only.

Maximum number of pools

  • VSP G350, VSP G370, VSP G700, VSP F350, VSP F370, VSP F700: Up to a total of 64 pools per storage system. Pool numbers (0 to 63) are assigned as pool identifiers.
  • VSP G900 and VSP F900: Up to a total of 128 pools per storage system. Pool numbers (0 to 127) are assigned as pool identifiers.

The maximum number of pools includes the following pool types:

  • Dynamic Provisioning (including Dynamic Tiering and active flash)
  • Thin Image

Increasing capacity

You can increase pool capacity dynamically. Best practice is to add pool-VOLs to increase capacity by one or more parity groups.

Reducing capacity

You can reduce pool capacity by removing pool-VOLs.

Deleting

You can delete pools that are not associated with any DP-VOLs or with any Thin Image pairs or Thin Image snapshot data.

Thresholds

  • Warning Threshold: You can set the value between 1% and 100%, in 1% increments. The default is 70%.
  • Depletion Threshold: You can set the value between the warning threshold and 100%, in 1% increments. The default is 80%.
  • Thresholds cannot be defined for a pool with data direct mapping enabled.
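As a sketch, the threshold rules above can be expressed as a small validation helper (a hypothetical function, not part of Storage Navigator):

```python
def validate_thresholds(warning_pct, depletion_pct):
    """Check a warning/depletion threshold pair against the documented rules:
    warning is 1-100% and depletion is between the warning threshold and 100%,
    both in 1% increments (defaults: 70% and 80%)."""
    if not (1 <= warning_pct <= 100):
        raise ValueError("warning threshold must be between 1% and 100%")
    if not (warning_pct <= depletion_pct <= 100):
        raise ValueError("depletion threshold must be between the warning threshold and 100%")
    return warning_pct, depletion_pct

# The defaults pass validation.
print(validate_thresholds(70, 80))  # -> (70, 80)
```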

Data allocation unit

42 MB

The 42-MB page corresponds to a 42-MB continuous area of the DP-VOL. Pages are allocated for the pool volumes only when data has been written to the area of the DP-VOL.

Tier

(Dynamic Tiering and active flash)

Defined based on the media type (see Drive type for a Dynamic Tiering and active flash tier, below). Maximum 3 tiers.

Maximum capacity of each tier

(Dynamic Tiering and active flash)

  • VSP G350, VSP G370, VSP G700, VSP G900, VSP F350, VSP F370, VSP F700, VSP F900: 4.0 PB (the total capacity of the tiers must be within 4.0 PB, and the required shared memory must be installed).

For a pool associated with DP-VOLs that have the capacity saving function enabled, the following table indicates the maximum capacity of the total pool volumes and the number of pools that can be created.

Maximum capacity of the total pool volumes for one pool (PB) and number of pools that can be created, by storage system and added shared memory:

Virtual Storage Platform G350 or Virtual Storage Platform F350:
  • Base: 0.29 PB, 3 pools
  • Extension 1: 1.6 PB, 16 pools
  • Extension 2: 4.4 PB, 45 pools
  • Extension 3: Not available

Virtual Storage Platform G370 or Virtual Storage Platform F370:
  • Base: 1.6 PB, 16 pools
  • Extension 1: 4.4 PB, 45 pools
  • Extension 2: 8.05 PB, 64 pools
  • Extension 3: Not available

Virtual Storage Platform G700 or Virtual Storage Platform F700:
  • Base: 1.6 PB, 16 pools
  • Extension 1: 4.4 PB, 45 pools
  • Extension 2: 8.05 PB, 64 pools
  • Extension 3: 12.5 PB, 64 pools

Virtual Storage Platform G900 or Virtual Storage Platform F900:
  • Base: 4.4 PB, 45 pools
  • Extension 1: 8.05 PB, 84 pools
  • Extension 2: 12.5 PB, 128 pools
  • Extension 3: 16.6 PB, 128 pools

Pool-VOL requirements

Pool-VOLs make up a DP pool.

Item

Requirements

Volume type

Logical volume (LDEV)

While pool-VOLs can coexist with other volumes in the same parity group, for best performance:

  • Pool-VOLs for a pool should not share a parity group with other volumes.
  • Pool-VOLs should not be located on concatenated parity groups.

Pool-VOLs cannot be used for any other purpose. For instance, you cannot specify the following volumes as Dynamic Provisioning, Dynamic Tiering, and active flash pool-VOLs:

  • Volumes used by ShadowImage, Volume Migration, TrueCopy, global-active device, or Universal Replicator
  • Volumes already registered in Thin Image, Dynamic Provisioning, Dynamic Provisioning or active flash pools
  • Volumes used as Thin Image P-VOLs or S-VOLs
  • Volumes reserved by Data Retention Utility
  • Data Retention Utility volumes with a Protect, Read Only, or S-VOL Disable attribute
  • LDEVs whose status is other than Normal, Correction Access, or Copying. You cannot specify volumes in blocked status or volumes in copying process.
  • Command devices
  • Quorum disks used by global-active device

The following volume cannot be specified as a pool-VOL for Dynamic Tiering:

  • An external volume with the data direct mapping attribute enabled.

If pool-VOLs are LDEVs created from a parity group with accelerated compression enabled, all of these pool-VOLs must be assigned to the same pool.

Emulation type

OPEN-V

RAID level for a Dynamic Provisioning pool

You can use one of the following RAID levels:

  • RAID 1 (2D+2D, or concatenated 2 of 2D+2D)
  • RAID 5 (3D+1P, 4D+1P, 6D+1P, 7D+1P, concatenated 2 of 7D+1P, or concatenated 4 of 7D+1P)
  • RAID 6 (6D+2P, 12D+2P, or 14D+2P)

Pool-VOLs of RAID 5, RAID 6, RAID 1, and external volumes can coexist in the same pool. For pool-VOLs in the same pool:

  • RAID 6 is the recommended RAID level for pool-VOLs, especially for a pool where the recovery time from a pool failure caused by a drive failure would be unacceptable.
  • Pool-VOLs of the same drive type with different RAID levels can coexist in the same pool, but best practice is to use one RAID level for all pool-VOLs. If you register pool-VOLs with multiple RAID levels to the same pool, the I/O performance depends on the RAID levels of the registered pool-VOLs, so account for the I/O performance of each RAID level.

RAID level for a Dynamic Tiering or active flash pool

You can use one of the following RAID levels:

  • RAID 1 (2D+2D, or concatenated 2 of 2D+2D)
  • RAID 5 (3D+1P, 4D+1P, 6D+1P, 7D+1P, concatenated 2 of 7D+1P, or concatenated 4 of 7D+1P)
  • RAID 6 (6D+2P, 12D+2P, or 14D+2P)

Pool-VOLs of RAID 5, RAID 6, RAID 1, and external volumes can coexist in the same pool. For pool-VOLs in the same pool:

  • RAID 6 is the recommended RAID level for pool-VOLs, especially for a pool where the recovery time from a pool failure caused by a drive failure would be unacceptable.
  • Pool-VOLs of the same drive type with different RAID levels can coexist in the same pool, but best practice is to use one RAID level for all pool-VOLs. If you register pool-VOLs with multiple RAID levels to the same pool, the I/O performance depends on the RAID levels of the registered pool-VOLs, so account for the I/O performance of each RAID level.
  • Because RAID 6 is slower than other RAID levels, tiers that use other RAID levels should not be placed under a tier that uses RAID 6.

Data drive type for a Dynamic Provisioning pool

SSD, FMD DC2, SAS15K, SAS10K, SAS7.2K, and external volumes can be used as the data drive type. These data drive types can coexist in the same pool.

Cautions:

  • Best practice is not to mix drives of different types in the same pool. If pool-VOLs with different drive types are registered in the same pool, the I/O performance depends on the drive type of the pool-VOL to which a page is assigned. Therefore, if different drive types are registered in the same pool, ensure that the required I/O performance is not degraded by the less capable drive types.
  • If multiple data drives coexist in the same pool, avoid using drives of the same type but different capacities.

Data drive type for a Dynamic Tiering or active flash pool

SAS15K, SAS10K, SAS7.2K, SSD, FMD DC2, and external volumes can be used as the data drive type. These data drive types can coexist in the same pool. If active flash is used, SSDs must be installed in advance.

Caution: If multiple data drives coexist in the same pool, avoid using drives of the same type but different capacities.

Volume capacity

Internal volume: From 8 GB to 2.9 TB

External volume: From 8 GB to 4 TB

External volume with the data direct mapping attribute: From 8 GB to 256 TB

LDEV format

The LDEV format operation can be performed on pool-VOLs only when all of the following conditions are satisfied:

  • There are no DP-VOLs defined for the pool, or all DP-VOLs defined for the pool are blocked.
  • The pool does not contain any Thin Image pairs or snapshot data.

Path definition

You cannot specify a volume with a path definition as a pool-VOL.

DP-VOL requirements

Items

Requirements

Volume type

DP-VOL (V-VOL)

The LDEV number is handled in the same way as for normal volumes.

Maximum number of DP-VOLs

For VSP G350 and VSP F350:

  • Up to 16,383 per pool (For a pool with data direct mapping enabled, up to 16,383 per pool).
  • Up to 16,375 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Deduplication and Compression)
  • Up to 16,383 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Compression)
  • Up to 16,383 per system (the total number of DP-VOLs and external volumes must be 16,383 or less).

For VSP G370 and VSP F370:

  • Up to 32,767 per pool (For a pool with data direct mapping enabled, up to 32,767 per pool).
  • Up to 32,623 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Deduplication and Compression)
  • Up to 32,639 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Compression)
  • Up to 32,767 per system (the total number of DP-VOLs and external volumes must be 32,767 or less).

For VSP G700 and VSP F700:

  • Up to 49,151 per pool (For a pool with data direct mapping enabled, up to 49,151 per pool).
  • Up to 32,623 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Deduplication and Compression)
  • Up to 32,639 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Compression)
  • Up to 49,151 per system (the total number of DP-VOLs and external volumes must be 49,151 or less).

For VSP G900 and VSP F900:

  • Up to 63,232 per pool (For a pool with data direct mapping enabled, up to 63,232 per pool).
  • Up to 32,623 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Deduplication and Compression)
  • Up to 32,639 DP-VOLs per pool (For DP-VOLs whose Capacity Saving is Compression)
  • Up to 63,232 per system (the total number of DP-VOLs and external volumes must be 63,232 or less).

Volume capacity

Total maximum volume capacity is as follows:

  • For VSP G350 and VSP F350: 4.4 PB (the shared memory must be installed)
  • For VSP G370 and VSP F370: 8.0 PB (the shared memory must be installed)
  • For VSP G700 and VSP F700: 12.5 PB (the shared memory must be installed)
  • For VSP G900 and VSP F900: 16.6 PB (the shared memory must be installed)

However, if DP-VOLs with the capacity saving function enabled are associated with a pool, the maximum volume capacity is the total capacity minus 10% of the total capacity of the DP-VOLs before capacity saving.

Path definition

Available.

LDEV format

Available. Quick Format is not available.

System option mode (SOM) 867 ON: When you format an LDEV on a DP-VOL, the capacity mapped to the DP-VOL is released to the pool as free space.

When you format a DP-VOL, the storage system releases the allocated page area of the DP-VOL. The quick format operation cannot be performed. If the LDEV format is applied to V-VOLs that are enabled for full allocation, the used capacity of the pool does not change.

Caution:

  • For a DP-VOL with deduplication and compression enabled, a deduplication system data volume whose capacity saving status is Failed cannot be formatted.

The following table indicates the maximum capacity of the total DP-VOLs whose Capacity Saving setting is Deduplication and Compression or Compression.

Maximum capacity of the total DP-VOLs whose Capacity Saving setting is Deduplication and Compression or Compression (PB), by storage system and added shared memory:

Virtual Storage Platform G350 or Virtual Storage Platform F350:
  • Base: 0.2175
  • Extension 1: 1.2
  • Extension 2: 3.3
  • Extension 3: Not available

Virtual Storage Platform G370 or Virtual Storage Platform F370:
  • Base: 1.2
  • Extension 1: 3.3
  • Extension 2: 6.0375
  • Extension 3: Not available

Virtual Storage Platform G700 or Virtual Storage Platform F700:
  • Base: 1.2
  • Extension 1: 3.3
  • Extension 2: 6.0375
  • Extension 3: 9.375

Virtual Storage Platform G900 or Virtual Storage Platform F900:
  • Base: 3.3
  • Extension 1: 6.0375
  • Extension 2: 9.375
  • Extension 3: 12.45

Operating system and file system capacity

When a DP-VOL is initialized, operating systems and file systems consume some Dynamic Provisioning pool space. Some combinations initially take up little pool space, while others take as much pool space as the virtual capacity of the DP-VOL.

The following table shows the effects of some combinations of operating system and file system capacity. For more information, contact your service representative.

OS and file system, metadata writing behavior, and pool capacity consumed:

  • Windows Server 2003, Windows Server 2008* / NTFS: Writes metadata to the first block. Effective reduction of pool capacity: small (one page). If files are repeatedly updated (overwritten), the allocated capacity increases, so the effectiveness of reducing pool capacity consumption decreases.
  • Linux / XFS: Writes metadata at Allocation Group Size intervals. Effective reduction of pool capacity depends on the allocation group size; the pool space consumed is approximately [DP-VOL size] × [42 MB / Allocation Group Size].
  • Linux / Ext2, Ext3: Writes metadata in 128-MB increments. Effective reduction of pool capacity: about 33% of the size of the DP-VOL is consumed. The default block size for these file systems is 4 KB, which results in 33% of the DP-VOL acquiring DP pool pages. If the file system block size is changed to 2 KB or less, DP-VOL page consumption becomes 100%.
  • Solaris / UFS: Writes metadata in 52-MB increments. No effective reduction of pool capacity; the full size of the DP-VOL is consumed.
  • Solaris / VxFS: Writes metadata to the first block. Effective reduction of pool capacity: small (one page).
  • AIX / JFS: Writes metadata in 8-MB increments. No effective reduction of pool capacity; the full size of the DP-VOL is consumed. If you change the Allocation Group Size setting when you create the file system, the metadata can be written at a maximum interval of 64 MB; approximately 65% of the pool is used at the higher group size setting.
  • AIX / JFS2: Writes metadata to the first block. Effective reduction of pool capacity: small (one page).
  • AIX / VxFS: Writes metadata to the first block. Effective reduction of pool capacity: small (one page).
  • HP-UX / JFS (VxFS): Writes metadata to the first block. Effective reduction of pool capacity: small (one page).
  • HP-UX / HFS: Writes metadata in 10-MB increments. No effective reduction of pool capacity; the full size of the DP-VOL is consumed.

* In a Windows environment, both Normal Format and Quick Format are commonly used. Quick Format consumes less thin provisioning pool capacity than Normal Format:

  • On Windows Server 2008, Normal Format issues Write commands to the entire volume (for example, the entire "D" drive). When these Write commands are issued, pages corresponding to the entire volume are allocated, so pool capacity equal to the entire volume is consumed. In this case, the thin provisioning advantage of reducing capacity is lost.
  • Quick Format issues Write commands only to management information (for example, index information). Pages corresponding to the management information areas are allocated, but the capacity consumed is smaller than with Normal Format.
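The XFS row in the table above gives an explicit estimate. A rough sketch of that calculation (the function name is illustrative, and the estimate applies only to file systems that follow the stated metadata-interval model):

```python
def xfs_initial_pool_use_mb(dp_vol_size_mb, allocation_group_size_mb):
    """Approximate pool space consumed when XFS initializes a DP-VOL,
    using the table's formula: [DP-VOL size] x [42 MB / Allocation Group Size].
    Consumption is capped at the full virtual capacity of the DP-VOL."""
    return dp_vol_size_mb * min(1.0, 42 / allocation_group_size_mb)

# A 1 TB (1,048,576 MB) DP-VOL with 128-MB allocation groups initially
# consumes about a third of its virtual capacity.
print(round(xfs_initial_pool_use_mb(1_048_576, 128)))  # -> 344064
```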

V-VOL page reservation requirement

Full allocation of a V-VOL can be performed only within the depletion threshold size of the pool. If the capacity of the V-VOLs exceeds the depletion threshold size, the full allocation operation is rejected.

Caution: The page reservation function is not supported by the following pools. To prevent data writing from being disabled due to pool overflow, you must monitor the free area of these pools frequently.
  • Pools that contain pool volumes belonging to a parity group with accelerated compression enabled
  • Pools with capacity saving enabled

Use the following formula to calculate the reserved page capacity for each pool. In the formula, the value enclosed in ceiling( ) must be rounded up to the nearest whole number.

Reserved capacity for each pool [block] =
ceiling(CV-capacity-of-V-VOL [block] / 86016) * 86016 + ceiling(CV-capacity-of-V-VOL [block] /
6442082304) * 4 * 86016 - used-capacity-of-V-VOL [block]
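The reserved-capacity formula can be sketched in Python. The constants come from the formula itself: 86,016 blocks is one 42-MB page, and 6,442,082,304 blocks is the maximum capacity of one cache management device; the function name is illustrative.

```python
import math

PAGE_BLOCKS = 86_016          # one 42-MB page, in 512-byte blocks
CMD_BLOCKS = 6_442_082_304    # max capacity of one cache management device, in blocks

def reserved_blocks(cv_capacity_blocks, used_capacity_blocks=0):
    """Reserved page capacity (in 512-byte blocks) for fully allocating a V-VOL.

    Implements: ceiling(CV / 86016) * 86016
                + ceiling(CV / 6442082304) * 4 * 86016
                - used capacity of the V-VOL.
    """
    return (math.ceil(cv_capacity_blocks / PAGE_BLOCKS) * PAGE_BLOCKS
            + math.ceil(cv_capacity_blocks / CMD_BLOCKS) * 4 * PAGE_BLOCKS
            - used_capacity_blocks)

# A 42-MB V-VOL (86,016 blocks) with no used capacity reserves one data page
# plus four pages of overhead per cache management device.
print(reserved_blocks(86_016))  # -> 430080
```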

Tier relocation rules, restrictions, and guidelines

Rules
  • Performance monitoring, using both Auto and Manual execution modes, observes the pages that were allocated to DP-VOLs prior to the start of the monitoring cycle and the new pages allocated during the monitoring cycle. Pages that are not allocated during performance monitoring are not candidates for tier relocation.
  • Tier relocation can be performed concurrently on up to eight pools. If more than eight pools are specified, relocation of the ninth pool starts after relocation of any of the first eight pools has completed.
  • If Auto execution mode is specified, performance monitoring may stop between about one minute before and one minute after the start time of the next monitoring cycle.
  • The amount of relocation varies per cycle. In some cases, the cycle may end before all relocation can be handled. If tier relocation does not finish completely within the cycle, relocation to appropriate pages is executed in the next cycle.
  • Calculating the tier range values will be influenced by the capacity allocated to DP-VOLs with relocation disabled and the buffer reserve percentages.
  • While a pool-VOL is being deleted, tier relocation is not performed. After the pool-VOL deletion is completed, tier relocation starts.
  • Frequency distribution is unavailable when there is no data provided by performance monitoring.
  • While the frequency distribution graph is being created or the tier range values are being calculated, the frequency distribution graph is not available. The time required for determining the tier range values varies depending on the number of DP-VOLs and total capacity. The maximum time is about 20 minutes.
  • To balance the usage levels of all parity groups, rebalancing may be performed after several tier relocation operations. If rebalancing is in progress, the next cycle of tier relocation might be delayed.
Performance monitoring or tier relocation conditions

The following describes the monitoring and execution conditions and, for each condition, the status of data collection in progress, the status of the fixed monitoring information used in tier relocation, the tier relocation operation, and the solution. The latest fixed monitoring information is referenced when tiers are relocated.

Unallocated pages:
  • Data collection: Pages are not monitored.
  • Fixed monitoring information: No monitoring information exists for the pages.
  • Tier relocation: Tiers of the pages are not relocated.
  • Solution: Unnecessary. After the pages are allocated, monitoring and relocation are performed automatically.

Zero data is discarded during data monitoring:
  • Data collection: Monitoring of the pages is reset.
  • Fixed monitoring information: Only the monitoring information about the pages is invalid.
  • Tier relocation: Tiers of the pages are not relocated.
  • Solution: Unnecessary. After the pages are allocated, monitoring and relocation are performed automatically.

V-VOL settings do not allow tier relocation:
  • Data collection: The volume is monitored.
  • Fixed monitoring information: Monitoring information about the volume is valid.
  • Tier relocation: If the tier relocation setting is disabled at the performance monitoring finish time, tiers of the volume are not relocated.
  • Solution: N/A

When V-VOLs are deleted:
  • Data collection: The volume is not monitored.
  • Fixed monitoring information: Only the monitoring information about the volume is invalid.
  • Tier relocation: Tier relocation of the volume is suspended.
  • Solution: N/A

When the execution mode is changed from Auto to Manual or vice versa:
  • Data collection: Suspended.
  • Fixed monitoring information: Monitoring information collected before the suspension is valid.
  • Tier relocation: Suspended.
  • Solution: Collect the monitoring information again if necessary. (See note 1.)

When the storage system is powered OFF and ON:
  • Data collection: Monitoring is suspended by powering OFF and is not resumed after powering ON. (See note 1.)
  • Fixed monitoring information: Monitoring information collected during the previous cycle remains valid.
  • Tier relocation: Suspended by powering OFF and resumed after powering ON.
  • Solution: Collect the monitoring information again if necessary. (See note 1.)

When Volume Migration or Quick Restore of ShadowImage is performed:
  • Data collection: Monitoring information of the volume is not collected at present; it will be collected in the next monitoring period.
  • Fixed monitoring information: Invalid; the volumes need to be monitored again.
  • Tier relocation: Tier relocation of the volumes is suspended.
  • Solution: Collect the monitoring information again if necessary. (See note 1.)

S-VOL of TrueCopy, global-active device, or Universal Replicator when the initial copy operation is performed:
  • Data collection: Monitoring information is collected continuously, but the monitoring of the volumes is reset. (See note 2.)
  • Fixed monitoring information: No effect; the monitoring information collected during the previous cycle remains valid.
  • Tier relocation: Tier relocation of the volumes is suspended.
  • Solution: Collect the monitoring information again if necessary. (See note 1.)

When the number of tiers increases by adding pool-VOLs, when the pool-VOLs of the tiers are switched by adding pool-VOLs (see note 3), or when the tier rank of an external LDEV is changed:
  • Data collection: Continued.
  • Fixed monitoring information: Invalid, because the monitoring information was discarded. If monitoring is set to continuous mode, weighted data calculated from the monitoring information of past periods is also discarded.
  • Tier relocation: Suspended.
  • Solution: Relocate tiers again. (See note 1.)

When pool-VOLs are deleted:
  • Data collection: Continued.
  • Fixed monitoring information: Temporarily invalid; the monitoring information is recalculated after the pool-VOLs are deleted. (See note 4.)
  • Tier relocation: Deleting the pool-VOL stops tier relocation; the process resumes after the pool-VOL is deleted.
  • Solution: N/A

When cache is blocked:
  • Data collection: Continued.
  • Fixed monitoring information: No effect; the monitoring information collected during the previous cycle remains valid.
  • Tier relocation: Suspended. (See note 5.)
  • Solution: After recovering the faulty area, relocate tiers again. (See note 1.)

When an LDEV is blocked (pool-VOL or V-VOL):
  • Data collection: Continued.
  • Fixed monitoring information: No effect; the monitoring information collected during the previous cycle remains valid.
  • Tier relocation: Suspended. (See note 5.)
  • Solution: After recovering the faulty area, relocate tiers again. (See note 1.)

When the depletion threshold of the pool is nearly exceeded during relocation:
  • Data collection: Continued.
  • Fixed monitoring information: No effect; the monitoring information collected during the previous cycle remains valid.
  • Tier relocation: Suspended. (See note 5.)
  • Solution: Add pool-VOLs, then collect monitoring information and relocate tiers again. (See note 1.)

When the execution mode is Auto and the execution cycle ends during tier relocation:
  • Data collection: Data monitoring stops at the end of the execution cycle.
  • Fixed monitoring information: The monitoring information collected before performance monitoring stops is valid.
  • Tier relocation: Suspended. (See note 5.)
  • Solution: Unnecessary. Relocation is performed automatically in the next cycle.

When the execution mode is Manual and 7 days elapse after monitoring starts:
  • Data collection: Suspended.
  • Fixed monitoring information: The monitoring information collected before the suspension is valid.
  • Tier relocation: Continued.
  • Solution: Collect the monitoring information again if necessary. (See note 1.)

Notes:

  1. If the execution mode is Auto, or if a script runs in Manual execution mode, the information is monitored again and tiers are relocated automatically.
  2. None of the pages of the S-VOLs remain allocated, and the monitoring information of the volume is reset. After pages are newly allocated, the monitoring information is collected.
  3. Example: Pool-VOLs of SAS15K are added to the following Configuration 1:
    • Configuration 1 (before change): Tier 1 is SSD, Tier 2 is SAS10K, and Tier 3 is SAS7.2K.
    • Configuration 2 (after change): Tier 1 is SSD, Tier 2 is SAS15K, and Tier 3 is SAS10K and SAS7.2K.
  4. The monitoring information status is changed from invalid (INV) to calculating (PND). After completion of calculating, the monitor information status changes from calculating (PND) to valid (VAL).
  5. The SIM code 641xxx is displayed if "Notify an alert when tier relocation is suspended by system" is enabled on the "Edit Advanced System Settings" window.

 
