HCP services
HCP services are responsible for optimizing the use of system resources and maintaining the integrity and availability of the stored data. Each of the services — Protection, Content Verification, Fast Object Recovery, Scavenging, Shredding, Compression/Encryption, Duplicate Elimination, Disposition, Garbage Collection, Capacity Balancing, Storage Tiering, Migration, Replication, Geo-distributed Erasure Coding, and Replication Verification — performs a specific function that contributes to the overall health and viability of the system.
Services generally run without user intervention either according to a schedule or in response to certain events.
In the HCP System Management Console, you can set the service schedule and control certain aspects of some services. You can also monitor the progress of the Shredding, Duplicate Elimination, and Replication services and review the use of primary spindown storage and extended storage. Additionally, you use the Console to configure and manage data migrations and replication.
Trademarks and Legal Disclaimer
© 2015, 2019 Hitachi Vantara Corporation. All rights reserved.
About services
A service is a background process that performs a specific function that contributes to the continuous tuning of the HCP system. HCP implements fifteen services.
Services work on the repository as a whole; that is, they work across all namespaces.
In general, services run only while they are enabled. The exception is the Protection service, which runs in response to certain triggers even while it’s disabled. Typically, services are disabled only by authorized HCP service providers during problem resolution.
The System Management Console shows the status of most services on the Overview page. HCP records information about service runs and irreparable violations in the system log.
For information about the Overview page, see About the Overview page. For information about the system log, see Understanding the HCP system log.
Service types
HCP implements these services:
•Protection service: Ensures that damaged or lost objects can be recovered. For more information, see Protection service.
•Content Verification service: Ensures that object data is not corrupted. For more information, see Content Verification service.
•Fast Object Recovery service: Ensures that unavailable objects have their status changed to available once they are recovered. For more information, see Fast Object Recovery service.
•Scavenging service: Ensures that the metadata for each object exists and is not corrupted. For more information, see Scavenging service.
•Shredding service: Shreds deleted objects that are marked for shredding. For more information, see Shredding service.
•Compression/Encryption service: Compresses object data to make more efficient use of HCP storage. For more information, see Compression/Encryption service.
•Duplicate Elimination service: Merges duplicate data to free space in the HCP storage. You can monitor the activity of this service. For more information, see Duplicate Elimination service.
•Disposition service: Automatically deletes expired objects. For more information, see Disposition service.
•Garbage Collection service: Deletes data and metadata left in the repository by incomplete operations, thereby freeing space for the storage of additional objects. For more information, see Garbage Collection service.
•Capacity Balancing service: Ensures that the percentage of space used remains roughly equivalent across the storage nodes in the HCP system. For more information, see Capacity Balancing service.
•Storage Tiering service: Moves objects among storage tiers, creates and deletes copies of objects on various storage tiers to ensure that each tier contains the correct number of copies of each object, and changes objects to metadata-only according to rules in service plans. For more information about this service, see Storage Tiering service. For information about service plans, see Working with service plans.
•Network Per Storage Component service: Increases tiering performance from the HCP system to HCP S Series Nodes or external storage devices by isolating their communication to an individual forward-facing HCP network. Each HCP S Series Node or external device can use its own network to communicate with HCP. For more information, see Network Per Storage Component service.
•Migration service: Migrates data off selected nodes in an HCP RAIN system or selected storage arrays in an HCP SAIN system in preparation for retiring those devices. For more information, see Migration service.
•Replication service: Maintains selected tenants and namespaces on two or more HCP systems and manages the objects in the selected namespaces across those systems to ensure data availability and enable disaster recovery. You can configure, monitor, and control the activity of this service. For more information, see Replicating Tenants and Namespaces.
•Geo-distributed Erasure Coding service: Erasure codes full copies of object data in replicated namespaces that allow erasure coding. For more information, see Geo-distributed Erasure Coding service.
•Replication Verification service: Replicates objects that the Replication service missed replicating or was unable to replicate. For more information, see Replication Verification service.
Service precedence
Some services take precedence over others:
•On any given node, the Protection service takes precedence over the Content Verification and Compression/Encryption services. If either of these services is running when the Protection service starts, the service that was running stops. When the Protection service stops, each service that stopped automatically restarts, provided that the service is scheduled to run at that time.
•On any given node, the Capacity Balancing service takes precedence over the Scavenging service. If the Scavenging service is running when the Capacity Balancing service starts, the Scavenging service stops. When the Capacity Balancing service stops, the Scavenging service automatically restarts, provided that it is scheduled to run at that time.
•On any given node, the Migration service takes precedence over the Capacity Balancing service. If the Capacity Balancing service is running when the Migration service starts, the Capacity Balancing service stops. It does not restart automatically when the Migration service stops.
Metadata storage
To fully understand how certain services work, you need to know how HCP manages metadata. When you add an object, upload part of a multipart object, or add a directory or symbolic link to a namespace:
1.HCP creates primary metadata for the item being added. This metadata consists of information HCP already knows, such as the creation date, and, for objects and parts only, the data size, hash algorithm, and cryptographic hash value generated by that algorithm. It also includes metadata that was either inherited or specified in the write request, such as retention setting, UID, and GID.
2.HCP creates a second copy of the primary metadata. HCP then distributes both copies of the primary metadata among the HCP general nodes.
3.For objects and parts of objects:
oHCP creates the number of copies of the object data or part data required to satisfy the ingest tier data protection level (DPL) in the service plan associated with the namespace. If the ingest tier is primary running storage, HCP distributes all copies of the data among the HCP storage nodes. If the ingest tier is S Series storage, HCP writes all copies of the data to that storage.
Each copy of the primary metadata for the object or part points to all copies of the data for that object or part. However, object or part data in primary running storage is not necessarily stored on the same nodes as the primary metadata for the object or part.
oIn primary running storage, HCP stores the number of copies of the object or part metadata required to satisfy the ingest tier metadata protection level (MPL) in the service plan. These copies, called secondary metadata, let HCP reconstruct the primary metadata should that become necessary.
If the ingest tier is primary running storage, the MPL is the same as the DPL.
oHCP stores MPL copies of any custom metadata for the object in primary running storage. For multipart objects, HCP stores MPL copies of the custom metadata for the object as a whole instead of MPL copies for each part.
oIf the ingest tier is S Series storage, HCP stores one copy of the secondary metadata along with each copy of the object data on that tier.
The figure below shows the data and primary and secondary metadata that result from storing an object in a namespace with a service plan that sets both the DPL and MPL for the ingest tier to 2. The figure assumes that the ingest tier is primary running storage.
When an object or part is moved from the ingest tier to another storage tier, HCP stores one copy of the secondary metadata along with each copy of the object or part data on that tier. Regardless of which tier the object or part data is on, HCP always keeps in primary running storage the number of copies of the secondary metadata and custom metadata required to satisfy the MPL for that tier.
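The ingest-time copy counts described in steps 1 through 3 can be sketched as a small model. This is an illustrative sketch only; the class and function names are assumptions, not HCP interfaces.

```python
from dataclasses import dataclass

@dataclass
class StoredObject:
    """Copies HCP keeps for one newly ingested object (illustrative model)."""
    data_copies: int                # DPL copies of the data on the ingest tier
    primary_metadata_copies: int    # always two, distributed among general nodes
    secondary_metadata_copies: int  # MPL copies on primary running storage

def ingest(dpl: int, mpl: int) -> StoredObject:
    # Steps 1-2: primary metadata is created and a second copy is made.
    # Step 3: DPL data copies and MPL secondary metadata copies are written.
    return StoredObject(data_copies=dpl,
                        primary_metadata_copies=2,
                        secondary_metadata_copies=mpl)

obj = ingest(dpl=2, mpl=2)  # matches the DPL 2 / MPL 2 example described above
print(obj)
```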
For more information about metadata, see Using a Namespace or Using the Default Namespace.
Service scheduling
The Protection, Content Verification, Scavenging, Compression/Encryption, Duplicate Elimination, Disposition, Garbage Collection, and Storage Tiering services run according to a weekly schedule. The schedule controls when during the week each service runs and the performance level at which it runs. The performance level determines the load the service puts on the system.
You can create multiple service schedules and, at any time, change the one that’s active. For example, you could create two schedules — one that puts a very light load on the system and one that puts a heavier load. During periods of high system usage, you could activate the first schedule. During periods of low system usage, you could activate the second schedule.
For more information about scheduling the Protection, Content Verification, Scavenging, Compression/Encryption, Duplicate Elimination, Disposition, Garbage Collection, and Storage Tiering services, see Scheduling services.
The Replication service also runs according to a schedule, but you manage this schedule separately from the schedule for the other services. For information about scheduling the Replication service, see Replicating Tenants and Namespaces.
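As a rough sketch, a weekly schedule can be thought of as a set of time windows, each carrying the performance level a service runs at during that window. The window layout and level names below are invented for illustration; they are not HCP configuration syntax.

```python
# Hypothetical schedule entries: (day, start_hour, end_hour, performance_level).
# Hours outside every window mean the service does not run.
SCHEDULE = [
    ("Mon", 0, 6, "high"),     # low system usage overnight: heavier service load
    ("Mon", 6, 18, "low"),     # business hours: light service load
    ("Mon", 18, 24, "medium"),
]

def performance_level(schedule, day, hour):
    """Return the performance level in effect, or None if the service is idle."""
    for d, start, end, level in schedule:
        if d == day and start <= hour < end:
            return level
    return None

print(performance_level(SCHEDULE, "Mon", 3))   # high
print(performance_level(SCHEDULE, "Tue", 3))   # None
```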
Protection service
The Protection service ensures the stability of the repository by maintaining a specified level of data redundancy, called the data protection level (DPL), for each object in the repository throughout the entire object lifecycle. The DPL for an object is the number of copies of the object data that HCP must maintain.
For the purpose of data protection, HCP treats these as individual objects:
•Parts of multipart objects
•Parts of in-progress multipart uploads
•Chunks for erasure-coded objects
•Chunks for erasure-coded parts of multipart objects
Each namespace has a service plan that defines both a storage tiering strategy and a data protection strategy for the objects in that namespace. For all objects in a given namespace, the storage tiering strategy defines one or more types of storage as tiers. The data protection strategy specifies the DPL that’s applied to the objects that are stored on each tier.
At any given point in the lifecycle of an object, the data protection strategy specifies the number of copies of the object that must exist in the HCP repository and the storage tier on which each copy must be stored.
HCP initially stores all object data in either primary running storage or S Series storage and all metadata on primary running storage. Therefore, the service plan for a namespace must always define either primary running storage or S Series storage as the initial storage tier, called the ingest tier, and must specify both the data protection level and the metadata protection level (MPL) for that tier.
For each object in a given namespace, the ingest tier DPL is the number of copies of the object data that HCP must maintain on primary running storage or S Series storage, as applicable, from the time the object is first stored in the repository until the time the object data is moved to another storage tier. The ingest tier MPL is the number of copies of the object metadata that HCP must maintain on primary running storage for as long as the object exists in the repository.
On SAIN and VM systems, by default, the ingest tier DPL and MPL are both set to one. On RAIN systems, by default, the ingest tier DPL and MPL are both set to two. At any time, you can modify the service plan for a namespace to set the ingest tier DPL and MPL for that namespace.
For any given namespace, you can assign a service plan that will give the namespace a DPL setting of one (supported on SAIN and VM systems only), two, three, or four. You can also set the ingest tier MPL to one, two, three, or four. However, the ingest tier MPL for a namespace must be equal to the ingest tier DPL for that namespace.
HCP uses the Protection service to maintain the correct number of copies of each object in the HCP repository. When the number of existing copies of an object goes below the number of object copies specified in the applicable service plan (for example, because of a logical volume failure), the Protection service automatically creates a new copy of that object in another location. When the number of existing copies of an object goes above the number of object copies specified in the applicable service plan, the Protection service automatically deletes all unnecessary copies of that object.
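The create-or-delete decision the Protection service makes for each object can be sketched as follows. The function name and return shape are assumptions chosen for illustration.

```python
def reconcile(existing: int, required: int) -> tuple:
    """Return the Protection service action for one object: (action, count).

    existing: copies of the object data currently in the repository
    required: copies specified by the applicable service plan
    """
    if existing < required:
        return ("create", required - existing)   # e.g. after a volume failure
    if existing > required:
        return ("delete", existing - required)   # remove unnecessary copies
    return ("none", 0)                           # object is compliant

print(reconcile(1, 2))  # ('create', 1)
print(reconcile(3, 2))  # ('delete', 1)
print(reconcile(2, 2))  # ('none', 0)
```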
The Protection service runs according to the active service schedule and in response to certain events. For information about service schedules, see Scheduling services. For information about the events that cause the Protection service to run, see Protection service triggers.
Ingest tier data protection level
Each namespace has a service plan that defines one or more storage tiers for that namespace and specifies the data protection level (DPL) that’s applied to the objects that are stored on each tier.
Note: For the purpose of DPL, HCP treats parts of multipart objects, parts of multipart uploads, chunks for erasure-coded objects, and chunks for erasure-coded parts of multipart objects as individual objects.
Every service plan defines primary running storage or S Series storage as the initial storage tier, called the ingest tier, and specifies a DPL setting and an MPL setting for that tier.
For each object in a given namespace, the ingest tier DPL is the number of copies of the object data that HCP must maintain on primary running storage or S Series storage, as applicable, from the time the object is first stored in the repository until the time the object data is moved to one or more other storage tiers (if multiple storage tiers are defined for the namespace). The ingest tier MPL is the number of copies of the object metadata that HCP must maintain on primary running storage for as long as the object exists in the repository.
In the default namespace, each directory also has an ingest tier DPL setting. This setting is the same as the ingest tier DPL setting that’s specified in the service plan that’s assigned to the default namespace.
The ingest tier DPL for a namespace affects the amount of storage that’s used when data is added to that namespace. With an ingest tier DPL of 1, HCP creates only one copy of the object data on primary running storage or S Series storage, as applicable. With an ingest tier DPL of 2, HCP creates two copies, thereby using twice as much storage.
For both objects and directories, the ingest tier DPL setting is stored as metadata. Users and applications can see, but not modify, this metadata. For information about viewing ingest tier DPL settings, see Using a Namespace or Using the Default Namespace.
Note: When the ingest tier DPL of a namespace changes, for each object in that namespace that’s stored on the ingest tier, HCP creates or deletes copies of the object data, as needed to satisfy the new ingest tier DPL. This can take some time, during which some objects have the old required number of copies and some have the new. When viewing object metadata, however, users and applications always see the intended number of copies (that is, the ingest tier DPL specified in the service plan for the namespace).
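The storage arithmetic is direct: the data footprint on the ingest tier is the object size multiplied by the DPL. A minimal sketch (function name is an assumption):

```python
def data_footprint(object_size: int, dpl: int) -> int:
    """Bytes of ingest tier storage consumed by one object's data copies."""
    return object_size * dpl

size = 10 * 1024 ** 2            # a 10 MB object
print(data_footprint(size, 1))   # one copy:  10485760 bytes
print(data_footprint(size, 2))   # two copies: twice the storage
```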
Protection sets
HCP groups storage nodes into protection sets with the same number of nodes in each set. To improve reliability in the case of multiple component failures, HCP tries to store all the copies of the data for an object that exist on primary running storage or primary spindown storage on nodes in a single protection set. Each copy is stored on a logical volume associated with a different node.
HCP creates protection sets for each possible ingest tier DPL setting that can be specified in a service plan. For example, if an HCP system has six nodes, it creates three groups of protection sets:
•One group of six protection sets with one node in each set (for DPL 1)
•One group of three protection sets with two nodes in each set (for DPL 2)
•One group of two protection sets with three nodes in each set (for DPL 3)
For each object in a given namespace, to store copies of the object data on primary running storage, HCP uses the group of protection sets that corresponds to the ingest tier DPL setting that’s specified in the service plan for the namespace. To store copies of the object data on primary spindown storage (if it’s used), HCP uses the group of protection sets that corresponds to the primary spindown storage tier DPL setting.
The nodes in a protection set are not necessarily all associated with the same amount of storage. If the total number of storage nodes in the system is not evenly divisible by a DPL setting, HCP can use the storage associated with the extra nodes as standby storage. At any time, HCP can add standby storage to any existing protection set that requires additional storage to balance available storage capacity among its nodes.
The Protection service is responsible for checking and repairing protection sets. If a node in a protection set fails and the system includes an extra node, the service creates a new protection set that includes all the healthy nodes in the original protection set and the extra node.
Note: Regardless of whether HCP uses the storage associated with a node that’s not in a protection set, the node itself runs all the HCP software and performs all the same functions as the nodes in protection sets.
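Grouping nodes into protection sets, with leftover nodes backing standby storage, can be sketched as a simple partition. The function name and list shapes are assumptions for illustration.

```python
def protection_sets(nodes: list, dpl: int) -> tuple:
    """Partition nodes into sets of size dpl; leftovers back standby storage."""
    usable = len(nodes) - len(nodes) % dpl
    sets = [nodes[i:i + dpl] for i in range(0, usable, dpl)]
    standby = nodes[usable:]
    return sets, standby

# Six nodes at DPL 2: three sets of two, no standby (as in the example above).
print(protection_sets(["n1", "n2", "n3", "n4", "n5", "n6"], 2))

# Seven nodes at DPL 3: two sets of three, one node backing standby storage.
print(protection_sets(["n1", "n2", "n3", "n4", "n5", "n6", "n7"], 3))
```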
Data availability
When HCP needs to maintain multiple copies of the data for an object on primary running storage or on primary spindown storage, HCP stores each copy of the object data on storage that’s managed by a different node. All but one of these copies can become unavailable without affecting access to the object.
Copies of object data become unavailable on primary running storage or primary spindown storage when HCP detects an improperly functioning logical volume or corrupted or missing data. Copies of the object data also become unavailable if the nodes that provide access to those copies become unavailable. A data outage occurs when all the nodes that provide access to all the copies of the data for an object fail.
Protection service processing
The Protection service has two main functions: detecting protection violations and repairing those violations.
Detecting protection violations
To detect protection violations, the Protection service checks that for each object in a given namespace, at any given point in the object lifecycle:
•The total number of existing copies of object data is equal to the total number of copies of object data that are currently required to exist on all of the storage tiers defined for the namespace by its service plan
•If copies of the object data are stored on primary running storage or primary spindown storage:
oEach copy of the object data is stored on a different node
oAll copies of the object data are stored in the same protection set
oEach copy of the object data is accessible
A violation occurs when any one of these conditions is not true.
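The checks above can be sketched as a single pass over an object's primary storage copies. The function name and record fields are assumptions, not HCP data structures.

```python
def find_violations(copies: list, required_total: int) -> list:
    """copies: dicts {node, pset, accessible} describing primary storage copies."""
    violations = []
    if len(copies) != required_total:
        violations.append("copy count differs from service plan")
    nodes = [c["node"] for c in copies]
    if len(set(nodes)) != len(nodes):
        violations.append("two copies on the same node")
    if len({c["pset"] for c in copies}) > 1:
        violations.append("copies span protection sets")
    if not all(c["accessible"] for c in copies):
        violations.append("inaccessible copy")
    return violations

good = [{"node": "n1", "pset": "A", "accessible": True},
        {"node": "n2", "pset": "A", "accessible": True}]
print(find_violations(good, 2))   # [] -- no violations
```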
Repairing protection violations
The Protection service can repair certain protection violations for an object, usually by relying on other good copies of the object data stored in the HCP repository.
For each object in a given namespace, at any given point in the object lifecycle:
•If the total number of existing copies of the object data is less than the total required number of copies that’s specified in the namespace service plan (for example, because of a logical volume failure on primary running storage), then on each storage tier that’s defined for the namespace, the Protection service creates the number of copies of the object data that’s required to bring the object into compliance with the namespace service plan.
Notes:
•If one or more copies of the object data are supposed to be stored on a tier that’s currently inaccessible (for example, due to a failed network connection), but rehydration is enabled for that tier, the Protection service creates an extra copy of the object data on primary running storage.
•For objects stored on primary storage, if the repository contains fewer than the required number of copies of the object data for a set of duplicate-eliminated objects, then for each object, the Protection service creates enough additional copies of the object data on primary storage to:
oSatisfy the ingest tier DPL and, if applicable, the primary spindown storage tier DPL specified in the service plan for the namespace that contains the object
oComply with the protection set requirements for the applicable ingest tier and primary spindown storage tier DPL settings
The Duplicate Elimination service then merges the object data again the next time it runs. For information about the Duplicate Elimination service, see Duplicate Elimination service.
•If the total number of existing copies of the object data is greater than the total required number of copies that’s specified in the namespace service plan, then the Protection service deletes the correct number of copies of the object data from each storage tier in order to bring the object into compliance with the namespace service plan.
Note: An object can have an extra copy of its data if the object was rehydrated after a read from primary spindown storage (if it’s used) or from any extended storage tier that’s defined for the namespace that contains the object. Copies of objects on primary running storage that are supposed to be metadata-only can have data if they were rehydrated after a read from a remote system. The Protection service marks rehydrated object data for deletion only after the rehydration keep time has expired and only if another copy of the data exists.
•On primary storage, if two copies of the data for an object are stored on the same node, the Protection service creates a new copy on a different node and marks the extra one in the first location for deletion.
•On primary running storage, primary spindown storage, or NFS storage, if a logical volume has a copy of the secondary metadata for an object but no copy of the object data with that metadata, the Protection service creates a replacement copy of the object data on that volume.
If replication is in effect and the Protection service cannot find a copy of the object data on the current system, it can repair the object by using a copy from another HCP system in the replication topology.
To repair a chunk for an erasure-coded object, the Protection service recalculates the chunk either by using a full copy of the object data, if one exists on another system in the replication topology, or by using the chunks for the object on all the other systems in the replication topology.
For an explanation of secondary metadata, see Metadata storage.
•For an object that’s stored on primary running storage or primary spindown storage, if fewer than the required number of copies of the object data are accessible on the nodes in a protection set, the Protection service first tries to increase the number of copies stored on those nodes. If the Protection service cannot create all the required copies of the object data on the nodes in the protection set (for example, because a node is unavailable), the service tries to put the required number of copies on the nodes in a different protection set. If the service cannot put all required copies of the object data on nodes in the same protection set, the service stores the copies on different nodes in different protection sets.
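The placement fallback described in the last bullet (same protection set first, then a different set, then scattering across sets) can be sketched as follows. The function name and data shapes are assumptions for illustration.

```python
def choose_placement(required: int, sets: dict) -> list:
    """sets: protection set name -> available nodes. Returns nodes for copies.

    Prefer a single protection set that can hold all required copies;
    otherwise scatter the copies across nodes in different sets.
    """
    for nodes in sets.values():
        if len(nodes) >= required:
            return nodes[:required]          # all copies in one set
    chosen = []
    for nodes in sets.values():              # fall back: scatter across sets
        for node in nodes:
            if len(chosen) == required:
                return chosen
            chosen.append(node)
    return chosen                            # best effort if capacity is short

print(choose_placement(2, {"A": ["n1", "n2"], "B": ["n3"]}))  # ['n1', 'n2']
print(choose_placement(2, {"A": ["n1"], "B": ["n3"]}))        # ['n1', 'n3']
```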
Unavailable and irreparable objects
When the Protection service cannot repair a violation, it marks the object as either unavailable or irreparable:
•An object is unavailable if all of these are true:
oAt least one copy of the object data is unavailable due to a node, logical volume, or extended storage device being unavailable.
oNone of the available copies of the object data are good.
oEither the namespace that contains the object is not being replicated, or all copies of the object data on other systems in the replication topology are either inaccessible or not good.
•An object is irreparable if all of these are true:
oAll of the primary storage volumes, NFS volumes, and extended storage devices on which copies of the object data are stored are available.
oNone of the copies of the object data are good.
oEither the namespace that contains the object is not being replicated, or all copies of the object data on other systems in the replication topology are either inaccessible or not good.
For information about when the Protection service marks an erasure-coded object as unavailable or irreparable, see Unavailable and irreparable erasure-coded objects.
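The distinction between the two states can be sketched as a classification function: both require that no good copy is reachable anywhere, and the deciding factor is whether the devices holding the copies are all available. The function name and field names are assumptions.

```python
def classify(copies: list, replica_good: bool) -> str:
    """copies: dicts {device_up, good} for each copy on this system.
    replica_good: a good, accessible copy exists on another system
    in the replication topology.
    """
    # A reachable good copy anywhere means the violation is repairable.
    if any(c["device_up"] and c["good"] for c in copies) or replica_good:
        return "repairable"
    # No good copy anywhere: all devices up -> irreparable, else unavailable.
    if all(c["device_up"] for c in copies):
        return "irreparable"
    return "unavailable"

print(classify([{"device_up": False, "good": True},
                {"device_up": True, "good": False}], False))  # unavailable
print(classify([{"device_up": True, "good": False}], False))  # irreparable
```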
Protection service triggers
In addition to running according to the service schedule, the Protection service runs in response to certain events. In these cases, the service does a full run (that is, it examines every object in the repository regardless of the schedule and regardless of whether the object data is stored on primary running storage, primary spindown storage, or extended storage).
Events that trigger a Protection service run are:
•Node shutdown — When a node becomes unavailable, HCP triggers the Protection service after waiting 90 minutes to ensure that the node is not just temporarily unavailable.
•Logical-volume failure — When HCP determines that a local logical volume is broken, it triggers the Protection service after waiting one minute to ensure that the volume is not just temporarily unavailable.
•Node removal — When a node is removed from the HCP system, HCP triggers the Protection service after waiting ten minutes to ensure that the node removal is permanent.
Note: When the Protection service is disabled, its scheduled runs are canceled. However, the Protection service still runs in response to the triggers listed above unless all of these conditions are true:
•None of the tenants or namespaces on the HCP system are being replicated.
•All existing service plans set the ingest tier DPL to 1 (one).
•If the HCP system is configured to use spindown storage, all existing service plans set the primary spindown storage tier DPL to 1 (one).
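The wait times for the three triggers can be captured in a small table. The names below are assumptions for illustration; the delays come from the descriptions above.

```python
# Minutes HCP waits before triggering the Protection service, to ensure
# the condition is not just temporary (per the trigger descriptions above).
TRIGGER_WAIT_MINUTES = {
    "node_shutdown": 90,
    "logical_volume_failure": 1,
    "node_removal": 10,
}

def should_run_protection(event: str, minutes_elapsed: float) -> bool:
    """The Protection service starts only once the event outlasts its wait."""
    return minutes_elapsed >= TRIGGER_WAIT_MINUTES[event]

print(should_run_protection("node_shutdown", 45))   # False: still waiting
print(should_run_protection("node_removal", 15))    # True: removal is permanent
```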
Content Verification service
When an object is created, HCP uses cryptographic hash algorithms to calculate various hash values for it. These values, which are generated based on the object data, system metadata, and custom metadata, are stored with the primary metadata for the object.
One of the hash values that’s generated only from the object data is also stored with the secondary metadata for the object. The cryptographic hash algorithm HCP uses to calculate this hash value is namespace dependent. It is set when the namespace is created. Once set, it cannot be changed.
Users and applications can see, but not modify, hash values generated from object data and annotations. They cannot see any other hash values. For information about viewing hash values for objects, see Using a Namespace or Using the Default Namespace.
For the purpose of content verification, HCP treats these as individual objects:
•Parts of multipart objects
•Parts of in-progress multipart uploads
•Chunks for erasure-coded objects
•Chunks for erasure-coded parts of multipart objects
The Content Verification service ensures the integrity of each object by:
•Checking that the object data, system metadata, and custom metadata still match the stored cryptographic hash values
Note: The Content Verification service does not do a data check for objects:
•That are stored in namespaces that use service plans that have S Series storage as the ingest tier. For information about ingest tiers, see Choose the ingest tier.
•That are stored on extended storage. For information about extended storage, see Storage for HCP systems.
•Ensuring that certain secondary metadata other than the hash value matches the primary metadata for the object
The Content Verification service runs according to the active service schedule. For information about service schedules, see Scheduling services.
During content verification, HCP attempts to repair any files that HCP S Series Nodes report as irreparable.
Cryptographic hash algorithms
HCP supports these cryptographic hash algorithms for selection at the namespace level:
MD5
SHA-1
SHA-256
SHA-384
SHA-512
RIPEMD-160
Note: The more complex the hash algorithm, the greater the impact on performance when objects are stored or when services run.
ETags and the Content Verification service
When an object is stored, HCP generates an ETag for it. An ETag is an identifier for the content of an object.
ETags were introduced in release 6.0 of HCP, so objects stored while the system was at an earlier release do not initially have ETags. When the Content Verification service runs, it generates ETags for objects that do not have them.
In response to an S3 compatible request to retrieve an object that does not yet have an ETag, HCP generates the ETag before returning the object. This can be time consuming for large objects, with the result that read performance is slow for those objects.
If tenant administrators will be enabling the S3 compatible API on namespaces that were created while the HCP system was at a release earlier than 6.0, consider scheduling more run time for the Content Verification service and/or increasing the performance level at which the service runs.
Content Verification service processing
The Content Verification service has two main functions: detecting corrupted data and discrepancies in metadata and repairing that data and metadata.
Detecting content verification violations
To detect corrupted data, the Content Verification service regenerates the cryptographic hash values for each object. After regenerating the hash values, the Content Verification service checks that these regenerated values match the corresponding values in the primary metadata.
The Content Verification service detects metadata discrepancies by checking that certain secondary metadata for each object matches the primary metadata for the object.
A violation occurs when either of the conditions described above is not true. (Violations of the second type are not reported in the system log.)
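The hash-comparison step of detection can be sketched as follows. The metadata field names are invented for illustration; HCP's actual primary metadata layout is not described here.

```python
import hashlib

def detect_violation(object_data: bytes, primary_metadata: dict) -> bool:
    """Return True when the regenerated hash does not match the value
    recorded in primary metadata (a content verification violation).
    Field names are hypothetical."""
    algorithm = primary_metadata["hash_algorithm"]   # e.g. "sha256"
    regenerated = hashlib.new(algorithm, object_data).hexdigest()
    return regenerated != primary_metadata["hash_value"]

# Metadata as it might have been recorded at ingest time.
meta = {"hash_algorithm": "sha256",
        "hash_value": hashlib.sha256(b"original content").hexdigest()}
```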
Note: When an object is stored through the CIFS or NFS protocol, its primary metadata does not initially include cryptographic hash values that are based on the object data. HCP waits several minutes to ensure that the object content is complete before calculating these values. Large objects stored through these protocols may take longer to get hash values than smaller objects do.
For an explanation of primary and secondary metadata, see Metadata storage.
Repairing content verification violations
If the Content Verification service finds a discrepancy between the cryptographic hash values it regenerates for the object and the corresponding hash value in the primary metadata, it creates a new copy of the object from an existing good copy and marks the corrupted copy for deletion.
If replication is in effect and the Content Verification service cannot find a good copy of the object in the current repository, it can repair the object by using a copy from another HCP system in the replication topology.
To repair a chunk for an erasure-coded object, the Content Verification service recalculates the chunk either by using a full copy of the object data, if one exists on another system in the replication topology, or by using the chunks for the object on all the other systems in the replication topology.
If the Content Verification service finds a discrepancy between other secondary metadata for the object and the corresponding primary metadata, it uses the primary metadata to replace the secondary metadata.
Unavailable and irreparable objects
When the Content Verification service cannot repair a violation, it marks the object as either unavailable or irreparable:
•An object is unavailable if all of these are true:
oAt least one copy of the object is unavailable due to a node, logical volume, or extended storage device being unavailable.
oNone of the available copies of the object are good.
oEither the namespace that contains the object is not being replicated, or all copies of the object data on other systems in the replication topology are either inaccessible or not good.
•An object is irreparable if all of these are true:
oAll of the primary storage volumes, NFS volumes, and extended storage devices on which copies of the object data are stored are available.
oNone of the copies of the object data are good.
oEither the namespace that contains the object is not being replicated, or all copies of the object data on other systems in the replication topology are either inaccessible or not good.
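The distinction between the two states can be sketched as a classifier over the copies of an object's data. The flags and names are invented; `reachable` models whether the storage holding a copy is available, `good` whether the copy's content is intact, and `replica_copy_usable` whether an accessible, good copy exists on another system in the replication topology.

```python
def classify(copies, replica_copy_usable):
    """Hypothetical sketch of the unavailable/irreparable rules above.
    `copies` is a list of dicts with 'reachable' and 'good' flags."""
    if replica_copy_usable:
        return "repairable"          # a good replicated copy can repair it
    some_unreachable = any(not c["reachable"] for c in copies)
    no_good_available = not any(c["good"] for c in copies if c["reachable"])
    if some_unreachable and no_good_available:
        return "unavailable"         # a missing copy might still be good
    if not some_unreachable and not any(c["good"] for c in copies):
        return "irreparable"         # every copy is present, none is good
    return "repairable"
```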
For information about when the Content Verification service marks an erasure-coded object as unavailable or irreparable, see Unavailable and irreparable erasure-coded objects.
Configuring the Content Verification service
The Content Verification service regenerates cryptographic hash values to detect object corruption. Under certain circumstances, you may want to modify or disable this function to reduce the load on the system:
•In a namespace that’s not being replicated and that has a service plan that sets the ingest tier DPL to 1 (one) and does not define any additional storage tiers, only one copy of each object exists. Therefore, if the Content Verification service discovers a discrepancy in the cryptographic hash values for an object, it cannot repair the object from another copy.
You can choose to have the Content Verification service regenerate hash values only for objects that it could repair if needed. With this option, the service does not regenerate hash values for objects in a namespace if HCP is configured to maintain only one copy of each object in that namespace.
Note: Although the service cannot repair corrupt objects in this situation, it can report them. For this reason, if performance is not an issue, you may want to keep hash-value regeneration enabled for all objects.
•When the load on the system is high, temporarily disabling all hash-value regeneration can provide some relief.
Roles: To view the Content Verification page, you need the monitor or administrator role. To configure the Content Verification service, you need the administrator role.
The Content Verification page in the HCP System Management Console lets you configure the Content Verification service. To display this page, in the top-level menu of the System Management Console, select Services ► Content Verification.
To configure the Content Verification service:
1.On the Content Verification page, select the applicable Content Verification Mode option:
oTo configure the Content Verification service to regenerate hash values for all objects stored in the repository, regardless of the number of copies of each object that HCP must maintain in the repository, select Check all objects and repair if needed.
oTo configure the Content Verification service to regenerate hash values for a given object only when HCP is required to maintain multiple copies of that object in the repository, select Check only objects that can be repaired and repair if needed.
oTo completely disable the hash-value regeneration function, select Do not check and repair objects.
2.Click Update Settings.
If you selected the second or third Content Verification Mode option, a confirming message appears.
In the window with the confirming message, select I understand to confirm that you understand the consequences of your action. Then click Update Settings.
Fast Object Recovery service
The Fast Object Recovery service checks unavailable objects and, if it finds that an object is available, changes the object status from unavailable to available. The service runs automatically if the Content Verification service is enabled. If the Content Verification service is disabled, the Fast Object Recovery service does not run.
While the Content Verification service is enabled, the Fast Object Recovery service runs once a day. Additionally, the Fast Object Recovery service runs in response to an availability event, such as an unavailable node becoming available. Although you cannot directly schedule the Fast Object Recovery service, you can effectively suspend running the service by disabling the Content Verification service.
Scavenging service
The Scavenging service ensures that objects in the repository have valid metadata. When the service runs, it verifies that both the primary metadata for each object and the secondary metadata are complete, valid, and in sync with each other.
For the purpose of scavenging, HCP treats these as individual objects:
•Parts of multipart objects
•Parts of in-progress multipart uploads
•Chunks for erasure-coded objects
•Chunks for erasure-coded parts of multipart objects
To correct violations it detects, the Scavenging service tries to rebuild or repair the problem metadata:
•If the primary metadata for an object is missing, the service reconstructs it from the secondary metadata. If a user or application changed any of the object metadata between when the violation occurred and the time of its repair, those changes may be overwritten with the previous settings.
•If the primary metadata is missing a pointer to a copy of the object data, the service reconstructs that pointer.
•If the secondary metadata for an object doesn’t match any copies of the primary metadata, the object is considered irreparable, and the service moves it to the .lost+found directory, located under rest, data, or fcfs_data, as applicable. At this point, you need to determine whether the object needs to be stored again and, if so, ensure that it happens.
You can delete an object from the .lost+found directory only when it’s not under retention. For more information about the .lost+found directory, see Using a Namespace or Using the Default Namespace.
For an explanation of primary and secondary metadata, see Metadata storage.
In the default namespace, the Scavenging service detects and repairs violations in the metadata for directories only if the directory is associated with abandoned data (that is, data no longer associated with any metadata). If the service cannot recover the directory metadata, it rebuilds it from the metadata associated with the parent directory.
The Scavenging service runs according to the active service schedule. For information about service schedules, see Scheduling services.
Note: The Scavenging service does not ensure object metadata validity for objects stored in namespaces that use service plans that have S Series storage set as the ingest tier. For information about ingest tiers, see Choose the ingest tier.
Shredding service
Shredding, also called secure deletion, is the process of overwriting the places where all the copies of the data, secondary metadata, and custom metadata for an object were stored in such a way that the object cannot be reconstructed.
The Shredding service shreds deleted objects that are marked for shredding. If the object is a multipart object, the Shredding service shreds each part of the object. The Shredding service also shreds unused parts of multipart uploads that were initiated in namespaces where the default shred setting is true.
The primary metadata for a shredded object is deleted from HCP after all of these events have happened:
•The object is removed from the metadata query engine index, if applicable.
•The object deletion is replicated, if applicable.
•For old versions of objects, the version is pruned or purged.
•The deletion record for the object is deleted from the transaction log. If the Garbage Collection service is configured never to delete deletion records from the transaction log, the primary metadata for the object remains in the system indefinitely.
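The checklist above can be sketched as a single predicate. All field names are invented, and each `or not …` guard models the "if applicable" wording in the text.

```python
def primary_metadata_deletable(obj):
    """Hypothetical sketch: the primary metadata for a shredded object is
    deleted only after every applicable event has happened."""
    return all([
        obj["removed_from_query_index"] or not obj["indexed"],
        obj["deletion_replicated"] or not obj["replicated"],
        obj["version_pruned_or_purged"] or not obj["is_old_version"],
        obj["deletion_record_removed"],
    ])
```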
For information about the transaction log, see Transaction log cleanup.
The shredding policy for each object determines whether that object is shredded. For information about the shredding policy, see Shredding policy.
Note: The Shredding service does not shred object data if:
•The data is stored in a namespace that uses a service plan that has S Series storage set as the ingest tier. For information about ingest tiers, see Choose the ingest tier.
•The data is stored on extended storage. For information about extended storage, see Storage for HCP systems.
Shredding service processing
By default, the Shredding service uses three passes to overwrite the areas where the object data, secondary metadata, and custom metadata were stored. The service processes the entire object in 128-KB blocks, performing these steps for each block:
1.Set to a specified value (write the 0xAA pattern to the file)
2.Set to the complement of that value (write the 0x55 (~0xAA) pattern to the file)
3.Set to a random value (write a random value to every byte of the entire file)
4.Verify the value by reading it back
To use a different shredding algorithm, please contact your authorized HCP service provider.
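The default pattern above can be sketched as a file-level overwrite, shown here as an illustration only. A real shredder must also defeat filesystem caching, journaling, and block remapping, which this sketch ignores.

```python
import os
import secrets
import tempfile

BLOCK = 128 * 1024  # 128-KB blocks, matching the pattern described above

def shred(path):
    """Illustrative sketch: overwrite each block with 0xAA, then its
    complement 0x55, then random bytes, verifying each pass by reading
    it back."""
    size = os.path.getsize(path)
    for pattern in (b"\xaa", b"\x55", None):   # None marks the random pass
        written = []
        with open(path, "r+b") as f:
            offset = 0
            while offset < size:
                n = min(BLOCK, size - offset)
                data = secrets.token_bytes(n) if pattern is None else pattern * n
                f.seek(offset)
                f.write(data)
                written.append(data)
                offset += n
            f.flush()
            os.fsync(f.fileno())
        with open(path, "rb") as f:            # step 4: verify by reading back
            for data in written:
                assert f.read(len(data)) == data

# Demonstration on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"sensitive" * 1000)
shred(tmp.name)
with open(tmp.name, "rb") as f:
    remains = f.read()
os.unlink(tmp.name)
```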
Sending shredding messages to syslog servers
HCP gives you the option of sending a log message for each shredded object or part to the syslog servers specified in the syslog logging configuration. This option takes effect only while syslog logging is enabled and the syslog logging level is set to Notice. The log message for a shredded object or part is sent to the syslog servers only after the primary metadata for the object is deleted.
Object shredding is a namespace-level event. Therefore, messages about shredded objects and parts are sent to the syslog servers only if syslog logging is enabled at the tenant level.
Log messages about shredded objects and parts do not appear in the System or Tenant Management Console regardless of whether those messages are sent to the syslog servers.
For more information about syslog logging, see Configuring syslog logging. For information about enabling syslog logging at the tenant level, see Managing a Tenant and Its Namespaces and Managing the Default Tenant and Namespace.
Note: HCP never sends messages about shredded objects and parts to the system log or SNMP managers.
Understanding shredding statistics
The Shredding page in the HCP System Management Console lets you monitor the amount of data waiting to be shredded. It also lets you control various aspects of shredding activity.
Roles: To view the Shredding page, you need the monitor or administrator role. To change shredding settings, you need the administrator role.
To display the Shredding page, in the top-level menu of the System Management Console, select Services ► Shredding.
The Shredding page shows:
•Objects and object parts waiting to be shredded — The total number of these items waiting to be shredded: objects, parts of multipart objects, replaced parts of multipart uploads, parts of aborted multipart uploads, unused parts of completed multipart uploads, and transient parts created during the processing of certain multipart upload operations
•Total bytes to be shredded — The total number of bytes of object and part data and metadata waiting to be shredded
These statistics include all objects and parts marked for shredding for which the primary metadata has not yet been deleted.
The panel also shows the current shredding settings (see Changing shredding settings).
Changing shredding settings
Depending on the system load, the HCP system can develop a backlog of objects and parts to be shredded. If the system load from other activities is light, you can increase the rate at which shredding occurs. If the load is heavy, you can lower the shredding rate.
To change the settings for the Shredding service:
1.On the Shredding page in the System Management Console, set the options you want:
oTo change the shredding rate, in the Shredding Rate field, select Low, Medium, or High. The higher the shredding rate, the greater the load on the HCP system.
oTo enable or disable sending log messages about shredded objects and parts to syslog servers, select or deselect, respectively, Log shredded objects and object parts to syslog.
2.Click Submit.
Duplicate elimination and shredding
Objects merged by the Duplicate Elimination service do not necessarily have the same shred settings. When merged objects with different shred settings are deleted:
•If the last object deleted is not marked for shredding, the merged data is not shredded.
•If the last object deleted is marked for shredding, the merged data is shredded.
For information about the Duplicate Elimination service, see Duplicate Elimination service. For more information about shred settings, see Shredding policy.
Erasure coding and shredding
For an object that's subject to both erasure coding and shredding:
•Each time a full copy of the data for the object is reduced to a chunk, the full copy must be shredded
•Each time a chunk for the object is restored to a full copy of the object data, the chunk must be shredded
As a result, shredding objects that are subject to erasure coding can put a significant load on all the systems in the replication topology across which the objects are erasure coded.
To minimize the load that the combination of erasure coding and shredding can put on an HCP system, take one of these actions:
•At the system level, do not enable erasure coding as an option for implementing replication.
•If you enable erasure coding as the replication method for all cloud-optimized namespaces, tell tenant administrators not to set shredding as the default for deleted objects in cloud-optimized namespaces that are selected for replication.
•If you allow tenant administrators to select erasure coding for their namespaces, tell the administrators not to do both of these for any given namespace:
oSet shredding as the default for deleted objects in the namespace
oAllow erasure coding
Shredding service trigger
The Shredding service is event driven only, not scheduled. It is triggered by the deletion of an object that’s marked for shredding. The delete operation can be invoked by a user or application or by the Garbage Collection service.
For information about the Garbage Collection service, see Garbage collection service.
Compression/Encryption service
The Compression/Encryption service compresses object data so as to make more efficient use of HCP storage space. The space reclaimed by compression can be used to store additional objects.
Depending on the types of objects stored, compression can provide a significant benefit. For example, email objects compress very well, thereby saving a lot of space.
The Compression/Encryption service runs according to the active service schedule. For information about service schedules, see Scheduling services.
Note: The Compression/Encryption service does not compress object data if:
•The data is stored in a namespace that uses a service plan that has S Series storage set as the ingest tier. For information about ingest tiers, see Choose the ingest tier.
•The data is stored on extended storage. For information about compression of that data, see Encryption and compression of objects in storage pools.
Compression/Encryption service processing
When the Compression/Encryption service runs, it checks each object that’s eligible for compression. If the object isn’t already compressed, it compresses it. If compressing the object doesn’t reduce its size (for example, because it’s already in a compressed format), the Compression/Encryption service marks it as uncompressible and doesn’t try to compress it again in future runs.
You control which objects are eligible for compression by setting criteria in the System Management Console. For information about this, see Changing compression settings.
In addition to compressing whole objects, the Compression/Encryption service can compress:
•Parts of multipart objects
•Chunks for erasure-coded objects
•Chunks for erasure-coded parts of multipart objects
•Full copies of the data for objects and parts that are subject to erasure coding before those copies are reduced to chunks
For the purpose of compression:
•HCP treats parts of multipart objects as individual objects. Eligibility for compression is based on the individual part size, not on the size of the object as a whole.
•HCP treats chunks for erasure-coded objects and chunks for erasure-coded parts of multipart objects as individual objects. However, eligibility for compression is based on the size of the whole object or part before erasure coding.
Note: By default, the Compression/Encryption service runs only on primary storage. However, you can configure HCP to run the Compression/Encryption service on extended storage as well. For information about this, see Encryption and compression of objects in storage pools.
If an object, part, or chunk that was not eligible for compression becomes eligible, the Compression/Encryption service compresses it on its next run. Similarly, if a compressed object, part, or chunk loses its eligibility for compression, the Compression/Encryption service decompresses it on its next run.
Understanding compression statistics
The Compression page in the HCP System Management Console displays statistics about the space saved by the Compression/Encryption service. It also lets you control various aspects of compression activity.
Roles: To view the Compression page, you need the monitor or administrator role. To change compression settings, you need the administrator role.
To display the Compression page, in the top-level menu of the System Management Console, select Services ► Compression.
The Compression page shows:
•Total bytes saved by compression — The current number of bytes of storage freed by compressed objects, object parts, and chunks for objects and object parts
•Percent of storage saved — The amount of storage space currently saved by compression, expressed as a percentage of the total space available for storing objects
•Number of objects and object parts compressed — The total number of these items currently compressed: objects, parts of multipart objects, chunks for erasure-coded objects, and chunks for erasure-coded parts of multipart objects
The panel also shows the current compression settings (see Changing compression settings).
Changing compression settings
You can control which objects and object parts HCP compresses based on these properties:
•Age — You can compress only objects and parts that were added to the repository more than some number of days ago.
•Size — You can compress only objects and parts whose content is larger than a specified size. HCP compresses the parts of multipart objects individually based on the size of the part. HCP never compresses objects or parts smaller than seven KB.
•Location — You can exclude from compression objects and parts that are located in a specified directory or in any subdirectories of that directory, recursively.
•Name — You can exclude from compression objects and object parts where the object name matches a pattern you specify. For example, you might choose to exclude objects with names that match *.jpg because the data for this type of object is already highly compressed.
To be eligible for compression, an object or part must meet all the criteria you specify.
Chunks of erasure-coded objects and parts are compressed based on the eligibility of the applicable object or part.
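The eligibility rules above can be sketched as a single check. Parameter names are invented, and `fnmatch` is used as an approximation of HCP's pattern matching; its `*` also crosses directory separators, which gives the recursive exclusion behavior.

```python
import fnmatch

MIN_SIZE_KB = 7   # HCP never compresses objects or parts smaller than 7 KB

def eligible_for_compression(age_days, size_kb, path,
                             min_age_days, min_size_kb, exclusions):
    """Hypothetical sketch: an object or part must meet every configured
    criterion. A zero threshold disables that criterion."""
    if size_kb < MIN_SIZE_KB:
        return False
    if min_age_days and age_days <= min_age_days:    # age criterion
        return False
    if min_size_kb and size_kb <= min_size_kb:       # size criterion
        return False
    # Location/name exclusions in [/directory-path/]object-name-pattern form.
    return not any(fnmatch.fnmatch(path, pat) for pat in exclusions)
```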
Notes:
•The criteria you specify apply across all namespaces.
•HCP always compresses old versions of objects, regardless of age, size, and any specified exclusion criteria.
•HCP does not compress parts of in-progress multipart uploads, parts of a multipart upload that have been replaced, parts of an aborted multipart upload, or unused parts of completed multipart uploads.
How to change compression settings
To change the settings for the Compression/Encryption service, on the Compression page in the System Management Console:
•In the Compression Settings section, configure the settings that you want to use:
oTo compress only objects and parts added to a namespace more than a certain number of days ago, type the number of days in the Compress objects stored more than field. Valid values are integers in the range zero through 40,000.
A value of zero tells the Compression/Encryption service not to use age as a criterion when selecting objects and parts to compress.
oTo compress only objects and parts larger than a certain size, type the size, in KB, in the Compress objects larger than field. Valid values are integers in the range zero through 104,857,600 (100 GB).
A value of zero tells the Compression/Encryption service not to use size as a criterion when selecting objects and parts to compress.
Then click Update Settings.
•To exclude objects and parts from compression based on location or name, specify the criteria for exclusion in the Exclude from Compression list:
oTo add a criterion to the list, type the criterion in the field above the list. Then click Add.
For information about how to specify the criteria in this list, see Exclusion criteria.
oTo remove a criterion from the list, click the delete control for that criterion.
oTo remove all criteria from the list, click Delete All.
Exclusion criteria
You can exclude objects and parts from compression based on location, name, or a combination of the two. Locations are paths relative to the namespace identification plus any protocol-specific identifiers, such as rest for the HTTP protocol or data for the CIFS protocol.
For object names, you can use patterns. The wildcard character for pattern matching is the asterisk (*), which matches any number of characters of any type, including none.
The format for criteria in the exclude list is:
[/directory-path/]object-name-pattern
The initial forward slash (/) is required with a directory path.
Here are some examples:
•Either of these excludes all objects and parts in the corporate/mktg/graphics directory, as well as all objects and parts in all subdirectories of that directory, recursively:
/corporate/mktg/graphics/*
/corporate/mktg/graphics/*.*
•This excludes all objects and parts with names ending in .jpg:
*.jpg
•This excludes all objects and parts that have names ending in .ppt and that are in the /corporate/hr/benefits directory or any of its subdirectories, recursively:
/corporate/hr/benefits/*.ppt
•This excludes all objects and parts that have names matching 21*_*.* (for example, 2198_John_Doe.doc) and that are in the corporate/hr/employees directory or any of its subdirectories, recursively:
/corporate/hr/employees/21*_*.*
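The examples above can be reproduced with an approximate matcher built on `fnmatch`. This is a sketch, not HCP's implementation: `fnmatch`'s `*` matches any characters, including path separators, which yields the recursive subdirectory behavior, and criteria without a leading slash are treated as name patterns matched in any directory.

```python
import fnmatch

def excluded(object_path, criteria):
    """Match an object path against [/directory-path/]object-name-pattern
    exclusion criteria (approximation for illustration)."""
    for criterion in criteria:
        # A leading / anchors the pattern to a directory path; a bare name
        # pattern is matched in any directory.
        pattern = criterion if criterion.startswith("/") else "*/" + criterion
        if fnmatch.fnmatch(object_path, pattern):
            return True
    return False
```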
Duplicate Elimination service
Duplicate elimination is the process of merging the data associated with two or more identical objects. For objects to be identical, their data content must match exactly. By eliminating duplicates, HCP increases the amount of space available for storing additional objects.
For example, if the same document is added to several different directories, duplicate elimination ensures that each copy of the document content that HCP must maintain in the repository is stored in only one location. This saves the space that would have been used by the additional copies of the document.
For the purpose of duplicate elimination, HCP treats these as individual objects:
•Parts of multipart objects
•Chunks for erasure-coded objects
•Chunks for erasure-coded parts of multipart objects
•Full copies of the data for objects and parts that are subject to erasure coding before those copies are reduced to chunks
The Duplicate Elimination service does not merge parts of in-progress multipart uploads, parts of a multipart upload that have been replaced, parts of an aborted multipart upload, or unused parts of completed multipart uploads.
The Duplicate Elimination service runs according to the active service schedule. For information about service schedules, see Scheduling services.
Note: The Duplicate Elimination service does not eliminate duplicate objects stored in namespaces that use service plans that have S Series storage devices set as the ingest tier. For information about ingest tiers, see Choose the ingest tier.
Duplicate Elimination service processing
HCP performs duplicate elimination by first sorting objects, parts, and chunks according to their MD5 hash values. After sorting all the objects, parts, and chunks in the repository, the service checks for objects, parts, and chunks with the same hash value. If the service finds any, it compares the object, part, or chunk content. If the content is the same, the service merges the object, part, or chunk data but still maintains the required number of copies of the data that’s specified in the service plan for the namespace that contains the object, part, or chunk.
The metadata for each merged object, part, or chunk points to the merged object, part, or chunk data. The Duplicate Elimination service never deletes any of the metadata for duplicate objects, parts, or chunks.
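The first phase described above can be sketched as grouping by MD5 and then confirming byte-for-byte equality, since a hash match alone is not proof of identical content. The in-memory object store and names here are invented for illustration.

```python
import hashlib
from collections import defaultdict

def find_duplicates(objects):
    """Hypothetical sketch of duplicate detection: group stored items by
    MD5 hash, then confirm content equality before treating them as
    duplicates. `objects` maps an object ID to its data."""
    by_hash = defaultdict(list)
    for oid, data in objects.items():
        by_hash[hashlib.md5(data).hexdigest()].append(oid)
    groups = []
    for ids in by_hash.values():
        if len(ids) > 1:
            # Content comparison step: check each member against the first.
            first = objects[ids[0]]
            matching = [i for i in ids if objects[i] == first]
            if len(matching) > 1:
                groups.append(matching)
    return groups

store = {"obj-a": b"same content", "obj-b": b"same content", "obj-c": b"other"}
groups = find_duplicates(store)
```

A real implementation would then merge each group's data while still maintaining the number of copies required by the applicable service plans.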
The figure below shows duplicate elimination for two objects with the same content where the DPL is two.
These considerations apply:
•The Duplicate Elimination service does not merge objects, parts, and chunks smaller than seven KB.
•The Duplicate Elimination service does not merge the data for chunks with the data for objects and parts that are not erasure coded.
•If the Duplicate Elimination service merges the data for a whole object that's subject to erasure coding and then merges the data for the applicable chunks after the object is erasure coded, only the merge of the whole object data is included in the duplicate elimination statistics.
•The Duplicate Elimination service does not merge data that’s stored on extended storage.
•For objects, parts, and chunks stored on primary running storage, the Duplicate Elimination service generally merges objects, parts, and chunks from different namespaces only if the namespaces have the same ingest tier DPL.
•For objects, parts, and chunks stored on primary spindown storage, the Duplicate Elimination service generally merges objects, parts, and chunks from different namespaces only if the namespaces have the same primary spindown storage tier DPL.
•For the purpose of duplicate elimination, HCP considers an object, part, or chunk stored on extended storage to have a DPL that’s one less than the ingest tier DPL that’s specified in the service plan for the namespace that contains the object, part, or chunk. So, for example, the Duplicate Elimination service will merge objects, parts, and chunks stored on primary running storage in a namespace that has an ingest tier DPL of 1 with objects stored on extended storage in a namespace that has an ingest tier DPL of 2.
For information about ingest tier DPL, see Ingest tier data protection level.
•The Duplicate Elimination service may bypass merging certain objects until it reprocesses them. This can happen with:
oObjects stored with CIFS or NFS that are still open due to lazy close
oObjects stored with CIFS or NFS that do not immediately have MD5 hash values
For information about lazy close, see Using a Namespace or Using the Default Namespace. For more information about cryptographic hash values, see Content Verification service.
Understanding duplicate elimination statistics
The Duplicate Elimination page in the HCP System Management Console shows statistics about duplicate-eliminated objects, parts, and chunks.
Roles: To view the Duplicate Elimination page, you need the monitor or administrator role.
To display the Duplicate Elimination page, in the top-level menu of the System Management Console, select Services ► Duplicate Elimination.
The Duplicate Elimination page shows:
•Total objects and object parts merged — The total number of these items for which data was merged since HCP was installed: objects, parts of multipart objects, chunks for erasure-coded objects, and chunks for erasure-coded parts of multipart objects.
•Total bytes saved from duplicate elimination — The total number of bytes of storage freed due to duplicate elimination since HCP was installed.
The amount of storage freed when duplicates are merged is the size of the data, times one less than the number of objects, parts, and chunks merged, times the total number of copies that HCP needs to maintain on primary storage to comply with the ingest tier DPL and primary spindown storage DPL (if applicable) specified in the applicable service plans and to satisfy all protection set requirements.
HCP increases both of these numbers when duplicate data is deleted but does not subtract from these numbers when duplicate-eliminated objects are deleted from the repository.
Disposition service
The Disposition service automatically deletes expired objects. An object is expired if either of these is true:
•The object has a retention setting that’s a specific date and time, and that date and time is in the past.
•The object has a retention setting that’s a retention class, and the date and time calculated from the duration specified by the retention class is in the past. In this case, the Disposition service deletes the object only if the retention class has disposition enabled.
The Disposition service deletes only the current version of a versioned object. It does not delete old versions.
The Disposition service is enabled or disabled both at the HCP system level and on a per-namespace basis. Enabling disposition for a namespace has no effect if the service is disabled at the HCP system level.
By default, when the HCP system is first installed, the Disposition service is disabled at the system level.
The Disposition service runs according to the active service schedule. When the service runs, it checks each object to see whether the object is expired. If the object is expired, the service checks whether disposition is enabled for the namespace that includes the object.
If an object is expired and in a namespace with disposition enabled, the service hides the object data and metadata and marks the object for deletion. The Garbage Collection service then deletes the object through its normal processing. When applicable, the deletion triggers the Shredding service.
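The expiration test described above can be sketched as follows. This is a hedged sketch: the object and retention-class structures are illustrative placeholders, not HCP internals.

```python
from datetime import datetime, timedelta

def is_expired(obj, retention_classes, now):
    """True if the object's retention date/time is in the past.

    For a direct date/time setting, compare it to now. For a retention
    class, the expiration is calculated from the class duration, and
    the Disposition service acts only if the class has disposition
    enabled."""
    kind, value = obj["retention"]
    if kind == "datetime":
        return value < now
    if kind == "class":
        cls = retention_classes[value]
        expires = obj["ingest_time"] + cls["duration"]
        return cls["disposition_enabled"] and expires < now
    return False
```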
For information about:
•Retention settings, see Retention policy
•Retention classes, see Managing a Tenant and Its Namespaces or Managing the Default Tenant and Namespace
•The service schedule, see Scheduling services
•Shredding service, see Shredding service
Garbage collection service
The Garbage Collection service ensures that HCP storage doesn’t fill up with data that’s no longer needed.
The Garbage Collection service runs according to the active service schedule. For information about service schedules, see Scheduling services.
Garbage collection service processing
The Garbage Collection service performs several different functions, including object deletions and transaction log cleanup.
Object deletions
Object deletions happen like this:
•When a client or the Disposition service deletes an object, HCP hides the object data and metadata, marks the object for deletion, and if possible, immediately deletes it.
•When a client purges an object, HCP hides the data and metadata for all versions of the object, marks them all for deletion, and if possible, immediately deletes them all.
•When HCP prunes a version of an object, HCP hides the data and metadata for that version, marks the version for deletion, and if possible, immediately deletes it.
•When a client replaces a part during a multipart upload, HCP hides the replaced part and marks the part for deletion.
•When a client aborts a multipart upload, HCP hides the parts of the multipart upload that have already been written and marks those parts for deletion.
•When a client completes a multipart upload, HCP hides any parts that were written for the multipart upload but not included in the completion and marks those parts for deletion.
•When the Garbage Collection service runs:
oIt looks for hidden objects and parts. If it finds such objects or parts marked for deletion, it deletes them.
oIt looks for objects and parts left by failed writes through the HTTP, WebDAV, and SMTP protocols. If it finds such objects or parts, it deletes them.
oIt looks for multipart uploads that should be automatically aborted. If it finds such a multipart upload, the Garbage Collection service hides the parts of the multipart upload that have already been written, marks those parts for deletion, and, on a subsequent run, deletes them.
For information about multipart uploads, see
In all cases, when applicable, deletion triggers the Shredding service.
Note: If an object or part has been erasure coded, the Garbage Collection service works on the applicable chunk in the same way the service works on objects and parts that are not erasure coded.
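The hide/mark/collect flow described above can be sketched in two phases. The in-memory list below stands in for the repository and is purely illustrative.

```python
def mark_for_deletion(item):
    """Hide the item's data and metadata and mark it for deletion."""
    item["hidden"] = True
    item["marked_for_deletion"] = True

def garbage_collection_run(repository):
    """Delete every item that's marked for deletion and return the
    deleted items (whose removal may then trigger shredding, if
    applicable)."""
    deleted = [i for i in repository if i.get("marked_for_deletion")]
    repository[:] = [i for i in repository if not i.get("marked_for_deletion")]
    return deleted
```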
Transaction log cleanup
HCP maintains a transaction log of all create, delete, purge, prune, and disposition operations performed on objects. HCP uses this log to respond to operation-based queries issued through the metadata query API.
HCP adds and deletes records in the transaction log as follows:
•When a client creates an object, HCP adds a creation record to the log.
•When a client deletes an object from a namespace that has versioning enabled without specifying a version to be deleted, HCP adds a deletion record to the log but does not delete the creation record.
•When a client deletes a specified version of an object from a namespace that has versioning enabled, HCP deletes the applicable creation record from the log and adds a deletion record.
•When a client deletes an object from a namespace that does not have versioning enabled, HCP deletes the applicable creation record from the log and adds a deletion record.
•When a client purges an object, HCP deletes all the creation and deletion records for all versions of the object from the log and adds a purge record for the most recent version.
•When HCP prunes a version of an object, it deletes the applicable creation record from the log and adds a prune record.
•When the Disposition service deletes an object, HCP deletes the applicable creation record from the log and adds a disposition record.
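The record bookkeeping above can be sketched for the two delete cases. This is a hedged sketch: a real transaction log is persistent, and the tuples here are illustrative only.

```python
def record_delete(log, obj_id, versioned):
    """Versioned delete with no version specified: keep the creation
    record and add a deletion record. Unversioned delete: remove the
    creation record, then add the deletion record."""
    if not versioned:
        log[:] = [r for r in log if r != ("creation", obj_id)]
    log.append(("deletion", obj_id))
```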
Deletion, purge, prune, and disposition records contain only object metadata. You can configure the Garbage Collection service to delete these records after a specified amount of time. If you do this, each time the service runs, it checks the log for records that are eligible to be deleted and, if it finds any, deletes them.
If you don’t configure the Garbage Collection service to delete deletion, purge, prune, and disposition records from the transaction log, they remain in the log indefinitely.
For any given namespace, the applicable tenant administrator can choose whether HCP should keep records of delete, purge, prune, and disposition operations if the namespace has ever had versioning enabled. If the tenant administrator chooses not to keep these records, they are immediately eligible to be deleted from the log regardless of the Garbage Collection service configuration.
While the transaction log contains any deletion, purge, prune, or disposition records for a namespace, the namespace cannot be deleted. If a tenant administrator cannot delete an apparently empty namespace, a possible reason is that the transaction log contains one or more of these records. In this case, have the tenant administrator disable the option to keep these records for that namespace.
Note: A namespace with versioning enabled can be deselected from replication while the owning tenant is included in an active/active replication link. In this situation, deletion, purge, prune, and disposition records for objects in the namespace are not deleted from the transaction log, regardless of the Garbage Collection service configuration, unless the namespace option to keep those records is disabled.
Other Garbage Collection service functions
In addition to the functions described in Object deletions, the Garbage Collection service:
•Deletes data and metadata left in the repository by unsuccessful or interrupted write operations.
•Deletes extra copies of objects, parts, and chunks that are marked for deletion. For example, the following series of events could occur:
1.A logical volume fails on primary running storage.
2.The Protection service detects the failed volume and creates a new copy of each object, part, and chunk stored on that volume.
3.The volume comes back online, so the extra object, part, and chunk copies that the Protection service created are no longer needed.
4.The Protection service finds the extra copies that it created and marks them for deletion.
5.The Garbage Collection service detects the object, part, and chunk copies marked for deletion, verifies that they are extra copies, and deletes them.
In all cases, when applicable, the deletion of an object, part, or chunk triggers the Shredding service. For information about the Shredding service, see Shredding service.
Configuring the Garbage Collection service
The Garbage Collection page in the HCP System Management Console lets you set the length of time to keep deletion, purge, prune, and disposition records in the transaction log.
Roles: To view the Garbage Collection page, you need the monitor or administrator role. To configure the Garbage Collection service, you need the administrator role.
To display the Garbage Collection page, in the top-level menu of the System Management Console, select Services ► Garbage Collection.
To configure the Garbage Collection service, on the Garbage Collection page:
1.Take one of these actions:
oTo delete deletion, purge, prune, and disposition records from the transaction log after a set period of time:
–Select Keep deletion records in the transaction log for.
–In the days field, type the number of days you want these records to remain in the transaction log. Valid values are integers in the range zero through 999. Zero means delete the records immediately.
oTo keep deletion, purge, prune, and disposition records in the transaction log indefinitely, select Keep deletion records in the transaction log forever.
By default, the Garbage Collection service is configured to delete deletion, purge, prune, and disposition records from the transaction log after 90 days.
2.Click Update Settings.
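The constraint on the days field can be captured in a small validation sketch. The function is hypothetical; it simply enforces the documented range.

```python
def validate_retention_days(value):
    """Accept only integers 0 through 999 for transaction-log record
    retention; 0 means eligible records are deleted immediately."""
    if not isinstance(value, int) or not 0 <= value <= 999:
        raise ValueError("days must be an integer from 0 through 999")
    return value
```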
Capacity Balancing service
The Capacity Balancing service ensures that the percent of HCP storage space used on the storage nodes in the system remains roughly equivalent across the nodes when new nodes are added.
When the Capacity Balancing service runs, it evaluates the storage level for each node without regard to the individual logical volumes the node manages (the amounts of available storage may vary greatly among those volumes). If the storage levels for the nodes differ by a wide margin, the service moves objects around to bring the levels closer to a balanced state.
For the purpose of capacity balancing, HCP treats these as individual objects:
•Parts of multipart objects
•Parts of in-progress multipart uploads
•Chunks for erasure-coded objects
•Chunks for erasure-coded parts of multipart objects
The Capacity Balancing service runs only when started manually. Typically, an authorized HCP service provider starts this service after adding new storage nodes to the system.
Roles: To run the Capacity Balancing service, you need the service role.
Note: The Capacity Balancing service does not balance the storage space across nodes for data stored in namespaces that use service plans that have S Series storage set as the ingest tier. For information about ingest tiers, see Choose the ingest tier.
Capacity Balancing service processing
The Capacity Balancing service has two main functions: detecting imbalances in storage availability across nodes and repairing those imbalances.
Detecting capacity imbalances
To detect imbalances in storage usage, the Capacity Balancing service compares node storage usage statistics.
Repairing capacity imbalances
If the Capacity Balancing service determines that storage usage is imbalanced across nodes:
1.The service determines whether the storage managed by each node is a source of objects to move or a target to move them to.
2.From the storage for each source node, the service moves objects one at a time to storage managed by a target node as long as these conditions apply:
oThe percent of space that’s free on the source node is less than or equal to the average percent of free space on all the nodes in the system.
oThe percent of space that’s free on the target node is greater than the average percent of free space on all the nodes in the system.
oThe storage managed by the target node doesn’t have a copy of the object to be moved.
When selecting objects to move, the Capacity Balancing service considers the size not only of the object data but also of any custom metadata the object includes.
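The source/target rules above can be sketched as a simple classification over per-node free-space percentages. The mapping structure is hypothetical, not an HCP data model.

```python
def classify_nodes(free_pct):
    """Split nodes into sources and targets for capacity balancing.

    Sources have free space at or below the system average; targets
    have free space above the average. free_pct maps a node name to
    its percent of free space (illustrative structure)."""
    avg = sum(free_pct.values()) / len(free_pct)
    sources = [n for n, p in free_pct.items() if p <= avg]
    targets = [n for n, p in free_pct.items() if p > avg]
    return sources, targets
```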
Maintaining capacity balance
HCP is unlikely ever to be in a perfectly balanced state. Two factors contribute to this:
•Additions and deletions of objects to and from the system do not trigger Capacity Balancing service runs.
•When all the objects in a directory have been deleted, the empty directory remains in the namespace. Directories in the default namespace, whether empty or not, have metadata, which takes up space.
Storage Tiering service
Each namespace has a service plan that defines both a storage tiering strategy and a data protection strategy for the objects in that namespace. At any given point in the lifecycle of an object, its storage tiering strategy specifies the types of storage on which copies of that object must be stored, and its data protection strategy specifies the number of object copies that must be stored on each type of storage.
The Storage Tiering service performs these functions according to rules specified in service plans:
•Moving copies of the objects in a given namespace among all of the storage tiers that are defined for that namespace by its service plan (see Moving copies of objects among storage tiers)
•Creating and deleting copies of objects in a given namespace on each storage tier that’s defined for that namespace to ensure that each tier always contains the correct number of copies of each object (see Maintaining the correct number of object copies on each tier)
•Changing objects stored on primary running storage to be metadata-only or restoring data to metadata-only objects (see Making objects metadata-only)
For information about service plans, see Working with service plans.
For the purpose of storage tiering, HCP treats these as individual objects:
•Parts of multipart objects. These parts are tiered based on the time of completion of the multipart upload that created the object.
•Chunks for erasure-coded objects. These chunks are tiered based on the object ingest time.
•Chunks for erasure-coded parts of multipart objects. These chunks are tiered based on the time of completion of the multipart upload that created the object.
The Storage Tiering service does not tier parts of in-progress multipart uploads.
The Storage Tiering service tiers full copies of the data for objects and parts that are subject to erasure coding if the target storage tier is primary running storage, S Series storage, or primary spindown storage. The service does not tier full copies of the data for objects and parts that are subject to erasure coding if the target storage tier is extended storage.
The Storage Tiering service can move objects and parts that are subject to erasure coding to a metadata-only storage tier before they are due to be reduced to chunks, provided that a full copy of the object or part data exists on at least one other system. While on the metadata-only tier, the object or part has metadata but no data.
If an object or part on a metadata-only storage tier is due to be reduced to a chunk, the Geo-distributed Erasure-Coding service gets the applicable chunk from another system. That service then removes the object or part from the metadata-only tier and stores the chunk for the object or part on the previous tier, as specified by the applicable service plan.
The Storage Tiering service does not move chunks for erasure-coded objects and parts to metadata-only storage tiers.
The Storage Tiering service runs according to the active service schedule. For information about service schedules, see Scheduling services.
Important: HCP S Series Nodes run the risk of reaching maximum storage capacity. Objects do not tier to S Series Nodes that are full.
Moving copies of objects among storage tiers
One of the functions of the Storage Tiering service is to move copies of objects in a namespace among storage tiers that are defined for that namespace by its service plan. The tier on which HCP initially stores every object in the namespace is called the ingest tier. The ingest tier must be either primary running storage or S Series storage.
For each storage tier, including the ingest tier, the service plan for a given namespace specifies:
•The storage pools that are used to store copies of each object on the tier. Each storage pool consists of one or more storage components. Each storage component represents a type of primary storage (running or spindown), S Series storage, an extended storage device, or a cloud storage service endpoint.
•For each object that’s stored on the tier, the number of copies of the object data that HCP must maintain on each storage pool and the number of copies of object metadata that HCP must maintain on the ingest tier.
•The transition criteria for each tier except for the ingest tier. The transition criteria for a storage tier are the rules that determine when one or more copies of each object in the namespace must be stored on the tier:
oThe object age (number of days since ingest) at which one or more copies of the object data must be moved from the previous tier onto this tier
oFor service plans that define exactly two tiers, including the ingest tier, whether a threshold will be applied to the second tier, and if so, the percentage of ingest tier storage capacity that must be used (the threshold) before object data can be moved to the second storage tier
oFor HCP systems with replication enabled, whether objects must be fully replicated before they can be transitioned from the previous tier onto this tier. If replication is disabled for the HCP system, this transition criterion does not appear in the HCP System Management Console.
•For a namespace that’s currently being replicated to another system, whether the copies of the object that are stored on the tier are to be made metadata-only.
Regardless of the transition criteria that are specified for a metadata-only tier, objects are moved to such a tier only after they are replicated. When a replicated object is moved to a metadata-only tier, all existing copies of the object data are deleted from the previous tier and from primary running storage, and the specified number of copies of the object metadata are stored on primary running storage.
•Whether the data for each object stored on the tier is rehydrated (that is, restored on the ingest tier) upon being read from the tier, and if so, the number of days HCP is required to keep a rehydrated copy of object data on the ingest tier.
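The age and replication criteria above can be sketched as a transition test. The field names are hypothetical stand-ins for service-plan settings, not HCP identifiers.

```python
def ready_to_transition(obj_age_days, fully_replicated, tier):
    """True when an object may move from the previous tier onto this
    tier: the object must have reached the tier's age criterion, and,
    if the tier requires it, the object must be fully replicated."""
    if obj_age_days < tier["min_age_days"]:
        return False
    if tier.get("require_full_replication") and not fully_replicated:
        return False
    return True
```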
If the service plan for a given namespace defines multiple storage tiers, then for each object in that namespace, the Storage Tiering service:
•Moves copies of the object data among the storage tiers that are defined for the namespace to satisfy the transition criteria that are defined for each storage tier.
•Upon moving all existing copies of the data for an object from one tier to another:
oIf the new tier has a different DPL from the previous tier, creates or deletes the number of copies of object data that’s required to satisfy the DPL setting for the new tier
oIf the new tier has a different primary running storage metadata protection level (MPL) from the previous tier, creates or deletes the number of copies of object metadata that’s required to satisfy the MPL setting for the new tier
•Upon moving a replicated object to a metadata-only tier, deletes all copies of the object data from the previous tier, and if the previous tier is not the ingest tier, deletes any copies of the object data that exist on primary running storage.
•Checks to see whether the object data has been read from a storage tier for which rehydration is enabled, and if so, creates an extra copy of the object data on the ingest tier.
•After moving a replicated object to a metadata-only tier for which rehydration is enabled and making that object metadata-only, checks to see whether that object has been read from a remote system, and if so, restores the data to each copy of the object that’s stored on the ingest tier.
For information about creating and configuring service plans and assigning each plan to a namespace, see Working with service plans.
Maintaining the correct number of object copies on each tier
Another function of the Storage Tiering service is to maintain the correct number of copies of each object in a namespace on each storage tier that’s defined for that namespace by its service plan.
If the number of object copies on a storage tier is less than the number of object copies specified for that tier in the applicable service plan, the Storage Tiering service creates the applicable number of new copies of that object on that tier. If the number of copies of an object on a storage tier is higher than the number of object copies specified for that tier in the applicable service plan, the Storage Tiering service deletes all unnecessary copies of that object from that tier.
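The reconciliation rule above reduces to a simple comparison, sketched below. The (action, count) return value is illustrative, not an HCP API.

```python
def reconcile_copies(actual, required):
    """Fewer copies than the service plan requires: create the
    difference. More than required: delete the extras. Otherwise,
    take no action."""
    if actual < required:
        return ("create", required - actual)
    if actual > required:
        return ("delete", actual - required)
    return ("ok", 0)
```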
Differences between the Storage Tiering service and the Protection service
The Protection service performs work that is nearly identical to the work performed by the Storage Tiering service to maintain the correct number of copies of object data and metadata on each storage tier that’s defined for a namespace. However, the two services perform this work in slightly different ways.
The Storage Tiering service runs only when it’s scheduled to run. When the Storage Tiering service processes an object in a given namespace, the Storage Tiering service first checks to see whether copies of the object data are stored on the correct storage tier and moves the object data among tiers if necessary. The Storage Tiering service then checks to see whether the correct number of object copies exists on each tier that’s defined for the namespace and takes corrective action if necessary.
The Protection service runs when it’s scheduled to run and in response to its triggers (see Protection service triggers). When the Protection service processes an object in a given namespace, it checks whether the correct number of copies of the object exists on all storage tiers. If not, the service first checks whether the correct number of object copies exists on the active storage tier (the one on which the object is currently supposed to be stored) and takes corrective action if necessary. The Protection service then checks whether the correct number of object copies exists on the other storage tiers and takes corrective action if necessary.
The Storage Tiering service is designed to optimize storage utilization. The Storage Tiering service, therefore, first moves objects among storage tiers and then checks to make sure all copies of each object in a given namespace have been stored on the correct storage tiers.
The Protection service is designed to optimize data availability and maintain the correct level of data redundancy for each object in a given namespace. The Protection service, therefore, constantly checks to see whether the correct number of copies of the object data are available to clients, and takes corrective action as soon as a violation occurs. When the Protection service runs on a schedule, it checks the availability of each object on the active storage tier first, and then checks whether the correct number of objects copies exists on the other tiers.
Making objects metadata-only
The third function of the Storage Tiering service is to delete all existing copies of the data for any object that’s moved to a metadata-only storage tier and ensure that the correct number of copies of the metadata for that object are stored on primary running storage.
The Storage Tiering service also restores data for metadata-only objects to the ingest tier. Restoring the data for an object to the ingest tier is called rehydrating the object.
When the Storage Tiering service moves an object off the ingest tier and onto another storage tier, the service removes all copies of the object data from the ingest tier and stores the specified number of copies of the object data on the new storage tier. However, at least one copy of the object metadata must always remain on primary running storage. For each storage tier that’s defined for a given namespace, the service plan specifies the number of copies of object data that must be stored on the tier and the number of copies of object metadata that must be stored on primary running storage.
If a given namespace is being replicated to another system, you can configure the service plan for that namespace to define a metadata-only storage tier. This type of tier specifies the number of copies of object metadata that must be stored on primary running storage, but it also specifies that no copies of the object data can be stored on any storage tier, including the ingest tier.
For a multipart object on a metadata-only storage tier, the applicable number of copies of the metadata for each part must be stored on primary running storage, and no copies of the part data can be stored on any storage tier.
The Storage Tiering service makes objects metadata-only only when all of these conditions are true:
•The service plan for the namespace that contains the object defines a metadata-only storage tier.
•The object is on the storage tier that immediately precedes the metadata-only tier defined in the namespace service plan, and the object meets the transition criteria specified for the metadata-only storage tier.
•A copy of the object data exists on at least one other HCP system in the replication topology in which the current system participates. (This is possible because service plans with the same name can have different definitions on different systems.)
When all of these conditions are true, the Storage Tiering service deletes all copies of the object data from the preceding storage tier. If the preceding storage tier is not primary running storage, the Storage Tiering service also deletes any copies of the object data that exist on primary running storage. After deleting all copies of the object data, the Storage Tiering service creates or deletes copies of the object metadata on primary running storage as necessary so that the number of copies of object metadata match the number of copies that the service plan requires for the metadata-only tier.
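The three conditions above can be sketched as a single test. Every field name below is an illustrative placeholder, not an HCP internal.

```python
def can_make_metadata_only(obj, plan, remote_data_copies):
    """All conditions must hold before the Storage Tiering service
    makes an object metadata-only: the service plan defines a
    metadata-only tier, the object sits on the tier immediately before
    it and meets that tier's transition criteria, and at least one
    full data copy exists on another system in the topology."""
    return (
        plan.get("metadata_only_tier") is not None
        and obj["tier"] == plan["tier_before_metadata_only"]
        and obj["meets_transition_criteria"]
        and remote_data_copies >= 1
    )
```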
If rehydration is enabled for a metadata-only storage tier, when rehydrating a replicated object that’s been read from primary running storage on a remote system, the Storage Tiering service rehydrates all copies of the object on the ingest tier on the local system.
When replicating an object in a namespace to a system on which objects in that namespace can be made metadata-only, HCP replicates only the object metadata if the object is larger than one MB. If the object is smaller than one MB, HCP replicates both the data and metadata.
Here’s a scenario that shows how allowing metadata-only objects can be used to advantage:
You have a many-to-one replication topology in which the HCP systems at the outlying sites are much smaller than the central HCP system to which they all replicate. To optimize the use of storage on the outlying systems, you allow the namespaces on those systems to have metadata-only objects while requiring the central system to have the object data. The outlying systems respond to client requests for object data by reading the data from the central system.
In this scenario, the replication topology should include a disaster recovery system (that is, a replica of the central system) to protect against data loss in case of a catastrophic failure of the central system.
Important: HCP does not prevent you from removing a namespace from a replication topology even if the namespace contains metadata-only objects on one or more systems in that topology. This can result in data for objects in that namespace being permanently inaccessible from those systems. In most cases, HCP warns you if the modification you’re making to a replication link would cause this condition to occur.
Note: For the HDDS search facility to index the data for metadata-only objects, the objects must be rehydrated.
For more information about replication, see Replicating Tenants and Namespaces.
Storage Tiering service processing
The Storage Tiering service processes one object at a time. For each object, the service checks the applicable service plan to determine the storage tiers on which copies of the object data should be stored, the number of copies of the object data that should be stored on each tier, and the number of copies of the object metadata that should be stored on primary running storage. The Storage Tiering service then checks to see whether the object data has been read from a storage tier for which rehydration is enabled. Finally, the Storage Tiering service checks to see whether the object data has been read from a remote system because that object is metadata-only on the local system, and if so, the service checks to see whether rehydration is enabled for the metadata-only tier on which the object resides.
For each object in a namespace, if all of these conditions are true, the Storage Tiering service takes no action on that object:
•The object is stored on the correct storage tier.
•The correct number of copies of the object data exist on the current storage tier.
•The correct number of copies of the object metadata exist on primary running storage.
•If the object is on a storage tier for which rehydration is enabled, the correct number of rehydrated copies of the object exist on the ingest tier.
If one or more of the above conditions is not true, the Storage Tiering service takes the applicable actions to bring the object into compliance with the namespace service plan, as described in Moving copies of objects among storage tiers.
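The four no-action conditions above can be sketched as a compliance check. The dictionaries are hypothetical stand-ins for the object state and the service-plan rules.

```python
def in_compliance(obj, plan):
    """True when the Storage Tiering service needs to take no action:
    correct tier, correct data-copy count on that tier, correct
    metadata-copy count on primary running storage, and, when
    rehydration is enabled, the correct rehydrated-copy count on the
    ingest tier."""
    checks = [
        obj["tier"] == plan["expected_tier"],
        obj["data_copies"] == plan["required_data_copies"],
        obj["metadata_copies"] == plan["required_metadata_copies"],
    ]
    if plan.get("rehydration_enabled"):
        checks.append(obj["rehydrated_copies"] == plan["required_rehydrated_copies"])
    return all(checks)
```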
Understanding storage tiering statistics
The Storage page in the HCP System Management Console displays graphs and statistics that provide information about the use of primary running storage, primary spindown storage, and each type of extended storage that’s used to store objects in a repository. The Storage page also provides information about metadata-only objects.
Roles: To view the Storage page, you need the monitor or administrator role. To modify the configuration of extended storage or to create, modify, or delete service plans, you need the administrator role.
To display the Storage page, in the System Management Console, click Storage.
For information about using the Storage page to view storage usage statistics and to view metadata-only object creation and storage usage statistics, see Monitoring storage pools and components.
Migration service
The Migration service migrates data off selected storage nodes in either an HCP RAIN or SAIN system or off selected storage arrays in an HCP SAIN system in preparation for retiring those devices. During a data migration, the service copies objects and, if applicable, the metadata query engine index from the selected devices to free storage on the remaining devices. Before you start a data migration, you need to ensure that those devices have enough unused capacity to hold the data to be migrated.
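The capacity precondition above can be sketched as a simple check. This is a rough illustration under stated assumptions; HCP's actual admission logic is internal and may differ, and the `headroom` safety margin is a hypothetical parameter.

```python
def can_start_migration(used_on_retiring_bytes: int,
                        free_on_remaining_bytes: int,
                        headroom: float = 0.0) -> bool:
    """Pre-check: do the remaining devices have enough free space to hold
    the data being migrated off the retiring devices?

    `headroom` is a hypothetical safety margin (e.g. 0.1 keeps 10% spare).
    """
    needed = used_on_retiring_bytes * (1 + headroom)
    return free_on_remaining_bytes >= needed
```

For example, migrating 100 GB off retiring devices requires at least 100 GB free on the remaining devices (more if a safety margin is applied).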
After copying an object, the service deletes it from the source device. Once the migration is complete, you can submit a request to your authorized HCP service provider to finalize the migration and remove the retired devices from the system.
For the purpose of data migration, HCP treats these as individual objects:
•Parts of multipart objects
•Parts of in-progress multipart uploads
•Chunks for erasure-coded objects
•Chunks for erasure-coded parts of multipart objects
Important: After a data migration off a storage node in an HCP system is finalized, the system can never again include a node with the same fourth octet in its back-end IP address as that node had.
The Migration service runs only when you explicitly start a data migration. When the migration is complete, the service stops automatically.
When you start a data migration, the selected nodes or storage arrays automatically become read-only (except for allowing the Migration service to delete objects). After the migration is complete, they remain read-only.
When you start a migration of data off selected nodes in an HCP system, HCP automatically removes any NFS volumes from those nodes and associates those volumes with other nodes in the system.
Typically, before starting a data migration off nodes, you submit a request to your authorized HCP service provider to add new nodes to the HCP system in order to maintain (or increase) the system storage capacity. However, if the nodes not selected for migration have sufficient free space to accommodate all the data to be migrated, adding new nodes before the data migration is not required.
For a SAIN system, before starting a data migration off storage arrays, your SAN storage administrator, working in conjunction with your authorized HCP service provider, needs to add logical volumes (LUNs) from new or existing storage arrays to any nodes on which all the existing LUNs on all the existing arrays are being retired. Migrated data, however, can be written to any node, and does not necessarily have to be written to the same node from which the data is being migrated.
The HCP system cannot be upgraded while a data migration is in progress. Before the system can be upgraded, you need to either allow the migration to finish or cancel the migration. If you cancel the migration, you can configure a new migration of data off the same devices after the system is upgraded.
Important: To prevent data loss in namespaces that are not being replicated and that have service plans that set the ingest tier DPL to 1, always migrate data off a device before submitting a request to your authorized HCP service provider to remove the device from the HCP system.
Considerations for migrations on RAIN systems
Using the Migration service to retire nodes in a RAIN system entails removing nodes from the system and, optionally, adding new nodes. After any new nodes are added to the HCP system but before you begin the data migration, you need to:
•For each combination of domain and network configured in the DNS, remove the IP addresses of the nodes being retired and add the IP addresses of any new nodes
•For each replication link that identifies the HCP system by its IP addresses, remove from the link configuration the IP addresses of the nodes that are being retired and add the IP addresses of any new nodes
Target storage requirements for SAIN systems
The information in this section is intended for your SAN storage administrator. It outlines storage requirements that, if not met, prevent a data migration from being started.
Each node in an HCP SAIN system must have one OS LUN and at least two data LUNs. If a LUN being migrated is the OS LUN for a node, a replacement for that LUN must be added to the node before the data migration can occur. If the existing LUN is number zero, the new LUN must be number 128. If the existing LUN is number 128, the new LUN must be number zero. Additionally, the new LUN must have a capacity of at least 30 GB.
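The OS LUN replacement rules above can be expressed as a short validation sketch. The function name is hypothetical; the rules themselves (the 0/128 number swap and the 30 GB minimum) come directly from the text.

```python
def valid_os_lun_replacement(existing_lun: int, new_lun: int,
                             new_capacity_gb: float) -> bool:
    """Check the OS-LUN replacement rules stated above: LUN 0 must be
    replaced by LUN 128 (and vice versa), and the replacement LUN must
    have a capacity of at least 30 GB."""
    if new_capacity_gb < 30:
        return False
    return (existing_lun, new_lun) in {(0, 128), (128, 0)}
```

A SAN storage administrator could run a check like this against the planned LUN layout before the migration is configured.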
Migration procedure
The complete procedure for retiring a device is:
1.Take one of these actions:
oOptionally, for a RAIN system, submit a request to your authorized HCP service provider to add one or more storage nodes to the system. After a data migration is finalized, the HCP system must still have at least four storage nodes.
oFor a SAIN system, either submit a request to your SAN storage administrator and your authorized service provider to add one or more storage nodes to the system or work together with your authorized service provider to add LUNs to the nodes on which all of the existing LUNs are on the storage arrays that you’re retiring.
2.For a RAIN system, update the DNS and any replication links as needed. For more information about this, see Considerations for migrations on RAIN systems.
3.Configure the data migration by selecting the devices to be retired. HCP can perform only one data migration at a time. Therefore, you should select all of the devices that you want to retire so that you don’t have to run multiple sequential data migrations.
Note: Certain hardware errors, such as a degraded RAID group on a source or target node, prevent you from configuring a data migration. In such cases, you need to fix the problem before you can continue.
4.Review the configuration of the data migration.
If the migration configuration is not acceptable, HCP provides detailed information about the problems.
5.Submit requests to your authorized HCP service provider and/or your SAN administrator (if you’re migrating storage off a SAIN system), as necessary, to fix any reported problems.
6.Optionally, enter a description for the data migration and/or change the performance level for the Migration service.
7.Ensure that all of the nodes in the HCP system are running and healthy.
8.Start the data migration.
If any nodes become unavailable while the Migration service is running, the service stops migrating data. When those nodes become available, the service automatically starts migrating data again.
9.Monitor the data migration and manage it by changing the performance level or pausing the migration, as needed. You can also modify the migration description at any time (for example, to record when and how long the migration was paused).
10.When the data migration is complete (that is, the migration status is Migrated):
oIf a migration report is available, review it. This report identifies tenants that own namespaces containing unacknowledged irreparable objects. For the default tenant and for HCP tenants that are configured to allow system-level users to manage them, the report also lists the unacknowledged irreparable objects in those namespaces.
Note: If the Migration service encounters one or more objects that it cannot migrate, it marks those objects as irreparable (if they weren’t already marked that way).
oIf the data migration statistics show that not all objects were migrated, contact your authorized service provider for help.
Migration page
The Migration page in the HCP System Management Console lets you configure, monitor, and manage data migrations.
Roles: To monitor data migrations, you need the monitor or administrator role. To configure and manage data migrations, you need the administrator role.
To display the Migration page, in the top-level menu of the System Management Console, select Services ► Migration.
Note: You can also perform a migration using the Retire Primary Storage wizard, which walks you through the data migration process that’s outlined in this chapter. You can access this wizard from the Retirement panel on the Storage page in the System Management Console. For information about using this wizard, see Retiring primary storage devices.
Configuring a data migration
When configuring a data migration on a RAIN system, you select nodes to be retired. When configuring a data migration on a SAIN system, you select either nodes or storage arrays for retirement.
Configuring a migration on a RAIN system
To configure a data migration on a RAIN system:
1.On the left side of the Migration page, click Configuration.
The Configuration panel displays step 1 of the migration configuration (Choose items for migration). The Select Hardware for Retirement section lists the storage nodes in the HCP system.
2.Select the nodes from which you want to migrate the data.
To clear your selections and start over, click Cancel.
3.Click Next.
The Configuration panel displays step two of the migration configuration (Review configuration summary and confirm). The Configuration Summary section in this panel indicates whether the migration configuration is acceptable.
Note: When you click Next, HCP checks that the system is in a valid state to perform the migration. This includes checking for degraded RAID groups. This check can take up to 90 seconds.
If the configuration is not acceptable, you can click view details in the Configuration Summary section to display the specific reasons why. You can also click Configuration Report to download the configuration summary and details to a file. The default name for this file is Configuration-Report.txt.
The Configuration Details section in the step-two panel lists the nodes selected for migration:
oTo change the migration configuration, click Modify Configuration. The Configuration panel redisplays step 1, which shows your current selections.
oTo restart the migration configuration, click Cancel. The Configuration panel redisplays step 1 with all selections cleared.
4.Optionally, add a description of the data migration and/or change the performance level for the Migration service:
oTo add a description:
a.Click Add description.
b.In the text box that opens, type a description of the migration. This text can be up to 1,024 characters long and can contain any valid UTF-8 characters, including white space.
oTo change the performance level, in the Performance Level field, select Low, Medium, or High. The higher the performance level, the greater the load on the HCP system.
5.Click Start Migration.
The Migration service begins preparing for the data migration, and the Migration page switches to the Overview panel.
Configuring a migration on a SAIN system
To configure a data migration on a SAIN system:
1.On the left side of the Migration page, click Configuration.
The Configuration panel displays step 1 of the migration configuration (Choose items for migration). From this panel, you can choose to retire either storage arrays or hardware nodes.
2.Take one of these actions:
oRetire Storage Arrays:
1.Select Retire Entire Array.
The Select Hardware for Retirement section lists the storage arrays used by the HCP system. Each array is assigned a number, starting from zero. Below this list, the section shows the number of LUNs currently selected for migration out of the total number of LUNs for each node.
To view additional details about the LUNs, click the row for the node you’re interested in, or click expand all to see details about the LUNs on all nodes. After displaying details for all the LUNs, you can click collapse all to hide them.
The details shown for each LUN are:
•The number of the array the LUN comes from.
•The LUN number.
•The worldwide identification number (WWID) for the LUN.
•The type of LUN (OS, data, or standby). Standby means that the LUN mapping provides zero-copy-failover support for a data LUN on a different node. For more information about this, see Zero-copy failover behavior.
•The LUN size.
2.Select the storage arrays from which you want to migrate the data.
When you select an array for migration, all HCP LUNs on the array are selected automatically. You cannot select or deselect the LUNs individually.
To clear your selections and start over, click Cancel.
oRetire Hardware Nodes:
1.Select Retire Specific Hardware Nodes.
2.Select the nodes from which you want to migrate the data.
To clear your selections and start over, click Cancel.
3.Click Next.
The Configuration panel displays step two of the migration configuration (Review configuration summary and confirm). The Configuration Summary section in this panel indicates whether the migration configuration is acceptable.
Note: When you click Next, HCP checks that the system is in a valid state to perform the migration. This check can take up to 90 seconds.
If the configuration is not acceptable, you can click view details in the Configuration Summary section to display the specific reasons why. You can also click Configuration Report to download the configuration summary and details to a file. The default name for this file is Configuration-Report.txt. You can send this file to your SAN storage administrator, who can then correct the problems.
The Configuration Details section in the step-two panel lists the devices or nodes selected for migration. It also shows the number of LUNs currently selected for migration out of the total number of LUNs for each node. As in the step-1 panel, you can view details about the selected LUNs. In this case, the details have an additional column, Migration Status, that indicates whether the data on the LUN can (Ready) or cannot (Not Ready) be successfully migrated.
To change the migration configuration, click Modify Configuration. The Configuration panel redisplays step 1, which shows your current selections.
To restart the migration configuration, click Cancel. The Configuration panel redisplays step 1 with all selections cleared.
4.Optionally, add a description of the data migration and/or change the performance level for the Migration service:
oTo add a description:
a.Click Add description.
b.In the text box that opens, type a description of the migration. This text can be up to 1,024 characters long and can contain any valid UTF-8 characters, including white space.
oTo change the performance level, in the Performance Level field, select Low, Medium, or High. The higher the performance level, the greater the load on the HCP system.
5.Click Start Migration.
The Migration service begins preparing for the data migration, and the Migration page switches to the Overview panel.
Monitoring a data migration
To monitor a data migration, you use the Overview panel on the Migration page. This panel shows information about both the current data migration and the last completed or canceled data migration, if any.
Information about the current data migration
The top of the Overview panel displays this information about the current data migration and the Migration service:
•The current status of the Migration service:
oNot Migrating — The Migration service is not running. No migration is in progress.
oStarting Migration — The Migration service is preparing for the data migration. This includes determining the number of objects to be migrated and the size of the data to be migrated. It also includes changing the HCP system configuration to prevent data from being written to the selected devices.
oMigrating — The Migration service is actively migrating data off the selected devices.
oPaused — A data migration is in progress, but the Migration service is not actively migrating data at this time.
oCompleting Migration — The Migration service is verifying that the migration was successful and waiting while HCP rebalances metadata.
oMigrated — The Migration service has finished migrating data off the selected devices and is no longer running.
If the Overview panel displays a Migration Report link, click the link to download the migration report to a file. The default name for this file is Migration-number-Report.txt, where number is the number automatically assigned to the migration when the copying process started.
Be sure to review the migration report before having your authorized service provider finalize the migration.
•The estimated amount of time remaining to complete the current data migration.
•The amount of time the Migration service has been running. This value does not include any time during which the service was paused.
•The time the Migration service started.
•The total number of items that have been migrated so far out of the total number of items to be migrated, along with a progress bar and text indicating the percent of items migrated. Items are any of these:
oWhole objects
oParts of multipart objects
oParts of in-progress multipart uploads
oChunks for erasure-coded objects
oChunks for erasure-coded parts of multipart objects
•The amount of data migrated so far, in KB, out of the total amount of data to be migrated, along with a progress bar and text indicating the percent of data migrated.
•The current performance-level setting for the Migration service.
•The description of the current data migration.
To modify the description:
1.Click the Edit description link.
2.In the text box that opens, edit the migration description.
3.Click Submit.
To view the configuration of the current data migration:
1.Click the View details link.
The Current Migration Details window opens. This window shows the same Configuration Summary and Configuration Details sections as step two of the Configuration panel.
2.After viewing the migration configuration, click Close.
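The Migration service statuses listed above form a simple progression. The sketch below is a hypothetical model; the transitions are inferred from the text (for example, canceling returns the system to Not Migrating, and pausing and resuming move between Migrating and Paused) and are illustrative, not an HCP specification.

```python
# Toy model of the Migration service statuses described above.
TRANSITIONS = {
    "Not Migrating": {"Starting Migration"},
    "Starting Migration": {"Migrating"},
    "Migrating": {"Paused", "Completing Migration", "Not Migrating"},  # pause, finish, or cancel
    "Paused": {"Migrating", "Not Migrating"},                          # resume or cancel
    "Completing Migration": {"Migrated"},
    "Migrated": set(),                                                 # terminal state
}

def can_transition(current: str, new: str) -> bool:
    """Return True if the status change is allowed in this toy model."""
    return new in TRANSITIONS.get(current, set())
```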
Information about the last data migration
The Migration History section in the Overview panel displays this information about the last completed or canceled data migration:
•The time at which the data migration was completed or canceled.
•The total amount of time the Migration service took to perform the data migration. This value does not include any time during which the service was paused.
•The number of items that were migrated out of the total number of these items that were to be migrated, along with a progress bar and text indicating the percent of items migrated. Items are any of these:
oWhole objects
oParts of multipart objects
oParts of in-progress multipart uploads
oChunks for erasure-coded objects
oChunks for erasure-coded parts of multipart objects
•The amount of data migrated, in KB, out of the total amount of data that was to be migrated, along with a progress bar and text indicating the percent of data migrated.
•The description of the data migration.
To modify the description:
1.Click Edit description.
2.In the text box that opens, edit the migration description.
3.Click Submit.
To view the configuration of the data migration:
1.Click View details.
The Previous Migration Details window opens. This window shows the same Configuration Summary and Configuration Details sections as step two of the Configuration panel.
2.After reviewing the migration configuration, click Close.
Managing a data migration
While the Migration service is migrating data, you can use the Management panel on the Migration page to:
•Change the performance level of the Migration service
•Change the description of the data migration
•Pause or resume the data migration
•Cancel the data migration
Changing the performance level
The performance level determines how much load the Migration service puts on the HCP system. If the system load from other activities is heavy, you can lower the performance level for the Migration service to make more system resources available to those activities. If the system load from other activities is light, you can increase the performance level for the Migration service, thereby allowing the service to use more system resources.
To change the performance level for the Migration service, in the Management panel:
1.In the Performance Level field, select Low, Medium, or High.
2.Click Update Settings.
3.In response to the confirming message, click Update Settings.
HCP pauses the data migration and changes the performance level.
4.Click Resume.
Pausing or resuming a migration
You can pause or resume the data migration at any time while the Migration service is copying objects. You would do this, for example, if you need to make changes to HCP networking or during periods of heavy namespace activity. While the migration is paused, the selected devices remain read-only.
To pause or resume a data migration, in the Management panel, click Pause or Resume, as applicable.
Canceling a migration
You can cancel the data migration at any time while the Migration service is copying objects or while the migration is paused. When you do this, the Migration service stops and the selected devices become read-write. Any data that was already migrated remains in its new location. Additionally, information about the data migration moves to the Migration History section in the Overview panel.
To cancel a data migration, in the Management panel:
1.Click Cancel.
2.In response to the confirming message, click Cancel Migration.
Scheduling services
The Protection, Content Verification, Scavenging, Compression/Encryption, Duplicate Elimination, Disposition, Garbage Collection, Storage Tiering, and Geo-distributed Erasure Coding services run according to a schedule. A service schedule specifies time periods during which one or more of these services are scheduled to run.
Within a time period, each scheduled service has a performance level of low, medium, or high. The performance level determines how much load the service puts on the HCP system. The higher the performance level, the greater the load.
Note: The Replication service also runs according to a schedule, but this service has its own scheduling interface.
HCP comes with a predefined service schedule named HCP Default Schedule. This schedule cannot be modified or deleted.
HCP SAIN systems with spindown storage come with an additional predefined schedule named HCP Spindown Schedule. This schedule is optimized for Storage Tiering service activity against spindown storage. The HCP Spindown Schedule is modifiable.
You can create as many other schedules as you want. However, only one schedule at a time can be active. At any time, you can change which schedule is active.
After creating a service schedule, you can modify or delete it. You can modify a schedule regardless of whether it’s active. You can delete a schedule only while it’s not active.
Note: Although you can modify or delete the HCP Spindown Schedule, doing this is not recommended because once the schedule is modified or deleted, you cannot restore it. Instead, create a new schedule based on the HCP Spindown Schedule and then modify the new one.
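The scheduling rules above (one active schedule at a time, delete only while inactive, a protected default schedule) can be modeled with a small sketch. The `ScheduleManager` class is hypothetical; only the invariants it enforces come from the text.

```python
class ScheduleManager:
    """Toy model: exactly one schedule is active at a time, activating a
    schedule deactivates the previous one, a schedule can be deleted only
    while inactive, and HCP Default Schedule cannot be deleted."""

    PROTECTED = "HCP Default Schedule"

    def __init__(self):
        self.schedules = {self.PROTECTED}
        self.active = self.PROTECTED

    def create(self, name: str) -> None:
        self.schedules.add(name)

    def activate(self, name: str) -> None:
        if name not in self.schedules:
            raise KeyError(name)
        self.active = name  # implicitly deactivates the previous schedule

    def delete(self, name: str) -> None:
        if name == self.PROTECTED:
            raise ValueError("HCP Default Schedule cannot be deleted")
        if name == self.active:
            raise ValueError("cannot delete the active schedule")
        self.schedules.remove(name)
```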
How scheduled services work
The Protection, Content Verification, Scavenging, Compression/Encryption, Duplicate Elimination, Disposition, Garbage Collection, Storage Tiering, and Geo-distributed Erasure Coding services each examine objects one at a time, determine whether any action needs to be taken with the object, and if so, take the applicable action. The services, except for Scavenging, start with the primary metadata and use that to find the object data. The Scavenging service starts with the secondary metadata, which is stored with the object data.
If the HCP system does not include any spindown storage, the services look at the object data for each object regardless of where the data is stored.
If the HCP system does include spindown storage, on most days all scheduled services except Duplicate Elimination look at the object data on only a subset of the nodes in the system, examining the data on a different set of nodes each day. This prevents spindown volumes that are spun down from being spun up frequently or for long periods of time. Periodically, however, the services examine all nodes on the same day to address cases where the data for an object spans nodes in different sets.
The Duplicate Elimination service always looks at object data regardless of which node the data is stored on because the service needs to correlate data from all locations. This can result in all spindown volumes being spun up at the same time. You should keep this in mind when scheduling the Duplicate Elimination service.
The length of time required for a service to examine every object in the repository depends on several factors, including the number of objects in the repository, for how much time the service is scheduled to run each week, and the performance level at which the service runs.
If the HCP system includes spindown storage, services scheduled to run on only one day a week take a minimum of three to five weeks to examine all objects, depending on the number of nodes in the system. You can shorten this time by scheduling the services to run on more than one day a week.
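The per-day node rotation described above can be sketched as follows. The grouping scheme, group count, and full-sweep period here are purely illustrative assumptions; HCP's actual rotation is internal and undocumented.

```python
def nodes_to_examine(nodes: list, day: int, num_sets: int = 3,
                     full_sweep_every: int = 7) -> list:
    """Hypothetical rotation: on most days a service examines only one of
    `num_sets` groups of nodes; every `full_sweep_every`-th day it examines
    all nodes, to catch objects whose data spans nodes in different groups."""
    if day % full_sweep_every == 0:
        return list(nodes)                       # periodic full sweep
    group = day % num_sets
    return [n for i, n in enumerate(nodes) if i % num_sets == group]
```

With a rotation like this, a service scheduled on only one day a week touches each group roughly once every `num_sets` weeks, which is consistent with the three-to-five-week estimate above.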
Note: Services may examine some objects twice during a run. Rarely, this can result in the reported number of objects examined being larger than the number of objects in the repository. If an irreparable object is examined twice by the Protection or Content Verification service, that object is counted twice in the reported number of violations found.
About the Service Schedule page
The Schedule page in the HCP System Management Console lets you create, view, modify, activate, and delete service schedules, as well as view log messages about service activity. This page has a service legend, a schedule grid, and an optionally displayed list of log messages.
On the schedule grid, each time period in which at least one service is scheduled to run is represented by a rectangle. These rectangles are numbered in the upper left corner for ease of reference.
Roles: To view service schedules and log messages about service activity, you need the monitor or administrator role. To create, modify, activate, and delete service schedules, you need the administrator role.
To display the Schedule page, in the top-level menu of the System Management Console, select Services ► Schedule.
Service legend
The top part of the Schedule page contains a legend that associates each service with an icon. These icons are used to identify services in the schedule grid. The icons are:
•Compression/Encryption service
•Content Verification service
•Disposition service
•Duplicate Elimination service
•Garbage Collection service
•Geo-distributed Erasure Coding service
•Protection service
•Scavenging service
•Storage Tiering service
When you hover over a time period in the schedule grid:
•The legend heading shows the period reference number and the start and end times for the period
•The services scheduled in the time period are highlighted in the legend
Schedule grid
The schedule grid on the Schedule page shows the weekdays from Sunday through Saturday, with each day broken out into 24 hours. The time periods for a schedule are laid out on this grid.
The heading for each time period shows the period reference number and the start and end times for the period, if they fit in the width of the rectangle. Within each rectangle, one of these is displayed, depending on the size of the rectangle:
•For each scheduled service, the service icon with a bar under it indicating the performance level for the service (low, medium, or high)
•The number of services scheduled to run during the time period
If a service in one time block conflicts with a service in an overlapping time block, the background color of the rectangle containing the service that does not take precedence is pink. When you hover over the rectangle, the service that does not take precedence is highlighted in red in the service legend. Text below the legend tells you which service in the overlapping time block is involved in the conflict.
For information about which services cannot run at the same time as each other, see Service precedence.
Displaying a service schedule
By default, when you open the Schedule page, the schedule grid shows the active schedule. To display a different schedule, select the schedule you want in the field on the left above the schedule grid.
If the displayed schedule is active, the word Active appears on a green background to the right of the schedule selection field.
Viewing the schedule for an individual service
By default, the schedule grid shows all scheduled services in the time periods for the displayed schedule. You can choose to show only a selected service in the scheduled time periods. To do this, select the service you want in the field on the right above the schedule grid.
To show all services again, select All services in the same field.
Service log messages
HCP writes messages about service activity to the system log. These messages are displayed in the Service Events section on the Schedule page as well as in other displays of the log.
The messages displayed depend on the selection in the field on the right above the schedule grid. If All services is selected, the list of messages includes all service-related messages. If a specific service is selected, the list includes only the messages related to that service.
For information about displays of system log messages, see Understanding the HCP system log.
Service schedule considerations
These considerations apply to service schedules:
•You cannot modify or delete the service schedule named HCP Default Schedule.
•You cannot activate a service schedule that does not include the Garbage Collection service. Likewise, you cannot completely remove the Garbage Collection service from the currently active service schedule.
•If the HCP system includes spindown storage but the currently active service schedule does not include the Storage Tiering service, the Overview page in the System Management Console displays an alert indicating that this situation exists.
•If the HCP system is in an erasure coding topology but the currently active service schedule does not include the Geo-distributed Erasure Coding service, the Overview page in the System Management Console displays an alert indicating that this situation exists.
While the Geo-distributed Erasure Coding service is not running, full copies of the data for objects that are subject to erasure coding are not reduced to chunks. These copies continue to occupy the full amount of storage required for the object data. How fast the amount of used storage on the system increases depends on:
oThe object ingest rate
oThe size of the objects being ingested
oThe Replication service schedule
oWhether the erasure coding topology is configured for full-copy or chunk distribution
•The minimum amount of time for a time period is two hours.
•Time periods can overlap. For example, on a given day, you can have a five-hour time period that starts at 1:00 a.m., a six-hour time period that starts at 1:00 a.m., and a three-hour time period that starts at 3:00 a.m.
•Overlapping time periods cannot include the same service.
•A service that is scheduled in contiguous time periods stops at the end of the first time period and restarts at the beginning of the next one.
•Time periods cannot span days. That is, you cannot create a single time period that starts before midnight on one day and continues after midnight on the next day. However, you can schedule the same service to run in one time period that ends at midnight and another that starts at the beginning of the next day.
•The recommended maximum number of services to schedule in a time period is half the number of hours in the time period, rounded down.
•The more services you schedule to run at the same time, the more the services compete for system resources.
•Each time a service runs, it picks up from where it left off the last time it ran. For more information about service runs, see How scheduled services work.
•After a service completes a full run (that is, after it has examined every object in the repository), it does not start again for at least 24 hours regardless of the service schedule.
•When a service requires spindown volumes to be spun up, it waits while the volumes spin up. This uses up a small amount of time in the time period in which the service is running.
•When a service in one time period preempts a service in another time period, the preempted service stops. If the preempting service stops before the end of the time period containing the preempted service, the preempted service restarts only if at least ten minutes remain in the time period after HCP recognizes that the preempting service has stopped. HCP can take up to five minutes to recognize the stop.
For information about which services preempt which others, see Service precedence.
•When you modify the active schedule, any service that is currently running stops. The service restarts only if the current time is in a time period in which the service is scheduled to run and only if at least ten minutes remain in that time period.
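Several of the considerations above are mechanical rules that can be expressed as a validation check. The sketch below is a simplified Python model of those rules under stated assumptions; the function and field names are hypothetical and do not correspond to any HCP API.

```python
# Illustrative model of the service schedule rules described above.
# All names here are assumptions for the sketch, not HCP interfaces.
def validate_period(start, end, services, existing_periods):
    """Check one proposed time period against the documented rules.

    start/end are hours within a single day (0-24); services is a set of
    service names; existing_periods is a list of (start, end, services).
    """
    errors = []
    if not (0 <= start < end <= 24):
        errors.append("Time periods cannot span days")
    if end - start < 2:
        errors.append("A time period must be at least two hours long")
    for (s, e, other) in existing_periods:
        # Overlapping time periods cannot include the same service.
        if s < end and start < e and services & other:
            errors.append("Overlapping time periods cannot include the same service")
    # Recommended maximum: half the hours in the period, rounded down.
    if len(services) > (end - start) // 2:
        errors.append("More services than the recommended maximum for this period")
    return errors

def can_activate(schedule_services):
    """A schedule must include Garbage Collection to be activated."""
    return "Garbage Collection" in schedule_services
```

For example, a five-hour period starting at 1:00 a.m. could hold at most two services within the recommended maximum, and a schedule without the Garbage Collection service would fail the activation check.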
Creating a service schedule
You can create new service schedules from existing schedules. In this case, the new schedule is initially the same as the schedule from which you created it. After creating the schedule, you can modify it in any way you want.
Alternatively, you can create a new schedule by starting with a blank schedule grid.
Tip: To facilitate modification of a new schedule, create the schedule from the existing schedule that’s the most similar to what you want the new schedule to be.
To create a service schedule:
1.Optionally, if you’re creating the new schedule from an existing schedule, in the field on the left above the schedule grid, select the existing schedule.
2.Click Create New Schedule.
3.In the Create New Schedule window:
oType a name for the new schedule. Schedule names must be from one through 64 characters long, can contain only alphanumeric characters, hyphens (-), underscores (_), periods (.), commas (,), and spaces, and are not case sensitive.
You cannot use the name HCP Default Schedule or HCP Spindown Schedule for a schedule you create.
oSelect either From currently loaded schedule to create the schedule from the existing schedule you selected or From blank schedule to start with a blank schedule grid.
4.Click Save.
The schedule grid shows the new schedule. This schedule is not active.
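The naming rules in step 3 can be captured in a short check. This regex-based sketch is an interpretation of the documented rules, not taken from HCP itself.

```python
import re

# Reserved built-in names; comparison is case insensitive, matching the
# documented rule that schedule names are not case sensitive.
RESERVED = {"hcp default schedule", "hcp spindown schedule"}

# 1-64 characters: letters, digits, hyphens, underscores, periods,
# commas, and spaces, per the documented naming rules.
NAME_RE = re.compile(r"^[A-Za-z0-9_.,\- ]{1,64}$")

def valid_schedule_name(name):
    """True if name satisfies the documented schedule-naming rules."""
    return bool(NAME_RE.match(name)) and name.lower() not in RESERVED
```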
Modifying a service schedule
1.In the field on the left above the schedule grid, select the service schedule you want to modify.
2.Add, modify, or delete time periods in the schedule as needed. For information about these activities, see Adding a time period, Modifying a time period, and Deleting a time period.
3.Click Update Schedule.
Adding a time period
To add a time period to a service schedule:
1.In the field on the left above the schedule grid, select the service schedule to which you want to add the time period.
2.Take one of these actions in the schedule grid:
oClick in the hour at which you want the time period to start. By default, this defines a two-hour time period. You can change the start and end times before saving the time period.
oClick and drag from one hour (the start time) to another (the end time) in the same day. You can change the start and end times before saving the time period.
oClick an existing time period.
oClick the name of a weekday. This defines a 24-hour time period for that day. When you save this time period, all other time periods that are scheduled for that day are deleted.
oClick All. This defines seven 24-hour time periods — one for each day of the week. When you save these time periods, all other time periods in the schedule are deleted.
An Edit window appears. For a time period other than a weekday or All, the top of this window shows the number of hours in the time period and the start and end times.
The Edit window lists the schedulable services. For each service, the window contains a Level field. The window also indicates the status of the service:
oService not scheduled — The service is not scheduled in the time period you’re editing or in any time period that overlaps the time period you’re editing. The performance level is Off.
oService scheduled — The service is scheduled in the time period you’re editing. The performance level is Low, Medium, or High.
oService already scheduled — The service is scheduled in a time period that overlaps the time period you’re editing. No performance level is shown.
oConflict with service-name service — The service cannot run at the same time as the named service, which is already scheduled to run in the time period you’re editing. No performance level is shown.
3.Optionally, for a time period other than a weekday or All, select a different start time in the From field.
4.Optionally, for a time period other than a weekday or All, select a different end time in the To field.
5.For each service you want to run during the time period, select a performance level of Low, Medium, or High in the Level field. At least one service must be scheduled to run in the time period.
For each service you don’t want to run during the time period, select Off.
6.Take either of these actions:
oIf you started by clicking the grid, dragging, clicking a weekday, or clicking All, click Update Schedule.
oIf you started by clicking an existing time period, click Create New Period. This creates a new time period only if you scheduled a service that was not scheduled in the original time period.
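The four status values shown in the Edit window (step 2) follow a fixed order of evaluation. The sketch below is a simplified model of how such a status could be derived; it is not HCP internals, and the parameter names are assumptions.

```python
# Illustrative model of the Edit window status values for one service.
def service_status(service, this_period, overlapping_periods, conflicts):
    """Return the Edit window status string for a service.

    this_period: set of services scheduled in the period being edited.
    overlapping_periods: list of service sets for overlapping periods.
    conflicts: services already in this period that cannot run with
    `service` (per the service precedence rules).
    """
    if conflicts:
        return f"Conflict with {sorted(conflicts)[0]} service"
    if service in this_period:
        return "Service scheduled"          # Level: Low, Medium, or High
    if any(service in p for p in overlapping_periods):
        return "Service already scheduled"  # no performance level shown
    return "Service not scheduled"          # Level: Off
```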
Modifying a time period
To modify an existing time period in a schedule:
1.In the field on the left above the schedule grid, select the service schedule in which you want to modify the time period.
2.In the schedule grid, click the time period you want to modify.
3.In the Edit window, make the changes you want.
4.Click Update Schedule.
Deleting a time period
To delete an existing time period:
1.In the field on the left above the schedule grid, select the service schedule from which you want to delete the time period.
2.In the schedule grid, click the time period you want to delete.
3.In the Edit window, take either of these actions:
oClick Delete Period.
oSet the performance level for all services to Off. Then click Update Schedule.
Setting the active service schedule
To set the active service schedule:
1.In the field on the left above the schedule grid, select the service schedule you want to make active.
In the dropdown list for this field, the currently active service schedule is marked with an asterisk (*).
2.Click Activate Schedule.
The Activate Schedule button is present only if the displayed schedule is not active.
Deleting a service schedule
1.In the field on the left above the schedule grid, select the service schedule you want to delete.
You cannot delete the active schedule. If the schedule you want to delete is currently active, activate a different schedule before you display the one you want to delete.
2.Click Delete Schedule.
3.In response to the confirming message, click Delete Schedule.
HCP deletes the schedule, and the schedule grid displays the active schedule.