
Storage components

Within the Hitachi Content Platform for cloud scale (HCP for cloud scale) system, the Object Storage Management application lets you manage and monitor storage components.

The starting point for storage component management is the Storage page in the Object Storage Management application. The procedures in this module begin at this page.

Adding a storage component

You can use the Object Storage Management application or an API method to add a storage component to the HCP for cloud scale system.

Tip: To improve performance and availability, and to avoid transfer fees, add storage components that are local to the HCP for cloud scale site.

The storage component must contain an HCP for cloud scale bucket before you can add the storage component to the HCP for cloud scale system.

To add a storage component, it must be available and you need the following information about it:

  • Storage component type
  • Endpoint information (host name or IP address)
  • If an HCP S Series Node storage component, the cluster name, management host name, and administrative user credentials
  • If used, the proxy host and port and the proxy user name and password
  • API port
  • S3 credentials (the access key and secret key to use for access to the storage component bucket)
Tip: You can use the HCP S Series Management Console or management API to generate S3 credentials. Only you can generate the S3 compatible API credentials for your user account.

Object Storage Management application instructions

Note: The storage component must contain an HCP for cloud scale bucket before you can add it.

Procedure

  1. From the Storage page, click Add storage component.

    The ADD STORAGE COMPONENT page opens.
  2. Specify the following:

    1. Name (optional): Type a display name for the storage component, up to 1024 alphanumeric characters.

      If you leave this blank, the storage component is listed without a name.
    2. Type: Select AMAZON_S3, HCP_S3, HCPS_S3 (HCP S Series Node), or GENERIC_S3.

    3. Region (optional): Type a region name of up to 1024 characters.

    4. Endpoint: Type either the IP address or the cluster host name of the storage component. Type as many as 255 URI unreserved characters using only A-Z, a-z, 0-9, hyphen (-), period (.), underscore (_), and tilde (~). The final segment of a host name must not begin with a number.

      For an HCP S Series Node storage component, the host name must be hs3.cluster_name.
  3. In the S3 CONNECTION section, specify the following:

    1. Select the Protocol used, either HTTPS (the default) or HTTP.

    2. If Use Default is selected, the applicable default port number is filled in. If you clear Use Default, type the Port number.

  4. In the PROXY section, specify the following:

    1. If you select Use Proxy, type values in the Host and Port boxes, and if the proxy needs authentication, type the Username and Password.

  5. In the BUCKET section, specify the following:

    1. Bucket Name: Type the name of the bucket on the storage component. The name can be from 3 to 63 characters long and must contain only lowercase characters (a-z), numbers (0-9), periods (.), or hyphens (-).

      Note: The bucket must already exist on the storage component and should be empty.
    2. (Optional) To use path-style URLs to access buckets, select Use path style always (the default).

  6. In the AUTHENTICATION section, specify the following:

    1. Type: Select the AWS Signature version, either V2 or V4.

    2. Type the Access Key.

    3. Type the Secret Key.

  7. When you are finished, click Save.

    The storage component is added to the Storage components section of the Storage page with the state ACTIVE.

Results

You have defined a storage component.

If the storage component state is INACTIVE, a configuration value might be incorrect. Select Verify from the More menu for the storage component and click Activate from the window that appears. If configuration errors are detected, correct them and try again.
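The endpoint and bucket naming rules in steps 2 and 5 can be checked before you open the ADD STORAGE COMPONENT page. A minimal sketch of those checks in Python (the function names are illustrative, not part of the product):

```python
import re

def valid_endpoint(host: str) -> bool:
    """Check the Endpoint rule from the procedure above: as many as
    255 URI unreserved characters (A-Z, a-z, 0-9, hyphen, period,
    underscore, tilde); the final segment of a host name must not
    begin with a number."""
    if not 1 <= len(host) <= 255:
        return False
    if not re.fullmatch(r"[A-Za-z0-9._~-]+", host):
        return False
    return not host.rsplit(".", 1)[-1][:1].isdigit()

def valid_bucket_name(name: str) -> bool:
    """Check the Bucket Name rule: 3 to 63 characters, containing
    only lowercase letters (a-z), numbers (0-9), periods (.), or
    hyphens (-)."""
    return re.fullmatch(r"[a-z0-9.-]{3,63}", name) is not None
```

Running these checks up front can save a failed verification round-trip after the component is saved.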

Related REST API methods

POST /storage_component/create
Note: After you define the storage component, if its state is UNVERIFIED, check the parameters you used when defining it.

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.
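For scripted setup, the information gathered in the procedure above can be assembled into a request body for POST /storage_component/create. The key names below are illustrative assumptions, not the documented schema; see the MAPI Reference for the actual field names.

```python
import json

def build_storage_component_request(
    component_type: str,   # AMAZON_S3, HCP_S3, HCPS_S3, or GENERIC_S3
    endpoint: str,
    bucket_name: str,
    access_key: str,
    secret_key: str,
    name: str = "",
    protocol: str = "HTTPS",     # HTTPS (the default) or HTTP
    use_path_style: bool = True,
) -> str:
    """Collect the values from the procedure above into a JSON body.
    All key names here are assumptions for illustration only."""
    body = {
        "label": name,                   # optional display name
        "storageType": component_type,
        "host": endpoint,
        "httpProtocol": protocol,
        "bucketName": bucket_name,
        "usePathStyleAlways": use_path_style,
        "accessKey": access_key,
        "secretKey": secret_key,
    }
    return json.dumps(body)
```

The resulting JSON string would be sent as the request body; consult the MAPI Reference for the authoritative schema before using this in automation.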

Verifying a storage component

You can use the Object Storage Management application to verify a storage component.

The storage component must be in the state ACTIVE before it can be used by the HCP for cloud scale system.

Object Storage Management application instructions

To verify a storage component, select it from the list on the Storage page, and from the More menu select Verify.

The verification process checks for these possible configuration errors:

  • The specified bucket is already in use.
  • The specified bucket does not exist.
  • The endpoint is incorrect.
  • The secret key or access key is incorrect.
  • Path style addressing is configured but the storage component cannot use it.
  • The authorization type is incorrect.

If configuration errors are detected, edit the storage component configuration to correct them and try again.

Modifying a storage component

You can use the Object Storage Management application or an API method to modify the configuration of a storage component after defining it.

Object Storage Management application instructions

Procedure

  1. From the Storage page, navigate to the storage component you want to edit.

  2. Click the more icon (three vertical dots) by the storage component and select Edit.

    The storage component's configuration page appears.
  3. Edit the connection information as needed. When you're finished, click Save.

    The Username field is blank, but the configured value is used unless you change it.

Results

The storage component is modified.

If the storage component state becomes INACTIVE, a configuration value might be incorrect. Select Verify from the More menu for the storage component and click Activate from the window that appears. If configuration errors are detected, correct them and try again.

Related REST API methods

POST /storage_component/update

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.

Activating a storage component

You can use the Object Storage Management application or an API method to activate a storage component.

A storage component is displayed as UNVERIFIED if HCP for cloud scale cannot reach the storage component with the supplied parameters or if the storage component is misconfigured.

Object Storage Management application instructions

Note: You can only activate a storage component that is in the state INACTIVE.

Procedure

  1. From the Storage page, navigate to the storage component you want to activate.

  2. Click the more icon (three vertical dots) of the storage component and then select Set active.

    A message appears and prompts you to confirm your action.
  3. Click Yes, activate.

    The storage component state changes to ACTIVE.

Results

The storage component is activated.

If the storage component state remains INACTIVE, a configuration value might be incorrect. Select Verify from the More menu for the storage component and click Activate from the window that appears. If configuration errors are detected, correct them and try again.

Related REST API methods

POST /storage_component/update_state

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.

Deactivating a storage component

You can use the Object Storage Management application or an API method to deactivate a storage component.

You might deactivate a storage component for maintenance purposes.

After you mark a storage component as INACTIVE, read, write, and healthcheck requests are rejected.

Object Storage Management application instructions

Note: You can only deactivate a storage component that is in the state ACTIVE.

Procedure

  1. From the Storage page, navigate to the storage component you want to deactivate.

  2. Click the more icon (three vertical dots) of the storage component and then select Set inactive.

    A message appears and prompts you to confirm your action.
  3. Click Yes, inactivate.

    The storage component state changes to INACTIVE.

Related REST API methods

POST /storage_component/update_state

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.

Marking a storage component as read-only

You can use the Object Storage Management application or API methods to mark a storage component as read-only.

Storage components are not automatically marked as read-only if they become completely full. You might mark a storage component as read-only if it is nearly full.

Once you mark a storage component as read-only, write requests are directed to different storage components.

You can only mark a storage component as read-only if it is marked read-write and in the state ACTIVE.

Object Storage Management application instructions

Procedure

  1. From the Storage page, navigate to the storage component you want to mark.

  2. Click the more icon (three vertical dots) of the storage component and then select Set read-only.

    A message appears and prompts you to confirm your action.
  3. Click Mark read-only.

    The storage component is marked as read-only.

Related REST API methods

PATCH /storage_component/update
POST /storage_component/update_state

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.

Marking a storage component as read-write

You can use the Object Storage Management application or API methods to mark a storage component as read-write.

This makes the storage component available for writing new objects.

You can only mark a storage component as read-write if it is marked read-only and in the state ACTIVE.

Object Storage Management application instructions

Procedure

  1. From the Storage page, navigate to the storage component you want to mark.

  2. Click the more icon (three vertical dots) of the storage component and then select Open for writes.

    A message appears and prompts you to confirm your action.
  3. Click Open for writes.

    The Read-only flag for the storage component is marked as No.

Related REST API methods

PATCH /storage_component/update
POST /storage_component/update_state

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.
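The preconditions stated in the activation, deactivation, read-only, and read-write sections above can be summarized in one validity check. A sketch (the operation names are illustrative, not API values):

```python
def operation_allowed(operation: str, state: str, read_only: bool) -> bool:
    """Whether a storage component operation is permitted, per the
    preconditions in this module: activate only from INACTIVE,
    deactivate only from ACTIVE, and read-only/read-write marking
    only while the component is ACTIVE."""
    rules = {
        "activate": state == "INACTIVE",
        "deactivate": state == "ACTIVE",
        "set_read_only": state == "ACTIVE" and not read_only,
        "set_read_write": state == "ACTIVE" and read_only,
    }
    return rules.get(operation, False)
```

A check like this is useful in automation that drives POST /storage_component/update_state, to avoid requests that the system would reject.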

Viewing storage components

You can use the Object Storage Management application or an API method to view information about the storage components defined in the system.

For each storage component, you can get information about its name, type, region, and current state.

The storage component types are:

  • AMAZON_S3: An Amazon Web Services S3 compatible node
  • HCP_S3: A Hitachi Content Platform node
  • HCPS_S3: An HCP S Series node
  • GENERIC_S3: An S3 compatible node

The possible storage component states are:

  • Active: Available to serve requests
  • Inactive: Not available to serve requests (access is administratively paused)
  • Inaccessible: Available to serve requests, but HCP for cloud scale is having access issues (for example, network, authentication, or certificate issues)
  • Unverified: Not available to serve requests (unreachable by specified parameters, misconfigured, or awaiting administrative activation)

The storage component Read-only setting can be on or off.

Object Storage Management application instructions

The storage components defined in the HCP for cloud scale system are listed in the Storage components section of the Storage page.

Related REST API methods

POST /storage_component/list

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.

Displaying storage component analytics

The Storage page displays counts of active, inactive, and unverified storage components, the total count of active objects, and information about system-wide total, used, and estimated available storage capacity. The page also displays information about individual storage components and their current capacity.

The Storage page displays several areas of information.

System-wide information

The top area of the page displays the following rolled-up information for HCP S Series Node storage components configured in the system.

Screenshot of top area of Storage page, which shows rolled-up information for HCP S Series Node storage components
  • Total capacity - the total number of bytes available, as well as the change over the past week
  • Used capacity - the total number of bytes used, as well as the change over the past week
  • Estimated available capacity - the total number of bytes unused, as well as the change over the past week
  • Total objects - the total count of objects stored across all storage components, as well as the change over the past week
  • Active storage - the number of storage components that can receive objects
  • Inactive storage - the number of storage components that cannot receive objects
  • Unverified storage - the number of storage components whose state can't be determined

The calculation of used capacity includes:

  • HCP S Series Node storage components configured for capacity monitoring
  • Storage components set to read-only status
  • Storage components that are inactive

Metrics for capacity usage are for Metadata Gateway instances only, so adding used capacity to estimated available capacity will not equal the total capacity on the system. Also, multiple services are running on a system instance, all sharing the disk capacity. Therefore, the estimated available capacity for the Metadata Gateway service on one node can be consumed by a different service running on the same node.

Note: If the MAPI Gateway service restarts, capacity values are shown as 0 until fresh metrics are obtained.

The calculation of estimated available system capacity does not include:

  • HCP S Series Node storage components not configured for capacity monitoring
  • Storage components other than HCP S Series Node storage components
  • Storage components set to read-only status
  • Storage components that are inactive

Per-storage component information

The central area of the page displays information for each HCP S Series Node storage component configured for capacity monitoring in the system.

Screenshot of central portion of Storage page, which lists information about each storage component
  • User-defined name.
  • Type (HCP S Series Node, displayed as HCPS_S3).
  • AWS region.
  • State:
    • Active - Available to serve requests
    • Inactive - Not available to serve requests (access is administratively paused)
    • Unverified - Not available to serve requests (unreachable by specified parameters, or awaiting administrative activation)
  • Whether or not the storage component is set to read-only status.
  • Disk capacity: A graphical display of used capacity as a percentage of total capacity. You can configure a warning threshold, which is displayed as a red line. If the used capacity is below the threshold the bar is displayed in blue, and if the used capacity exceeds the threshold the bar is displayed in red. If no capacity is used the bar is displayed in gray. For example:

    Example of disk capacity information for two storage components, displayed as bars with used capacity as a percentage of the total. One bar represents a storage component that is below the user-defined capacity threshold; the bar is displayed in blue. The other bar represents a storage component that has exceeded the threshold; the bar is displayed in red.

  • Total capacity (used plus free).
  • Available (estimated) capacity.
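The coloring rule for the disk-capacity bar described above (gray when no capacity is used, blue below the warning threshold, red when the threshold is exceeded) can be sketched as:

```python
def capacity_bar_color(used_pct: float, threshold_pct: float) -> str:
    """Bar color per the rule above: gray when no capacity is used,
    blue below the warning threshold, red once it is exceeded."""
    if used_pct == 0:
        return "gray"
    return "red" if used_pct > threshold_pct else "blue"
```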

Capacity alerts are generated by the MAPI Gateway service. Use the System Management application to configure the capacity alert threshold for individual storage components or the overall system.

A more button (three vertical dots) to the right of each storage component opens a menu of actions that you can perform on that storage component:

  • Edit - edit the configuration of the storage component
  • Set inactive | Set active - change the state of the storage component between active and inactive
  • Set read-only | Set read-write - change the status of the storage component between read-only and read-write

Active object information

The bottom area of the page displays a graph over time of the count of active objects stored in the system. The maximum time period is the previous week.

Screenshot of bottom portion of Storage page, which graphs the count of active objects. In the example, the count has gone from zero to nearly 3 million active objects in the past day.

Displaying counts of storage components

You can use the Object Storage Management application or an API method to display counts of storage components in the system.

The page displays the following rolled-up information for HCP S Series Node storage components configured in the system:

  • Active storage - the number of storage components that can receive objects
  • Inactive storage - the number of storage components that cannot receive objects
  • Unverified storage - the number of storage components that are misconfigured or whose state can't be determined

Object Storage Management application instructions

To display storage counts, select Storage.

The infographic displays the count of active, inactive, and unverified storage components.

Related REST API methods

POST /storage_component/list

For information about specific API methods, see the MAPI Reference or, in the Object Storage Management application, click the profile icon and select REST API.

Metrics

HCP for cloud scale uses a third-party, open-source software tool, running over HTTPS as a service, to provide storage component metrics through a browser.

The Metrics service collects metrics for these HCP for cloud scale services:

  • S3 Gateway
  • MAPI Gateway
  • Policy Engine
  • Metadata Coordination
  • Metadata Gateway

By default the Metrics service collects all storage component metrics and you cannot disable collection. By default, the Metrics service collects data every ten seconds (the Scrape Interval) and retains data for 15 days (the Database Retention); you can configure these values in the service by using the System Management application.

NoteMetrics related to HCP for cloud scale instances and services are collected and provided by the System Management application. Collection of these metrics cannot be disabled.

Displaying the active object count

The Object Storage Management application displays a count of active objects stored in the system.

Object Storage Management application instructions

To display the Active objects report, select Storage. The Storage page opens.

The page displays a line graph showing the total number of active objects in the system over time. The maximum time period is one week.

Displaying metrics

You can use the metrics service to display or graph metrics, or use the service API to obtain metrics.

Object Storage Management application instructions

You can display and graph metrics using the metrics GUI.

To display metrics, click the app switcher menu (App Switcher menu (a nine-dot square) lets you select another application) and then select Prometheus. The metrics tool opens in a separate browser window.

The metrics tool is a third-party, open-source package. For information about using the metrics tool, see the documentation provided with the tool.

Available metrics

Metrics provide information about the operation of a service. Metrics are collected while the service is active. If a service restarts, its metrics are reset.

The metrics described here fall into these categories:

  • Counter - A numeric value that can only increase or be reset to zero. A counter tracks the number of times a specific event has occurred. An example is the number of S3 servlet operations.
  • Gauge - A counter that can increase or decrease. An example of a gauge is the number of active connections.
  • Histogram - A set of grouped samples. A histogram approximates the distribution of numerical data.
Note: If a metric is measured over an interval (for example, http_s3_servlet_requests_latency_seconds), but doesn't have at least two data points, the value is reported as NaN.

Note: Policy Engine activity can cause a lag in the collection of metrics.
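As an example of using these metric types together, a mean value can be derived from a histogram's _sum and _count series. A simplified Python sketch, with NaN behavior mirroring the interval note above:

```python
import math

def mean_latency(latency_sum: float, latency_count: int) -> float:
    """Approximate mean latency from a histogram's _sum and _count
    series (for example, async_action_latency_seconds_sum and
    async_action_latency_seconds_count). Fewer than two data points
    is reported as NaN, in line with the note above."""
    if latency_count < 2:
        return math.nan
    return latency_sum / latency_count
```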
Metrics from all services

The following metrics are available from all services.

  • http_healthcheck_requests_total - The total number of requests made to the health verification API.
  • http_monitoring_requests_total - The total number of requests made to the monitoring API.
  • scheduled_policy_work_items - The total number of work items processed by each scheduled policy.

    A work item is defined as:

    • DELETE_BACKEND_OBJECTS - Each StoredObjectID in the system
    • DELETE_EXPIRED_OBJECTS - Each client object, expired or not, in the system
    • DELETE_FAILED_WRITES - Each client object in the system that is in the state OPEN
    • DELETE_INCOMPLETE_MULTIPARTS - Each in-progress multi-part entry in the system
    • STORAGE_COMPONENT_HEALTH_CHECKS - Each storage component in the system
  • scrape_duration_seconds - The duration in seconds of the scrape (collection interval).
  • scrape_samples_post_metric_relabeling - The count of samples remaining after metric relabeling was applied.
  • scrape_samples_scraped - The count of samples the target exposed.
  • up - 1 if the instance is healthy (reachable) or 0 if collection of metrics from the instance failed.
Data Lifecycle

The following metrics are available from the Data Lifecycle service. Metrics are recorded for the following policies. Not every metric applies to every policy.

  • CHARGEBACK_POPULATION
  • CLIENT_OBJECT_POLICY
  • DELETE_BACKEND_OBJECTS
  • EXPIRE_FAILED_WRITE
  • INCOMPLETE_MPU_EXPIRATION
  • TOMBSTONE_DELETION
  • VERSION_EXPIRATION
Note: As of v2.4, the policies VERSION_EXPIRATION and EXPIRE_FAILED_WRITE display only historical data for the metrics lifecycle_policy_concurrency and lifecycle_policy_list_latency_seconds.
  • lifecycle_policy_accept_latency_seconds - The lifecycle policy acceptance processing latency in seconds.
  • lifecycle_policy_completed - The total number of lifecycle policies completed.
  • lifecycle_policy_concurrency - The total number of threads currently running for the policy.
  • lifecycle_policy_conflicts - The total number of lifecycle policy conflicts.
  • lifecycle_policy_deleted_backend_objects_count - The total number of objects deleted from backend storage by the policy DELETE_BACKEND_OBJECTS.
  • lifecycle_policy_errors - The total number of errors that occurred while executing lifecycle policy actions, in the categories:

    • General
    • Listing
    • Metadata
    • S3
  • lifecycle_policy_examine_latency_seconds - The lifecycle policy examination processing latency in seconds.
  • lifecycle_policy_expiration_completed_count - The total number of objects completely processed by the expiration policies (DELETE_MARKER and PERMANENT_DELETE).
  • lifecycle_policy_list_latency_seconds - The lifecycle policy listing latency in seconds.
  • lifecycle_policy_rekey_initiated_count - The number of times a rekey operation has been initiated.
  • lifecycle_policy_rekeyed_objects_count - The total number of objects rekeyed.
  • lifecycle_policy_splits - The total number of lifecycle policy splits.
  • lifecycle_policy_started - The total number of lifecycle policies started.
  • lifecycle_policy_submitted - The total number of lifecycle policies submitted.
  • s3_operation_count - The count of S3 operations (READ, WRITE, DELETE, and HEAD) per storage component.
  • s3_operation_error_count - The count of failed S3 operations (READ, WRITE, DELETE, and HEAD) per storage component.
  • s3_operation_latency_seconds - The latency of storage component operations (READ, WRITE, DELETE, and HEAD) in seconds.
Key Management Server

The following metrics are available from the Key Management Server service. These metrics are collected every five minutes.

  • kmip_servers_offline - The count of KMS servers that are offline. Updated hourly.
  • kmip_servers_online - The count of KMS servers that are online. Updated hourly.
  • kmip_total_kek_count - The count of key encryption keys stored in the KMS server. This count increments when an HCP S Series Node is added or when a rekey occurs.
  • lifecycle_policy_rekey_initiated_count - The count of how many times rekeying has been initiated through either the MAPI method or the Object Storage Management application.
  • lifecycle_policy_rekeyed_objects_count - The total count of data encryption keys that are re-wrapped with key encryption keys.
MAPI Gateway

The following metrics are available from the MAPI Gateway service. These metrics are collected every five minutes.

  • storage_available_capacity_bytes - The number of bytes free on an HCP S Series Node.
  • storage_total_capacity_bytes - The total number of bytes, available and used, on an HCP S Series Node.
  • storage_total_objects - The number of objects on an HCP S Series Node.
  • storage_used_capacity_bytes - The number of bytes used on an HCP S Series Node.

Each metric is reported with a label, store, identifying it as being either from a specific HCP S Series Node or the aggregate total. You can also retrieve the metrics using this label. For example, to retrieve the used storage capacity of the storage component hcps10.company.com, you would specify:

storage_used_capacity_bytes{store="hcps10.company.com"}

To retrieve the number of objects stored on the HCP S Series Node storage component snode67.company.com, you would specify:

storage_total_objects{instance="hcpcs_cluster:9992",job="MAPI-Gateway",store="snode67.company.com"}

To retrieve the used storage capacity of all available storage components, you would specify:

storage_used_capacity_bytes{store="aggregate"}
Note: If storage components other than HCP S Series Nodes are configured, aggregate totals aren't reported.
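Because the metrics tool is Prometheus-based, queries like the ones above can also be issued through its standard HTTP API. A sketch that only constructs the request URL (the endpoint host and port are assumptions about your deployment, not documented values):

```python
from urllib.parse import urlencode

def instant_query_url(base: str, expression: str) -> str:
    """Build a Prometheus instant-query URL for an expression such
    as those shown above. 'base' (host:port) is an assumption."""
    return f"https://{base}/api/v1/query?" + urlencode({"query": expression})

url = instant_query_url(
    "metrics.example.com:9090",  # assumed metrics service endpoint
    'storage_used_capacity_bytes{store="aggregate"}',
)
```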
Message Queue

The Message Queue service supports a large number of general metrics. Information on these metrics is available at https://github.com/rabbitmq/rabbitmq-prometheus/blob/master/metrics.md.

Metadata Coordination

The following metrics are available from the Metadata Coordination service.

  • mcs_copies_per_partition - Gauge of the number of copies of each metadata partition per key space (to verify protection). Two copies means available but not fault tolerant; three copies means available and fault tolerant.
  • mcs_disk_usage_per_instance - Gauge of the total disk usage of each metadata instance.
  • mcs_disk_usage_per_partition - Gauge of the disk usage of each metadata partition per key space.
  • mcs_failed_moves_per_keyspace - Counter of the number of unsuccessful requests for metadata partition moves per keyspace.
  • mcs_failed_splits_per_keyspace - Counter of the number of unsuccessful requests for metadata partition splits per keyspace.
  • mcs_moves_per_keyspace - Counter of the number of successful requests for metadata partition moves per keyspace.
  • mcs_partitions_per_instance - Gauge of the total number of metadata partitions per metadata instance. This is useful to verify balance and determine when scaling might be necessary.
  • mcs_splits_per_keyspace - Counter of the number of successful requests for metadata partition splits per keyspace.
Metadata Gateway

The following metrics are available from the Metadata Gateway service.

Note:
  1. Client count metrics are an approximation and might not correspond to the actual count.
  2. Depending on when garbage collection tasks run, the ratio of client objects size to stored objects size might show a discrepancy.
  • async_action_count - The count of actions performed.
  • async_action_latency_seconds_bucket - A histogram for the duration, in seconds, of actions on buckets. For actions comprising multiple steps, this is the total of all steps.
  • async_action_latency_seconds_count - The count of action latency measurements taken.
  • async_action_latency_seconds_sum - The sum of action latency in seconds.
  • async_concurrency - A gauge for the number of concurrent actions.
  • async_duq_latency_seconds_bucket - A histogram for the duration, in seconds, of operations on the durable update queue.
  • async_duq_latency_seconds_count - The count of durable update queue latency measurements.
  • async_duq_latency_seconds_sum - The sum of actions on the durable update queue in seconds.
  • async_getwork_database_count - The number of driver work checks accessing the database.
  • async_getwork_optimized_count - The number of driver work checks avoiding the database.
  • metadata_available_capacity_bytes - The free bytes per instance (node) for the Metadata Gateway service. The label store is either the instance or aggregate.

    Note: Because multiple service instances can run on a node, all consuming the same shared disk space, the value returned by this metric might be more than the actual capacity available.
  • metadata_clientobject_active_count - The count of client objects in metadata that are in the ACTIVE state.
  • metadata_clientobject_active_encrypted_count - The count of encrypted client objects in metadata that are in the ACTIVE state.
  • metadata_clientobject_active_unencrypted_count - The count of unencrypted client objects in metadata that are in the ACTIVE state.
  • metadata_clientobject_and_part_active_space - The space occupied by client objects and parts in metadata that are in the ACTIVE state.
  • metadata_clientobject_part_active_count - The count of client object parts in metadata that are in the ACTIVE state.
  • metadata_storedObject_active_space - The space occupied by stored objects on the back-end storage components.
  • metadata_used_capacity_bytes - The used bytes per instance (node) for the Metadata Gateway service. The label store gives the domain name of the instance.

    Note: Because multiple service instances can run on a node, all consuming the same shared disk space, combining this value with the value of metadata_available_capacity_bytes won't give the total capacity of the service.
  • update_queue_inprogress - The count of update queue entries in progress.
  • update_queue_size - The size of the update queue.
Mirror In

The following metrics are available from the Mirror In service.

MetricDescription
mirror_failed_totalThe count of failed mirror operations, both whole objects and multipart uploads.

The mirror (synchronization) type is IN.

mirror_mpu_bytesThe number of bytes synchronized as part of multi-part uploads (using MultiPartUpload). This metric is updated as uploads proceed.

The mirror (synchronization) type is OUT.

mirror_mpu_errorsThe count of multi-part upload synchronization errors.

The mirror (synchronization) type is IN.

The client types are:

  • EXTERNAL_S3 - External metadata or storage
  • HCPCS - HCP for cloud scale metadata or storage component
  • TRANSFER - Policy Engine service

The error categories are:

  • AUTHENTICATION - unable to mirror sue to invalid credentials or permissions
  • METADATA - failure connecting to Metadata Gateway service
  • OPERATION_ABORTED - mirror operation canceled (MPU was canceled by external party)
  • RESOURCE_NOT_FOUND - object not found
  • S3 - failure to transfer data between source and target
  • SERVICE_UNAVAILABLE - service not available at time of request
  • GENERAL - uncategorized error
mirror_mpu_objectsThe count of objects synchronized using multi-part uploads (using MultiPartUpload).

The mirror (synchronization) type is IN.

mirror_skippedThe count of skipped mirror operations, on both whole objects and multi-part uploads.

The mirror (synchronization) type is IN.

mirror_success_totalThe count of objects successfully synchronized.

The mirror (synchronization) type is IN.

mirror_whole_bytes_totalThe number of bytes synchronized as whole objects (using PutObject).

The mirror (synchronization) type is IN.

mirror_whole_errors_totalThe count of non-multipart synchronization errors (using PutObject).

The mirror (synchronization) type is IN.

The client types are:

  • EXTERNAL_S3 - External metadata or storage
  • HCPCS - HCP for cloud scale metadata or storage component
  • TRANSFER - Policy Engine service

The error categories are:

  • AUTHENTICATION - unable to mirror due to invalid credentials or permissions
  • METADATA - failure connecting to Metadata Gateway service
  • OPERATION_ABORTED - mirror operation canceled (MPU was canceled by external party)
  • RESOURCE_NOT_FOUND - object not found
  • S3 - failure to transfer data between source and target
  • SERVICE_UNAVAILABLE - service not available at time of request
  • GENERAL - uncategorized error
mirror_whole_objects_totalThe count of objects synchronized as whole objects (using PutObject).

The mirror (synchronization) type is IN.

s3_operation_count_totalThe count of S3 operations (READ, WRITE, DELETE, and HEAD) per storage component.

The mirror (synchronization) type is IN.

sync_from_bytes_copiedThe number of bytes synchronized by full copy from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
sync_from_bytes_putcopiedThe number of bytes synchronized by put-copy from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
sync_from_object_count_failedThe count of objects that failed to synchronize from external storage (sync-from) by this instance, grouped by class of error. The error classes are AUTHENTICATION, METADATA, OPERATION_ABORTED, RESOURCE_NOT_FOUND, S3, SERVICE_UNAVAILABLE, and UNKNOWN.
sync_from_object_count_succeededThe count of objects synchronized from external storage (sync-from) by this instance.
sync_from_object_size_totalTotal size of object data synchronized from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
sync_from_objectsTotal number of objects synchronized from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
Mirror Out

The following metrics are available from the Mirror Out service.

MetricDescription
mirror_failed_totalThe count of failed mirror operations, both whole objects and multi-part uploads.

The mirror (synchronization) type is OUT.

mirror_mpu_bytesThe number of bytes synchronized as part of multi-part uploads (using MultiPartUpload). This metric is updated as uploads proceed.

The mirror (synchronization) type is OUT.

mirror_mpu_errorsThe count of multi-part upload synchronization errors.

The mirror (synchronization) type is OUT.

The client types are:

  • EXTERNAL_S3 - External metadata or storage
  • HCPCS - HCP for cloud scale metadata or storage component
  • TRANSFER - Policy Engine service

The error categories are:

  • AUTHENTICATION - unable to mirror due to invalid credentials or permissions
  • METADATA - failure connecting to Metadata Gateway service
  • OPERATION_ABORTED - mirror operation canceled (MPU was canceled by external party)
  • RESOURCE_NOT_FOUND - object not found
  • S3 - failure to transfer data between source and target
  • SERVICE_UNAVAILABLE - service not available at time of request
  • GENERAL - uncategorized error
mirror_mpu_objectsThe count of objects synchronized using multi-part uploads (using MultiPartUpload).

The mirror (synchronization) type is OUT.

mirror_skippedThe count of skipped mirror operations, on both whole objects and multi-part uploads.

The mirror (synchronization) type is OUT.

mirror_success_totalThe count of objects successfully synchronized.

The mirror (synchronization) type is OUT.

mirror_whole_bytes_totalThe number of bytes synchronized as whole objects (using PutObject).

The mirror (synchronization) type is OUT.

mirror_whole_errors_totalThe count of non-multipart synchronization errors (using PutObject).

The mirror (synchronization) type is OUT.

The client types are:

  • EXTERNAL_S3 - External metadata or storage
  • HCPCS - HCP for cloud scale metadata or storage component
  • TRANSFER - Policy Engine service

The error categories are:

  • AUTHENTICATION - unable to mirror due to invalid credentials or permissions
  • METADATA - failure connecting to Metadata Gateway service
  • OPERATION_ABORTED - mirror operation canceled (MPU was canceled by external party)
  • RESOURCE_NOT_FOUND - object not found
  • S3 - failure to transfer data between source and target
  • SERVICE_UNAVAILABLE - service not available at time of request
  • GENERAL - uncategorized error
mirror_whole_objects_totalThe count of objects synchronized as whole objects (using PutObject).

The mirror (synchronization) type is OUT.

s3_operation_count_totalThe count of S3 operations (READ, WRITE, DELETE, and HEAD) per storage component.

The mirror (synchronization) type is OUT.

sync_to_bytes_copiedThe number of bytes synchronized by full copy to external storage (sync-to) by this instance. This metric is updated as synchronization proceeds.
sync_to_bytes_putcopiedThe number of bytes synchronized by put-copy (previously copied) to external storage (sync-to) by this instance.
sync_to_objectsThe count of objects synchronized to external storage (sync-to) by this instance.
sync_to_object_size_totalThe total size of object data synchronized to external storage (sync-to) by this instance. This metric is updated as synchronization proceeds.
Policy Engine

The following metrics are available from the Policy Engine service.

MetricDescription
confirm_latency_seconds_createdThe timestamp when the metric confirm_latency_seconds (the message queue publish confirmation latency in seconds) was created.
duq_query_latencyThe time to get a response from a get_duq query.
duq_query_latency_countThe number of times the durable update queue (DUQ) has been queried (for determining the average).
duq_query_latency_sumThe aggregate sum of latencies for DUQ queries (for determining the average).
mq_all_bucket_lookup_latency_secondsAverage latency from a lookup of all buckets.
mq_all_mirror_count_totalThe count of messages dispatched to mirror exchange.
mq_all_mirror_drop_count_totalThe count of messages filtered from mirror exchange.
mq_all_notification_count_totalThe count of messages dispatched to notification exchange.
mq_all_notification_drop_count_totalThe count of messages filtered from notification exchange.
mq_queued_messages

A gauge of the queue depth (the number of messages being processed or waiting to be processed) in these product queues:

  • s3.all - Messages resulting from all S3 operations.
  • s3.mirroringEvents - Messages for objects that require mirroring out to an external bucket. This is the Sync-To backlog.
  • s3.mirrorTransfer - Messages for objects that require mirroring in from an external bucket. This is the Sync-From backlog. Note: This queue is limited to 1 million entries. If the queue fills, reading from SQS pauses. There might be additional backlog in SQS.
  • s3.notificationEvents - Messages for objects that require S3 notification to external entities. This is the S3 External Notification backlog.
  • lifecycle.chargeback - Messages that define tasks for aggregating chargeback data. This is the chargeback lifecycle policy task backlog.
  • lifecycle.delete-backend - Messages that define tasks for reclaiming space from storage components. This is the delete backend object lifecycle policy task backlog.
  • lifecycle.expire-mpu - Messages that define tasks for expiring multipart uploads. Expiration is defined using a bucket lifecycle policy. This is the expire in-progress MPU lifecycle policy task backlog.
  • lifecycle.client-object-policy - Messages that define tasks for client object policies, including version expiration, delete marker expiration, and tombstone expiration. Expiration for versions and delete markers are defined using a bucket lifecycle policy. This is the client object expiration lifecycle policy task backlog.
  • lifecycle.mirror-table-maintenance - This is the mirror tracking table lifecycle policy task backlog.

Note: A task represents a range of objects. Each range can have many thousands of objects.

policy_engine_errors_totalThe count of errors, per error type, per instance.
policy_engine_operations_totalThe count of how many times a policy ran, per policy type, per instance (similar to http_s3_servlet_operations_total). Operations include both asynchronous and scheduled operations, such as sync_to, sync_from, and sched_storage_component_healthchecks_examined.
policy_engine_time_totalTotal time spent processing requests per instance. This helps measure load balancing between instances of the Policy Engine service.
sync_from_bytesThe number of bytes synchronized from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
sync_from_bytes_copiedThe number of bytes synchronized by full copy from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
sync_from_bytes_putcopiedThe number of bytes synchronized by put-copy from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
sync_from_objectsTotal number of objects synchronized from external storage (sync-from) by this instance. This metric is updated as synchronization proceeds.
sync_to_bytesThe number of bytes synchronized to external storage (sync-to) by this instance. This metric is updated as synchronization proceeds.
sync_to_bytes_copiedThe number of bytes synchronized by full copy to external storage (sync-to) by this instance. This metric is updated as synchronization proceeds.
sync_to_bytes_putcopiedThe number of bytes synchronized by put-copy (previously copied) to external storage (sync-to) by this instance.
sync_to_object_count_failedThe count of objects that failed to synchronize to external storage (sync-to) by this instance, grouped by class of error. The error classes are AUTHENTICATION, METADATA, OPERATION_ABORTED, RESOURCE_NOT_FOUND, S3, SERVICE_UNAVAILABLE, and UNKNOWN.
sync_to_object_count_succeededThe count of objects synchronized to external storage (sync-to) by this instance.
sync_to_objectsThe count of objects synchronized to external storage (sync-to) by this instance.
RabbitMQ

RabbitMQ is a third-party application that is used by HCP for cloud scale to coordinate tasks submitted to the Policy Engine service for asynchronous processing. You can log in to the RabbitMQ interface to observe queue health. The following metrics are available from RabbitMQ:

  • The number of messages in the queue
  • The number of confirmed messages
  • The number of unconfirmed (unacknowledged) messages
  • The number of consumed (delivered and acknowledged) messages
  • The number of unroutable returned messages
  • The number of nodes in the RabbitMQ cluster
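The same queue counters can also be read programmatically. The following Python sketch uses the standard RabbitMQ HTTP management API (GET /api/queues/{vhost}/{name}); the port 15672, virtual host, and credentials are assumptions that you must substitute for your deployment.

```python
# Sketch: reading queue depth from the RabbitMQ HTTP management API.
# Host, port 15672, vhost, and credentials are assumptions for your deployment.
import base64
import json
from urllib import request

def parse_queue_depth(queue_json: dict) -> dict:
    """Pick out the depth counters from one /api/queues/{vhost}/{name} entry."""
    return {
        "name": queue_json["name"],
        "messages": queue_json.get("messages", 0),
        "unacknowledged": queue_json.get("messages_unacknowledged", 0),
    }

def fetch_queue(host: str, vhost: str, queue: str, user: str, password: str) -> dict:
    """GET one queue's stats; vhost must be URL-encoded ("%2F" for "/")."""
    url = f"https://{host}:15672/api/queues/{vhost}/{queue}"
    req = request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with request.urlopen(req) as resp:
        return parse_queue_depth(json.load(resp))
```

For example, fetch_queue("cluster_name", "%2F", "s3.all", user, password) would return the depth of the s3.all queue listed under the Policy Engine metric mq_queued_messages.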
S3 Gateway

The following metrics are available from the S3 Gateway service.

MetricDescription
http_s3_monitoring_requests_createdThe timestamp when the counter http_s3_monitoring_requests_total was created.
http_s3_monitoring_requests_totalThe total count of S3 monitoring requests.
http_s3_servlet_errors_totalThe total number of errors returned by the s3 servlet, grouped by error.
http_s3_servlet_get_object_response_​bytes_createdThe timestamp when the counter http_s3_servlet_get_object_response_​bytes_total was created.
http_s3_servlet_get_object_response_​bytes_per_bucket_createdThe timestamp when the counter http_s3_servlet_get_object_response_​bytes_per_bucket_total was created.
http_s3_servlet_get_object_response_bytes_per_bucket_totalThe total number of bytes in the body of S3 GET object responses per bucket.
http_s3_servlet_get_object_response_​bytes_totalThe total number of bytes in the body of S3 GET object responses.
http_s3_servlet_ingest_object_bytes_per_​bucket_createdThe timestamp when the counter http_s3_servlet_ingest_object_bytes_per_​bucket_total was created.
http_s3_servlet_ingest_object_bytes_per_bucket_totalThe total number of bytes of objects ingested for the specified bucket.
http_s3_servlet_operations_createdThe timestamp when the counter http_s3_servlet_operations_total was created.
http_s3_servlet_operations_totalThe total number of S3 operations made to the s3 servlet for each method, grouped by operation.
http_s3_servlet_post_object_bytes_createdThe timestamp when the counter http_s3_servlet_post_object_bytes_total was created.
http_s3_servlet_post_object_bytes_totalThe total number of bytes of objects posted to S3.
http_s3_servlet_put_copied_bytes_totalThe total number of bytes of objects PUT-copied (previously copied) to S3.
http_s3_servlet_put_object_bytes_createdThe timestamp when the counter http_s3_servlet_put_object_bytes_total was created.
http_s3_servlet_put_object_bytes_totalThe total number of bytes of objects PUT to S3.
http_s3_servlet_put_object_part_bytes_totalThe total number of bytes of PUT part operations to S3.
http_s3_servlet_requests_histogram_​latency_secondsThe latency in seconds as measured by a histogram timer, grouped by operation.
http_s3_servlet_requests_histogram_​latency_​seconds_bucketThe latency in seconds as measured by a histogram timer, grouped by bucket.
http_s3_servlet_requests_histogram_​latency_​seconds_countThe count of s3 servlet request observations; used with sum to determine average.
http_s3_servlet_requests_histogram_​latency_​seconds_sumSum of s3 servlet request latency in seconds; used with count to determine average.
http_s3_servlet_requests_latency_secondsThe latency in seconds as measured by a summary timer, grouped by operation.
http_s3_servlet_requests_latency_seconds:hour_averageThe latency in seconds over the last hour as measured by a summary timer.
http_s3_servlet_requests_latency_seconds_countThe count of request latency observations; used with sum to determine the average.
http_s3_servlet_requests_latency_seconds_sumThe sum of request latency in seconds.
http_s3_servlet_requests_per_bucket_​createdThe timestamp when the counter http_s3_servlet_requests_per_bucket_total was created.
http_s3_servlet_requests_per_bucket_totalThe total count of PUT, GET, or DELETE requests made to the specified bucket.
http_s3_servlet_requests_createdThe timestamp when the counter http_s3_servlet_requests_total was created.
http_s3_servlet_requests_totalThe total number of requests made to the s3 servlet, grouped by method.
http_s3_servlet_unimplemented_api_​request_createdThe timestamp when the counter http_s3_servlet_unimplemented_api_​request_total was created.
http_s3_servlet_unimplemented_api_​request_totalThe total number of requests made for unimplemented S3 methods.
http_s3_servlet_unimplemented_bucket_​api_​request_totalThe total number of requests made for unimplemented S3 methods per bucket, grouped by API.
http_s3_servlet_unimplemented_object_​api_request_totalThe total number of requests made for unimplemented S3 methods per object, grouped by API.
http_s3_servlet_unimplemented_service_​api_request_totalThe total number of requests made for unimplemented S3 methods per service, grouped by API.
http_s3_servlet_unknown_api_requests_​totalThe total number of requests made for unknown S3 methods, grouped by API.
s3_operation_error_countThe count of failed S3 operations (READ, WRITE, DELETE, and HEAD) per storage component.
s3_operation_latency_secondsThe latency of storage component operations (READ, WRITE, DELETE, and HEAD) in seconds.
s3select_total_bytes_scannedThe number of bytes scanned in the object.
s3select_total_bytes_processedThe number of bytes processed by the request.
s3select_total_bytes_returnedThe number of bytes returned from the request.
s3select_input_typeThe count of requests by file type.
s3select_output_typeThe count of responses by file type.
S3 Notification

The following metrics are available from the S3 Notification service.

MetricDescription
mq_publish_latency_secondsThe message queue publishing latency in seconds.
notification_events_considered_totalThe count of events considered that could lead to notifications.
notification_events_notification_attempted_​totalThe count of events that had at least one notification message attempted.
notification_message_failures_totalThe count of notification messages that were attempted but failed.
notification_message_parsing_failures_totalThe count of candidate object events that could not be parsed.
notification_messages_sent_totalThe count of notification messages that were successfully sent.
notification_message_target_generation_​failures_totalThe count of candidate objects for which a list of notification targets could not be generated.

Examples of metric expressions

By using metrics in formulas, you can generate useful information about the behavior and performance of the HCP for cloud scale system.

Available capacity

The following expression graphs the total capacity of the storage component store54.company.com over time. Information is returned for HCP S Series Node storage components only. The output includes the label store, which identifies the storage component by domain name. The system collects data every five minutes.

storage_total_capacity_bytes{store="store54.company.com"}

The following expression graphs the used capacity of all HCP S Series Node storage components in the system over time. (This is similar to the information displayed on the Storage page.) Information is returned only if all storage components in the system are HCP S Series nodes. The output includes the label aggregate. The system collects data every five minutes.

storage_used_capacity_bytes{store="aggregate"}
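Expressions like these can also be evaluated programmatically through the Prometheus HTTP API (/api/v1/query). The following Python sketch builds an instant query and parses the JSON response; the Prometheus base URL (host and port) is an assumption for your deployment.

```python
# Sketch: evaluating a PromQL expression through the Prometheus HTTP API.
# The base URL (host and port) is an assumption for your deployment.
import json
from urllib import parse, request

def build_query_url(base: str, expr: str) -> str:
    """Build an instant-query URL for /api/v1/query."""
    return f"{base}/api/v1/query?" + parse.urlencode({"query": expr})

def extract_samples(response: dict) -> list:
    """Turn a /api/v1/query JSON response into (labels, value) pairs."""
    return [(r["metric"], float(r["value"][1]))
            for r in response["data"]["result"]]

def query(base: str, expr: str) -> list:
    """Run one instant query and return the parsed samples."""
    with request.urlopen(build_query_url(base, expr)) as resp:
        return extract_samples(json.load(resp))

# Example (assumed endpoint):
# query("https://cluster_name:9090",
#       'storage_total_capacity_bytes{store="store54.company.com"}')
```

The same helper works for any of the expressions in this section, including the aggregate used-capacity query above.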
Growth of active-object count

The following expression graphs the count of active objects (metadata_clientobject_active_count) over time. (This is similar to the graph displayed on the Storage page.) You can use this formula to determine the growth in the number of active objects.

sum(metadata_clientobject_active_count)
Monitoring deletion activities

The metric lifecycle_policy_deleted_backend_objects_count gives the total number of backend objects, including object versions, deleted by the policy DELETE_BACKEND_OBJECTS. You can graph this metric over time to monitor the rate of object deletion. In addition, the following expression graphs the count of deletion activities by the policy.

sum(lifecycle_policy_completed{policy="DELETE_BACKEND_OBJECTS"})
Sum of update queues

The following expression graphs the size of all update queues. You can use this formula to determine whether the system is keeping up with internal events that are processed asynchronously in response to S3 activity. If this graph increases over time, you might want to increase capacity.

sum(update_queue_size)
Changes in S3 put requests over time

The following expression graphs the count of S3 put requests, summed across all nodes, at one-minute intervals. If you remove the specifier {operation="S3PutObjectOperation"}, the expression graphs all S3 requests.

sum(rate(http_s3_servlet_operations_total{operation="S3PutObjectOperation"}[1m]))
Request time service levels

The following expression divides the latency of requests (async_duq_latency_seconds_bucket) in seconds by the number of requests (async_duq_latency_seconds_count), for the bucket getWork and requests less than or equal to 10 ms, and graphs it over time. You can use this formula to determine the percentage of requests completed in a given amount of time.

sum(rate(async_duq_latency_seconds_bucket{op="getWork",le="0.01"}[1m]))/
sum(rate(async_duq_latency_seconds_count{op="getWork"}[1m]))

Here is a sample graph of data from a lightly loaded system:

Prometheus graph of sample service-level data from a lightly loaded system
Request time quantile estimates

The following expression estimates the quantile for the latency of requests (async_duq_latency_seconds_bucket) in seconds for the bucket getWork. You can use this formula to estimate the percentage of requests completed in a given amount of time.

histogram_quantile(.9, sum(rate(async_duq_latency_seconds_bucket{op="getWork"}[1m])) by (le))

Here is a sample graph of data from a lightly loaded system:

Prometheus graph of sample quantile data from a lightly loaded system
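To see what histogram_quantile computes, here is a simplified, hedged Python reimplementation: given cumulative bucket counts, it finds the bucket containing the target rank and interpolates linearly inside it. The bucket boundaries below are illustrative, not taken from the product.

```python
# Simplified sketch of Prometheus histogram_quantile: linear interpolation
# over cumulative histogram buckets. Bucket boundaries are illustrative.

def histogram_quantile(q: float, buckets: list) -> float:
    """buckets: (upper_bound, cumulative_count) pairs sorted by bound;
    the last bound is +inf, as in the Prometheus le="+Inf" bucket."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound   # quantile falls in the +Inf bucket
            if count == prev_count:
                return bound        # empty bucket; avoid dividing by zero
            # Interpolate linearly within the containing bucket.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# With 90 of 100 requests at or under 10 ms, the 0.9 quantile is 10 ms.
buckets = [(0.005, 50.0), (0.01, 90.0), (float("inf"), 100.0)]
print(histogram_quantile(0.9, buckets))  # 0.01
```

This mirrors the interpolation the expression above performs on async_duq_latency_seconds_bucket, only offline and with made-up counts.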

Starting the dashboards

The dashboards are generated from metrics collected by HCP for cloud scale and displayed using a third-party, open-source package. For more information about the interface, go to: https://grafana.com/docs/grafana/latest/dashboards/.

Procedure

  1. There are two ways to start the dashboards:

    • Click the App Switcher menu (a nine-dot square) and then select Grafana.
    • Open a browser window and go to https://cluster_name:3000/login
    You are prompted for login information.
  2. Enter the following initial credentials:

    1. Username: admin

    2. Password: admin

    You are prompted to create a new password for subsequent logins.
  3. Keep or change the password.

    It's best to change the password on first login and retain it securely.

    The dashboards open.

Results

The dashboards are available.

Tracing requests and responses

HCP for cloud scale uses an open-source software tool, running over HTTPS as a service, for service tracing through a browser.

The Tracing service supports end-to-end, distributed tracing of S3 requests and responses by HCP for cloud scale services. Tracing helps you monitor performance and troubleshoot possible issues.

Tracing involves three service instances:

  • Tracing Query: serves traces
  • Tracing Agent: receives spans from tracers
  • Tracing Collector: receives spans from the Tracing Agent service using TChannel

Displaying traces

You can display traces using the tracing service GUI.

To begin tracing, click the App Switcher menu (a nine-dot square) and then select Jaeger. The tracing tool opens in a separate browser window.

When tracing, you can specify:

  • Service to trace
  • Operation to trace (all or specific) for each service
  • Tags
  • Lookback period (by default, over the last hour)
  • Minimum duration
  • Number of results to display (by default, 20)

The service displays all found traces with a chart giving the time duration for each trace. You can select a trace to display how it is served by different services in cascade and the time spent on each service.

For information about the tracing tool, see the documentation provided with the tool.

Traceable operations

The following operations are traceable.

ComponentOperation
async-policy-engineAction Pipeline Action: BucketIdToNameMapAction
Action Pipeline Action: BucketLookupForAsyncPolicyAction
Action Pipeline Action: BucketOwnerIdToNameMapAction
Action Pipeline Action: BucketUpdateSecondaryAction
Action Pipeline Action: ClientObjectDispatchRemoveBack​ReferencesAction
Action Pipeline Action: ClientObjectLookupAction
Action Pipeline Action: ClientObjectModifyInProgressListAction
Action Pipeline Action: ClientObjectModifyListAction
Action Pipeline Action: ClientObjectUpdateSecondaryAction
Action Pipeline Action: DequeueAction
Action Pipeline Action: MetadataAction
BUCKET
CLIENT_OBJECT
STORED_OBJECT_BACK_REFERENCE
balance-engineBalanceCluster
BalanceEngineOperation
controlApi.ControlApiService
RefreshCluster
client-access-serviceAction Pipeline Action: BucketAuthorizationAction
Action Pipeline Action: BucketCountLimitAction
Action Pipeline Action: BucketCreateAction
Action Pipeline Action: BucketRegionValidationAction
Action Pipeline Action: BucketUpdateAclAction
Action Pipeline Action: ClientObjectInitiateMultipartAction
Action Pipeline Action: ClientObjectListInProgressMultipartAction
Action Pipeline Action: ClientObjectListVersionsAction
Action Pipeline Action: ClientObjectSizeLimitAction
Action Pipeline Action: ClientObjectTableLookupAction
Action Pipeline Action: ClientObjectUpdateAclAction
Action Pipeline Action: CompleteMultipartUploadAction
Action Pipeline Action: DataContentAction
Action Pipeline Action: DataDeletionAction
Action Pipeline Action: NotAnonymousAuthorizationAction
Action Pipeline Action: ObjectAuthorizationAction
Action Pipeline Action: ObjectDataPlacementAction
Action Pipeline Action: ObjectGetCurrentExpirationAction
Action Pipeline Action: ObjectGetMultipartAbortDateAction
Action Pipeline Action: ObjectGetUndeterminedExpirationAction
Action Pipeline Action: ObjectLookupAction
Action Pipeline Action: PartDataPlacementAction
Action Pipeline Action: PutAclAction
Action Pipeline Action: RequestBucketLookupAction
Action Pipeline Action: RequestVersionIdValidationAction
Action Pipeline Action: UploadIdValidationAction
Action Pipeline Action: UserLookupBucketsAction
Action Pipeline Action: VersionIdNotEmptyValidationAction
expiration-rules-engineEvaluateOperation
foundry-auth-clientFoundryAuthorizeOperation
FoundryValidateOperation
jaeger-query/api/dependencies
/api/services
/api/services/{service}/operations
/api/traces
mapi-serviceGET
POST
metadata-clientBucketService/Create
BucketService/List
BucketService/ListBucketOwnerListing
BucketService/LookupBucketNameById
BucketService/LookupByName
BucketService/UpdateACL
ClientObjectService/CloseNew
ClientObjectService/ClosePart
ClientObjectService/DeleteSpecific
ClientObjectService/List
ClientObjectService/LookupLatest
ClientObjectService/LookupSpecific
ClientObjectService/OpenNew
ClientObjectService/OpenPart
ClientObjectService/setACLOnLatest
ClientObjectService/Delete
ConfigService/List
ConfigService/LookupById
ConfigService/Set
StoredObjectService/Close
StoredObjectService/Delete
StoredObjectService/List
StoredObjectService/Lookup
StoredObjectService/MarkForCleanup
StoredObjectService/Open
UpdateQueueService/SecondaryEnqueue
UserService/LookupById
UserService/LookupOrCreate
UserService/UpdateAddAuthToken
metadata-coordination-serviceStatus.Service/GetStatus
metadata-gateway-serviceStatus.Service/GetStatus
BucketService/Create
BucketService/List
BucketService/ListBucketOwnerListing
BucketService/LookupBucketNameById
BucketService/LookupByName
BucketService/UpdateACL
ClientObjectService/CloseNew
ClientObjectService/ClosePart
ClientObjectService/DeleteSpecific
ClientObjectService/List
ClientObjectService/LookupLatest
ClientObjectService/LookupSpecific
ClientObjectService/OpenNew
ClientObjectService/OpenPart
ClientObjectService/setACLOnLatest
ConfigService/Delete
ConfigService/List
ConfigService/LookupById
ConfigService/Set
StoredObjectService/Close
StoredObjectService/Delete
StoredObjectService/List
StoredObjectService/Lookup
StoredObjectService/MarkForCleanup
StoredObjectService/Open
UpdateQueueService/SecondaryEnqueue
UserService/LookupById
UserService/LookupOrCreate
UserService/UpdateAddAuthToken
metadata-policy-clientPolicyService/ExecutePolicy
metadata-policy-serviceServiceStatus/GetStatus
PolicyService/ExecutePolicy
ScheduledDeleteBackendObjectsJob
ScheduledDeleteFailedWritesJob
ScheduledExpirationJob
ScheduledIncompleteMultipart​Expiration​Job
storage-component-clientInMemoryStorageComponent​Verify​Operation
InMemoryStorageDeleteOperation
InMemoryStorageReadOperation
InMemoryStorageWriteOperation
storage-component-managerStorageComponentManager Operation: Create
StorageComponentManager Operation: List
StorageComponentManager Operation: Lookup
StorageComponentManager Operation: Update
tomcat-servletS3 Operation