
Hitachi Content Platform for Cloud Scale v2.4.5 Release Notes

About this document

This document provides late-breaking information about HCP for cloud scale v2.4.5. It includes information that was not available at the time the technical documentation for this product was published, a list of new features, a list of resolved issues, and a list of known issues and, where applicable, their workarounds.

Intended audience

This document is intended for customers and Hitachi Vantara partners who license and use HCP for cloud scale.

Getting help

The Hitachi Vantara Support Website is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to the Hitachi Vantara Support Website for contact information: https://support.hitachivantara.com/en_us/contact-us.html.

Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.

About this release

This is build 2.4.5.1 of the Hitachi Content Platform for cloud scale (HCP for cloud scale) software.

Major features

HCP for cloud scale is a software-defined object storage solution that is based on a massively parallel microservice architecture and is compatible with the Amazon Simple Storage Service (Amazon S3) application programming interface (API). HCP for cloud scale is especially well suited to service applications requiring high bandwidth and compatibility with the Amazon S3 API.

Features in v2.4.5

New features

The HCP for cloud scale v2.4.5 release includes the following new features:

  • The Metadata Gateway service now provides synchronous information on the current state of partitions.

Resolved issues

The HCP for cloud scale v2.4.5 release resolves the following issues:

  • The Metadata Gateway service can fail to start after a node restarts.
  • The Metadata Gateway service can experience out-of-memory errors under heavy deletion workload.

Best practices for managing HCP S Series Nodes

For a system that includes HCP S Series Node storage components, the best practice is to define a separate user account on them for exclusive use by HCP for cloud scale.

If you have configured HCP for cloud scale to use an account from an HCP S Series Node that is typically used for its management (that is, an administrative account), and you later change that account's credentials for security purposes, HCP for cloud scale can no longer communicate with the HCP S Series Node. This disrupts service, most importantly data access.

You set up accounts when configuring a new storage component. If you have already defined storage components with single accounts, the best practice is to add another, exclusive account on each HCP S Series Node and change the HCP for cloud scale configuration to use that account.

Note: Replacing an account for an existing storage component might cause a data path disruption of a few seconds as the account is defined.

System requirements

This section lists the hardware, networking, and operating system requirements for running an HCP for cloud scale system with one or more instances.

Hardware requirements

To install HCP for cloud scale on on-premises hardware for production use, you must provision at least four instances (nodes) with sufficient CPU, RAM, disk space, and networking capabilities. This table shows the hardware resources required for each instance of an HCP for cloud scale system for a minimum qualified configuration and a standard qualified configuration.

Resource | Minimum configuration | Standard configuration
CPU | Single CPU, 10-core | Dual CPU, 20+ core
RAM | 128 GB | 256 GB
Available disk space | (4) 1.92 TB SSD, RAID10 | (8) 1.92 TB SSD, RAID10
Network interface controller (NIC) | (2) 10 Gb Ethernet NICs | (2) 25 Gb Ethernet NICs or (4) 10 Gb Ethernet NICs

Important: Each instance uses all available RAM and CPU resources on the server or virtual machine on which it's installed.

Software requirements

The following table shows the minimum requirements and best-practice software configurations for each instance in an HCP for cloud scale system.

Resource | Minimum | Best
IP addresses | (1) static | (2) static
Firewall port access | Port 443 for SSL traffic; Port 8000 for System Management App GUI; Port 8888 for Content Search App GUI | Same
Network time | IP address of time service (NTP) | Same
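To confirm that the required ports are reachable from a client host before going live, you can use a simple connectivity check. The following sketch is an illustration only; the host and port values are placeholders for your own environment:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host; substitute an HCP for cloud scale instance address.
for port in (443, 8000, 8888):
    print(port, port_open("hcpcs.example.com", port, timeout=1.0))
```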

Operating system and Docker minimum requirements

Each server or virtual machine you provide must have the following:

  • 64-bit Linux distribution
  • Docker Community Edition 18.09.0 or later installed
  • IP and DNS addresses configured

Additionally, you should install all relevant patches on the operating system and perform appropriate security hardening tasks.

Important: The system cannot run with Docker versions earlier than 1.13.1.

To execute scripts provided with the product on RHEL, you should install Python.

Operating system and Docker qualified versions

This table shows the operating system, Docker, and SELinux configurations with which the HCP for cloud scale system has been qualified.

Important: An issue introduced in Docker Enterprise Edition 19.03.15 and resolved in 20.10.5 prevents HCP for cloud scale deployment. Do not install any version of Docker Enterprise Edition later than 19.03.14 and earlier than 20.10.5.
Operating system | Docker version | Docker storage configuration | SELinux setting
Red Hat Enterprise Linux 8.4 | Docker Community Edition 19.03.12 or later | overlay2 | Enforcing
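If you script environment checks before installation, the affected Docker Enterprise Edition range (above 19.03.14 and below 20.10.5) can be expressed as a simple version comparison. The following sketch is an illustration only; the function names are not part of the product:

```python
def version_tuple(version):
    """Convert a dotted version string such as "19.03.15" to a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def dee_version_blocked(version):
    """True if a Docker Enterprise Edition version falls in the affected range."""
    return (19, 3, 14) < version_tuple(version) < (20, 10, 5)

print(dee_version_blocked("19.03.15"))  # in the affected range
print(dee_version_blocked("20.10.5"))   # fixed release, allowed
```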

If you are installing on Amazon Linux, before deployment, edit the file /etc/security/limits.conf on every node to add the following two lines:

*  hard  nofile  65535
*  soft  nofile  65535
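To confirm that the new limits are in effect, start a new login session and check the limits of a process in that session; this sketch uses Python's standard resource module:

```python
import resource

# Report the current process's open-file limits; after the limits.conf edit
# above and a fresh login session, both values should be 65535.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
```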

Docker considerations

The Docker installation folder on each instance must have at least 20 GB available for storing the HCP for cloud scale Docker images.

Make sure that the Docker storage driver is configured correctly on each instance before installing HCP for cloud scale. To view the current Docker storage driver on an instance, run docker info.

Note: After installation, changing the Docker storage driver requires a reinstallation of HCP for cloud scale.

If you are using the Docker devicemapper storage driver:

  • Make sure that there's at least 40 GB of Docker metadata storage space available on each instance. HCP for cloud scale needs 20 GB to install successfully and an additional 20 GB to successfully upgrade to a later version. To view Docker metadata storage usage on an instance, run docker info.
  • On a production system, do not run devicemapper in loop-lvm mode. Doing so can cause slow performance, and on certain Linux distributions HCP for cloud scale might not have enough space to run.

SELinux considerations

You should decide whether you want to run SELinux on system instances and enable or disable it before installing HCP for cloud scale. Enabling or disabling SELinux on an instance requires restarting the instance. To view whether SELinux is enabled on an instance, run sestatus.

To enable SELinux on the system instances, use a Docker storage driver that supports it. The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.

Time source requirements

If you are installing a multi-instance system, each instance should run NTP (network time protocol) and use the same external time source. For information, see support.ntp.org.

Supported browsers

The following browsers are qualified for use with HCP for cloud scale software. Other browsers or versions might also work.

  • Google Chrome (latest version as of the date of this publication)
  • Microsoft Edge (latest version as of the date of this publication)
  • Mozilla Firefox (latest version as of the date of this publication)

Installation or upgrade considerations

This section provides information about installing or upgrading HCP for cloud scale software.

Upgrades from versions before v2.3.0

Upgrades directly from version v2.2.1 or earlier to v2.4.5 are not supported. An upgrade pre-check examines the software version, and if the version is v2.2.1 or earlier, the upgrade does not proceed. If your HCP for cloud scale software is at version v2.2.1 or earlier, you must first upgrade to at least version v2.3.0 (version v2.3.4 is recommended) before you can upgrade to v2.4.5.

Best practices for system sizing

The best practice for sizing systems for common use cases and distributing product services across instances has been revised.

For information on system sizing and distributing services, see the section Documentation corrections > Online help, Administration Guide in these release notes, which updates the information in the online help and the Administration Guide.

Upgrading systems with synch-to policies

In v2.4.5, only synch-to policies with single destinations are supported.

An upgrade pre-check examines all buckets with synch-to policies. If a bucket has a synch-to policy with multiple destinations, the upgrade does not proceed and the owner of the bucket is notified.

Upgrading from v2.3.x to v2.4.5

Before beginning an upgrade, increase the Max Heap Size value for the Metadata-Gateway service by about one third. For example, if the Max Heap Size for Metadata-Gateway is 48000m (48 GB), change it to 64000m (64 GB), a 33% increase. To increase the value:

  1. In the System Management application, select Dashboard > Services.
  2. Select Metadata-Gateway.
  3. On the Configuration tab, increase the Max Heap Size value as needed.
  4. Click Update.
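The guidance above (increase by about one third) can be sketched as simple arithmetic in the same "<megabytes>m" form used in the example; the function name here is illustrative, not part of the product:

```python
def increased_heap(value_m):
    """Return a Max Heap Size value about one third larger, e.g. "48000m" -> "64000m"."""
    megabytes = int(value_m.rstrip("m"))
    return f"{round(megabytes * 4 / 3)}m"

print(increased_heap("48000m"))
```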

Before beginning an upgrade, ensure that the heap size for the Sentinel service is 8 GiB (which you enter as 8g or 8192m). If the value is too small, upgrade pre-check fails and reports a Sentinel service error.

Before beginning an upgrade, ensure that the Policy Engine service is scaled to three instances. Otherwise, the upgrade fails and reports the error "Service Policy-Engine is underprotected." If you see this error, scale the Policy Engine service up to three instances and restart the upgrade to v2.4.5.

If you very recently upgraded to HCP for cloud scale v2.3.x, your system might still be undergoing table migration for consistent listing, which was a major feature in the v2.3 upgrade. Until this migration has completed, you cannot upgrade to v2.4.5 (or any later version). The upgrade software checks the migration state and will not start while migration is ongoing. In this case you must wait for migration to finish and then restart the upgrade to v2.4.5. Depending on the number of objects in the system, table migration can take anywhere from a day to about a week.

You can monitor the progress of migration using the metric deprecated_metadata_clientobject_active_count in Prometheus. This metric gives the count of objects in deprecated partitions. It decrements during the migration, reaching 0 when all objects are migrated. You can use this metric to estimate the time remaining, though even after the value reaches 0 there is still some cleanup.
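For example, the following sketch (an illustration only, with made-up sample values) estimates the time remaining from two samples of the metric taken a known interval apart:

```python
def estimate_seconds_remaining(count_then, count_now, interval_seconds):
    """Estimate migration time left from two metric samples taken interval_seconds apart."""
    migrated = count_then - count_now
    if migrated <= 0:
        return None  # no measurable progress over this interval
    rate = migrated / interval_seconds  # objects migrated per second
    return count_now / rate

# Hypothetical samples: 1,000,000 objects an hour ago, 900,000 now.
print(estimate_seconds_remaining(1_000_000, 900_000, 3600))
```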

Upgrade if DARE is enabled

If data-at-rest encryption (DARE) is enabled, during an upgrade you must monitor the process, detect when the Key Management Server service restarts, and then unseal the vault. S3 traffic is blocked until the vault is unsealed.

The KMS service is the last service restarted, and you can monitor the progress in the System Management application. An event (8007, Update Completed) is logged when the service restarts, and you can configure email notification for this event.

As soon as the KMS service is upgraded, unseal the vault. Keep the unseal keys at hand so that you can enter them immediately.

After upgrade is completed, users see new behaviors:

  • S3 API GET and HEAD responses include an x-amz-server-side-encryption header for all existing encrypted objects.
  • All existing S3 buckets reflect an active encryption bucket policy, which is verifiable using the GetBucketEncryption API.
  • No S3 user can read (GET) the SSE-S3 bucket policy unless the administrator explicitly adds that permission to the S3 user role.
  • No S3 user can delete the SSE-S3 bucket policy unless the administrator explicitly adds that permission to the S3 user role.
  • No S3 user can add the SSE-S3 bucket policy to new buckets unless the administrator explicitly adds that permission to the S3 user role.
  • All S3 users can electively include x-amz-server-side-encryption headers in any S3 creation operation (PUT, COPY, or Initiate Multipart Upload).
  • Encryption and decryption can continue until the Key-Management-Server service is restarted. After the service restarts, the key management server is consulted and must be unsealed to recommence encryption and decryption.
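For example, a client can add the header to its own request headers before issuing a PUT, COPY, or Initiate Multipart Upload request. The helper below is a hypothetical illustration, not part of the product:

```python
SSE_HEADER = "x-amz-server-side-encryption"

def with_sse(headers, algorithm="AES256"):
    """Return a copy of request headers with the server-side-encryption header added."""
    new_headers = dict(headers)
    new_headers[SSE_HEADER] = algorithm
    return new_headers

# A client would attach these headers to its signed S3 creation request.
print(with_sse({"content-type": "application/octet-stream"}))
```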

Resolved issues

The following HCP for cloud scale issues are resolved in this release.

Object storage management

The following table lists resolved issues affecting object storage management.

Issue | Area affected | Description
ASP-13295 | Metadata Gateway | Metadata Gateway fails to start

Under some circumstances the Metadata Gateway fails to restart when a node restarts.

Resolution

This issue is resolved.

ASP-13176 | Metadata Gateway | Out-of-memory condition under heavy deletion workload

Under heavy deletion workload, the Metadata Gateway can experience timeouts or out-of-memory errors.

Resolution

This issue is resolved. An incorrect memory calculation is corrected.

Known issues

The following issues with HCP for cloud scale have been identified in this release.

Object storage management

The following table lists known issues affecting object storage management.

Issue | Area affected | Description
ASP-2422 | Tracing Agent | Incorrect alert message during manual deployment

When manually deploying a four-node, multi-instance system, the Tracing Agent service returns an alert that the service is below the needed instance count even when the correct number of service instances is deployed.

Workaround

If you have deployed the correct number of instances you can safely ignore this alert.

ASP-3119 | MAPI Gateway | Blocked thread on authorization timeout

Authentication and authorization use a system management authorization client service, which has a different timeout interval. If a management API authorization or authentication request times out but the underlying client service request doesn't, the thread is blocked.

Workaround

Stop and restart the MAPI Gateway service container.

ASP-3170 | MAPI Gateway | Certain API methods are public

The MAPI schema includes public API methods, which do not need OAuth tokens.

Workaround

None needed. The public API methods do not need OAuth tokens.

ASP-11259 | S3 API | No eTag validation on parts of mirrored multi-part uploads

MD5/eTag validation is not performed on mirrorUploadPart operations.

Workaround

Use transport-layer security (TLS) between endpoints for mirrored operations.

System management

The following table lists known issues affecting system management.

Issue | Area affected | Description
ASP-3379 | Configuration | Cannot set refresh token timeout value

The Refresh Token Timeout configuration value in the System Management application (Configuration > Security > Settings) has no effect.

ASP-9433 | System update | Upgrade can fail and cannot be completed

Some conditions that cause an upgrade to fail the pre-check (for example, if the system was improperly prepared for upgrade) render the upgrade unable to complete even if the underlying issue is fixed. Subsequent attempts fail with the message "The upgrade has failed! Click retry to attempt the update again." The detail message is "Wait for service tracing-Query on any instance to start," which is misleading because the specified service is not the cause of the error.

Workaround

If retrying the upgrade fails even after the underlying cause is fixed, contact the Hitachi Vantara Global Support Center for assistance with preparing and completing the upgrade.

ASP-11695 | System update | Upgrade to v2.4 hangs if multiple buckets have synch-to policies with multiple destinations

The v2.4 upgrade pre-check detects if a user bucket has a synch-to policy with multiple destinations, which is not supported in v2.4, and the upgrade does not proceed. However, if multiple buckets have synch-to policies with multiple destinations, the upgrade pre-check can hang in a checking loop.

If this happens, in the System Management application, on the Update page, the Status tab shows the status "Running_prechecks" but the Install tab displays "Update cluster - Error" with 22 steps completed.

In this state the View Details and Retry buttons are available only momentarily during each checking cycle.

Workaround

  1. Determine which buckets violate the pre-check conditions. In the System Management application, do one of the following:
    • If a browser has a high-latency connection to the HCP for cloud scale system, the View Details button might be visible long enough to click it. This option is preferable because the details are more readable and list which buckets need correction, and because once details are displayed the Retry button becomes available, which starts another update attempt without the need to locate and stop the Sentinel service.
    • If the first option is not available, check the Sentinel service to determine the node on which the service instance is running. Examine the Sentinel service log.
  2. If examining logs, look for an update error of the form:
    'com.hds.ensemble.plugin.update.UpdateOperationFailedException: Mirror configuration for the following buckets 
    contained external configurations: [bucket1, bucket2, bucket3]' error
  3. Reconfigure the synch-from policies on the buckets.
  4. If the Retry button is available, click it now; otherwise, enter the following command to stop the Sentinel service on the node, which triggers the service to restart:
    docker stop sentinel-service

FNDO-373 | Volumes | Volume configuration is not displayed correctly in System Management application

During installation, you can configure volumes for system services by specifying different values in the volume.config file on each system instance. Each volume is correctly configured with the settings you specify, but the Monitoring > Services > Service Details page in the System Management application incorrectly shows each volume as having identical configurations.

FNDO-512 | System update | Network types cannot be configured for new services before system update

Before starting an update, you are prompted to specify the network configuration for any new services included in the version that you're updating to. However, you can specify only the port numbers for the new service. You cannot specify the network type (that is, internal or external) for the service to use. Each new service gets the default network type, which is determined by the service itself.

FNDO-758 | MAPI | If IdP is unavailable, threads blocked

HCP for cloud scale uses a System Management function to validate tokens. The function does not time out. If the identity provider is unavailable, the requesting thread is blocked.

FNDO-931 | Updates | Update volume prechecks not performed

The upgrade process does not validate volume configuration values. As a result, invalid configuration values can be passed to Docker.

Workaround

Use caution when specifying volume values.

FNDO-1029 | System update | Uploading an update package fails after the failure and recovery of a system instance

If a system instance enters the Down state, uploading an update package fails. Even after the system instance recovers and the system is in a healthy state, subsequent attempts to upload an update package also fail.

Workaround

  1. In the System Management application, go to the Monitoring > Processes page and for the Upload Plugin Bundle task click Retry Task.
  2. Upload the update package again.

FNDO-1062 | Service deployment | Database service fails to deploy

The Cassandra service can fail to deploy with the error Could not contact node over JMX. The log file on the node running the service instance includes the following entry: java.lang.RuntimeException: A node required to move the data consistently is down (/nnn.nnn.nnn.nnn). If you wish to move the data from a potentially inconsistent replica, restart the node with -Dcassandra.consistent.rangemovement=false

Workaround

  1. Restart the Cassandra container running on that node.
  2. Redeploy the service.

S3 Console

The following table lists known issues affecting the S3 Console application.

Issue | Area affected | Description
ASP-13224 | S3 Console | Misleading message if KEK cannot be retrieved

If a user tries to download a file on a storage component whose key encryption key cannot be retrieved, the S3 Console displays a page with the message "This page isn't working" and an HTTP 503 (service unavailable) error. (In some browsers the page is entirely blank.)

Workaround

Investigate and resolve the missing-key issue.

ASP-11600 | S3 Console | Enabling object lock without permission fails with briefly displayed error message

If a user without permission to set object locks creates a bucket and tries to enable object locking, the bucket is created without object locking enabled, and a message indicating that object locking was not enabled appears only briefly.

Workaround

Users need specific permissions to set object locking on their buckets. In the System Management application, select Configuration > Security > Roles and set the Data Service permissions data:bucket:objectlock:get and data:bucket:objectlock:set to Yes for the user's role.

If a user needs to configure both bucket object lock and bucket expiration lifecycle rules, assign the following permissions for the user's role:

  • data:bucket:objectlock:get
  • data:bucket:objectlock:set
  • data:bucket:expirationlifecycle:get
  • data:bucket:expirationlifecycle:set

Related documents

This is the set of documents supporting v2.4.5 of HCP for cloud scale. You should have these documents available before using the product.

  • Hitachi Content Platform for Cloud Scale Release Notes (RN‑HCPCS004‑31): This document is for customers and describes new features, product documentation, and resolved and known issues, and provides other useful information about this release of the product.
  • Installing Hitachi Content Platform for Cloud Scale (MK‑HCPCS002‑11): This document gives you the information you need to install or upgrade the HCP for cloud scale software.
  • Hitachi Content Platform for Cloud Scale Administration Guide (MK‑HCPCS008-07): This document explains how to use the HCP for cloud scale applications to configure and operate a common object storage interface for clients to interact with; configure HCP for cloud scale for your users; enable and disable system features; and monitor the system and its connections.
  • Hitachi Content Platform for Cloud Scale S3 Console Guide (MK‑HCPCS009-04): This document is for end users and explains how to use the HCP for cloud scale S3 Console application to use S3 credentials and to simplify the process of creating, monitoring, and maintaining S3 buckets and the objects they contain.
  • Hitachi Content Platform for Cloud Scale Management API Reference (MK‑HCPCS007‑07): This document is for customers and describes the management application programming interface (API) methods available for customer use.

Documentation corrections

The following issues were identified with the documentation, including the online help, after its publication.

Online help, Administration Guide

The following refers to the online help available in the Object Storage Management application profile menu under Help as well as to the Administration Guide.

Best practices

In the module "Best practices," in the topic "Best practices for system sizing and scaling" > "Sizing and scaling models": replace the last two paragraphs with the following:

If your planned usage of the HCP for cloud scale system matches one of these use cases it's best to size and scale it as follows:

  • The minimum cluster size is six instances (nodes).
  • With fewer than eight instances (nodes), do not scale resource-intensive services across more than one master node.
  • With eight or more instances (nodes), do not scale resource-intensive services across any master nodes.

If, however, your planned usage of the HCP for cloud scale system does not match any of these use cases it's best to size and scale it as follows:

  • The minimum cluster size is four instances (nodes).
  • With four or five instances (nodes), do not scale resource-intensive services across more than two master nodes.
  • With 6-11 instances (nodes), do not scale resource-intensive services across more than one master node.
  • With 12 or more instances (nodes), do not scale resource-intensive services across any master nodes.