About this document
This document gives late-breaking information about HCP for cloud scale v2.3.3. It includes information that was not available when the technical documentation for this product was published, a list of new features, a list of resolved issues, and a list of known issues and, where applicable, their workarounds.
This document is intended for customers and Hitachi Vantara partners who license and use HCP for cloud scale.
The Hitachi Vantara Support Website is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to the Hitachi Vantara Support Website for contact information: https://support.hitachivantara.com/en_us/contact-us.html.
Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.
About this release
This is build 22.214.171.124 of the Hitachi Content Platform for cloud scale (HCP for cloud scale) software.
HCP for cloud scale is a software-defined object storage solution that is based on a massively parallel microservice architecture and is compatible with the Amazon Simple Storage Service (Amazon S3) application programming interface (API). HCP for cloud scale is especially well suited to service applications requiring high bandwidth and compatibility with the Amazon S3 API.
Features in v2.3.3
HCP for cloud scale v2.3.3 includes the following features.
This version resolves an issue with upgrading to v2.3.2 from versions before v2.3.0 that prevented table migration from completing without manual intervention.
The qualified minimum and standard hardware configuration requirements have been revised.
This section lists the hardware, networking, and operating system requirements for running an HCP for cloud scale system with one or more instances.
To install HCP for cloud scale on on-premises hardware for production use, you must provision at least four instances (nodes) with sufficient CPU, RAM, disk space, and networking capabilities. This table shows the hardware resources required for each instance of an HCP for cloud scale system for a minimum qualified configuration and a standard qualified configuration.
| Resource | Minimum configuration | Standard configuration |
| --- | --- | --- |
| CPU | Single CPU, 10-core | Dual CPU, 20 or more cores |
| RAM | 128 GB | 256 GB |
| Available disk space | (4) 1.92 TB SSD, RAID10 | (8) 1.92 TB SSD, RAID10 |
| Network interface controller (NIC) | (2) 10 Gb Ethernet NICs | (2) 25 Gb Ethernet NICs or (4) 10 Gb Ethernet NICs |
The following table shows the minimum qualified software configuration and standard qualified software configuration for each instance in an HCP for cloud scale system.
| Resource | Minimum configuration | Standard configuration |
| --- | --- | --- |
| IP addresses | (1) static | (2) static |
| Firewall port access | Port 443 for S3 API; Port 8000 for System Management App GUI; Port 9099 for MAPI and Object Storage Management App GUI | Same |
| Network time | IP address of time service (NTP) | Same |
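The port requirements above can be verified from a client host before installation. The following is a minimal sketch using only the Python standard library; the host name shown is a placeholder for one of your instances, not a name defined by the product.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host name):
# for port in (443, 8000, 9099):
#     print(port, port_open("hcpcs-node-1.example.com", port))
```

A successful TCP connection shows only that the port is reachable; it does not validate TLS or the service listening behind it.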
Operating system and Docker minimum requirements
Each server or virtual machine you provide must have the following:
- 64-bit Linux distribution
- Docker version installed: Docker Community Edition 18.09.0 or later
- IP and DNS addresses configured
Additionally, you should install all relevant patches on the operating system and perform appropriate security hardening tasks.
To execute scripts provided with the product on RHEL, you should install Python.
Operating system and Docker qualified versions
This table shows the operating system, Docker, and SELinux configurations with which the HCP for cloud scale system has been qualified.
| Operating system | Docker version | Docker storage configuration | SELinux setting |
| --- | --- | --- | --- |
| Red Hat Enterprise Linux 8.4 | Docker Community Edition 19.03.12 or later | overlay2 | Enforcing |
If you are installing on Amazon Linux, before deployment, edit the file /etc/security/limits.conf on every node to add the following two lines:
* hard nofile 65535
* soft nofile 65535
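After editing limits.conf and starting a new login session, the effective limits can be confirmed in-process. This is a generic sketch using the Python standard library, not a tool provided with the product.

```python
import resource

# Current open-file limits for this process (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile: soft={soft} hard={hard}")

REQUIRED = 65535  # value set by the limits.conf lines above
if soft < REQUIRED:
    print(f"soft limit below {REQUIRED}; check /etc/security/limits.conf")
```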
The Docker installation folder on each instance must have at least 20 GB available for storing the HCP for cloud scale Docker images.
Make sure that the Docker storage driver is configured correctly on each instance before installing HCP for cloud scale. To view the current Docker storage driver on an instance, run docker info.
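The free-space requirement can be checked with a short script. This sketch assumes the default Docker data root of /var/lib/docker; if docker info reports a different Docker Root Dir on your instances, use that path instead.

```python
import shutil

def free_gb(path: str) -> float:
    """Free disk space at path, in GB (10**9 bytes)."""
    return shutil.disk_usage(path).free / 10**9

DOCKER_ROOT = "/var/lib/docker"  # assumption: default Docker data root
REQUIRED_GB = 20  # space needed for the HCP for cloud scale images

# Example:
# if free_gb(DOCKER_ROOT) < REQUIRED_GB:
#     print(f"Less than {REQUIRED_GB} GB free under {DOCKER_ROOT}")
```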
If you are using the Docker devicemapper storage driver:
- Make sure that there's at least 40 GB of Docker metadata storage space available on each instance. HCP for cloud scale needs 20 GB to install successfully and an additional 20 GB to successfully update to a later version. To view Docker metadata storage usage on an instance, run docker info.
- On a production system, do not run Docker in loop-lvm mode. This can cause slow performance or, on certain Linux distributions, HCP for cloud scale might not have enough space to run.
You should decide whether you want to run SELinux on system instances and enable or disable it before installing HCP for cloud scale. To enable or disable SELinux on an instance, you must restart the instance. To view whether SELinux is enabled on an instance, run: sestatus
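The sestatus output can also be checked programmatically, which is convenient when validating many instances. This is a hedged sketch: it parses only the "SELinux status:" line and reports "unknown" when sestatus is not installed.

```python
import subprocess

def parse_sestatus(text: str) -> str:
    """Extract the value of the 'SELinux status:' line, or 'unknown'."""
    for line in text.splitlines():
        if line.startswith("SELinux status:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

def selinux_status() -> str:
    """Run sestatus and return the reported status."""
    try:
        out = subprocess.run(["sestatus"], capture_output=True,
                             text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return "unknown"
    return parse_sestatus(out)
```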
To enable SELinux on the system instances, use a Docker storage driver that supports it. The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
Time source requirements
If you are installing a multi-instance system, each instance should run NTP (network time protocol) and use the same external time source. For information, see support.ntp.org.
The following browsers are qualified for use with HCP for cloud scale software. Other browsers or versions might also work.
- Google Chrome (latest version as of the date of this publication)
- Mozilla Firefox (latest version as of the date of this publication)
Installation or upgrade considerations
This section provides information about installing or upgrading HCP for cloud scale software.
Before beginning an upgrade, ensure that the heap size for the Sentinel service is 8 GiB (which you enter as 8g or 8192m). If the value is too small, upgrade pre-check fails and reports a Sentinel service error.
If you very recently updated to HCP for cloud scale v2.3.0, v2.3.1, or v2.3.2, your system might still be undergoing table migration for consistent listing, which was a major feature in the 2.3 upgrade. Until this migration has completed, you cannot update to 2.3.3 (or any later version). The update software checks the migration state and will not start while migration is ongoing. In this case you must wait for migration to finish and then restart the update to v2.3.3. Depending on the number of objects in the system, table migration can take anywhere from a day to about a week.
You can monitor the progress of migration using the Metrics service to monitor the metric deprecated_metadata_clientobject_active_count. This metric gives the count of objects in deprecated partitions. It decrements during the migration, reaching 0 when all objects are migrated. If this metric is not 0, then migration is not complete. You can use this metric to estimate the migration time remaining, though even after the value reaches 0 there will still be some cleanup.
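If the Metrics service exposes metrics in the common Prometheus text format (an assumption; the exact endpoint and format depend on your configuration), the remaining object count can be extracted with a small parser:

```python
import re
from typing import Optional

def metric_value(exposition: str, name: str) -> Optional[float]:
    """Return the value of a metric in Prometheus-style text, or None."""
    # Matches e.g.: deprecated_metadata_clientobject_active_count 1234
    pattern = re.compile(
        rf"^{re.escape(name)}(?:\{{[^}}]*\}})?\s+([0-9.eE+-]+)\s*$",
        re.MULTILINE,
    )
    m = pattern.search(exposition)
    return float(m.group(1)) if m else None

# Example: migration is complete when this reaches 0.
# remaining = metric_value(metrics_text,
#                          "deprecated_metadata_clientobject_active_count")
```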
The following HCP for cloud scale issues are resolved in this release.
The following table lists resolved issues affecting system management.
|ASP-11334||Upgrade||After upgrade to v2.3.2, table migration fails to complete|
When upgrading from v2.2.1 or earlier to v2.3.2, table migration fails to complete without any status alert.
This issue is resolved. After an upgrade to v2.3.3, migration completes successfully.
Note: When migration is complete, the metric deprecated_metadata_clientobject_active_count reaches 0.
The following issues with HCP for cloud scale have been identified in this release.
Don't try to initialize the encryption key management server (the Vault service) manually outside of HCP for cloud scale. Doing so results in data loss.
The S3 Console application uses the Metrics service to fetch bucket information. For the S3 Console application to display bucket information and statistics to users, configure the port hosting the Metrics service as external.
Object storage management
The following table lists known issues affecting object storage management.
|ASP-2422||Tracing Agent||Incorrect alert message during manual deployment|
When manually deploying a four-node, multi-instance system, the Tracing Agent service returns an alert that the service is below the needed instance count even when the correct number of service instances are deployed.
If you have deployed the correct number of instances you can safely ignore this alert.
|ASP-3081||Management API||API job methods are not supported|
A number of API methods refer to jobs. Jobs are not supported in this release.
|ASP-3119||MAPI Gateway||Blocked thread on authorization timeout|
Authentication and authorization use a system management authorization client service which has a different timeout interval. If a management API authorization or authentication request times out but the underlying client service doesn't, the thread is blocked.
Stop and restart the MAPI Gateway service container.
|ASP-3170||MAPI Gateway||Certain API methods are public|
The MAPI schema includes public API methods, which do not need OAuth tokens.
None needed. The public API methods do not need OAuth tokens.
|ASP-3297||Storage Management||Cannot write to storage even though storage is available|
The storage component to which data is written is selected at random. If a filled storage component is selected, the write might fail.
Use the Object Storage Management application or the MAPI method
|ASP-6630||Storage Management||Setting encryption from multiple clients simultaneously can render existing storage component inaccessible|
If two accounts try to set the encryption flag simultaneously, either using the GUI or the management API method
|ASP-7239||Storage Management||Storage component host name final segment allowed to begin with number|
When configuring a storage component, the last segment of the host name must not begin with a number, but such a host name is incorrectly accepted.
Ensure that the last segment of the host name does not begin with a number before proceeding.
The following table lists known issues affecting system management.
|ASP-3379||Configuration||Cannot set refresh token timeout value|
The Refresh Token Timeout configuration value in the System Management application (Configuration > Security > Settings) has no effect.
|ASP-9921||System update||System update requires manual steps if DARE is enabled|
If data-at-rest encryption (DARE) is enabled, during an upgrade you must monitor the process, detect when the Key Management Server service restarts, and then unseal the vault. S3 traffic is blocked until the vault is unsealed.
The KMS service is now the last service restarted, and you can monitor the progress in the System Management application. An event is logged.
As soon as the KMS service is updated, unseal the vault. (You can keep the unseal keys at hand to enter them immediately.)
|ENS-7957 (FNDD-476)||System update||Network types cannot be configured for new services before system update |
Before starting an update, you are prompted to specify the network configuration for any new services included in the version that you're updating to. However, you can specify only the port numbers for the new service. You cannot specify the network type (that is, internal or external) for the service to use. Each new service gets the default network type, which is determined by the service itself.
|ENS-7962 (FNDD-570)||System update||Uploading an update package fails after the failure and recovery of a system instance |
If a system instance enters the Down state, uploading an update package fails. Even after the instance recovers and the system is healthy, retrying the upload fails again.
|ENS-7964 (FNDD-15)||Volumes||Volume configuration is not displayed correctly in System Management application |
During installation, you can configure volumes for system services by specifying different values in the volume.config file on each system instance. Each volume is correctly configured with the settings you specify, but the Monitoring > Services > Service Details page in the System Management application incorrectly shows each volume as having identical configurations.
|ENS-8299 (FNDD-545)||Service deployment||Database service fails to deploy|
The Cassandra service can fail to deploy with the error Could not contact node over JMX. The log file on the node running the service instance includes the following entry: java.lang.RuntimeException: A node required to move the data consistently is down (/nnn.nnn.nnn.nnn). If you wish to move the data from a potentially inconsistent replica, restart the node with -Dcassandra.consistent.rangemovement=false
|ENS-10750 (FNDD-19)||Updates||Update volume prechecks not performed|
The upgrade process does not validate volume configuration values. As a result, invalid configuration values can be passed to Docker.
Use caution when specifying volume values.
|FNDD-970||MAPI||If IdP is unavailable, threads blocked|
HCP for cloud scale uses a System Management function to validate tokens. The function does not time out. If the identity provider is unavailable, the requesting thread is blocked.
The following table lists known issues affecting the S3 Console application.
|ASP-9373||S3 Console||No message confirming bucket creation|
When a bucket is created, a confirmation message should be displayed. No message is displayed.
After creating a bucket, list the buckets to verify that the bucket was created.
This is the set of documents supporting v2.3.3 of HCP for cloud scale. You should have these documents available before using the product.
- Hitachi Content Platform for Cloud Scale Release Notes (RN‑HCPCS004‑21): This document is for customers and describes new features, product documentation, and resolved and known issues, and provides other useful information about this release of the product.
- Installing Hitachi Content Platform for Cloud Scale (MK‑HCPCS002‑10): This document gives you the information you need to install or update the HCP for cloud scale software.
- Hitachi Content Platform for Cloud Scale Administration Guide (MK‑HCPCS008-06): This document explains how to use the HCP for cloud scale applications to configure and operate a common object storage interface for clients to interact with; configure HCP for cloud scale for your users; enable and disable system features; and monitor the system and its connections.
- Hitachi Content Platform for Cloud Scale S3 Console Guide (MK‑HCPCS009-03): This document is for end users and explains how to use the HCP for cloud scale S3 Console application to use S3 credentials and to simplify the process of creating, monitoring, and maintaining S3 buckets and the objects they contain.
- Hitachi Content Platform for Cloud Scale Management API Reference (MK‑HCPCS007‑07): This document is for customers and describes the management application programming interface (API) methods available for customer use.
The following issues were identified with the documentation, including the online help, after its publication.
The following refers to the Installation Guide.
In the module "System requirements and sizing," in the topic "Operating system and Docker qualified versions," in the table: remove the note "Technical support not available for Docker Community Edition."
Online help, Administration Guide
The following refers to the online help available in the Object Storage Management application profile menu under Help as well as to the Administration Guide.
In the module "System management," in the topic "Instances" > "Requirements for running system instances" > "Operating system and Docker qualified versions," in the table: remove the note "Technical support not available for Docker Community Edition."
Management API reference information
The following refers to the management API reference information available in the Object Storage Management application profile menu under REST API.
The information describes endpoints related to jobs. Jobs are not supported in this release.