Hitachi Content Platform for Cloud Scale v2.4.2 Release Notes

About this document

This document gives late-breaking information about HCP for cloud scale v2.4.2. It includes information that was not available at the time the technical documentation for this product was published, a list of new features, a list of resolved issues, and a list of known issues and, where applicable, their workarounds.

Intended audience

This document is intended for customers and Hitachi Vantara partners who license and use HCP for cloud scale.

Getting help

The Hitachi Vantara Support Website is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to the Hitachi Vantara Support Website for contact information: https://support.hitachivantara.com/en_us/contact-us.html.

Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.

About this release

This is build 2.4.2.2 of the Hitachi Content Platform for cloud scale (HCP for cloud scale) software.

Major features

HCP for cloud scale is a software-defined object storage solution that is based on a massively parallel microservice architecture and is compatible with the Amazon Simple Storage Service (Amazon S3) application programming interface (API). HCP for cloud scale is especially well suited to service applications requiring high bandwidth and compatibility with the Amazon S3 API.

Features in v2.4.2

The HCP for cloud scale v2.4.2 release resolves the following issues:

  • This release corrects an issue in which S3 object listing requests may time out if a Metadata-Gateway instance goes down.

  • This release corrects an issue in which systems that had data at rest encryption (DARE) enabled showed performance regression when working with larger objects.

System requirements

This section lists the hardware, networking, and operating system requirements for running an HCP for cloud scale system with one or more instances.

Hardware requirements

To install HCP for cloud scale on on-premises hardware for production use, you must provision at least four instances (nodes) with sufficient CPU, RAM, disk space, and networking capabilities. This table shows the hardware resources required for each instance of an HCP for cloud scale system for a minimum qualified configuration and a standard qualified configuration.

Resource | Minimum configuration | Standard configuration
CPU | Single CPU, 10-core | Dual CPU, 20+ core
RAM | 128 GB | 256 GB
Available disk space | (4) 1.92 TB SSD, RAID10 | (8) 1.92 TB SSD, RAID10
Network interface controller (NIC) | (2) 10 Gb Ethernet NICs | (2) 25 Gb Ethernet NICs or (4) 10 Gb Ethernet NICs

Important: Each instance uses all available RAM and CPU resources on the server or virtual machine on which it's installed.

Software requirements

The following table shows the minimum requirements and best-practice software configurations for each instance in an HCP for cloud scale system.

Resource | Minimum | Best
IP addresses | (1) static | (2) static
Firewall port access | Port 443 for SSL traffic; Port 8000 for System Management App GUI; Port 8888 for Content Search App GUI | Same
Network time | IP address of time service (NTP) | Same
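
For example, on an instance that uses firewalld, opening these ports might look like the following. This is a minimal sketch, assuming firewalld is the active firewall; adapt it to the firewall tooling and security policy used in your environment.

# Sketch: open the ports listed above (assumes firewalld)
sudo firewall-cmd --permanent --add-port=443/tcp    # SSL traffic
sudo firewall-cmd --permanent --add-port=8000/tcp   # System Management App GUI
sudo firewall-cmd --permanent --add-port=8888/tcp   # Content Search App GUI
sudo firewall-cmd --reload                          # apply the permanent rules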

Operating system and Docker minimum requirements

Each server or virtual machine you provide must have the following:

  • 64-bit Linux distribution
  • Docker version installed: Docker Community Edition 18.09.0 or later
  • IP and DNS addresses configured

Additionally, you should install all relevant patches on the operating system and perform appropriate security hardening tasks.

Important: The system cannot run with Docker versions earlier than 1.13.1.

To execute scripts provided with the product on RHEL, you should install Python.
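
As a quick sanity check before installing, you can confirm these requirements from a shell on each instance. This is only an illustrative sketch; the exact commands and output depend on your distribution.

uname -m                        # expect x86_64 (64-bit Linux distribution)
docker --version                # expect Docker Community Edition 18.09.0 or later
hostname -I                     # confirm the expected IP address is configured
getent hosts "$(hostname -f)"   # confirm DNS resolution for this host
python3 --version               # Python, needed on RHEL to run the provided scripts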

Operating system and Docker qualified versions

This table shows the operating system, Docker, and SELinux configurations with which the HCP for cloud scale system has been qualified.

Important: An issue introduced in Docker Enterprise Edition 19.03.15 and resolved in 20.10.5 prevented HCP for cloud scale deployment. Do not install any version of Docker Enterprise Edition later than 19.03.14 and earlier than 20.10.5.
Operating system | Docker version | Docker storage configuration | SELinux setting
Red Hat Enterprise Linux 8.4 | Docker Community Edition 19.03.12 or later | overlay2 | Enforcing

If you are installing on Amazon Linux, before deployment, edit the file /etc/security/limits.conf on every node to add the following two lines:

*  hard  nofile  65535
*  soft  nofile  65535
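
After editing /etc/security/limits.conf and logging in again, you can confirm that the new limits are in effect. A minimal check:

ulimit -Hn    # hard limit for open files; expect 65535
ulimit -Sn    # soft limit for open files; expect 65535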

SELinux considerations

Decide whether you want to run SELinux on system instances, and enable or disable it, before installing HCP for cloud scale. Enabling or disabling SELinux on an instance requires restarting the instance. To view whether SELinux is enabled on an instance, run the command sestatus.

To enable SELinux on the system instances, use a Docker storage driver that supports it. The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
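
For example, to review the current SELinux state on an instance before deciding, you might run the following. This is a sketch; the configuration file path shown is the usual Red Hat Enterprise Linux location.

sestatus       # full SELinux status, including whether it is enabled and the current mode
getenforce     # short form: Enforcing, Permissive, or Disabled
# To change the mode persistently, edit the SELINUX= line in /etc/selinux/config
# and then restart the instance.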

Time source requirements

If you are installing a multi-instance system, each instance should run NTP (network time protocol) and use the same external time source. For information, see support.ntp.org.
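
How you verify time synchronization depends on the time daemon your distribution uses. A rough sketch for the common cases:

timedatectl status    # shows whether the system clock is synchronized (systemd-based systems)
chronyc tracking      # time source details when chrony is the NTP client
ntpq -p               # peer list when the classic ntpd daemon is the NTP client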

Supported browsers

The following browsers are qualified for use with HCP for cloud scale software. Other browsers or versions might also work.

  • Google Chrome (latest version as of the date of this publication)
  • Microsoft Edge (latest version as of the date of this publication)
  • Mozilla Firefox (latest version as of the date of this publication)

Installation or upgrade considerations

This section provides information about installing or upgrading HCP for cloud scale software.

Upgrades from versions before v2.3.0

Upgrades directly from v2.2.1 or earlier to v2.4.2 are not supported. An upgrade pre-check examines the software version, and if the version is v2.2.1 or earlier, the upgrade does not proceed. If your HCP for cloud scale software is at v2.2.1 or earlier, you must first upgrade to at least v2.3.0 (v2.3.3 is recommended) before you can upgrade to v2.4.2.

Best practices for system sizing

The best practice for sizing systems for common use cases and distributing product services across instances has been revised.

For information on system sizing and distributing services, see the section Documentation corrections: Online help, Administration Guide in these release notes, which updates the information in the online help and the Administration Guide.

Upgrading systems with synch-to policies

In v2.4.2, only synch-to policies with single destinations are supported.

An upgrade pre-check examines all buckets with synch-to policies. If a bucket has a synch-to policy with multiple destinations the upgrade does not proceed and the owner of the bucket is notified.

Upgrading from v2.3.0, v2.3.1, or v2.3.2 to v2.4.2

Before beginning an upgrade, ensure that the heap size for the Sentinel service is 8 GiB (which you enter as 8g or 8192m). If the value is too small, upgrade pre-check fails and reports a Sentinel service error.

If you very recently updated to HCP for cloud scale v2.3.0, v2.3.1, or v2.3.2, your system might still be undergoing table migration for consistent listing, which was a major feature in the 2.3 upgrade. Until this migration has completed, you cannot update to 2.4.2 (or any later version). The update software checks the migration state and will not start while migration is ongoing. In this case you must wait for migration to finish and then restart the update to v2.4.2. Depending on the number of objects in the system, table migration can take anywhere from a day to about a week.

You can monitor the progress of migration using the metric deprecated_metadata_clientobject_active_count in Prometheus. This metric gives the count of objects in deprecated partitions. It decrements during the migration, reaching 0 when all objects are migrated. You can use this metric to estimate the time remaining, though even after the value reaches 0 there will still be some cleanup.
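
For example, you can read the current value of this metric through the Prometheus HTTP API. The sketch below assumes Prometheus is reachable at prometheus-host:9090; substitute the address used in your deployment.

# Number of objects still in deprecated partitions (0 = migration complete)
curl -s 'http://prometheus-host:9090/api/v1/query?query=deprecated_metadata_clientobject_active_count'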

Upgrade if DARE is enabled

If data-at-rest encryption (DARE) is enabled, during an upgrade you must monitor the process, detect when the Key Management Server service restarts, and then unseal the vault. S3 traffic is blocked until the vault is unsealed.

The KMS service is the last service restarted, and you can monitor the progress in the System Management application. An event (8007, Update Completed) is logged when the service restarts, and you can configure email notification for this event.

As soon as the KMS service is updated, unseal the vault. (You can keep the unseal keys at hand to enter them immediately.)

Resolved issues

The following HCP for cloud scale issues are resolved in this release.

Object storage management

The following resolved issues affect object storage management.

ASP-3081 (Management API): API job methods are not supported

A number of API methods refer to jobs. Jobs are not supported in this release.

Resolution

This issue is resolved. References to Object Storage Management jobs have been removed.

ASP-12307 (Metadata Gateway): Listing requests time out on Metadata-Gateway instances

S3 object listing requests might time out if a Metadata-Gateway instance goes down.

Resolution

This issue is resolved.

ASP-12708 (Metadata Gateway): Throughput reduced on DARE systems

After an installation or upgrade to v2.4.1 on systems with data-at-rest-encryption (DARE) enabled, large object throughput was reduced.

Resolution

This issue is resolved.

Known issues

The following issues with HCP for cloud scale have been identified in this release.

Object storage management

The following known issues affect object storage management.

ASP-2422 (Tracing Agent): Incorrect alert message during manual deployment

When manually deploying a four-node, multi-instance system, the Tracing Agent service returns an alert that the service is below the needed instance count even when the correct number of service instances are deployed.

Workaround

If you have deployed the correct number of instances you can safely ignore this alert.

ASP-3119 (MAPI Gateway): Blocked thread on authorization timeout

Authentication and authorization use a system management authorization client service which has a different timeout interval. If a management API authorization or authentication request times out but the underlying client service doesn't, the thread is blocked.

Workaround

Stop and restart the MAPI Gateway service container.

ASP-3170 (MAPI Gateway): Certain API methods are public

The MAPI schema includes public API methods, which do not need OAuth tokens.

Workaround

None needed. The public API methods do not need OAuth tokens.

ASP-11259 (S3 API): No eTag validation on parts of mirrored multi-part uploads

MD5/eTag validation is not performed on mirrorUploadPart operations.

Workaround

Use transport-layer security (TLS) between endpoints for mirrored operations.

System management

The following known issues affect system management.

ASP-3379 (Configuration): Cannot set refresh token timeout value

The Refresh Token Timeout configuration value in the System Management application (Configuration > Security > Settings) has no effect.

ASP-11695 (System update): Upgrade to v2.4 hangs if multiple buckets have synch-to policies with multiple destinations

The v2.4 upgrade pre-check detects if a user bucket has a synch-to policy with multiple destinations, which is not supported in v2.4, and the upgrade does not proceed. However, if multiple buckets have synch-to policies with multiple destinations, the upgrade pre-check can hang in a checking loop.

If this happens, in the System Management application, on the Update page, the Status tab shows the status "Running_prechecks" but the Install tab displays "Update cluster - Error" with 22 steps completed.

In this state the View Details and Retry buttons are available only momentarily during each checking cycle.

Workaround

  1. Determine which buckets violate the pre-check conditions. In the System Management application, do one of the following:
    • If a browser has a high-latency connection to the HCP for cloud scale system, the View Details button might be visible long enough to click it. This option is preferable because the details are more readable and list which buckets need correction, and because once details are displayed the Retry button becomes available, which starts another update attempt without the need to locate and stop the Sentinel service.
    • If the first option is not available, check the Sentinel service to determine the node on which the service instance is running. Examine the Sentinel service log.
  2. If examining logs, look for an update error of the form (a log-search sketch follows this procedure):
    'com.hds.ensemble.plugin.update.UpdateOperationFailedException: Mirror configuration for the following buckets 
    contained external configurations: [bucket1, bucket2, bucket3]' error
  3. Reconfigure the synch-from policies on the buckets.
  4. If the Retry button is available, click it now; otherwise, enter the following command to stop the Sentinel service on the node, which triggers the service to restart:
    docker stop sentinel-service
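
If you need to search the Sentinel service log from the node running the service instance, a rough sketch using the Docker CLI follows. The container name is taken from the command above and is an assumption; confirm the actual name with docker ps.

docker ps --filter name=sentinel --format '{{.Names}}'                     # confirm the Sentinel container name
docker logs sentinel-service 2>&1 | grep UpdateOperationFailedException    # list the buckets named in the error
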
FNDO-373 (Volumes): Volume configuration is not displayed correctly in System Management application

During installation, you can configure volumes for system services by specifying different values in the volume.config file on each system instance. Each volume is correctly configured with the settings you specify, but the Monitoring > Services > Service Details page in the System Management application incorrectly shows each volume as having identical configurations.

FNDO-512 (System update): Network types cannot be configured for new services before system update

Before starting an update, you are prompted to specify the network configuration for any new services included in the version that you're updating to. However, you can specify only the port numbers for the new service. You cannot specify the network type (that is, internal or external) for the service to use. Each new service gets the default network type, which is determined by the service itself.

FNDO-758 (MAPI): If IdP is unavailable, threads blocked

HCP for cloud scale uses a System Management function to validate tokens. The function does not time out. If the identity provider is unavailable, the requesting thread is blocked.

FNDO-931 (Updates): Update volume prechecks not performed

The upgrade process does not validate volume configuration values. As a result, invalid configuration values can be passed to Docker.

Workaround

Use caution when specifying volume values.

FNDO-1029 (System update): Uploading an update package fails after the failure and recovery of a system instance

If a system instance enters the Down state, uploading an update package fails. However, after the system instance recovers, uploading an update package fails again even though the system is in a healthy state.

Workaround

  1. In the System Management application, go to the Monitoring > Processes page and for the Upload Plugin Bundle task click Retry Task.
  2. Upload the update package again.
FNDO-1062 (Service deployment): Database service fails to deploy

The Cassandra service can fail to deploy with the error Could not contact node over JMX. The log file on the node running the service instance includes the following entry: java.lang.RuntimeException: A node required to move the data consistently is down (/nnn.nnn.nnn.nnn). If you wish to move the data from a potentially inconsistent replica, restart the node with -Dcassandra.consistent.rangemovement=false

Workaround

  1. Restart the Cassandra container running on that node (a sketch follows this list).
  2. Redeploy the service.
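
A rough sketch of step 1 using the Docker CLI; the name filter is an assumption, so confirm the actual Cassandra container name on the affected node first.

# Restart the Cassandra container on the affected node, then redeploy the service
docker restart "$(docker ps --quiet --filter name=cassandra)"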

S3 Console

The following known issues affect the S3 Console application.

ASP-11600 (S3 Console): Enabling object lock without permission fails with briefly displayed error message

If a user without permission to set object locks creates a bucket and tries to enable object locking, the bucket is created but object locking is not enabled, and a message indicating that object locking was not enabled appears only briefly.

Workaround

Users need specific permissions to set object locking on their bucket. In the System Management application, select Configuration > Security > Roles and set the Data Service permissions data:bucket:objectlock:get and data:bucket:objectlock:set to Yes for the user's role.

If a user needs to configure both bucket object lock and bucket expiration lifecycle rules, assign the following permissions for the user's role:

  • data:bucket:objectlock:get
  • data:bucket:objectlock:set
  • data:bucket:expirationlifecycle:get
  • data:bucket:expirationlifecycle:set

Docker considerations

The Docker installation folder on each instance must have at least 20 GB available for storing the HCP for cloud scale Docker images.

Make sure that the Docker storage driver is configured correctly on each instance before installing HCP for cloud scale. To view the current Docker storage driver on an instance, run docker info.

Note: After installation, changing the Docker storage driver requires a reinstallation of HCP for cloud scale.

If you are using the Docker devicemapper storage driver:

  • Make sure that there's at least 40 GB of Docker metadata storage space available on each instance. HCP for cloud scale needs 20 GB to install successfully and an additional 20 GB to successfully update to a later version. To view Docker metadata storage usage on an instance, run docker info (a sketch follows this list).
  • On a production system, do not run devicemapper in loop-lvm mode. This can cause slow performance or, on certain Linux distributions, HCP for cloud scale might not have enough space to run.
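
A sketch of the checks described in this section; the path used for the free-space check assumes the default Docker installation folder, /var/lib/docker.

docker info | grep -iE 'storage driver|metadata space'   # storage driver and, for devicemapper, metadata space usage
df -h /var/lib/docker                                     # free space in the Docker installation folder (default location assumed)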

Related documents

This is the set of documents supporting v2.4.2 of HCP for cloud scale. You should have these documents available before using the product.

  • Hitachi Content Platform for Cloud Scale Release Notes (RN‑HCPCS004‑25): This document is for customers and describes new features, product documentation, and resolved and known issues, and provides other useful information about this release of the product.
  • Installing Hitachi Content Platform for Cloud Scale (MK‑HCPCS002‑11): This document gives you the information you need to install or update the HCP for cloud scale software.
  • Hitachi Content Platform for Cloud Scale Administration Guide (MK‑HCPCS008-07): This document explains how to use the HCP for cloud scale applications to configure and operate a common object storage interface for clients to interact with; configure HCP for cloud scale for your users; enable and disable system features; and monitor the system and its connections.
  • Hitachi Content Platform for Cloud Scale S3 Console Guide (MK‑HCPCS009-04): This document is for end users and explains how to use the HCP for cloud scale S3 Console application to use S3 credentials and to simplify the process of creating, monitoring, and maintaining S3 buckets and the objects they contain.
  • Hitachi Content Platform for Cloud Scale Management API Reference (MK‑HCPCS007‑07): This document is for customers and describes the management application programming interface (API) methods available for customer use.

Documentation corrections

The following issues were identified with the documentation, including the online help, after its publication.

Online help, Administration Guide

The following refers to the online help available in the Object Storage Management application profile menu under Help as well as to the Administration Guide.

Best practices

In the module "Best practices," in the topic "Best practices for system sizing and scaling" > "Sizing and scaling models": replace the last two paragraphs with the following:

If your planned usage of the HCP for cloud scale system matches one of these use cases it's best to size and scale it as follows:

  • The minimum cluster size is six instances (nodes).
  • With fewer than eight instances (nodes), do not scale resource-intensive services across more than one master node.
  • With eight or more instances (nodes), do not scale resource-intensive services across any master nodes.

If, however, your planned usage of the HCP for cloud scale system does not match any of these use cases it's best to size and scale it as follows:

  • The minimum cluster size is four instances (nodes).
  • With four or five instances (nodes), do not scale resource-intensive services across more than two master nodes.
  • With 6-11 instances (nodes), do not scale resource-intensive services across more than one master node.
  • With 12 or more instances (nodes), do not scale resource-intensive services across any master nodes.