Content Platform 9.4.0 Release notes - Customer
About this document
This document contains release notes for Hitachi Content Platform 9.4.0.
Release highlights
HCP supports encryption of data stored on HCP S Series Nodes and in cloud storage tiers using keys from a Key Management Server (KMS), communicating with the KMS through the KMIP protocol. HCP 9.4.0 is qualified with CipherTrust Manager 2.4, 2.6, and 2.7; however, any KMS that complies with KMIP 1.4 or later is supported with HCP 9.4.0.
In HCP 9.4.0, several enhancements are made to increase HCP's compatibility with the AWS S3 API:
- Support for deleting a specific version of an object using the S3 API. This functionality was available through the REST API in previous releases; this release extends it to the S3 API.
- Ability to delete a delete marker version of an object, which also makes the prior version of the object the current version.
- Ability to overwrite an object using the S3 API. For non-versioned buckets in HCP 9.4.0, a PUT request for an existing object creates a new version of the object. This functionality is not supported with the REST API or the NFS or CIFS protocols.
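As a sketch of the new version-delete capability, the request shape is the standard S3 one: a DELETE with a versionId query parameter. The hostname, bucket, key, and version ID below are illustrative placeholders, not values from this document:

```python
from urllib.parse import urlencode, quote

def delete_version_url(endpoint, bucket, key, version_id):
    """Build the DELETE request URL that removes one specific object version.

    Omitting versionId deletes the current version (creating a delete marker
    on a versioned bucket); including it removes exactly that version.
    """
    query = urlencode({"versionId": version_id})
    return f"https://{bucket}.{endpoint}/{quote(key)}?{query}"

# The actual request would be: DELETE <url>, signed with AWS SigV4 headers.
url = delete_version_url("tenant.hcp.example.com", "mybucket",
                         "docs/report.pdf", "96970336853312")
print(url)
```

S3 SDKs expose the same operation directly; for example, boto3's `delete_object` accepts a `VersionId` argument.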
TLS 1.3 is the latest version of TLS, the Internet's most widely deployed security protocol, which encrypts data to provide a secure communication channel between two endpoints. When TLS 1.3 is configured as the minimum security protocol, the 3DES ciphers are not available and, as a result, the Enable 3DES Ciphers option is grayed out.
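On the client side, the effect of a TLS 1.3 minimum can be mirrored with a standard TLS library. This sketch uses Python's ssl module; the hostname in the comment is an illustrative placeholder:

```python
import ssl

# Build a client context that refuses anything older than TLS 1.3,
# mirroring an HCP cluster configured with TLS 1.3 as its minimum protocol.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# A connection would then be wrapped as usual, e.g.:
#   with socket.create_connection(("hcp.example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="hcp.example.com") as tls:
#           tls.version()  # would report "TLSv1.3"
print(context.minimum_version)
```

Under TLS 1.3 the protocol defines its own small cipher-suite list (AES-GCM and ChaCha20-Poly1305), which is why legacy suites such as 3DES cannot be negotiated regardless of configuration.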
HCP 9.4.0 provides additional password policy settings, such as blocking password re-use and blocking common passwords, alongside the existing policies, to prevent vulnerabilities and help organizations comply with the latest password and security regulations.
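As an illustration only (the names, rules, and word list below are assumptions, not HCP's implementation), the two new policy types amount to checks like the following:

```python
import hashlib

# Tiny illustrative deny-list; a real policy would use a much larger one.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def _digest(password: str) -> str:
    # A real system would store salted hashes; plain SHA-256 keeps the sketch short.
    return hashlib.sha256(password.encode()).hexdigest()

def password_allowed(password: str, history_digests: list,
                     reuse_depth: int = 5) -> bool:
    """Reject common passwords and re-use of the last `reuse_depth` passwords."""
    if password.lower() in COMMON_PASSWORDS:
        return False
    return _digest(password) not in history_digests[-reuse_depth:]

history = [_digest(p) for p in ("OldSecret1", "OldSecret2")]
print(password_allowed("password", history))      # common password -> False
print(password_allowed("OldSecret2", history))    # recently used -> False
print(password_allowed("Fresh-Pass-9", history))  # allowed -> True
```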
This enhancement allows multiple nodes to be added to an existing HCP cluster in a single step, while minimizing production impact. This also reduces downtime and maintenance window requirements.
Enhancements have been made to the System Management Console (SMC) to make it easier to identify the type of HCP system configuration. In addition, a set of checklist items has been added to the documentation to aid the process of performing the node recovery service procedure with specific HCP configurations.
HCP 9.4.0 adds intelligence that checks preset parameters for the length of a node outage and the frequency of node rolls before reporting a node outage. At times, a node in an HCP cluster might go down for a short period because of a planned event or a temporary unplanned issue in a customer's network. If such an outage is temporary, generating node-outage alerts is not useful. This capability is available via SNMP.
Upgrade notes
If a namespace with the support delete markers option enabled is added to a replication link whose peer is a pre-9.4 HCP cluster, the replication link is paused until the older cluster is upgraded to HCP 9.4.
Upgrades to HCP 9.4.0 will fail if any of the namespaces have HSwift protocol enabled. Disable HSwift protocol on the relevant namespaces before upgrading. Support for HSwift protocol has ended with HCP 9.3.0.
If you try to replicate namespaces with unbalanced directory mode enabled to a pre-HCP 9.3.0 version cluster, the affected tenant on the replication link will be paused. The other tenants on the replication link will continue to replicate. It is recommended to upgrade all clusters in the replication topology to HCP 9.3.0 before using the unbalanced directory mode feature.
Upgrades to version 9.2.1 or later will fail if any service plans exist that have SMTP enabled and use direct write to HCP S Series Nodes as the primary ingest tier. Please modify these service plans before upgrading to version 9.2.1 or later. For more information, please contact your authorized HCP service provider.
If you attempt to replicate objects that contain a labeled retention hold to a pre-version 9.1 cluster, the affected tenant (all its replicated namespaces) on the replication link will be paused, while other tenants on the link continue to replicate. Therefore, it is recommended to upgrade all clusters in the replication topology to version 9.1 before using the labeled retention hold feature.
You can upgrade an HCP system to version 9.x only from version 8.x. You cannot downgrade HCP to an earlier version.
You must have at least 32 GB of RAM per node to use new software features introduced in HCP version 9.x. While you can upgrade an HCP system to version 9.x with a minimum of 12 GB of RAM per node and receive the patches and bug fixes associated with the upgrade, the system cannot use the new software features in the release. Inadequate RAM causes performance degradation and can negatively affect system stability. If you have less than 32 GB RAM per node and would like to upgrade to this release, contact your Hitachi Vantara account team.
HCP upgrades can occur with the system either online or offline. During an online upgrade, the system remains available to users and applications. Offline upgrades are faster than online upgrades, but the system is unavailable while the upgrade is in progress.
HCP Data Migrator is no longer supported starting with HCP version 9.4.0. HCP Data Migrator was deprecated as of HCP release 9.2.1.
Supported limits
HCP supports the limits listed in the following tables.
Hardware | Support limit |
Maximum number of general access, G Series Access Nodes | 80 |
Maximum number of HCP S Series Nodes | 80 |
Hardware | Support limit |
Maximum number of KMIP servers | 8 |
Logical volume | Support limit |
Maximum number of SAN logical storage volumes per storage node | 63 |
Maximum logical volume size for SAN LUNs | 15.999 TB |
Internal storage | Support limit |
Maximum number of logical storage volumes per RAIN storage node | 4 |
Maximum logical volume size on internal drives | HDD capacity dependent |
Internal storage | Support limit |
Number of SSDs per storage node | 12 (front-cage only) |
Maximum logical volume size on internal drives | SSD capacity dependent |
Maximum number of SAN logical storage volumes per storage node (when SAN is attached to system) | 63 |
Maximum logical volume size for SAN LUNs (when SAN is attached to system) | 15.999 TB |
HCP VM systems — VMware ESXi | Support limit |
Maximum number of logical storage volumes per VM storage node | 1 OS LUN, 59 Data LUNs |
Maximum logical volume size | 15.999 TB |
HCP VM systems — KVM | Support limit |
Maximum number of logical storage volumes per VM storage node | 1 OS LUN Data LUNs: Limited by the number of device slots available for LUNs in the VirtIO-blk para-virtualized storage back-end, which depends on the number of other devices configured for the guest OS that also use the VirtIO-blk back-end. In a typical HCP configuration, 17 slots are available. |
Maximum logical volume size | 15.999 TB OS LUN |
Data storage | Support limit |
Maximum active erasure coding topologies | 1 |
Maximum erasure coding topology size | 6 (5+1) sites |
Minimum erasure coding topology size | 3 (2+1) sites |
Maximum total erasure coding topologies | 5 |
Maximum number of objects per storage node | Standard (non-SSD) disks for indexes: 800,000,000; SSD disks for indexes: 1,250,000,000 |
Maximum number of objects per HCP system | 64,000,000,000 (80 nodes times 800,000,000 objects per node); if using 1.9 TB SSD drives: 100,000,000,000 (80 nodes times 1,250,000,000 objects per node) |
Maximum number of directories per node if one or more namespaces are not optimized for cloud | 1,500,000 |
Maximum number of directories per node if all namespaces are optimized for cloud | 15,000,000 |
Maximum number of objects per directory | Varies by namespace type |
Maximum object size by protocol | Varies by protocol |
Maximum total KMIP servers | 8 |
Hitachi API for Amazon S3: Minimum size for parts in a complete multipart upload request (except the last part) | 1 MB |
Hitachi API for Amazon S3: Maximum part size for multipart upload | 5 GB |
Hitachi API for Amazon S3: Maximum number of parts per multipart upload | 10,000 |
Maximum number of replication links | 20 inbound, 5 outbound |
Maximum number of tenants | 1,000 |
Maximum number of namespaces | 10,000 |
Maximum number of namespaces with the CIFS or NFS protocol enabled | 50 |
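The multipart-upload limits above (1 MB minimum part except the last, 5 GB maximum part, 10,000 parts per upload) can be combined into a part-size chooser. The helper itself is illustrative, not part of HCP, and treats MB/GB as binary units:

```python
MIN_PART = 1 * 1024**2   # 1 MB minimum part size (except the last part)
MAX_PART = 5 * 1024**3   # 5 GB maximum part size
MAX_PARTS = 10_000       # maximum parts per multipart upload

def choose_part_size(object_size: int) -> int:
    """Smallest legal part size that keeps the part count within limits."""
    # Ceiling division: smallest part size that fits in MAX_PARTS parts.
    part = max(MIN_PART, -(-object_size // MAX_PARTS))
    if part > MAX_PART:
        raise ValueError("object too large for a single multipart upload")
    return part

# A 100 GB object fits in 10,000 parts of a little over 10 MB each:
size = 100 * 1024**3
part = choose_part_size(size)
parts = -(-size // part)
print(part, parts)
```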
User groups and accounts | Support limit |
Maximum number of system-level user accounts per HCP system | 10,000 |
Maximum number of system-level group accounts per HCP system | 100 |
Maximum number of tenant-level user accounts per tenant | 10,000 |
Maximum number of tenant-level group accounts per tenant | 100 |
Maximum number of users in a username mapping file (default tenants only) | 1,000 |
Maximum number of SSO-enabled namespaces | ~1200 (SPN limit in Active Directory) |
Custom metadata | Support limit |
Maximum number of annotations per individual object | 10 |
Maximum non-default annotation size with XML checking enabled | 1 MB |
Maximum default annotation size with XML checking enabled | 1 GB |
Maximum annotation size (both default and non-default) with XML checking disabled | 1 GB |
Maximum number of XML elements per annotation | 10,000 |
Maximum level of nested XML elements in an annotation | 100 |
Maximum number of characters in the name of custom metadata annotation | 32 |
Maximum form size in POST object upload | 1,000,000 B |
Maximum custom metadata size in POST object upload | 2 KB |
Access control lists | Support limit |
Maximum number of access control entries per ACL | 1,000 |
Metadata query engine | Support limit |
Maximum number of content classes per tenant | 25 |
Maximum number of content properties per content class | 100 |
Maximum number of concurrent metadata query API queries per node | 5 |
Network | Support limit |
Maximum number of user-defined networks (virtual networks) per HCP system | 200 |
Maximum downstream DNS servers | 32 |
Maximum certificates and CSR per domain | 10 |
Storage tiering | Support limit |
Maximum number of storage components | 100 |
Maximum number of storage pools | 100 |
Maximum number of tiers in a service plan | 5 |
Miscellaneous | Support limit |
Maximum number of HTTP connections per node | 255 |
Maximum number of SMTP connections per node | 100 |
Maximum number of attachments per email for SMTP | 50 |
Maximum aggregate email attachment size for SMTP | 500 MB |
Maximum number of access control entries in an ACL | 1,000 |
Maximum number of labeled retention holds per object | 100 |
Supported clients and platforms
The following sections list clients and platforms that are qualified for use with HCP.
Windows clients
These Microsoft® Windows® 32-bit or 64-bit clients are qualified for use with the HTTP v1.1, WebDAV, and CIFS protocols and with the Hitachi API for Amazon S3:
- Windows 2012 R2 (Standard and Data Center editions)
- Windows Server 2016 (Standard and Data Center editions)
- Windows 10
Unix clients
These Unix clients are qualified for use with the HTTP v1.1, WebDAV, and NFS v3 protocols and with the Hitachi API for Amazon S3:
- HP-UX 11i v3 (11.31) on Itanium
- HP-UX 11i v3 (11.31) on PA-RISC
- AIX 7.1
- Red Hat Enterprise Linux ES 6.10 and 7.0
Browsers
The table below lists the web browsers that are qualified for use with the HCP System Management, Tenant Management, and Search Consoles and the Namespace Browser. Other browsers or versions may also work.
Browser | Client Operating System |
Microsoft Edge | Windows |
Mozilla Firefox® | Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris |
Google Chrome® | Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris |
Platforms for HCP VM
HCP VM runs on these platforms:
- VMware ESXi 6.5 U1 and U2
- VMware ESXi 6.7 U1, U2, and U3
- VMware ESXi 7.0 (qualified on hardware version 17)
- VMware vSAN 6.6
- VMware vSAN 6.7
- VMware vSAN 7.0
- KVM — qualified on CentOS 7 and Fedora Core 29. For relevant support, configuration, installation, and usage information, see Deploying an HCP-VM System on KVM (MK-94HCP009-06).
Third-party integrations
The following third-party applications have been tested and proven to work with HCP. Hitachi Vantara does not endorse any of the applications listed below, nor does Hitachi Vantara perform ongoing qualification with subsequent releases of the applications or HCP. Use these and other third-party applications at your own risk.
Hitachi API for Amazon S3 tools
These tools are qualified for use with the Hitachi API for Amazon S3:
- CloudBerry Explorer (does not support multipart upload)
- CloudBerry Explorer PRO (for HCP multipart upload, requires using an Amazon S3 compatible account instead of an HCP account; for CloudBerry internal chunking, requires versioning to be enabled on the target bucket)
- S3 Curl
- S3 Browser
Mail servers
These mail servers are qualified for use with the SMTP protocol:
- Microsoft Exchange 2010 (64 bit)
- Microsoft Exchange 2013
- Microsoft Exchange 2016
NDMP backup applications
These NDMP backup applications are qualified for use with HCP:
- Hitachi Data Protection Suite 8.0 SP4 (CommVault® Simpana® 8.0)
- Symantec® NetBackup® 7 — To use NetBackup with an HCP system:
- Configure NDMP to require user authentication (that is, select either the Allow username/pwd authenticated operations or Allow digest authenticated operations option in the NDMP protocol panel for the default namespace in the Tenant Management Console).
- Configure NetBackup to send the following directive with the list of backup paths:
set TYPE=openPGP
Windows Active Directory
HCP is compatible with Active Directory on servers running Windows Server 2012 R2 or Windows Server 2016. In either case, all domain controllers in the forest HCP uses for user authentication must minimally be at the 2012 R2 functional level.
RADIUS protocols
HCP supports the following RADIUS protocols:
- CHAP
- EAPMD5
- MSCHAPv2
- PAP
Supported hardware
The following sections list hardware that is supported for use in HCP systems.
Supported servers
These servers are supported for HCP systems with internal storage:
- HCP G11 (D52BQ-2U)
- HCP G10 (D51B-2U)
These servers are supported for HCP SAN-attached systems with internal storage:
- HCP G11 (D52BQ-2U)
- HCP G10 (D51B-2U)
Server memory
At least 32 GB of RAM per node is needed to use new software features introduced in HCP 9.x. An HCP system can be upgraded to version 9.x with a minimum of 12 GB of RAM per node, and receive the patches and bug fixes that come with the upgrade, but the system cannot use the new software features. Inadequate RAM causes performance degradation and can negatively affect system stability.
If you have less than 32 GB RAM per node and would like to upgrade to HCP 9.x, contact your Hitachi Vantara account team.
Supported storage platforms
These storage platforms are supported for HCP SAIN systems:
- Hitachi Virtual Storage Platform
- Hitachi Virtual Storage Platform G200
- Hitachi Virtual Storage Platform G400
- Hitachi Virtual Storage Platform G600
- Hitachi Virtual Storage Platform G1000
- Hitachi Virtual Storage Platform G1500
- Hitachi Virtual Storage Platform 5100
- Hitachi Virtual Storage Platform 5100H
- Hitachi Virtual Storage Platform 5200
- Hitachi Virtual Storage Platform 5200H
- Hitachi Virtual Storage Platform 5500
- Hitachi Virtual Storage Platform 5500H
- Hitachi Virtual Storage Platform 5600
- Hitachi Virtual Storage Platform 5600H
- Hitachi Virtual Storage Platform E590
- Hitachi Virtual Storage Platform E790
- Hitachi Virtual Storage Platform E990
- Hitachi Virtual Storage Platform E1090
Supported back-end network switches
The following back-end network switches are supported in HCP systems:
- Alaxala AX2430
- Arista 7020SR-24C2-R
- Cisco® Nexus® 3K-C31128PQ-10GE
- Cisco® Nexus® 3K-C31108PC-V
- Cisco® Nexus® 5548UP
- Cisco® Nexus® 93180YC-FX
- Cisco® N9K-C93180YC-FX3
- Cisco® 5596UP
- Dell PowerConnect™ 2824
- ExtremeSwitching™ VDX® 6740
- ExtremeSwitching™ 210
- ExtremeSwitching™ 6720 - SAIN systems only
- HP 4208VL
- Ruckus ICX® 6430-24
- Ruckus ICX® 6430-24P HPOE
- Ruckus ICX® 6430-48
Supported Fibre Channel switches
The following Fibre Channel switches are supported for HCP SAIN systems:
- Brocade 5120
- Brocade 6510
- Cisco MDS 9134
- Cisco MDS 9148
- Cisco MDS 9148S
Supported Fibre Channel host bus adapters
These Fibre Channel host bus adapters (HBAs) are supported for HCP SAIN systems:
- Emulex® LPe 32002-M2-Lightpulse
(for supported firmware and boot BIOS versions, refer to the G11 Hardware Tool set)
- Emulex® LPe 11002-M4
(firmware version 2.82a4, boot BIOS 2.02a1)
- Emulex® LPe 12002-M8
(firmware version 1.10a5, boot BIOS 2.02a2)
- Emulex® LPe 12002-M8 (GQ-CC-7822-Y)
(firmware version 1.10a5, boot BIOS 2.02a2)
- Hitachi FIVE-EX 8Gbps
(firmware version 10.00.05.04)
Issues resolved
Issues resolved in this release
The following table lists the issues resolved in HCP 9.4.0.
Reference Number | SR Number | Description |
HCP-34276 | — | This release resolves an issue with the calculation of available free space for database dump operation during an offline upgrade service procedure. |
HCP-35027 | 01415199 | Resolved a migration completion issue where HCP was erroneously verifying cross mapping in a SAN cluster. |
HCP-35077 | 01048322 | In rare circumstances, in a replicated HCP environment with active-active replication links, an incorrectly replicated annotation change and delete operation, or retention class modification, may be lost when replication subsequently overwrote the correct delete operation in the database. This release resolves the issue. |
HCP-39550 | 02577144 | Resolved an issue where replication got stuck when trying to replicate content class that already existed on the replica. |
HCP-40008 | 02699190 | Resolved a node-roll issue due to an uncaught exception that tried to stop a service that was already stopped. |
HCP-40691 | 02885837 | In prior HCP releases, when the content verification service is disabled, the unbounded growth of the service’s notification listener queue may lead to a Java out of memory error condition. This release of HCP resolves this issue. |
HCP-40805 | 02942839 | Resolved an issue where the JVM logged excessive messages to standard output; the accumulation of these messages in /var could lead to a subsequent node roll. |
HCP-40811 | — | In prior HCP releases, if an HCP S-series node connected to an HCP cluster was down on the target side of a replication link, replication would erroneously skip replicating data in a tenant configured to use a service plan with the S-series node. This release of HCP resolves this issue. |
HCP-40857 | 02874449 | This release resolves an issue that prevented the HCP cluster from returning the correct cluster Fully Qualified Domain Name (FQDN) because a null pointer exception was encountered, which can be observed in the logs. |
HCP-40889 | — | In the HCP Search Console UI, performing a structured query to find objects assigned to a specific retention class did not list the retention class names in the query input drop-down. This symptom has been resolved in this release. |
HCP-40916 | — | Resolved an issue where, during an online upgrade, the upgrade service procedure failed with the following error message: 'NoneType' object has no attribute 'getLeaderNode'. |
HCP-41059 | — | Resolved an issue where scavenging from Azure failed with an HDSCheckedException because a variable was set incorrectly. |
HCP-41124 | — | An attempt to simultaneously delete the same object via the AWS S3 API in a versioned bucket no longer results in an UNEXPECTED ERROR error code. |
HCP-41185 | 02762978 | Versioned object delete record no longer stalls replication progress. |
HCP-41320 | 03045046 | Resolved an issue where HCP did not return a valid XML response if the object name contained illegal XML characters. |
HCP-41323 | — | This release of HCP resolves an issue with X-Forwarded-For header injection vulnerability. |
HCP-41384 | — | This release of HCP resolves an issue that caused replication to stall because of a deadlock situation in the replication thread cache. This condition was a rare occurrence that was experienced only in internal testing. |
HCP-41878 | — | In rare circumstances, when a new volume is added to a node on an HCP cluster, but that service procedure fails, an existing volume may get reformatted. The existing volume would have to be in initialization state for this symptom to occur, which would be a rare and unexpected condition. This release resolves the symptom, and HCP now handles the error condition of the volume addition properly. |
HCP-42082 | — | This release resolves an issue that prevented HCP replication from replicating simultaneous custom metadata changes when the size of the custom metadata is unchanged. |
HCP-42149 | 03208123 | Copying large files on CIFS share no longer results in CIFS connection-termination issues. |
HCP-42191 | 03352034, 03384806 | Resolved a Postgres log ERROR reporting issue that occurred when the duplicate elimination service ran. |
HCP-42228 | 03239338, 03407946, 03345357 | Excessive logging by HCP’s SPOCC component no longer fills up filesystems and causes node rolls. |
HCP-42250 | 03245695 | Resolved an Active Directory issue with the value of the sAMAccountName attribute for the cluster. |
HCP-42471 | — | Resolved a node roll issue caused by a missing placeholder for a log entry in the message formatter. |
HCP-42604 | 03351934, 03362316, 03619825, 03616263 | HCP defined a hard limit on the number of Java threads. Systems that reached this limit could experience an out-of-memory condition. In HCP v9.4, this limit has been removed. |
HCP-42719 | — | HCP list-objects-v2 no longer returns encoded special characters regardless of whether the --encoding-type option is set. |
HCP-42727 | — | Resolved a BadDigest error reporting issue in the logs while deleting an object that contained a long dash in its name using S3. |
HCP-42827 | — | Improved logging with INFO and WARNING messages if an attack vector is detected. |
HCP-42887 | 03390944, 03546974, 03511088 | NDMP backup job no longer causes an out-of-memory exception and kills JVM. |
HCP-42896 | 03361868, 03242903, 03390410, 03468872, 03464765 | Resolved an issue where long periods of flapping UNAVAILABLE proxy states on a node, with node states continuously alternating between Available and Unavailable, caused continuous regeneration of the HCP cluster state map and could cause node rolls. |
HCP-42924, HCP-43439 | 03141801, 03264664, 03378761, 03414648, 03452438, 03548101, 03565470 | This release resolves an issue that previously caused HCP to falsely report issues with the power supply when the BMC is temporarily unable to report the status of the power supply units. Customers have also observed CPU reporting errors under similar conditions, which is also resolved by this change. |
HCP-43046 | — | This release resolves an issue with HCP showing External for internal volumes in a SAN-attached HCP environment. |
HCP-43076 | 03416945, 03422733, 03376318 | Resolved an issue where a null pointer exception (NPE) during JVM startup with "Dropping corrupt or partially initialized schema" could cause a node roll. |
HCP-43089 | 03401796 | Updated OpenJDK to 11.0.10, which includes a fix for a Java SIGSEGV error that caused the JVM to restart. |
HCP-43395 | 03492325 | Resolved an issue where fixed-date retention class definitions did not work for time zones ahead of UTC (UTC+). |
HCP-43496 | 03464765 | Resolved an issue that could lead to a node roll due to an unhandled exception. |
HCP-43649 | — | TLS 1.3 does not support 3DES ciphers. If TLS 1.3 is enabled on a cluster, the Enable 3DES ciphers check box is grayed out. |
HCP-43655 | 03490396, 03471030 | Resolved an issue causing node rolls due to uncaught exception. |
HCP-43965 | — | When you upgraded to a new HCP release, a replication error could occur while restoring files with custom metadata. This HCP release resolves the issue, and the symptom no longer occurs. |
Common Vulnerabilities and Exposures (CVE) Records resolved in this release
The HCP 9.4.0 release resolves the following CVEs, in addition to resolving several security weaknesses not associated with these CVEs:
CVE Record Number | Hitachi Vantara reference number | Description |
CVE-2022-22965 | HCP-42858 | A Spring MVC or Spring WebFlux application running on JDK 9+ may be vulnerable to remote code execution (RCE) via data binding. The specific exploit requires the application to run on Tomcat as a WAR deployment. If the application is deployed as a Spring Boot executable jar (that is, the default), it is not vulnerable to the exploit. However, the nature of the vulnerability is more general, and there may be other ways to exploit it. This release of HCP upgraded the Spring library, which resolves CVE-2022-22965. |
CVE-2021-31805 | HCP-43053 | The fix issued for CVE-2020-17530 was incomplete. In Apache Struts 2.0.0 through 2.5.29, some tag attributes could perform a double evaluation if a developer applied a forced OGNL evaluation by using the %{...} syntax. Using a forced OGNL evaluation on untrusted user input can lead to remote code execution and security degradation. This HCP release upgraded the Struts library, which resolves CVE-2021-31805. |
This release also resolves the following CVEs by upgrading Jetty to v9.4.39.v20210325:
CVE Record Number | Hitachi Vantara reference number | Description |
CVE-2021-28165 | HCP-43461 | In Eclipse Jetty 7.2.2 to 9.4.38, 10.0.0.alpha0 to 10.0.1, and 11.0.0.alpha0 to 11.0.1, CPU usage can reach 100% upon receiving a large invalid TLS frame. |
CVE-2020-27223 | HCP-43461 | In Eclipse Jetty 9.4.6.v20170531 to 9.4.36.v20210114 (inclusive), 10.0.0, and 11.0.0, when Jetty handles a request containing multiple Accept headers with a large number of "quality" (i.e., q) parameters, the server may enter a denial of service (DoS) state due to high CPU usage processing those quality values. |
CVE-2020-27218 | HCP-43461 | In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body inflation is enabled and requests from different clients are multiplexed onto a single connection, and if an attacker can send a request with a body that is received entirely but not consumed by the application, then a subsequent request on the same connection will see that body prepended to its body. The attacker will not see any data but may inject data into the body of the subsequent request. |
CVE-2020-27216 | HCP-43461 | In Eclipse Jetty versions 1.0 through 9.4.32.v20200930, 10.0.0.alpha1 through 10.0.0.beta2, and 11.0.0.alpha1 through 11.0.0.beta2, on Unix-like systems, the system's temporary directory is shared between all users on that system. A collocated user can observe the process of creating a temporary subdirectory in the shared temporary directory and race to complete the creation of the temporary subdirectory. If the attacker succeeds, they will have read and write permission to the subdirectory used to unpack web applications, including their WEB-INF/lib jar files and JSP files. If any code is ever executed out of this temporary directory, it can lead to a local privilege escalation vulnerability. |
CVE-2019-9518 | HCP-43461 | Some HTTP/2 implementations are vulnerable to a flood of empty frames, potentially leading to a denial of service. The attacker sends a stream of frames with an empty payload and without the end-of-stream flag. These frames can be DATA, HEADERS, CONTINUATION, and/or PUSH_PROMISE. The peer spends time processing each frame disproportionate to attack bandwidth, which can consume excess CPU resources. |
CVE-2019-9516 | HCP-43461 | Some HTTP/2 implementations are vulnerable to a header leak, potentially leading to a denial of service. The attacker sends a stream of headers with a 0-length header name and 0-length header value (optionally Huffman encoded into 1-byte or greater headers). Some implementations allocate memory for these headers and keep the allocation alive until the session ends, which can consume excess memory. |
CVE-2019-9515 | HCP-43461 | Some HTTP/2 implementations are vulnerable to a settings flood, potentially leading to a denial of service. The attacker sends a stream of SETTINGS frames to the peer. Since the RFC requires that the peer reply with one acknowledgement per SETTINGS frame, an empty SETTINGS frame is almost equivalent in behavior to a ping. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both. |
CVE-2019-9514 | HCP-43461 | Some HTTP/2 implementations are vulnerable to a reset flood, potentially leading to a denial of service. The attacker opens a number of streams and sends an invalid request over each stream that should solicit a stream of RST_STREAM frames from the peer. Depending on how the peer queues the RST_STREAM frames, this can consume excess memory, CPU, or both. |
CVE-2019-9512 | HCP-43461 | Some HTTP/2 implementations are vulnerable to ping floods, potentially leading to a denial of service. The attacker sends continual pings to an HTTP/2 peer, causing the peer to build an internal queue of responses. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both. |
CVE-2019-9511 | HCP-43461 | Some HTTP/2 implementations are vulnerable to window size manipulation and stream prioritization manipulation, potentially leading to a denial of service. The attacker requests a large amount of data from a specified resource over multiple streams. They manipulate window size and stream priority to force the server to queue the data in 1-byte chunks. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both. |
Compatibility issues introduced in HCP 8.2 or later
The following table lists the compatibility issues introduced in HCP v8.2 or later. The issues are listed in ascending order by reference number.
Ref. number | Description | Version introduced in |
HCP-33074 HCP-35329 | In HCP v8.2, the HCP software was upgraded to Jetty v9. The upgrade introduced several security enhancements that might impact some deployments. | HCP v8.2 |
HCP-33583 | HCP now requires that the x-amz-date header value is within 15 minutes of when HCP receives the Hitachi API for Amazon S3 request. | HCP v8.2 |
HCP-33672 | HCP now validates x-amz-date headers on appropriate Hitachi API for Amazon S3 requests. | HCP v8.2 |
HCP-35286 | HCP now sends the severity of the EventID/messages, such as NOTICE, WARNING, or ERROR, to Syslog servers. | HCP v8.1 |
HCP-37063 | The use case of a namespace with SMTP enabled writing directly to an HCP S Series Node is no longer supported. | HCP v8.2 |
HCP-37858 | The use case of a namespace with SMTP enabled writing directly to an HCP S Series Node is no longer supported. | HCP v9.1 |
HCP-43818 | Certain third-party tools and SDK solutions that connect to HCP through HTTPS may not support TLS v1.3. With the release of HCP 9.4, for example, one such tool was found to be the AWS Command Line Interface (AWS CLI). Similarly, HCP Anywhere 4.5.4 does not yet support TLS 1.3, so setting TLS 1.3 as the minimum protocol on HCP stops HCP Anywhere communications until HCP is changed back to allow TLS 1.2. A future release of HCP Anywhere is expected to add support for TLS 1.3; refer to the HCP Anywhere release notes for further information. Before enabling TLS 1.3 as the minimum in HCP, make sure that the tools and SDKs used to connect to HCP through HTTPS support TLS v1.3. | HCP v9.4 |
Known issues
The following table lists the known issues in the current release of HCP. The issues are listed in descending order by reference number. Where applicable, the service request number is also shown.
Reference Number | SR Number | Description |
HCP-44280 | — | When retrieving SNMP monitoring information from an HCP system with a Key Management Server (KMS) configuration, the KMS and encrypted storage pool information for a service plan is not returned correctly. This symptom can also be observed when monitoring an HCP system through Hitachi Remote Ops agent. |
HCP-43908 | — | After a ZCF failover occurs in a SAN-attached HCP cluster, the System Management Console (SMC) Hardware page is not accessible for approximately 5 minutes. After that time, the page should become available. |
HCP-43905 | 03499217, 03502301, 03570467 | During an autonomic technical refresh (ATR) migration, node rolls might occur due to a null pointer exception. |
HCP-43566 | 03058939, 03528214 | If the management network is configured with a VLAN, the VLAN may not come online until the HCP nodes are rebooted. |
HCP-43527 | 03422907 | If outbound traffic is blocked while incoming traffic continues to flow, a transmission time-out problem can occur because the bonding driver arp_validate setting does not detect half-broken links. This results in back-end network communication problems. |
HCP-43479 | 03402060 | During log rotation, arc-rotate can stop (kill) the JVM if log files to be rotated are open. |
HCP-43284 | — | In some circumstances, an offline upgrade might fail because the HCP shutdown process cannot unmount an encrypted archive volume. If this failure occurs, consult Hitachi Vantara Support. An offline upgrade failure adds lines similar to the following to the HCP logs: Standard ERROR for 'dmsetup remove --force archive001-crypt': device-mapper: remove ioctl on archive001-crypt failed: Device or resource busy |
HCP-43082 | 03422540 | The arc-deploy step during migration finalization might fail partway through if a node roll occurs at the same time. |
HCP-41880 | — | The MQE indexer skips objects whose metadata includes the ampersand (&) or hash (#) character. |
HCP-41176 | 03131048, 03141801, 03171024 | HCP running on a G11 server can raise a false-positive alert about the power supply, CPU, or disk drives. |
HCP-40505 | — | Manually started execution of a service is not persistent. It can be interrupted by the scheduled service or a node event such as a reboot. |
HCP-39876 | 02673882 | In a SAN-attached HCP environment, the storage addition procedure may fail, indicating that a device mapper device such as mpathb cannot be formatted. |
HCP-39798 | 02639142 | Solr does not create a proper index when a user ingests custom metadata in a format other than pretty-printed XML. As a result, annotations consisting of a single line of XML are not parsed properly during phrase searches. |
HCP-39465 | — | Objects cannot be deleted using the namespace browser when logged in as an anonymous user. Log in as an authenticated user to delete objects when using the namespace browser. |
HCP-39045 | — | Space occupied by old object versions is not freed by the Garbage Collection service if the object is in a replicated namespace and the replication link is suspended. If feasible, delete the replication link or remove the namespace from replication to work around the issue. |
HCP-38505 | — | HCP appears to send the correct error code, but is inconsistent with AWS in that the size check should occur earlier than it does. As a result, HCP sends a 400 error code rather than sending a 200 error code during the keep-alive procedure. |
HCP-38408 | 02155007 | On an HCP 9.x G11 system, ntpd tries to bind to the usb0 network interface, which causes time synchronization issues. Workaround: On each node, prevent the driver from loading by adding the following lines to /etc/modprobe.d/aos.conf: blacklist cdc_ether blacklist usbnet |
HCP-38155 | 02090989 | Resetting advanced settings for an HCP S Series storage component does not work. |
HCP-38048 | — | The clearPolicyState service does not clear rows that have no matching external_file entries. |
HCP-37935 | — | While you are troubleshooting the progress of replication, the pending data shown for the replication link on the Overview page increases when a tenant is paused. The increase in pending data is approximately the total size of the paused namespaces. |
HCP-37851 | — | Starting with release HCP 8.2, all units of the systemd-tmpfiles service log error messages in /var/log/messages daily. The error messages are similar to the following: systemd-tmpfiles[29354]: [/usr/lib/tmpfiles.d/mdadm.conf:1] Line references path below legacy directory /var/run/, updating /var/run/mdadm → /run/mdadm; please update the tmpfiles.d/ drop-in file accordingly. Initial investigation suggests that these messages cause no functional error symptoms in HCP. |
HCP-37810 | — | When provisioning rear-cage SSDs to the HCP cluster on a subset of nodes in a SAN-attached G10 or G11 configuration, the service procedure tries to add rear-cage SSDs on both nodes that comprise a Zero-Copy-Failover (ZCF) pair, even if one of those nodes does not have rear-cage SSDs to be provisioned. This leads to an error in the service procedure. As a workaround, ensure that you provision rear-cage SSDs either for both nodes that comprise a ZCF pair or simultaneously for all nodes in the cluster. |
HCP-37778 | — | After an upgrade of an HCP system is completed, the System Management Console Hardware page may display Initializing status for some of the logical volumes. This is the result of the device SMART error log containing error records. Please contact Hitachi Vantara technical support to identify the error condition and the corrective action to resolve the symptom. |
HCP-37754 | — | HCP installed in an ESXi environment may display the following FSTRIM error message on the System Management Console: Failure encountered attempting to trim volumes on nodes: , and an error with Event ID 2818 is listed in the error log under Major Events. Please contact Hitachi Vantara customer support if you encounter this error message. |
HCP-37753 | — | The HCP system goes into a read-only state because of node rolls caused by the metadata manager not starting up. The system might even appear to be unstable. Workaround: Reboot the system. |
HCP-37696 | 01612339 | MQE shard/Solr core balancing does not function as desired for IPL=2 and causes incomplete query results. |
HCP-37426 | — | Attempting to perform DELETE and PUTCOPY simultaneously on an object results in a "Non-replicating Irreparable objects detected" error message in the HCP System Management Console. |
HCP-37381 | — | A race condition in the AWS S3 protocol allowed both a directory and a file object to be created with the same pathname. Unlike AWS, HCP has a concept of directories, so a parent directory cannot also be a file. |
HCP-37342 | — | An unexpected duplicate row in the per-object metadata table will cause node outages until the duplicate row is removed. |
HCP-37335 | — | The HCP product installation procedure may fail with the following error message if a USB drive or external DVD drive is connected to the system when you run the installation wizard: umount: /dev/sr0: umount failed: Invalid argument. This may occur in both VM and appliance configurations. Disconnect all unnecessary USB drives and external DVD drives from the system, and retry the installation procedure. |
HCP-37247 | — | HCP systems running version 8.2 and later may experience network interface flapping and resetting of network adapters. This issue may be caused by a low-level defect in the kernel that causes a network interface to stop transmitting for several seconds, which leads to the interface resetting itself and self-recovering. In active-backup network interface configurations, this leads to a network interface failover within the corresponding front-end or back-end network bond. There is no noticeable impact to clients during this very short time interval. |
HCP-36798 | 01709881 | SNMP returns the incorrect replication link name. Workaround: Use the HCP Management API to return the correct replication link name. |
HCP-36744 | — | In rare circumstances, when the HCP G11 operating system is installed on a node, the installation process may hang while making file systems. This has typically been observed in SAN-attached configurations. This symptom occurs when HCP G11 detects that there appears to already be a file system on the volume; the file system creation command is waiting for user input, but the prompt output by that command is not displayed on the console. If you are certain that the formatting procedure can continue (that is, the volumes are mapped correctly and all data on the volume can be destroyed), type yes and press Enter, which should allow the procedure to continue. |
HCP-36632 | 01547564 | Multipart upload fails in the FileOpenForWriteIndex.suspendAndSwap function and returns an Attempt to suspend and swap a multipart upload file handle error. |
HCP-36001 | 01410508 | Node recovery during an online upgrade procedure targets a healthy node. |
HCP-35089 | 01426836 | Zero-copy failover failback might leave behind stale mount points. |
HCP-35027 | 01415199 | Migration finalization might time out and require a restart. |
HCP-34993 | 01354829, 01331997 | Policy state of over 1 million objects causes node reboots. |
HCP-34982 | — | In the HCP Search Console UI, the login ID changes to null and a subsequent search returns an error. When you open the Tenant Management Console from the System Management Console, initiate a search by logging in to the Search Console with your system-level credentials, and either refresh the page or click the search button, the login ID changes to null. If you log in to the Search Console again with your tenant-level credentials and initiate a search, the query returns the following error message: 500 Error: Internal server error. Workaround: Set the Log users out if inactive for more than value to be the same on the System Management Console and Tenant Management Console. You can configure this value on the tab. |
HCP-34764 | 01309564 | After disabling CIFS on an HCP namespace, the Windows client connection remains active, and objects are written to the root (/) file system. |
HCP-34516 | 01312806, 01310161 | Overflowed, thin-provisioned block storage might cause data loss. Workaround: Do not overprovision dynamic pools. |
HCP-34515 | 01312806 | The majority of the capacity of the /var file system is consumed by log downloads. |
HCP-34388 | 01224371 | When a zero-copy-failover partner node reboots after a failover, the metadata query engine does not recover. Workaround: Edit the following files: |
HCP-34207 | — | Faulty SSD drives can cause a failure when adding a new SSD volume to HCP. |
HCP-34203 | — | Capacity calculations and UI display are inconsistent between HCP and HCP S Series Node. |
HCP-33980 | — | Some metadata headers are processed inconsistently between AWS S3 and HCP. |
HCP-33541 | — | Active/passive replication link schedule does not adjust for systems located in different time zones. |
HCP-32957 | — | Metadata query engine with sort option causes Apache Solr Java Virtual Machine to run out of memory. |
HCP-32848 | — | The Delete Old Database procedure hangs. When administering namespaces with 100,000 objects or more, the Delete Old Database procedure is known to run indefinitely and display #, even though the deletion has completed. |
HCP-32555 | 00294339 | Watchdog timer causes premature soft lockup panic. |
HCP-32486 | — | The Active Directory allowlist filter is removed when the HCP System Management Console fails to update settings. |
HCP-32164 | — | Unable to change the name of an HCP S Series component in the HCP System Management Console. |
HCP-32018 | — | Migration hangs and produces inconsistent status information. |
HCP-31529 | — | System restart fails after changing the management network configuration. The HCP system should restart each time a change is made to the management network configuration. However, after the management network is enabled for the first time, the HCP system no longer restarts when changes are made to the management network configuration. |
HCP-31499 | — | Inconsistent case sensitivity for Hitachi API for Amazon S3 multipart upload query parameters. Case sensitivity is inconsistent among the query parameters used with S3-compatible API requests related to multipart uploads. For example, the uploadId query parameter used in requests to upload a part is not case sensitive, while the uploadId query parameter used in requests to list the parts of a multipart upload or to complete or abort a multipart upload is case sensitive. |
HCP-31488 | — | System restart due to an unavailable node not receiving the management network IP address. If a node is unavailable when the management network is enabled, the node does not receive the management network IP address. If any other change is made to the management network, the HCP system shuts down so the node can receive the management network IP address. Workaround: Only enable the management network when all nodes are available. |
HCP-31431 | — | Links in a geo-protection replication topology can be added to a replication chain. Geo-protection replication chains are not supported. If a system in the geo-replication topology becomes unavailable, the geo-protected systems outside of the topology could experience data unavailability. |
HCP-31400 | — | Tar gzip compressed objects fail the MD5 check due to a Firefox browser issue. Tar gzip compressed objects downloaded from HCP through the Firefox browser fail the MD5 check. |
HCP-31112 | — | Objects can be left in the "VALID, UNREPLICATABLE_OPEN" state and cannot be cleaned up by running garbage collection. |
HCP-30958 | — | DNS failover fails due to a domain name change in an active/passive replication link. If a system is in an active/passive replication link and has its domain name changed, the replica system does not receive the updated domain name, which causes DNS failover to fail. Workaround: After you change the domain name for the primary system, update any setting on the tenant overview page to replicate the new domain name. |
HCP-30058 | — | AWS S3 500 Internal Server Error due to a double slash (//) in the object name. If an object has a double slash (//) in its name and the object is ingested using HS3, HCP returns an HTTP 500 internal server error. |
HCP-30018 | — | Namespace browser cannot load a directory due to ASCII characters in an object name. The namespace browser cannot display the contents of a directory that contains an object with any of the following ASCII characters in its name: %00-%0F, %10-%1F, or %20. |
HCP-29645 | — | AD falsely reports missing SPNs due to a replication topology with tenants or namespaces on a custom network. In a replication topology where systems have full SSO support, HCP may incorrectly report missing SPN errors for replicating tenants and namespaces that use a custom network with a non-default domain name. |
HCP-29301 | — | Database connections exhausted. On high-load HCP systems that are balancing metadata, nodes can restart due to exceeding the database connection limit. |
HCP-25602 | — | While the Migration service is running, the migration status occasionally shows incorrect values. The status values for the total number of bytes being migrated and the total number of objects being migrated can be incorrect, regardless of how many bytes or objects are actually migrated. Once the migration completes, the migration status values become accurate. |
HCP-13183 | — | SNMP version 2c traps sent for version 3 traps. HCP can be configured to use SNMP version 3. However, when configured this way, HCP sends version 2c traps instead of the expected version 3 traps. Workaround: To receive traps from HCP, have your SNMP application accept SNMP version 2c traps. |
HCP-8665 | — | Shredding in SAIN systems. In SAIN systems, HCP may not effectively execute all three passes of the shredding algorithm when shredding objects because some storage systems make extensive use of disk caching. Depending on the particular hardware configuration and the current load on the system, some of the writes from the shredding algorithm may not make it from the cache to disk. |
HCP-7043 | — | Displaying UTF-16-encoded objects. Objects with content that uses UTF-16 character encoding may not be displayed as expected due to the limitations of some browser and operating system combinations. Regardless of the appearance on the screen, the object content HCP returns is guaranteed to be identical to the data before it was stored. |
Accessing product documentation
Product user documentation is available on the Hitachi Vantara Support Website: https://knowledge.hitachivantara.com/Documents. Check this site for the most current documentation, including important updates that may have been made after the release of the product.
Getting help
The Hitachi Vantara Support Website is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to the Hitachi Vantara Support Website for contact information: https://support.hitachivantara.com/en_us/contact-us.html.
Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.