Hitachi Content Platform (HCP) 9.3.0 Release Notes - Customer
Release highlights
HCP 9.3.0 includes several new features, described next. The release also fixes issues found in previous releases. For more information, see Issues resolved in this release.
Unbalanced directory mode allows applications to write all data to a single directory or a small number of directories without impacting system performance.
HCP now considers additional information about HCP S Series Nodes, such as CPU load and bandwidth, in addition to available capacity, when selecting an S Series Node for data ingest.
With this release, the HCP and S Series System Management Console/Admin UIs display consistent information about the storage used by the S Series.
HCP checks the file sizes of the objects in a replication batch. If the batch exceeds the specified limit, HCP does not add more objects to the batch for replication. The System Management Console (SMC) now allows users to choose the network type (either LAN or WAN). For LAN, there is no limit on the batch size. For WAN, the default limit is 10 GB.
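The batching rule above can be sketched as follows. This is an illustrative simplification, not HCP source code; the function name, the `LIMITS` table, and the network-type strings are hypothetical:

```python
# Illustrative sketch of the replication batching rule: objects are added to
# a batch, and once the batch exceeds the limit for the network type, no more
# objects are added to it. LAN has no limit; the WAN default is 10 GB.
LIMITS = {"LAN": None, "WAN": 10 * 1024**3}

def build_batches(object_sizes, network_type="WAN"):
    """Group object sizes (in bytes) into replication batches."""
    limit = LIMITS[network_type]
    batches, current, current_size = [], [], 0
    for size in object_sizes:
        current.append(size)
        current_size += size
        # Once the batch exceeds the limit, close it; later objects go
        # into a new batch.
        if limit is not None and current_size > limit:
            batches.append(current)
            current, current_size = [], 0
    if current:
        batches.append(current)
    return batches
```

With the WAN default, two 6 GB objects end up in one closed batch (the batch is closed after it exceeds 10 GB), while on LAN everything stays in a single batch.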
HCP 9.3.0 introduces support and monitoring for a new network switch for the back-end network, Arista DCS-7020SR-24C2-R, a 24-port 10Gbps SFP+ network switch. This new switch can be ordered through the ordering system as a standard part of the HCP appliance.
HCP now ships with updated rack and PDU configurations. Documentation has been updated to reflect this change.
The HCP 9.3.0 release resolves several CVEs. For a list of these CVEs, see CVE Records resolved in this release.
As of HCP 9.3.0, the minimum TLS setting for new installations defaults to TLS 1.2. Upgrades from prior versions retain the previously configured TLS setting.
HCP VM can be installed on both HDD and SSD datastores. Upgrading the ESXi hardware version and the ESXi version concurrently was successfully qualified in HCP 9.3.0.
- HCP 9.3.0 removes support for the HSwift API.
- HCP 9.3.0 removes support for Hitachi CR210H and CR220S systems, as well as for Hitachi Unified Storage (HUS) 110 and 130 storage systems. HUS 150 and HUS VM storage systems are still supported. End-of-Service-Life (EOSL) for HUS 150 and HUS VM is June 30, 2021. The Extended EOSL for HUS 150 and HUS VM is December 31, 2021.
- As of HCP 9.3.0, HCP Data Migrator is no longer shipped as part of the Tenant Management Console. To obtain this tool, contact your Hitachi representative or customer support.
HCP versions 8.0 through 9.2 | HCP version 9.3 and later |
Logical ingest calculated by HCP for the S-Series Node. | Logical ingest calculated by HCP for the S-Series Node. |
Used physical bytes returned by the S-Series Node API, which might include capacity used by another HCP. | |
As of HCP 9.3.0, the default minimum TLS level for new installations is TLS v1.2. For upgrades, the previous setting remains in place, and can be adjusted manually after the upgrade.
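A client connecting to HCP can enforce the same floor on its side. The following is a minimal sketch using the Python standard library's `ssl` module; it only builds the client context and is not HCP-specific:

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# mirroring the HCP 9.3.0 default minimum for new installations.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A handshake in which the server offers only TLS 1.0 or 1.1 is now
# rejected by this client before any application data is sent.
```

Wrapping a socket with this context (via `context.wrap_socket(...)`) then fails the handshake against endpoints that have not been raised to TLS 1.2.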
Upgrade notes
Upgrades to HCP 9.3.0 fail if any namespace has the HSwift protocol enabled. Disable the HSwift protocol on the relevant namespaces before upgrading. Support for the HSwift protocol ends with HCP 9.3.0.
If you try to replicate namespaces that have unbalanced directory mode enabled to a cluster running a release earlier than HCP 9.3.0, the affected tenant on the replication link is paused. The other tenants on the replication link continue to replicate. It is recommended to upgrade all clusters in the replication topology to HCP 9.3.0 before using the unbalanced directory mode feature.
Upgrades to version 9.2.1 or later fail if any service plans have SMTP enabled and use direct write to HCP S Series Nodes as the primary ingest tier. Modify these service plans before upgrading to version 9.2.1 or later. For more information, contact your authorized HCP service provider.
If you attempt to replicate objects that have a labeled retention hold to a pre-9.1 cluster, the affected tenant (all of its replicated namespaces) on the replication link is paused, while other tenants on the link continue to replicate. It is therefore recommended to upgrade all clusters in the replication topology to version 9.1 before using the labeled retention hold feature.
You can upgrade an HCP system to version 9.x only from version 8.x. You cannot downgrade HCP to an earlier version.
You must have at least 32 GB of RAM per node to use new software features introduced in HCP version 9.x. While you can upgrade an HCP system to version 9.x with a minimum of 12 GB of RAM per node and receive the patches and bug fixes associated with the upgrade, the system cannot use the new software features in the release. Inadequate RAM causes performance degradation and can negatively affect system stability. If you have less than 32 GB RAM per node and would like to upgrade to this release, contact your Hitachi Vantara account team.
HCP upgrades can occur with the system either online or offline. During an online upgrade, the system remains available to users and applications. Offline upgrades are faster than online upgrades, but the system is unavailable while the upgrade is in progress.
Supported limits
HCP supports the limits listed in the following tables.
Hardware | Support limit |
Maximum number of general access, G Series Access Nodes | 80 |
Maximum number of HCP S Series Nodes | 80 |
Logical volume | Support limit |
Maximum number of SAN logical storage volumes per storage node | 63 |
Maximum logical volume size for SAN LUNs | 15.999 TB |
Internal storage | Support limit |
Maximum number of logical storage volumes per storage node (RAIN) | 4 |
Maximum logical volume size on internal drives | HDD capacity dependent |
Internal storage | Support limit |
Number of SSDs per storage node | 12 (front-cage only) |
Maximum logical volume size on internal drives | SSD capacity dependent |
Maximum number of SAN logical storage volumes per storage node (when SAN is attached to system) | 63 |
Maximum logical volume size for SAN LUNs (when SAN is attached to system) | 15.999 TB |
HCP VM systems — VMware ESXi | Support limit |
Maximum number of logical storage volumes per VM storage node | 1 OS LUN, 59 Data LUNs |
Maximum logical volume size | 15.999 TB |
HCP VM systems — KVM | Support limit |
Maximum number of logical storage volumes per VM storage node | 1 OS LUN; data LUNs are limited by the number of device slots available for LUNs in the VirtIO-blk para-virtualized storage back-end, which depends on the number of other devices configured for the guest OS that also use the VirtIO-blk back-end. In a typical HCP configuration, 17 slots are available. |
Maximum logical volume size | 15.999 TB OS LUN |
Data storage | Support limit |
Maximum active erasure coding topologies | 1 |
Maximum erasure coding topology size | 6 (5+1) sites |
Minimum erasure coding topology size | 3 (2+1) sites |
Maximum total erasure coding topologies | 5 |
Maximum number of objects per storage node | Standard non-SSD disks for indexes: 800,000,000
SSD for indexes: 1,250,000,000 |
Maximum number of objects per HCP system | 64,000,000,000 (80 nodes times 800,000,000 objects per node)
If using 1.9 TB SSD drives: 100,000,000,000 (80 nodes times 1,250,000,000 objects per node) |
Maximum number of directories per node if one or more namespaces are not optimized for cloud | 1,500,000 |
Maximum number of directories per node if all namespaces are optimized for cloud | 15,000,000 |
Maximum number of objects per directory | By namespace type |
Maximum object size by protocol | |
Hitachi API for Amazon S3: Minimum size for parts in a complete multipart upload request (except the last part) | 1 MB |
Hitachi API for Amazon S3: Maximum part size for multipart upload | 5 GB |
Hitachi API for Amazon S3: Maximum number of parts per multipart upload | 10,000 |
Maximum number of replication links | 20 inbound, 5 outbound |
Maximum number of tenants | 1,000 |
Maximum number of namespaces | 10,000 |
Maximum number of namespaces with the CIFS or NFS protocol enabled | 50 |
Maximum number of attachments per email for SMTP | 50 |
Maximum aggregate email attachment size for SMTP | 500 MB |
Maximum number of SMTP connections per node | 100 |
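The three Hitachi API for Amazon S3 multipart limits listed above (1 MB minimum part size except the last part, 5 GB maximum part size, 10,000 parts maximum) interact when choosing a part size for a large object. The sketch below picks a part size satisfying all three; the function name and the 64 MB starting size are arbitrary illustrative choices, not HCP requirements:

```python
MIN_PART = 1 * 1024**2    # minimum size for all parts except the last
MAX_PART = 5 * 1024**3    # maximum size of any single part
MAX_PARTS = 10_000        # maximum number of parts per multipart upload

def plan_part_size(object_size, preferred=64 * 1024**2):
    """Pick a part size that keeps a multipart upload within the limits."""
    part_size = max(preferred, MIN_PART)
    # Grow the part size until the object fits within MAX_PARTS parts.
    while (object_size + part_size - 1) // part_size > MAX_PARTS:
        part_size *= 2
    if part_size > MAX_PART:
        raise ValueError("object too large for these multipart limits")
    return part_size
```

For example, a 1 TiB object does not fit in 10,000 parts of 64 MiB, so the part size is doubled to 128 MiB (8,192 parts).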
User groups and accounts | Support limit |
Maximum number of system-level user accounts per HCP system | 10,000 |
Maximum number of system-level group accounts per HCP system | 100 |
Maximum number of tenant-level user accounts per tenant | 10,000 |
Maximum number of tenant-level group accounts per tenant | 100 |
Maximum number of users in a username mapping file (default tenants only) | 1,000 |
Maximum number of SSO-enabled namespaces | ~1200 (SPN limit in Active Directory) |
Custom metadata | Support limit |
Maximum number of annotations per individual object | 10 |
Maximum non-default annotation size with XML checking enabled | 1 MB |
Maximum default annotation size with XML checking enabled | 1 GB |
Maximum annotation size (both default and non-default) with XML checking disabled | 1 GB |
Maximum number of XML elements per annotation | 10,000 |
Maximum level of nested XML elements in an annotation | 100 |
Maximum number of characters in the name of custom metadata annotation | 32 |
Maximum form size in POST object upload | 1,000,000 B |
Maximum custom metadata size in POST object upload | 2 KB |
Access control lists | Support limit |
Maximum number of access control entries per ACL | 1,000 |
Metadata query engine | Support limit |
Maximum number of content classes per tenant | 25 |
Maximum number of content properties per content class | 100 |
Maximum number of concurrent metadata query API queries per node | 5 |
Network | Support limit |
Maximum number of user-defined networks (virtual networks) per HCP system | 200 |
Maximum downstream DNS servers | 32 |
Maximum certificates and CSR per domain | 10 |
Storage tiering | Support limit |
Maximum number of storage components | 100 |
Maximum number of storage pools | 100 |
Maximum number of tiers in a service plan | 5 |
Miscellaneous | Support limit |
Maximum number of HTTP connections per node | 255 |
Maximum number of access control entries in an ACL | 1,000 |
Maximum number of labeled retention holds per object | 100 |
Supported clients and platforms
The following sections list clients and platforms that are qualified for use with HCP.
Windows clients
These Microsoft® Windows 32-bit or 64-bit clients are qualified for use with the HTTP v1.1, WebDAV, and CIFS protocols and with the Hitachi API for Amazon S3:
- Windows 7
- Windows 8
- Windows 2012 R2 (Standard and Data Center editions)
- Windows Server 2016 (Standard and Data Center editions)
- Windows 10
Unix clients
These Unix clients are qualified for use with the HTTP v1.1, WebDAV, and NFS v3 protocols and with the Hitachi API for Amazon S3:
- HP-UX® 11i v3 (11.31) on Itanium®
- HP-UX 11i v3 (11.31) on PA-RISC®
- IBM AIX 7.1
- Red Hat® Enterprise Linux ES 6.10 and 7.0
Browsers
The table below lists the web browsers that are qualified for use with the HCP System Management, Tenant Management, and Search Consoles and the Namespace Browser. Other browsers or versions may also work.
Browser | Client Operating System |
Internet Explorer® 11* (Note: Internet Explorer compatibility view mode may work, but is not supported by HCP.) | Windows |
Mozilla Firefox® | Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris |
Google Chrome® | Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris |
*The Consoles and Namespace Browser work in Internet Explorer only if ActiveX is enabled. Also, the Consoles work only if the security level is not set to high. |
Client operating systems for HCP Data Migrator
These client operating systems are qualified for use with HCP Data Migrator.
- Microsoft 32-bit Windows:
- Windows XP Professional
- Windows 2003 R2 (Standard and Enterprise Server editions)
- Windows 2008 R2 (Standard and Enterprise Server editions)
- Windows 7
- Windows 8
- Windows 2012 (Standard and Datacenter editions)
- HP-UX 11i v3 (11.31) on Itanium
- HP-UX 11i v3 (11.31) on PA-RISC
- IBM AIX 7.1
- Red Hat Enterprise Linux ES 5 (32-bit)
- Red Hat Enterprise Linux ES 6.10 and 7.0 (64-bit)
- Sun Solaris 10 SPARC
- Sun Solaris 11 SPARC
Platforms for HCP VM
HCP VM runs on these platforms:
- VMware ESXi 6.5 U1 and U2
- VMware ESXi 6.7 U1, U2, and U3
- VMware ESXi 7.0 (qualified on hardware version 17)
- VMware vSAN 6.6
- VMware vSAN 6.7
- VMware vSAN 7.0
- KVM — qualified on Fedora 29 Core
Third-party integrations
The following third-party applications have been tested and shown to work with HCP. Hitachi Vantara does not endorse any of the applications listed below, nor does Hitachi Vantara perform ongoing qualification with subsequent releases of the applications or HCP. Use these and other third-party applications at your own risk.
Hitachi API for Amazon S3 tools
These tools are qualified for use with the Hitachi API for Amazon S3:
- CloudBerry Explorer (does not support multipart upload)
- CloudBerry Explorer PRO (for HCP multipart upload, requires using an Amazon S3 compatible account instead of an HCP account; for CloudBerry internal chunking, requires versioning to be enabled on the target bucket)
- s3curl
- S3 Browser
Mail servers
These mail servers are qualified for use with the SMTP protocol:
- Microsoft Exchange 2010 (64 bit)
- Microsoft Exchange 2013
- Microsoft Exchange 2016
NDMP backup applications
These NDMP backup applications are qualified for use with HCP:
- Hitachi Data Protection Suite 8.0 SP4 (CommVault® Simpana® 8.0)
- Symantec® NetBackup® 7 — To use NetBackup with an HCP system:
- Configure NDMP to require user authentication (that is, select either the Allow username/pwd authenticated operations or Allow digest authenticated operations option in the NDMP protocol panel for the default namespace in the Tenant Management Console).
- Configure NetBackup to send the following directive with the list of backup paths:
set TYPE=openPGP
Windows Active Directory
HCP is compatible with Active Directory on servers running Windows Server 2012 R2 or Windows Server 2016. In either case, all domain controllers in the forest HCP uses for user authentication must minimally be at the 2012 R2 functional level.
Supported hardware
The following sections list hardware that is supported for use in HCP systems.
Supported servers
These servers are supported for HCP systems with internal storage:
- HCP G11 (D52BQ-2U)
- HCP G10 (D51B-2U)
These servers are supported for HCP SAN-attached systems with internal storage:
- HCP G11 (D52BQ-2U)
- HCP G10 (D51B-2U)
Server memory
At least 32 GB of RAM per node is needed to use new software features introduced in HCP 9.x. An HCP system can be upgraded to version 9.x with a minimum of 12 GB of RAM per node, and receive the patches and bug fixes that come with the upgrade, but the system cannot use the new software features. Inadequate RAM causes performance degradation and can negatively affect system stability.
If you have less than 32 GB RAM per node and would like to upgrade to HCP 9.x, contact your Hitachi Vantara account team.
Supported storage platforms
These storage platforms are supported for HCP SAIN systems:
- Hitachi Virtual Storage Platform
- Hitachi Virtual Storage Platform G200
- Hitachi Virtual Storage Platform G400
- Hitachi Virtual Storage Platform G600
- Hitachi Virtual Storage Platform G1000
- Hitachi Virtual Storage Platform G1500
- Hitachi Virtual Storage Platform E990
- Hitachi Virtual Storage Platform 5000 series
Supported back-end network switches
The following back-end network switches are supported in HCP systems:
- Alaxala AX2430
- Arista 7020SR-24C2-R
- Cisco® Nexus® 3K-C31128PQ-10GE
- Cisco® Nexus® 3K-C31108PC-V
- Cisco® Nexus® 5548UP
- Cisco® 5596UP
- Dell PowerConnect™ 2824
- ExtremeSwitching™ VDX® 6740
- ExtremeSwitching™ 210
- ExtremeSwitching™ 6720 - SAIN systems only
- HP 4208VL
- Ruckus ICX® 6430-24
- Ruckus ICX® 6430-24P HPOE
- Ruckus ICX® 430-48
Supported Fibre Channel switches
The following Fibre Channel switches are supported for HCP SAIN systems:
- Brocade 5120
- Brocade 6510
- Cisco MDS 9134
- Cisco MDS 9148
- Cisco MDS 9148S
Supported Fibre Channel host bus adapters
These Fibre Channel host bus adapters (HBAs) are supported for HCP SAIN systems:
- Emulex® LPe 32002-M2-Lightpulse
(for supported firmware and boot BIOS versions, refer to the G11 Hardware Tool set)
- Emulex® LPe 11002-M4
(firmware version 2.82a4, boot BIOS 2.02a1)
- Emulex® LPe 12002-M8
(firmware version 1.10a5, boot BIOS 2.02a2)
- Emulex® LPe 12002-M8 (GQ-CC-7822-Y)
(firmware version 1.10a5, boot BIOS 2.02a2)
- Hitachi FIVE-EX 8Gbps
(firmware version 10.00.05.04)
Issues resolved
Issues resolved in this release
The following table lists the issues resolved in HCP 9.3.0.
Reference Number | SR Number | Description |
HCP-33571 | On a fresh installation, TLS is set to v1.2 by default. On an upgraded system, the TLS setting is carried forward from the previous installation. | |
HCP-37042 | 01498581 | HCP 9.3.0 resolves a problem that caused replication to be slow in environments with a large number of directories. |
HCP-37960 | If a PUT request through the Hitachi API for Amazon S3 specifies the public-read-write canned ACL header and the user making the request does not have the DELETE namespace permission, the PUT returns a 400 response code. Make sure users have the DELETE permission if the public-read-write canned ACL is used. | |
HCP-38050 | 02244326, 02554936, 02469184, 02680647, 01796940 | Failure to join to Active Directory for authenticated CIFS was causing node rolls. The node rolls have been eliminated. |
HCP-38772 | 02301695 | Continuous JVM Warning: WARNING at com.geophile.pool.Pool.take(Pool.java:207) Pool(GlobalConnectionPool, jdbc:postgresql:ris): Pool is at maximum capacity. Tuned the system with new settings for Global Connection Pool Maximum. |
HCP-38903 | HCP 9.3.0 added support for displaying information about the Support Access Credentials on the virtual console. This information is available once the HCP software stack is operational. | |
HCP-38930 | HCP 9.3.0 resolves an issue that previously resulted in the wrong error message showing when exclusive support access credentials were applied while a node was down. | |
HCP-39137 | In a SAN-attached HCP environment, with certain HM800 storage systems (G800 and F800), prior HCP releases did not have the correct information about port assignments, which, in certain configurations, resulted in error messages when HCP tried to query the storage. HCP 9.3.0 resolves the issue. | |
HCP-39167 | Added a new check box on the Internal Logs page in the SMC for collecting the HCR logs along with other HCP logs. As a result, HCR reports can now be downloaded along with logs. | |
HCP-39407 | 02436369 | In prior HCP releases, the add volume procedure could hang and fail to complete, which might occur if new LUNs were added to only a subset of the HCP nodes, and that subset did not include the HCP leader node. HCP 9.3.0 resolves this issue. |
HCP-39453 | 02475342, 02489498 | If a read-from-replica request was issued after a file was replicated, and the replica copy resided on an S Series storage pool but could not be read because nodes were unavailable or the file was corrupted, an unhandled exception could cause a node roll on the replica. HCP 9.3.0 resolves this issue. |
HCP-39459 | HCP 9.3.0 no longer displays a serial port-related error message on the console in HCP VM environments. | |
HCP-39531 | An incorrect alert stating that HCP DNS is required when using Active Directory has been removed. Active Directory may be used as long as you have the HCP system configured with either HCP DNS or a valid external DNS server. | |
HCP-39538 | HCP 9.3.0 resolves an issue that prevented upstream DNS servers from being configured when Enable DNS was set to No during product configuration. In addition, the configuration menu has been updated to ensure that the correct terminology is used. | |
HCP-39544 | 02577144 | A null pointer exception occurred when an operation-based query encountered an object with a retention class that had invalid syntax, such as an invalid duration. HCP now prevents retention classes with invalid syntax from entering the system, so the null pointer exception no longer occurs. |
HCP-39594 | In prior HCP releases, on an HCP system with G11 nodes, the CPU1 sensor information for the G11 nodes was missing from the System Management Console's Hardware information page. HCP 9.3.0 resolves the issue, and the SMC now correctly displays the CPU1 sensor information. | |
HCP-39631 | 02559159 | HCP previously allowed the HTTP OPTIONS method on ports 8000 and 9090. HCP 9.3.0 blocks the OPTIONS method on ports 8000, 9090, and 8888. |
HCP-39637 | HCP 9.3.0 resolves an issue that caused the current database size to be calculated incorrectly during the Move Database to a Dedicated Volume service procedure. As a result, HCP could allow a smaller dedicated database volume than the user documentation permits (the new dedicated database volume must be at least 1.5 times the current database size). | |
HCP-39677 | Running the ScavengingPolicy service on a storage volume containing an attached S30 resulted in a NullPointerException. HCP 9.3.0 resolves this issue. | |
HCP-39823 | In an HCP G11 environment, prior HCP releases may have incorrectly reported fan sensors as having No Redundancy. HCP 9.3.0 resolves the issue. | |
HCP-39996 | 02703587 | A race condition in the Shredding service could prevent other delete requests from acquiring the lock. HCP 9.3.0 resolves this issue. |
HCP-40018 | 02072744 | Node rolls occasionally occurred because of an unhandled exception. HCP 9.3.0 resolves this issue. |
HCP-40060 | 02706680 | When the compression exclude list entered in the SMC contained an invalid character, the JVM could roll. HCP 9.3.0 resolves this issue. |
HCP-40239 | 02739803 | AWS Signature Version 4 presigned URLs expired after 15 minutes even when a longer expiration time was set. HCP 9.3.0 resolves this issue. |
HCP-40307 | 02223916 | The documentation sections on the various Node Recovery procedures have been revised in several places to correct inconsistencies and incorrect information. |
HCP-40476 | Attempts to simultaneously delete the same object through HS3 no longer result in an "UNEXPECTED ERROR" error code. All simultaneous delete requests return "SUCCESS" if the object is deleted. | |
HCP-40638 | HCP timezone information has been updated to version 2021a to reflect recent changes in various time zones. | |
HCP-40717 | Starting with HCP 9.3.0, HCP Data Migrator is no longer included in the Tenant Management Console. If you require access to it, please contact your authorized service provider or Hitachi Vantara customer care. | |
HCP-40763 | 02436369 | HCP 9.3.0 resolves an issue in which, after an online LUN addition procedure failed to complete, a subsequent retry of the procedure failed because HCP determined the highest-priority node incorrectly. |
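The presigned-URL behavior addressed by HCP-40239 concerns the X-Amz-Expires parameter of AWS Signature Version 4 query authentication. As background, here is a minimal standard-library sketch of how such a URL encodes its expiry; it is generic SigV4 query signing, not HCP code, and the host, key, and credentials below are hypothetical:

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get(host, key, access_key, secret_key, region="us-west-1",
                expires=3600, now=None):
    """Build an AWS Signature Version 4 presigned GET URL."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),   # validity window in seconds
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted keys, values percent-encoded.
    query = urllib.parse.urlencode(sorted(params.items()),
                                   quote_via=urllib.parse.quote)
    canonical = "\n".join([
        "GET", f"/{key}", query, f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    # Derive the signing key: HMAC chain over date, region, service, request.
    sig_key = f"AWS4{secret_key}".encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        sig_key = hmac.new(sig_key, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(sig_key, to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"

url = presign_get("hcp.example.com", "bucket/object.txt",
                  "AKIDEXAMPLE", "secretkeyexample", expires=7200)
```

A compliant server honors the X-Amz-Expires window rather than a fixed 15-minute cutoff, which is the behavior the fix restores.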
CVE Records resolved in this release
The HCP 9.3.0 release resolves the following CVEs, in addition to several security weaknesses not associated with CVEs:
CVE Record Number | Hitachi Vantara reference number | Description |
CVE-2015-6644 CVE-2015-7940 CVE-2016-1000338 CVE-2016-1000339 CVE-2016-100034 CVE-2016-1000342 CVE-2016-1000343 CVE-2016-1000344 CVE-2016-1000345 CVE-2016-1000346 CVE-2016-1000352 CVE-2017-13098 CVE-2018-1000180 CVE-2018-1000613 | HCP-35690 | HCP upgraded the version of the BouncyCastle library. |
CVE-2013-2014 CVE-2013-4222 CVE-2013-6391 CVE-2014-0204 CVE-2014-3476 CVE-2014-3520 CVE-2014-3621 CVE-2015-3646 CVE-2015-7546 CVE-2018-14432 CVE-2018-20170 | HCP-40102 | HCP 9.3.0 removed support for HSwift. |
CVE-2017-9805 CVE-2020-17530 | HCP-39368 | HCP 9.3.0 upgraded the version of the Struts library. |
CVE-2015-1832 | HCP-35665, HCP-38967 | HCP no longer ships HCP Data Migrator. A version of HCP Data Migrator that resolved this CVE may be requested from Hitachi Vantara Support. |
CVE-2012-6708 CVE-2015-9251 CVE-2016-7103 CVE-2019-11358 CVE-2020-11022 CVE-2020-11023 | HCP-37943, HCP-35680, HCP-35655 | HCP 9.3.0 resolves these CVEs by upgrading the jQuery library and through additional code changes. |
CVE-2019-11065 CVE-2019-15052 CVE-2019-16370 CVE-2020-11979 CVE-2021-29428 CVE-2021-29429 | HCP-35887 | HCP 9.3.0 upgraded the version of the Gradle library to resolve these CVEs. |
CVE-2017-5929 | HCP-35691 | HCP 9.3.0 upgraded the logback library. |
CVE-2018-1000873 CVE-2018-11307 CVE-2018-12022 CVE-2018-12023 CVE-2018-14718 CVE-2018-14719 CVE-2018-14720 CVE-2018-14721 CVE-2018-19360 CVE-2018-19361 CVE-2018-19362 CVE-2019-12086 CVE-2019-12384 CVE-2019-12814 CVE-2019-14379 CVE-2019-14439 CVE-2019-14540 CVE-2019-16335 CVE-2019-16942 CVE-2019-16943 CVE-2019-17267 CVE-2019-17531 | HCP-35684 | HCP 9.3.0 upgraded the version of the Jackson-databind library. |
CVE-2011-2730 CVE-2013-4152 CVE-2013-6429 CVE-2013-7315 CVE-2014-0054 CVE-2014-1904 CVE-2016-9878 CVE-2018-1270 CVE-2018-1271 CVE-2018-1272 | HCP-35683 | HCP 9.3.0 upgraded the version of the Spring Framework library. |
CVE-2015-6420 CVE-2017-15708 | HCP-35673 | HCP 9.3.0 upgraded the version of the Commons Collections library. |
Compatibility issues introduced in HCP 8.2 or later
The following table lists the compatibility issues introduced in HCP v8.2 or later. The issues are listed in ascending order by reference number.
Ref. number | Description | Version introduced in |
HCP-33074, HCP-35329 | In HCP v8.2, the HCP software was upgraded to Jetty v9. The upgrade introduces several security enhancements that might impact some deployments. | HCP v8.2 |
HCP-33583 | HCP now requires that the x-amz-date header value is within 15 minutes of when HCP receives the Hitachi API for Amazon S3 request. | HCP v8.2 |
HCP-33672 | HCP now validates x-amz-date headers on appropriate Hitachi API for Amazon S3 requests. | HCP v8.2 |
HCP-35286 | HCP now sends the severity of the EventID/messages such as NOTICE, WARNING or ERROR to Syslog servers. | HCP v8.1 |
HCP-37063 | A namespace with SMTP enabled that writes directly to an HCP S Series Node is no longer a supported use case. | HCP v8.2 |
HCP-37858 | A namespace with SMTP enabled that writes directly to an HCP S Series Node is no longer a supported use case. | HCP v9.1 |
Known issues
The next table lists known issues in the current release of HCP. The issues are listed in order by reference number. Where applicable, the service request number is also shown.
Reference Number | SR Number | Description |
HCP-804 | | HCP Data Migrator can set the value of the hold parameter to true, but not to false: HCP Data Migrator can be used to place an object on hold by updating the system metadata for the object to set the hold parameter to true. However, you cannot use HCP Data Migrator to remove a hold from an object because HCP Data Migrator cannot set the value of the hold parameter to false. |
HCP-5153 | | False log messages with the lowest-numbered node addition: When a new node is added to an HCP system, a message about it is written to the system log. If the number of the new node is lower than that of any existing node, the same message is written for each existing node, as if it were newly added. |
HCP-5179 | | Browser caching: When an object is added to a namespace, deleted, and then added again with the same name, it may appear to have the old content when viewed through a web browser. Workaround: To see the new content, clear the browser cache. Be sure to use the applicable browser option to do this rather than restarting the computer. |
HCP-7043 | | Displaying UTF-16-encoded objects: Objects with content that uses UTF-16 character encoding may not be displayed as expected due to the limitations of some browser and operating system combinations. Regardless of the appearance on the screen, the object content HCP returns is guaranteed to be identical to the data before it was stored. |
HCP-7108 | | Node restart with cross-mapped storage: In SAIN systems, if a cross-mapped node restarts while one of its physical paths to the storage system is broken, the node remains unavailable. Workaround: Fix the broken path and restart the node from the System Management Console. |
HCP-8385 | | Exposed internal mechanism for dead properties for collections: HCP uses an internal mechanism for storing WebDAV dead properties for a collection. This mechanism entails the creation of a dummy object named .webdav_properties. This object is inappropriately: If you are storing dead properties for collections, do not delete any .webdav_properties objects. |
HCP-8570 | | Missed log messages when no leader node exists: Normally, one node in an HCP system is responsible for writing messages to the system log. This node is called the leader node. Rarely, brief periods occur during which no leader node exists (for example, because the leader node has failed and a new leader node has not yet been established). During such periods, messages for which the leader node is responsible are not written to the log. |
HCP-8665 | | Shredding in SAIN systems: In SAIN systems, HCP may not effectively execute all three passes of the shredding algorithm when shredding objects because some storage systems make extensive use of disk caching. Depending on the particular hardware configuration and the current load on the system, some of the writes from the shredding algorithm may not make it from the cache to disk. |
HCP-9212 | | Log display skips messages: When you page through a display of log messages in the System Management Console or Tenant Management Console, some messages may be skipped. This happens because the Console retrieves the next or previous group of messages based on the message timestamps. Each time you request a next page of messages, the Console starts the new page with the message with the next later timestamp from the last message on the current page. If a page boundary falls between multiple messages with the same timestamp, retrieving messages starting with the next timestamp skips the messages that come after the page break. The equivalent happens when you request a previous page of messages. As additional messages are added to the log, the page boundaries change, with the result that previously skipped messages reappear. |
HCP-9360 | | Browser pages for large directories: You can view the contents of a namespace in a web browser through HTTP (default namespace only) or WebDAV. Some browsers, however, may not be able to successfully render pages for directories that contain a very large number of objects. |
HCP-11317 | | Using NFS to delete objects open for read: Using NFS, if you try to delete an object that is currently open for read on the same client, HCP returns this error: Read-only file system. |
HCP-11667 | | Appending to objects on unavailable nodes: If an object is open for append on a node that becomes unavailable, attempts to append to the object fail. |
HCP-12089 | | Cannot ingest very large email attachments: HCP fails to ingest email attachments substantially greater than 400 MB. In such cases, the client receives a 221 return code. |
HCP-13183 | | SNMP version 2c traps sent for version 3 traps: HCP can be configured to use SNMP version 3. However, when configured this way, HCP sends version 2c traps instead of the expected version 3 traps. Workaround: To receive traps from HCP, have your SNMP application accept SNMP version 2c traps. |
HCP-13574 | | WebDAV does not correctly list objects with custom metadata: Namespaces can be configured to store WebDAV dead properties in custom-metadata.xml files. If regular custom metadata is stored for one or more objects in a directory before this configuration is set, subsequent WebDAV requests for listings of that directory fail with an XML parsing error. Workaround: Do not use custom-metadata.xml files to store WebDAV properties for an object if any objects in the same directory already have custom metadata. |
HCP-16516 |
Using Internet Explorer, cannot log in to HCP as a local user With Internet Explorer, if the Active Directory user account with which you’re currently logged into Windows is not an account that’s recognized by HCP and any of these applies, Internet Explorer displays a Connect window instead of the page with the link to the login page for the target interface:
If you enter credentials for an HCP user account in the Connect window, Internet Explorer returns an error message. Workaround: To access the target interface using an HCP user account, click on the Cancel button in the Connect window to display the page with the link to the login page for the target interface. |
|
HCP-18233 |
Changed computer account not added to all applicable groups in Active Directory When you enable HCP support for Active Directory, the HCP computer account you specify is automatically added to the groups in Active Directory that include the user account you specify. If you then remove the computer account from one or more of those groups and reconfigure Active Directory support with a new computer account, the new computer account is not automatically added to the groups from which the previous computer account was removed. Workaround: Do not remove the old computer account from the groups in Active Directory until after you have changed the computer account in HCP. If you have already removed the old computer account from one or more groups, resubmit the Active Directory configuration in HCP without changing the computer account. This puts that computer account back in the groups from which it was removed. When you subsequently change the computer account in HCP, the new computer account will be added to all the groups that include the user account. |
|
HCP-18352 |
HCP unresponsive after Active Directory cache cleared while Active Directory is unavailable If you clear the Active Directory cache while HCP cannot communicate with Active Directory, the HCP system becomes unresponsive for up to ten minutes. |
|
HCP-18654 |
No success or error message in response to action taken in Console Occasionally, the System Management Console and Tenant Management Console do not display any success or error messages in response to an action that results in a fresh display of the page on which the action was taken. |
|
HCP-19123 |
Objects incorrectly reported as irreparable or unavailable after data migration During a data migration, the migration service may incorrectly report one or more objects as irreparable or unavailable. After the data migration is complete, you can run the Content Verification service to clear these errors. |
|
HCP-19128 |
Downloads with HTTPS fail in Internet Explorer 9 With Internet Explorer 9, attempts to download files (such as chargeback reports and SSL certificates) from URLs that use SSL security (that is, URLs that start with HTTPS) fail. Workaround: In Internet Explorer 9:
|
|
HCP-20401 |
Node restart due to large element content in annotation While XML checking of custom metadata is enabled, if an annotation is added to an object where the content of an element in the annotation is very large, a node may restart itself. Workaround: Disable XML checking for the namespace that contains the object. |
|
HCP-20706 |
DPL 2 object copies stored on same node If a DPL 2 namespace service plan is configured so that the namespace stores one object copy on primary running storage and the other copy on an external storage volume, both copies can be stored on the same node, which can make the object unavailable if the node fails. |
|
HCP-20827 |
Delayed read from replica when external storage unavailable In a replicated namespace, if the only copy of the data for an object is in external storage and that storage is unavailable, NFS and WebDAV requests for the object may time out for several tries before HCP retrieves the object from the replica. Workaround: Either bring the external storage back online, or retry the request in five minutes. |
|
HCP-21365 |
Alert about Active Directory connection with online HCP upgrade In an HCP system that’s configured to support Active Directory, during an online upgrade and for a short time after the upgrade is complete, the System Management Console may show an alert indicating a problem with support for Active Directory. This alert is most likely false and will go away on its own. If Active Directory authentication is working, the alert can be safely ignored. |
|
HCP-21056 |
Network interface event upon MTU change When you change the MTU for a network, the network interface may go down and then come back up on nodes that are Dell 1950 servers. |
|
HCP-22241 |
Username mappings are applied to Active Directory users of HCP namespaces For the default namespace, Active Directory user authentication is implemented through the use of a username mapping file that associates AD usernames with UIDs and GIDs. If an AD user included in the username mapping file also has access to an HCP namespace, the objects that the user stores in the HCP namespace have the UID and GID specified in the username mapping file. As a result, a user using CIFS for authenticated access who is included in the username mapping file or a user using NFS has access to such an object only if one of these is true:
Users using HTTP, HS3, WebDAV, CIFS for authenticated access who are not included in the username mapping file, or CIFS for anonymous access have access to such an object regardless of the object UID and GID. |
|
HCP-23012 |
Cannot use ssh -6 to connect to a node using a link local IPv6 address HCP does not support using SSH to connect to a node using its link local IPv6 address. This issue is caused by Red Hat bug 719178: Applications can't connect to IPv6 link-local addresses learned through nss-mdns and Avahi. |
|
HCP-23070 |
False alerts for network with same name as deleted network If you create a network with at least one node IP address, then delete the network, and then create a new network with the same name as the deleted network and no node IP addresses, the Overview, Hardware, Storage Node, and Networks pages in the System Management Console display alerts indicating that a network error exists. Additionally, HCP writes this message to the system log: Network interface bond0.xxxx for network network-name is not functioning properly. When you subsequently assign IP addresses for the network to one or more nodes, the alerts disappear. |
|
HCP-23881 |
Nodes may fail with the error message “Max Connections hit: Could not get a connection, pool is exhausted” Even if the supported limit of 200 connections is not reached, if too many clients attempt to connect to the same namespace at the same time, one or more nodes in the HCP system may fail with the error message, “Max Connections hit: Could not get a connection, pool is exhausted”. Workaround: Upgrade to release 7.1 or later of HCP and increase the system RAM on all nodes; at least 32 GB of RAM must be added. |
|
HCP-24155 |
When performing an add-drives procedure on an HCP node, an existing node sometimes issues a “barrierWait” message and then hangs When performing an add-drives procedure on an HCP node, if one node fails, its partner node may issue a "Waiting for others at barrierWait" message and then hang. Workaround: To get the existing node back into a working state, press Ctrl+C to cancel the drive addition procedure. You can then restart the procedure. |
|
HCP-24156 |
Cannot use a domain name to connect to a namespace on an IPv6 or dual-mode HCP network HCP-DM does not support the use of IPv6 addresses to connect to a namespace on an HCP system. HCP-DM can use IPv4 addresses to connect to a namespace on a dual-mode HCP network. However, if HCP-DM tries to use a domain name to connect to a namespace on a dual-mode network, the DNS returns both IPv6 and IPv4 addresses for the network when it resolves the domain name. If HCP-DM then tries to use the IPv6 addresses to connect to the namespace, the connection fails. Workaround: To ensure that HCP-DM can successfully connect to a namespace on a dual-mode HCP network, configure HCP-DM to connect to that namespace using the IPv4 addresses for the network. |
|
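As a hedged illustration of the workaround above, the sketch below uses Python's standard socket module to resolve only the IPv4 (A record) addresses for a domain; HCP-DM can then be configured with one of those addresses instead of the domain name. This is a generic client-side sketch, not an HCP-DM feature, and the domain used is only an example.

```python
import socket

def ipv4_addresses(domain, port=443):
    """Return only the IPv4 addresses for a domain, ignoring any
    IPv6 (AAAA) records that DNS may also return on a dual-mode network."""
    infos = socket.getaddrinfo(domain, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # for IPv4, sockaddr is (address, port).
    return sorted({info[4][0] for info in infos})

# Configure HCP-DM with one of these addresses rather than the domain name.
print(ipv4_addresses("localhost"))
```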
HCP-24436 |
Clearing the AD cache causes inconsistent directory permissions After the AD cache is cleared on an HCP system that accesses AD over CIFS, only the first user to access a given CIFS share sees the original permissions on their files and folders. All other users who connect to the share see root/root permissions on existing files and folders. |
|
HCP-24472 |
When using AD for HCP authentication, if a user has an AD username that includes a % character, that user cannot access the HCP system If you attempt to log into the HCP System Management Console or Tenant Management Console using an Active Directory username that includes a % (percent) character, the HCP user authentication fails. |
|
HCP-24589 |
When attempting to update annotations for objects that have been tiered to one or more types of cloud storage, HCP sometimes returns a 503 error When HCP attempts to update annotations on objects that have been tiered to cloud storage, the updates fail with a 503 error if HCP cannot connect to the applicable cloud storage service endpoints or cannot access the applicable cloud storage buckets, containers, or namespaces. Workaround: Restore the connections between HCP and each applicable cloud storage service endpoint, and make sure HCP can successfully access each applicable cloud storage bucket, container, and namespace. You should then be able to update the annotations for any objects stored in each bucket, container, and namespace. |
|
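Until connectivity is restored, clients can also guard annotation updates with a simple retry loop. The sketch below is generic and not part of any HCP SDK; send_update stands in for whatever function issues the annotation request and returns the HTTP status code.

```python
import time

def update_with_retry(send_update, attempts=5, backoff=1.0):
    """Call send_update() until it stops returning HTTP 503,
    sleeping with exponential backoff between attempts."""
    status = send_update()
    for attempt in range(1, attempts):
        if status != 503:
            break
        time.sleep(backoff * (2 ** (attempt - 1)))
        status = send_update()
    return status
```

The loop gives up after a bounded number of attempts rather than retrying forever, so a persistent outage still surfaces as a 503 to the caller.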
HCP-24864 |
Hitachi Device Manager cannot send updates to an HCP system with IPv6 only mode enabled An HCP system with IPv6 only mode enabled can successfully connect to the Hitachi Device Manager server, but cannot receive Hitachi Device Manager updates. |
|
HCP-24887 |
When performing the TrueCopy storage system replication procedure, the san_update command may fail When using the TrueCopy procedure to replicate the HCP system OS and data LUNs to a different storage system, the san_update command may fail with an error that the file system on the source storage system differs from the file system on the target storage system. |
|
HCP-25388 |
While HTTPS is enabled, HCP S Series Nodes fail to create storage components when added to the HCP system by virtual IP An S Series Node cannot use HTTPS when being added to HCP by virtual IP. While HTTPS is enabled, HCP does not create storage components for S Series Nodes added by virtual IP address. Workaround: In the System Management Console, when adding an S Series Node through virtual IP, go to the Connection tab of the Add Node wizard, deselect the Use HTTPS for management option and, under the Advanced panel, deselect the Use HTTPS for data access option before completing the add node procedure. |
|
HCP-25595 |
Pausing or failing an NFS write operation may cause HCP system processes to hang Pausing or failing an NFS write operation increases the chances of HCP system processes hanging. |
|
HCP-25602 |
While the Migration service is running, the migration status occasionally shows incorrect values Occasionally while the Migration service is running, the migration status values for the total number of bytes being migrated and the total number of objects being migrated are incorrect. This occurs regardless of how many bytes or objects are actually migrated. Once the migration completes, the migration status values become accurate. |
|
HCP-25697 |
Winbind errors with AD occasionally cause HCP nodes to restart HCP system communication errors with AD may cause winbind to restart. If winbind restarts more than 100 times, the HCP system restarts. |
|
HCP-25731 |
Upgrade NTP to fix vulnerabilities The version of NTP that HCP currently uses was discovered to have some vulnerabilities. For information about these vulnerabilities, refer to the NTP security advisory document. |
|
HCP-25761 |
If your HCP system has data tiered to public cloud, the upgrade process to version 7.1 of HCP is extended If your HCP system has data tiered to public cloud, metrics need to be recomputed when upgrading to version 7.1 of HCP. This extends the upgrade time. |
|
HCP-25997 |
Node recovery does not work for the HCP 500XL with new disks Node recovery procedures fail with unformatted database drives. |
|
HCP-26037 |
The Adding Logical Volumes service might fail if adding previously used, formatted LUNs Occasionally during the add LUN service procedure, previously used, formatted LUNs might not be added to all nodes. If this occurs, the error message, "Failed to execute Partx" appears. Workaround: Restart the service procedure. |
|
HCP-26043 |
Incorrectly shutting down and restarting a replication link when updating the signed certificate causes the replication link to fail If you incorrectly shut down and restart a replication link while uploading an SSL certificate, the replication link rejects the certificate and fails. Workaround: Follow this certificate upload procedure:
|
|
HCP-26058 |
Upgrading to HCP 7.2 or later prevents HCP from connecting to HCP Data Migrator Release 7.2 and later of HCP use a different SSL cipher than previous releases. HCP Data Migrator does not support these ciphers if HCP Data Migrator is run with an outdated Java runtime. |
|
HCP-26066 |
Log download fails under certain conditions Log download initiated through the System Management Console could fail due to external issues such as networking. Workaround: Restart the log download. |
|
HCP-26127, HCP-26128 |
HTTPS certificate errors appear during failover Sending HTTPS requests to system A in a replication link that has failed over causes certificate errors to be reported, because the Subject Common Name in the certificate does not match the domain name in the request. Workaround: Add Subject Alternative Name entries to the certificates used by HCP for HTTPS. |
|
HCP-26158 |
Under specific conditions, creating an active/active replication link between two systems causes nodes to reboot Under specific conditions, creating a replication link between two systems running version 7.1.1 of HCP that have ingested objects and have multiple namespaces causes nodes to reboot. |
|
HCP-26775 |
Cannot download certificates from HCP through Internet Explorer 8 When you try to download a certificate from HCP using Internet Explorer (IE) 8, you may receive the "Unable to download." error message. This is caused by a known IE 8 issue. Workaround: For more information on this issue, see the Microsoft Knowledge Base article, https://support.microsoft.com/en-us/kb/323308. |
|
HCP-27121 |
Cannot download HCP system logs during an online upgrade to release 7.2 of HCP During an online upgrade to release 7.2 of HCP, the HCP system logs cannot be downloaded from the System Management Console. Workaround: During the online upgrade, access the System Management Console by entering the IP address of a node that has already upgraded into your web browser. Perform the log download procedure through the targeted node. |
|
HCP-27176 |
The Network page Advanced Settings tab appears blank when the HCP system is read-only When an HCP system is in a read-only state, the Advanced Settings tab on the System Management Console Configuration Networks page appears blank. |
|
HCP-27737 |
HCP system raises full capacity alarm if a single volume is over 95% full If a single volume in an HCP system becomes 95% full, the full file system warning is triggered for the system. |
|
HCP-27757 |
Active Directory node account not removed when node retired Running the retire node procedure on a node with Active Directory enabled does not remove the node computer account from the domain controllers. Workaround: Remove the node computer account from the domain controllers. |
|
HCP-27810 |
When switching tabs during a replication schedule update, an incorrect error message is occasionally displayed After creating an active/active replication link, clicking on the Update Schedule button on the System Management Console page, and switching between the local and remote schedule tabs, an error may appear even though the replication link is working properly. |
|
HCP-27882 |
After upgrade to 7.2, some third party applications receive HTTP 401 error to PUT requests With release 7.2 of HCP, SPNEGO changes make certain third party applications incompatible with HCP. Workaround: Contact Hitachi Vantara Support to enable third-party compatibility. |
|
HCP-29573 |
Changing HCP VM network adapter from e1000 to VMXnet3 causes VLAN performance issues On an HCP VM with VLANs enabled, converting from an e1000 to VMXnet3 network adapter causes VLAN performance issues. |
|
HCP-29612 HCP-29301 |
Database connections exhausted On high-load HCP systems that are balancing metadata, nodes can restart due to exceeding the database connection limit. |
|
HCP-29645 |
Missing SPNs falsely reported due to replication topology with tenants or namespaces on custom network In a replication topology where systems have full SSO support, HCP may incorrectly report missing SPN errors for replicating tenants and namespaces that are using a custom network with a non-default domain name. |
|
HCP-30018 |
Namespace browser cannot load directory due to ASCII characters in object name The namespace browser cannot display the contents of a directory that contains an object with any of the following ASCII characters in its name: %00-%0F, %10-%1F, or %20. |
|
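Applications that need their content browsable can screen object names before ingest. The helper below is a hypothetical client-side check, not an HCP API; it flags exactly the character codes listed above (%00 through %20).

```python
def browser_safe_name(name):
    """Return True if the object name contains no ASCII characters in
    the %00-%20 range (control characters and the space character),
    which the namespace browser cannot render in directory listings."""
    return all(ord(c) > 0x20 for c in name)
```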
HCP-30058 |
HS3 500 Internal Server Error due to double slash (//) in object name If an object has a double slash (//) in its object name and the object is ingested using HS3, HCP returns a HTTP 500 internal server error. |
|
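Clients can avoid the 500 error by normalizing keys before ingest. This is an illustrative helper, not part of any HCP SDK; it collapses repeated slashes in an object name.

```python
import re

def normalize_key(key):
    """Collapse runs of slashes (e.g. 'dir//file') to a single slash
    so the object name is safe to ingest through HS3."""
    return re.sub(r"/{2,}", "/", key)
```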
HCP-30529 |
Irreparable objects appearing due to migrating objects ingested in HCP release 5.0 to new nodes in HCP release 7.0 or later If objects with custom metadata are ingested in a system on HCP release 5.X or earlier, the system is chain upgraded to release 7.0 or later, and the objects are migrated to new nodes, the objects become irreparable. Workaround: Run the Content Verification service between each upgrade in the upgrade chain. |
|
HCP-30649 |
CR220S system installation or upgrade fails in HCP release version 8.0 due to active processor set to one core in BIOS If an HCP system uses CR220S servers and has the active processor setting set to one core in the BIOS, the installation or upgrade procedure fails for HCP release version 8.0. Workaround: Before installing or upgrading an HCP system, set the active processor setting in the BIOS to max cores. |
|
HCP-30765 |
Replication link shutdown error remains after replication resumes If all replication links shut down, HCP shows a "Replication Links Shut Down - All activity on all links to and from this system has been stopped" error message on the page. When the replication links resume, the error message does not go away. Workaround: Once all replication links resume, create a new replication link to make the error message go away. |
|
HCP-30958 |
DNS failover fails due to domain name change in active/passive replication link If a system in an active/passive replication link has its domain name changed, the replica system does not receive the updated domain name, which causes DNS failover to fail. Workaround: After you change the domain name for the primary system, update any setting on the tenant overview page to replicate the new domain name. |
|
HCP-31061 |
HCP vulnerable to brute-force password detection attacks With an HTTP-based interface (that is, the HTTP REST, HS3, HSwift, and management APIs), if you authenticate using an HCP user account, HCP does not lock out the user account after multiple failed attempts to access the system. Similarly, HCP does not lock out HCP user accounts after multiple failed attempts to change the account password. Because accounts are not locked out under these circumstances, HCP is vulnerable to brute-force password detection attacks. Note: With Active Directory authentication, AD lockout policies enforce account lockouts. |
|
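Deployments that front HCP with a proxy or gateway can approximate a lockout there. The sketch below is a minimal, hypothetical sliding-window tracker (not an HCP feature): after max_failures failed logins for a username within window seconds, further attempts are refused.

```python
import time
from collections import defaultdict, deque

class LockoutTracker:
    """Sliding-window failed-login tracker for a front-end proxy."""

    def __init__(self, max_failures=5, window=300.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.window = window
        self.clock = clock  # injectable for testing
        self.failures = defaultdict(deque)

    def record_failure(self, username):
        self.failures[username].append(self.clock())

    def is_locked(self, username):
        now = self.clock()
        attempts = self.failures[username]
        # Drop failures that have fallen out of the sliding window.
        while attempts and now - attempts[0] > self.window:
            attempts.popleft()
        return len(attempts) >= self.max_failures
```

The proxy would call record_failure on each 401 from HCP and reject login attempts while is_locked returns True, so repeated guessing never reaches HCP itself.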
HCP-31082 |
Node restarts slowed due to configuring 32 or more LUNs on CR220 nodes with AMS 2500 storage systems For an HCP system with CR220 nodes and AMS 2500 storage systems, configuring 32 or more LUNs causes simultaneous node restarts to take longer than normal. The restarts take longer for every extra LUN after LUN 31. |
|
HCP-31097 |
DNS failover not working for replication link converted from active/passive to active/active After a replication link is converted from active/passive to active/active, DNS failover no longer works for that link. Workaround: Delete the active/passive replication link and then recreate it as an active/active link. |
|
HCP-31112 | Objects left in "VALID, UNREPLICATABLE_OPEN" state and cannot be cleaned up by running garbage collection | |
HCP-31400 |
Tar gzip compressed objects fail MD5 check due to Firefox browser issue Tar gzip compressed objects downloaded from HCP through the Firefox browser fail the MD5 check. |
|
HCP-31431 |
Links in a geo-protection replication topology can be added to a replication chain Geo-protection replication chains are not supported. If a system in the geo-protection topology becomes unavailable, the geo-protected systems outside of the topology could experience data unavailability. |
|
HCP-31488 |
System restart due to unavailable node not receiving management network IP address If a node is unavailable when the management network is enabled, the node does not receive the management network IP address. If any other change is made to the management network, the HCP system shuts down so the node can receive the management network IP address. Workaround: Only enable the management network when all nodes are available. |
|
HCP-31499 |
Inconsistent case sensitivity for Hitachi API for Amazon S3 multipart upload query parameters Case sensitivity is inconsistent among the query parameters used with S3 compatible API requests related to multipart uploads. For example, the uploadId query parameter used in requests to upload a part is not case sensitive, while the uploadId query parameter used in requests to list the parts of a multipart upload or complete or abort a multipart upload is case sensitive. |
|
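To stay on the safe side of this inconsistency, clients should reuse the uploadId value, with exactly the spelling returned by the initiate request, in every subsequent multipart call. The helper below is an illustrative sketch (not an HCP SDK function) that builds the query string with a single fixed 'uploadId' casing.

```python
from urllib.parse import urlencode

def multipart_query(upload_id, part_number=None):
    """Build the multipart-upload query string using the exact
    'uploadId' casing for every operation (upload part, list parts,
    complete, abort), since some operations treat the name
    case-sensitively."""
    params = {"uploadId": upload_id}
    if part_number is not None:
        params["partNumber"] = part_number
    return urlencode(params)
```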
HCP-31529 |
System restart fails after changing management network configuration The HCP system should restart each time a change is made to the management network configuration. However, after the management network is enabled for the first time, the HCP system does not restart again in response to changes made to the management network configuration. |
|
HCP-31721 HCP-29790 |
Duplicate elimination service cannot deduplicate compressed objects in HCP S Series storage pools When duplicate objects are compressed in an HCP S Series pool, the compression process creates unique files that the duplicate elimination service cannot deduplicate. |
|
HCP-31841 |
IPMI v2.0 password hash disclosure A vulnerability regarding IPMI v2.0 puts password protection at risk. For more information on this, see https://nvd.nist.gov/vuln/detail/CVE-2013-4786. |
|
HCP-31972 |
Only newly-added disks should be verified in HCP system prechecks During the HCP system prechecks in the add drive procedure, the disk size of only the newly-added LUN should be verified by the HCP system. |
|
HCP-32018 | Migration hangs and produces inconsistent status information | |
HCP-32164 | Unable to change the name of an HCP S Series component in the HCP System Management Console | |
HCP-32417 | System restart is required when adding, removing, and then re-adding a management port network adapter to an HCP VM system | |
HCP-32486 | The Active Directory allowlist filter is removed when the HCP System Management Console fails to update settings. | |
HCP-32555 | 00294339 | Watchdog timer causes premature soft lockup panic |
HCP-32818 |
Complete multipart upload operation failure due to generated ETAG The ETAG generated by HCP on a complete multipart upload operation is based on the default namespace hash scheme. This causes complete multipart upload operations to fail with TSM. Workaround: Use MD5 as the hash scheme for writing to a namespace with TSM. |
|
HCP-32819 |
AWS SDK failure due to invalid Content-Type Specifying an invalid Content-Type request header causes the AWS SDK to fail. |
|
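A client-side guard can catch malformed values before they reach the SDK. This is a hypothetical pre-flight check, not part of the AWS SDK: it verifies that the header's media type is a syntactically valid type/subtype pair per the HTTP token grammar.

```python
import re

# RFC 7230 token characters, used for both the type and the subtype.
_TOKEN = r"[!#$%&'*+.^_`|~0-9A-Za-z-]+"
_MEDIA_TYPE_RE = re.compile(rf"^{_TOKEN}/{_TOKEN}$")

def valid_content_type(value):
    """Return True if the Content-Type header value starts with a
    well-formed type/subtype media type (parameters after ';' are
    ignored by this check)."""
    media_type = value.split(";", 1)[0].strip()
    return bool(_MEDIA_TYPE_RE.match(media_type))
```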
HCP-32845 |
Incorrect information included in object configuration files Certain object configuration files include incorrect information. |
|
HCP-32848 |
Delete old database procedure hangs. When administering namespaces with 100,000 objects or more, the Delete Old Database procedure is known to run indefinitely and display #, even though the deletion has completed. |
|
HCP-32856 |
Search Security Deny List not working correctly In the HCP System Management Console, the Deny List on the Search Security page does not deny access to the clients listed. |
|
HCP-32900 |
In the HCP System Management Console, the Hardware page does not report the status of the management NIC when the NIC has been enabled and subsequently removed on the VM node If the link status of the management port network connection fails on a VMware ESXi server or KVM host, HCP cannot detect the link failure and does not raise any corresponding alarms. There should not be complete isolation from HCP management access since there are at least three other nodes providing management port network connectivity. |
|
HCP-32957 | Metadata query engine with sort option causes Apache Solr Java Virtual Machine to run out of memory | |
HCP-33048 | Issues occur when non-ASCII characters are used for tenant names.
Workaround: Use ASCII characters for tenant names. |
|
HCP-33427 |
After upgrading to HCP 8.2, some third-party applications receive an HTTP 401 error to PUT requests With release 7.2 of HCP, SPNEGO changes make certain third-party applications incompatible with HCP. Workaround: Contact Hitachi Vantara Support to enable third-party compatibility. |
|
HCP-33527 | 00830182 | Garbage collection fails on objects with internal references that no longer exist. |
HCP-33541 | Active/passive replication link schedule does not adjust for systems located in different time zones | |
HCP-33980 | Some metadata headers are processed inconsistently between AWS S3 and HCP | |
HCP-34114 | When using the S3 API, entering non-ASCII characters as the property name for custom metadata for an object returns a 400 error.
Workaround: Use ASCII characters in the property name. |
|
HCP-34203 | Capacity calculations and UI display are inconsistent between HCP and HCP S Series Node | |
HCP-34207 | Faulty SSD drives can cause a failure when adding a new SSD volume to HCP | |
HCP-34222 | 01219400 | During an online upgrade, the event-based retention field is left empty for certain namespaces |
HCP-34333 |
01247729 01247736 |
In an HCP cluster that contains an HCP S Series storage component, when an outage in the cluster leader node occurs, communication with the storage component fails, and HCP reports an error |
HCP-34388 | 01224371 |
When a zero-copy-failover partner node reboots after a failover, the metadata query engine does not recover Workaround: Edit the following files:
|
HCP-34515 | 01312806 | Log downloads consume a major portion of the /var file system |
HCP-34516 |
01312806 01310161 |
Overflowed, thin-provisioned block storage might cause data loss Workaround: Do not overprovision dynamic pools. |
HCP-34764 | 01309564 | After disabling CIFS on an HCP namespace, the Windows client connection remains active, and objects are written to the root (/) file system |
HCP-34982 |
In the HCP Search Console UI, the login ID changes to null and a subsequent search returns "500 Error: Internal server error" When you open the Tenant Management Console from the System Management Console, initiate a search by logging in to the Search Console with your system-level credentials, and either refresh the page or click the search button, the following events occur:
If you log in to the Search Console again with your tenant-level credentials and initiate a search, the query returns the following error message: 500 Error: Internal server error Workaround: Depending on the circumstances that led to this error, complete the first or both of the following steps:
|
|
HCP-34993 |
01354829 01331997 |
Policy state of over 1 million objects causes node reboots |
HCP-35027 | 01415199 | Migration finalization might time out and require a restart | |
HCP-35089 | 01426836 | Zero-copy failover failback might leave behind stale mount points | |
HCP-36001 | 01410508 | Node recovery during an online upgrade procedure targets a healthy node |
HCP-36632 | 01547564 | Multipart upload fails in the FileOpenForWriteIndex.suspendAndSwap function and returns "Attempt to suspend and swap a multipart upload file handle" error |
HCP-36744 | In rare circumstances, when the HCP G11 operating system is installed on a node, the installation process may hang while making file systems. This has typically been observed in SAN-attached configurations. The symptom occurs when HCP G11 detects that a file system already appears to exist on the volume: the file system creation command waits for user input, but the prompt output by that command is not displayed on the console. If you are certain that the file system formatting procedure can continue (that is, the volumes are mapped correctly and all data on the volume can be destroyed), type yes and press Enter, which should allow the procedure to continue. | |
HCP-36798 | 01709881 |
SNMP returns the incorrect replication link name Workaround: Use the HCP Management API to return the correct replication link name. |
HCP-37247 | HCP systems running version 8.2 and later may experience network interface flapping and resetting of network adapters. This issue may be caused by a low-level defect in the kernel that causes a network interface to stop transmitting for several seconds, which leads to the interface resetting itself and self-recovering. In active-backup network interface configurations, this leads to a network interface failover within the corresponding front-end or back-end network bond. There is no noticeable impact to clients during this very short time interval. | |
HCP-37335 |
HCP product installation procedure may fail with the following error message if there is a USB drive or external DVD drive connected to the system when running the installation wizard: umount: /dev/sr0: umount failed: Invalid argument. This may occur in both VM and appliance configurations. Disconnect all unnecessary USB drives and external DVD drives from the system, and retry the installation procedure. |
|
HCP-37342 | An unexpected duplicate row in the per-object metadata table will cause node outages until the duplicate row is removed. | |
HCP-37381 | A race condition in the HS3 protocol allowed both directory and file objects to be created with the same pathname. Unlike AWS, HCP has a concept of directories, so an upper-level directory cannot also be a file. | |
HCP-37395 | If HCP fails to join an Active Directory domain, it can leave residual entries in the related configuration file that put HCP into an inconsistent state. This causes continuous restarts of HCP. | |
HCP-37426 | Attempting to perform DELETE and PUTCOPY simultaneously on an object results in a "Non-replicating Irreparable objects detected" error message in the HCP System Management Console. | |
HCP-37502 | 01789465
01789463 |
ATR Finalize Migration fails with "No space left on device". This occurs on HCP 500 systems that boot from SAN when one storage system is being replaced with another. To complete the migration, arc-deploy tries to copy from LUN #0 to LUN #128; on older HCP 500 systems, the /boot partition is only 128 MB in size. |
HCP-37631 | Removing a tenant from an Active-Passive replication link on the Active site, followed by its deletion from the Passive site, can result in lost data if an Active/Active replication link is later created between the same clusters and the same tenant is added to it. | |
HCP-37695 | JVM logs contain customer passwords in clear text. | |
HCP-37696 | 01612339 | MQE shard / Solr core balancing does not function as desired for IPL=2, causing incomplete query results. |
HCP-37753 | The HCP system goes into a read-only state because of node rolls caused by the metadata manager failing to start. The system may also appear unstable.
Workaround: Reboot the system. | |
HCP-37754 | HCP installed in an ESXi environment may display the following FSTRIM error message on the System Management Console: "Failure encountered attempting to trim volumes on nodes:", and an error with Event ID 2818 is listed in the error log under Major Events. No escalations of this issue have been observed internally or at partners. Contact Hitachi Vantara customer support if you encounter this error message. | |
HCP-37778 | After an upgrade of an HCP system is completed, the System Management Console Hardware page may display an Initializing status for some of the logical volumes. This is the result of the device SMART error log containing error records. Contact Hitachi Vantara technical support to identify the error condition and the corrective action to resolve the symptom. | |
HCP-37810 | When provisioning rear-cage SSDs to the HCP cluster on a subset of nodes in a SAN-attached G10 or G11 configuration, the service procedure tries to add rear-cage SSDs on both nodes that comprise a Zero-Copy-Failover (ZCF) pair, even if one of those nodes has no rear-cage SSD to be provisioned. This leads to an error in the service procedure. As a workaround, provision rear-cage SSDs either for both nodes that comprise a ZCF pair, or simultaneously for all nodes in the cluster. | |
HCP-37812 | 01945148, 01974470, 01920608 | The HCP system goes into a read-only state because of node rolls caused by the metadata manager failing to start. The system may also appear unstable.
Workaround: Reboot the HCP system. |
HCP-37851 | Starting with release HCP 8.2, all units of the systemd-tmpfiles service log error messages in /var/log/messages daily. The log messages are similar to the following:
systemd-tmpfiles[29354]: [/usr/lib/tmpfiles.d/mdadm.conf:1] Line references path below legacy directory /var/run/, updating /var/run/mdadm → /run/mdadm; please update the tmpfiles.d/ drop-in file accordingly. Initial investigation suggests that these messages cause no functional error symptoms in HCP. | |
HCP-37909 | Inline garbage collection (GC) on the DELETE API causes large latencies. This can result in data deletion when a DELETE operation overlaps with a PUT.
Workaround: Disable inline GC through a support call. | |
HCP-37935 | When a tenant is paused, the pending data shown for the replication link on the Overview page increases while troubleshooting replication progress. The increase in pending data is approximately the total size of the paused namespaces. | |
HCP-37980 | During ATR/migration from legacy HCP systems such as the HCP 500XL (end of support in November 2020), the finalize-migrate step to LUN 128 does not configure grub.cfg correctly. | |
HCP-38039 | Disk level resource metrics are not reported properly. | |
HCP-38045 | 01946520 | During replication, the replica JVM keeps rolling due to the missing root directory. |
HCP-38048 | The clearPolicyState service does not clear rows that have no matching external_file entries. | |
HCP-38050 | 02244326, 02554936, 02469184, 02680647, 01796940 | A failure to join Active Directory for authenticated CIFS was causing node rolls. The node rolls have been eliminated. |
HCP-38505 | HCP sends the correct error code, but is inconsistent with AWS in that the size check should occur earlier than it does. As a result, HCP sends a 400 error code rather than a 200 code during the keep-alive procedure. | |
HCP-38155 | 02090989 | Resetting advanced settings for an HCP S Series storage component does not work. |
HCP-38408 | 02155007 | ntpd tries to bind to the usb0 network interface on HCP 9.x G11 systems, causing time synchronization issues.
Workaround: On each node, prevent the driver from loading by denylisting it in /etc/modprobe.d/aos.conf (that is, append the following lines to /etc/modprobe.d/aos.conf): blacklist cdc_ether blacklist usbnet |
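As a sketch, the resulting drop-in file would contain exactly the two entries named in the workaround (path and driver names as given above; apply on each node and reboot or unload the modules for the change to take effect):

```
# /etc/modprobe.d/aos.conf
# Prevent the USB-ethernet gadget drivers from loading so that
# no usb0 interface is created for ntpd to bind to.
blacklist cdc_ether
blacklist usbnet
```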
HCP-38505 | When sending multipart upload requests with an incorrect part size, the HCP response code returned is 200, not 400. The error is returned in the response body. | |
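Until the response code is corrected, clients can validate part sizes before uploading so the misleading 200 response never arises. A minimal sketch, assuming the standard S3 multipart rule that every part except the last must be at least 5 MiB (the helper name and constant are illustrative, not part of HCP):

```python
MIN_PART_SIZE = 5 * 1024 * 1024  # standard S3 minimum for all parts except the last

def parts_are_valid(part_sizes):
    """Return True if every part except the last meets the minimum size."""
    return all(size >= MIN_PART_SIZE for size in part_sizes[:-1])

mib = 1024 * 1024
print(parts_are_valid([10 * mib, 10 * mib, 1 * mib]))  # True: only the last part is small
print(parts_are_valid([10 * mib, 1 * mib, 1 * mib]))   # False: undersized middle part
```

Checking sizes client-side mirrors where AWS performs the check, so applications behave consistently against both endpoints.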
HCP-38574 | When the SPOCC max threads limit is reached, nodes can roll. | |
HCP-39045 | Space occupied by old object versions is not freed by the Garbage Collection service if the object is in a replicated namespace and the replication link is suspended. If feasible, delete the replication link or remove the namespace from replication to work around the issue. | |
HCP-39465 | Objects cannot be deleted using the namespace browser when logged in as an anonymous user. Log in as an authenticated user to delete objects when using the namespace browser. | |
HCP-39798 | 02639142 | Solr does not create proper indexes when a user ingests custom metadata in a format other than pretty-formatted XML. As a result, annotations consisting of a single line of XML are not parsed properly during phrase searches. |
HCP-39876 | 02673882 | In a SAN-attached HCP environment, the storage addition procedure may fail, indicating that a device with the device-mapper name mpathb (or another mpath device) cannot be formatted. |
HCP-40026 | 02710878 | During an online upgrade procedure, simultaneously rebooting certain combinations of nodes (lowest node plus at least one other) can cause premature upgrade of metadata database tables. When the update script arrives later, expecting to upgrade the table, it finds it already done and exits. This issue has been corrected for upgrades from HCP 9.3.0 and later, but it can occur when upgrading to any release prior to and including HCP 9.3.0.
If this condition occurs, the symptom can be remediated by performing node recovery after the failed online upgrade. |
HCP-40505 | Manually started execution of a service is not persistent. It can be interrupted by the scheduled service or a node event such as a reboot. | |
HCP-40603 | 02551671 | In a SAN-attached environment, if a SAN path experiences a failure, HCP may incorrectly generate an alert that the internal volume has failed, even though the internal volumes are in a healthy state. If Hitachi Remote Ops is configured to monitor the cluster, this alert will also percolate to HRO. |
Accessing product documentation
Product user documentation is available on the Hitachi Vantara Support Website: https://knowledge.hitachivantara.com/Documents. Check this site for the most current documentation, including important updates that may have been made after the release of the product.
Getting help
The Hitachi Vantara Support Website is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to the Hitachi Vantara Support Website for contact information: https://support.hitachivantara.com/en_us/contact-us.html.
Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.
Comments
Please send us your comments on this document to doc.comments@hitachivantara.com. Include the document title and number, including the revision level (for example, -07), and refer to specific sections and paragraphs whenever possible. All comments become the property of Hitachi Vantara LLC.
Thank you!