
Content Platform 9.3.6 Release notes - Customer

About this document

This document contains release notes for Hitachi Content Platform 9.3.6.

Note: The HCP v9.3.2 software was not released. As a result, the corresponding HCP v9.3.2 release notes were not published.

Release highlights

The HCP 9.3.6 release is a maintenance release that resolves several software issues observed in prior HCP releases, addresses several CVE vulnerabilities, and adds support for the following hardware appliance items:

  • Support for Cisco N9K-C93180YC-FX3 back-end network switch
  • Support for Intel Dual Port 10GigE SFP+ Ethernet X710-DA2 network adapter for HCP G10 and G11 appliance systems

Upgrade notes

Upgrades to HCP 9.3.6 fail if any namespace has the HSwift protocol enabled. Disable the HSwift protocol on the relevant namespaces before upgrading. Support for the HSwift protocol ended with HCP 9.3.0.
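The readiness rule above can be expressed as a small pre-upgrade check. This is an illustrative sketch only: the namespace list and its hswift_enabled field are hypothetical stand-ins for whatever inventory you assemble from the Tenant Management Console or the HCP Management API, not an HCP data format.

```python
def namespaces_blocking_upgrade(namespaces):
    """Return the names of namespaces that would block an upgrade to 9.3.6.

    `namespaces` is a list of dicts such as {"name": "finance", "hswift_enabled": True};
    the dict shape is illustrative, not an HCP API response format.
    """
    return sorted(ns["name"] for ns in namespaces if ns.get("hswift_enabled"))


inventory = [
    {"name": "finance", "hswift_enabled": True},
    {"name": "archive", "hswift_enabled": False},
    {"name": "media"},  # protocol setting unknown; treated as disabled here
]
blockers = namespaces_blocking_upgrade(inventory)
# Any name in `blockers` must have HSwift disabled before the upgrade starts.
```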

If you try to replicate namespaces that have unbalanced directory mode enabled to a cluster running a release earlier than HCP 9.3.0, the affected tenant on the replication link is paused. The other tenants on the replication link continue to replicate. It is recommended that you upgrade all clusters in the replication topology to HCP 9.3.0 before using the unbalanced directory mode feature.

Upgrades to version 9.2.1 or later fail if any service plans exist that have SMTP enabled and use direct write to HCP S Series Nodes as the primary ingest tier. Modify these service plans before upgrading to version 9.2.1 or later. For more information, contact your authorized HCP service provider.

If you attempt to replicate objects that contain a labeled retention hold to a cluster running a release earlier than version 9.1, the affected tenant (with all its replicated namespaces) on the replication link is paused, while other tenants on the link continue to replicate. It is recommended that you upgrade all clusters in the replication topology to version 9.1 before using the labeled retention hold feature.

You can upgrade an HCP system to version 9.x only from version 8.x. You cannot downgrade HCP to an earlier version.

You must have at least 32 GB of RAM per node to use new software features introduced in HCP version 9.x. While you can upgrade an HCP system to version 9.x with a minimum of 12 GB of RAM per node and receive the patches and bug fixes associated with the upgrade, the system cannot use the new software features in the release. Inadequate RAM causes performance degradation and can negatively affect system stability. If you have less than 32 GB RAM per node and would like to upgrade to this release, contact your Hitachi Vantara account team.

HCP upgrades can occur with the system either online or offline. During an online upgrade, the system remains available to users and applications. Offline upgrades are faster than online upgrades, but the system is unavailable while the upgrade is in progress.

Note: During an online upgrade, data outages might occur as each node is upgraded. Whether data users are affected by an outage depends on the ingest-tier Data Protection Level (DPL) setting specified in the service plan assigned to the applicable namespace. No data is lost during a data outage, but users may experience some data-access interruptions.

Supported limits

HCP supports the limits listed in the following tables.

Hardware support limits
  • Maximum number of general access (G Series) Access Nodes: 80
  • Maximum number of HCP S Series Nodes: 80
Logical storage volumes

SAN-attached (SAIN) HDD systems
  • Maximum number of SAN logical storage volumes per storage node: 63
  • Maximum logical volume size for SAN LUNs: 15.999 TB
Internal storage (RAIN) HDD systems
  • Maximum number of logical storage volumes per storage node: 4
  • Maximum logical volume size on internal drives: HDD capacity dependent
All-SSD systems (internal storage or SAN-attached)
  • Number of SSDs per storage node: 12 (front-cage only)
  • Maximum logical volume size on internal drives: SSD capacity dependent
  • Maximum number of SAN logical storage volumes per storage node (when SAN is attached to the system): 63
  • Maximum logical volume size for SAN LUNs (when SAN is attached to the system): 15.999 TB
HCP VM systems — VMware ESXi
  • Maximum number of logical storage volumes per VM storage node: 1 OS LUN, 59 data LUNs
  • Maximum logical volume size: 15.999 TB
HCP VM systems — KVM
  • Maximum number of logical storage volumes per VM storage node: 1 OS LUN. Data LUNs are limited by the number of device slots available for LUNs in the VirtIO-blk para-virtualized storage back end, which depends on the number of other devices configured for the guest OS that also use the VirtIO-blk back end. In a typical HCP configuration, 17 slots are available.
  • Maximum logical volume size: 15.999 TB (OS LUN)
Data storage
  • Maximum active erasure coding topologies: 1
  • Maximum erasure coding topology size: 6 (5+1) sites
  • Minimum erasure coding topology size: 3 (2+1) sites
  • Maximum total erasure coding topologies: 5
  • Maximum number of objects per storage node: 800,000,000 with standard (non-SSD) disks for indexes; 1,250,000,000 with SSDs for indexes
  • Maximum number of objects per HCP system: 64,000,000,000 (80 nodes times 800,000,000 objects per node); if using 1.9 TB SSD drives, 100,000,000,000 (80 nodes times 1,250,000,000 objects per node)
  • Maximum number of directories per node if one or more namespaces are not optimized for cloud: 1,500,000
  • Maximum number of directories per node if all namespaces are optimized for cloud: 15,000,000
  • Maximum number of objects per directory, by namespace type:
    • HCP namespaces with the unbalanced directory setting: no restriction
    • HCP namespaces with the balanced directory setting: 30,000,000
  • Maximum object size by protocol:
    • HTTP: about 2 TB (2,194,719,883,008 bytes)
    • Hitachi API for Amazon S3, without multipart upload: about 2 TB (2,194,719,883,008 bytes)
    • Hitachi API for Amazon S3, with multipart upload: 5 TB
    • WebDAV: about 2 TB (2,194,719,883,008 bytes)
    • CIFS: 100 GB
    • NFS: 100 GB
  • Hitachi API for Amazon S3, minimum size for parts in a complete multipart upload request (except the last part): 1 MB
  • Hitachi API for Amazon S3, maximum part size for multipart upload: 5 GB
  • Hitachi API for Amazon S3, maximum number of parts per multipart upload: 10,000
  • Maximum number of replication links: 20 inbound, 5 outbound
  • Maximum number of tenants: 1,000
  • Maximum number of namespaces: 10,000
  • Maximum number of namespaces with the CIFS or NFS protocol enabled: 50
User groups and accounts
  • Maximum number of system-level user accounts per HCP system: 10,000
  • Maximum number of system-level group accounts per HCP system: 100
  • Maximum number of tenant-level user accounts per tenant: 10,000
  • Maximum number of tenant-level group accounts per tenant: 100
  • Maximum number of users in a username mapping file (default tenants only): 1,000
  • Maximum number of SSO-enabled namespaces: ~1,200 (SPN limit in Active Directory)
Custom metadata
  • Maximum number of annotations per object: 10
  • Maximum non-default annotation size with XML checking enabled: 1 MB
  • Maximum default annotation size with XML checking enabled: 1 GB
  • Maximum annotation size (default and non-default) with XML checking disabled: 1 GB
  • Maximum number of XML elements per annotation: 10,000
  • Maximum level of nested XML elements in an annotation: 100
  • Maximum number of characters in the name of a custom metadata annotation: 32
  • Maximum form size in POST object upload: 1,000,000 bytes
  • Maximum custom metadata size in POST object upload: 2 KB
Access control lists
  • Maximum size of access control entries per ACL: 1,000 MB
Metadata query engine
  • Maximum number of content classes per tenant: 25
  • Maximum number of content properties per content class: 100
  • Maximum number of concurrent metadata query API queries per node: 5
Network
  • Maximum number of user-defined networks (virtual networks) per HCP system: 200
  • Maximum number of downstream DNS servers: 32
  • Maximum number of certificates and CSRs per domain: 10
Storage tiering
  • Maximum number of storage components: 100
  • Maximum number of storage pools: 100
  • Maximum number of tiers in a service plan: 5
Miscellaneous
  • Maximum number of HTTP connections per node: 255
  • Maximum number of SMTP connections per node: 100
  • Maximum number of attachments per email for SMTP: 50
  • Maximum aggregate email attachment size for SMTP: 500 MB
  • Maximum number of access control entries in an ACL: 1,000
  • Maximum number of labeled retention holds per object: 100
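To make the multipart-upload limits above concrete, the following sketch picks a part size that satisfies them for a given object size. The 64 MiB starting part size and the doubling strategy are arbitrary illustrative choices; the limits themselves (1 MB minimum part, 5 GB maximum part, 10,000 parts, 5 TB maximum object) come from the table, interpreted here in binary units.

```python
MIB = 1024 ** 2
GIB = 1024 ** 3
TIB = 1024 ** 4

MIN_PART = 1 * MIB      # minimum size for all parts except the last
MAX_PART = 5 * GIB      # maximum part size
MAX_PARTS = 10_000      # maximum number of parts per multipart upload
MAX_OBJECT = 5 * TIB    # maximum object size with multipart upload

def plan_multipart(object_size, part_size=64 * MIB):
    """Return a (part_size, part_count) pair that satisfies the limits above."""
    if not 0 < object_size <= MAX_OBJECT:
        raise ValueError("object size outside the multipart upload limits")
    # Double the part size until the object fits within 10,000 parts.
    while object_size > part_size * MAX_PARTS:
        part_size *= 2
    if not MIN_PART <= part_size <= MAX_PART:
        raise ValueError("no valid part size for this object")
    part_count = -(-object_size // part_size)  # ceiling division
    return part_size, part_count
```

For example, a 10 GiB object fits in 160 parts at the default 64 MiB part size, while a 5 TiB object forces the part size up to 1 GiB to stay within 10,000 parts.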

Supported clients and platforms

The following sections list clients and platforms that are qualified for use with HCP.

Windows clients

These Microsoft® Windows® 32-bit or 64-bit clients are qualified for use with the HTTP v1.1, WebDAV, and CIFS protocols and with the Hitachi API for Amazon S3:

  • Windows 7
  • Windows 8
  • Windows 10
  • Windows Server 2012 R2 (Standard and Data Center editions)
  • Windows Server 2016 (Standard and Data Center editions)
Note: Using the WebDAV protocol to mount a namespace as a Windows share can have unexpected results and is therefore not recommended.

Unix clients

These Unix clients are qualified for use with the HTTP v1.1, WebDAV, and NFS v3 protocols and with the Hitachi API for Amazon S3:

  • HP-UX 11i v3 (11.31) on Itanium
  • HP-UX 11i v3 (11.31) on PA-RISC
  • AIX 7.1
  • Red Hat Enterprise Linux ES 6.10 and 7.0
Note: HCP does not support the NFS v4 protocol.


The following web browsers are qualified for use with the HCP System Management, Tenant Management, and Search Consoles and the Namespace Browser. Other browsers or versions may also work.

  • Internet Explorer® 11*
  • Mozilla Firefox®: Red Hat Enterprise Linux, Sun Solaris
  • Google Chrome®: Red Hat Enterprise Linux, Sun Solaris

*The Consoles and Namespace Browser work in Internet Explorer only if ActiveX is enabled. Also, the Consoles work only if the security level is not set to high.

Note: Internet Explorer compatibility view mode may work, but is not supported by HCP.

Note: To correctly display the System Management Console, Tenant Management Console, and Namespace Browser, the browser window must be at least 1,024 pixels wide by 768 pixels high.

Client operating systems for HCP Data Migrator

These client operating systems are qualified for use with HCP Data Migrator.

Note: HCP Data Migrator was deprecated in release 9.2.1. Support will be discontinued in a future release of HCP. In HCP 9.3.0, HCP Data Migrator was removed from the Tenant Management Console. If you need HCP Data Migrator, contact Hitachi Vantara Support.
  • Microsoft 32-bit Windows:
    • Windows XP Professional
    • Windows 2003 R2 (Standard and Enterprise Server editions)
    • Windows 2008 R2 (Standard and Enterprise Server editions)
    • Windows 7
    • Windows 8
    • Windows 2012 (Standard and Datacenter editions)
  • HP-UX 11i v3 (11.31) on Itanium
  • HP-UX 11i v3 (11.31) on PA-RISC
  • IBM AIX 7.1
  • Red Hat Enterprise Linux ES 5 (32-bit)
  • Red Hat Enterprise Linux ES 6.10 and 7.0 (64-bit)
  • Sun Solaris 10 SPARC
  • Sun Solaris 11 SPARC
Note: The Oracle Java Runtime Environment (JRE) version 7 update 6 or later must be installed on the client.

Platforms for HCP VM

HCP VM runs on these platforms:

  • VMware ESXi 6.5 U1 and U2
  • VMware ESXi 6.7 U1, U2, and U3
  • VMware ESXi 7.0 (qualified on hardware version 17)
  • VMware vSAN 6.6
  • VMware vSAN 6.7
  • VMware vSAN 7.0
  • KVM — qualified on CentOS 7 and Fedora Core 29. For relevant support, configuration, installation, and usage information, see Deploying an HCP-VM System on KVM (MK-94HCP009-06).

Third-party integrations

The following third-party applications have been tested and proven to work with HCP. Hitachi Vantara does not endorse any of the applications listed below, nor does Hitachi Vantara perform ongoing qualification with subsequent releases of the applications or HCP. Use these and other third-party applications at your own risk.

Hitachi API for Amazon S3 tools

These tools are qualified for use with the Hitachi API for Amazon S3:

  • CloudBerry Explorer (does not support multipart upload)
  • CloudBerry Explorer PRO (for HCP multipart upload, requires using an Amazon S3 compatible account instead of an HCP account; for CloudBerry internal chunking, requires versioning to be enabled on the target bucket)
  • S3 Curl
  • S3 Browser

Mail servers

These mail servers are qualified for use with the SMTP protocol:

  • Microsoft Exchange 2010 (64 bit)
  • Microsoft Exchange 2013
  • Microsoft Exchange 2016

NDMP backup applications

These NDMP backup applications are qualified for use with HCP:

  • Hitachi Data Protection Suite 8.0 SP4 (CommVault® Simpana® 8.0)
  • Symantec® NetBackup® 7 — To use NetBackup with an HCP system:
    • Configure NDMP to require user authentication (that is, select either the Allow username/pwd authenticated operations or Allow digest authenticated operations option in the NDMP protocol panel for the default namespace in the Tenant Management Console).
    • Configure NetBackup to send the following directive with the list of backup paths:
      set TYPE=openPGP

Windows Active Directory

HCP is compatible with Active Directory on servers running Windows Server 2012 R2 or Windows Server 2016. In either case, all domain controllers in the forest HCP uses for user authentication must minimally be at the 2012 R2 functional level.

RADIUS protocols

HCP supports the following RADIUS protocols:

  • CHAP
  • EAPMD5
  • MSCHAPv2
  • PAP

Supported hardware

The following sections list hardware that is supported for use in HCP systems.

Note: The lists of supported hardware are subject to change without notice. For the most recent information on supported hardware, contact your HCP sales representative.

Supported servers

These servers are supported for HCP systems with internal storage:

  • HCP G11 (D52BQ-2U)
  • HCP G10 (D51B-2U)

These servers are supported for HCP SAN-attached systems with internal storage:

  • HCP G11 (D52BQ-2U)
  • HCP G10 (D51B-2U)

Server memory

At least 32 GB of RAM per node is needed to use new software features introduced in HCP 9.x. An HCP system can be upgraded to version 9.x with a minimum of 12 GB of RAM per node, and receive the patches and bug fixes that come with the upgrade, but the system cannot use the new software features. Inadequate RAM causes performance degradation and can negatively affect system stability.

If you have less than 32 GB RAM per node and would like to upgrade to HCP 9.x, contact your Hitachi Vantara account team.

Supported storage platforms

These storage platforms are supported for HCP SAIN systems:

  • Hitachi Virtual Storage Platform
  • Hitachi Virtual Storage Platform G200
  • Hitachi Virtual Storage Platform G400
  • Hitachi Virtual Storage Platform G600
  • Hitachi Virtual Storage Platform G1000
  • Hitachi Virtual Storage Platform G1500
  • Hitachi Virtual Storage Platform 5100
  • Hitachi Virtual Storage Platform 5100H
  • Hitachi Virtual Storage Platform 5200
  • Hitachi Virtual Storage Platform 5200H
  • Hitachi Virtual Storage Platform 5500
  • Hitachi Virtual Storage Platform 5500H
  • Hitachi Virtual Storage Platform 5600
  • Hitachi Virtual Storage Platform 5600H
  • Hitachi Virtual Storage Platform E590
  • Hitachi Virtual Storage Platform E790
  • Hitachi Virtual Storage Platform E990
  • Hitachi Virtual Storage Platform E1090

Supported back-end network switches

The following back-end network switches are supported in HCP systems:

  • Alaxala AX2430
  • Arista 7020SR-24C2-R
  • Cisco® Nexus® 3K-C31128PQ-10GE
  • Cisco® Nexus® 3K-C31108PC-V
  • Cisco® Nexus® 5548UP
  • Cisco® Nexus® 93180YC-FX
  • Cisco® N9K-C93180YC-FX3
  • Cisco® 5596UP
  • Dell PowerConnect 2824
  • ExtremeSwitching VDX® 6740
  • ExtremeSwitching 210
  • ExtremeSwitching 6720 - SAIN systems only
  • HP 4208VL
  • Ruckus ICX® 6430-24
  • Ruckus ICX® 6430-24P HPOE
  • Ruckus ICX® 430-48

Supported Fibre Channel switches

The following Fibre Channel switches are supported for HCP SAIN systems:

  • Brocade 5120
  • Brocade 6510
  • Cisco MDS 9134
  • Cisco MDS 9148
  • Cisco MDS 9148S

Supported Fibre Channel host bus adapters

These Fibre Channel host bus adapters (HBAs) are supported for HCP SAIN systems:

  • Emulex® LPe 32002-M2-Lightpulse

    (for supported firmware and boot BIOS versions, refer to the G11 Hardware Tool set)

  • Emulex® LPe 11002-M4

    (firmware version 2.82a4, boot BIOS 2.02a1)

  • Emulex® LPe 12002-M8

    (firmware version 1.10a5, boot BIOS 2.02a2)

  • Emulex® LPe 12002-M8 (GQ-CC-7822-Y)

    (firmware version 1.10a5, boot BIOS 2.02a2)

  • Hitachi FIVE-EX 8Gbps

    (firmware version

Issues resolved

Issues resolved in this release

The following table lists the issues resolved in HCP 9.3.6.

  • HCP-42996: HCP updated OpenSSL to 1.1.1n. For a list of CVEs resolved by this change, see CVE Records resolved in this release.
  • HCP-43029: Resolved a BadDigest error-reporting issue in the logs when using S3 to delete an object containing a long dash in its name.
  • Resolved a node-roll issue caused by a file system becoming full due to excessive SPOCC logging.
  • HCP-43032 (SR 03208123): Resolved a CIFS connection-termination issue when copying large files on a CIFS share.
  • HCP-43033 (SR 02874449): Resolved an issue where an incorrect namespace FQDN was reported when a VLAN was used on a cluster.
  • HCP-43034: Resolved a Postgres log error-reporting issue when the Duplicate Elimination service runs.
  • HCP-43048: Resolved an issue where the database volume could remain in the "initializing" state when executing an SSD volume expansion service procedure.

CVE Records resolved in this release

The HCP 9.3.6 release resolves the following CVEs, in addition to resolving several additional security weaknesses not associated with these CVEs:

  • CVE-2022-0778 (HCP-42996): The BN_mod_sqrt() function, which computes a modular square root, contains a bug that can cause it to loop forever for non-prime moduli.
  • CVE-2021-3711 (HCP-42996): A bug in the implementation of the SM2 decryption code means that the calculation of the buffer size required to hold the plaintext returned by the first call to EVP_PKEY_decrypt() can be smaller than the actual size required by the second call. This can lead to a buffer overflow when EVP_PKEY_decrypt() is called by the application a second time with a buffer that is too small.
  • CVE-2021-3712 (HCP-42996): If an application requests the printing of an ASN.1 structure that contains ASN1_STRINGs that have been directly constructed by the application without NUL terminating the "data" field, a read buffer overrun can occur.

Compatibility issues introduced in HCP 8.2 or later

The following table lists the compatibility issues introduced in HCP v8.2 or later. The issues are listed in ascending order by reference number.

  • In HCP v8.2, the HCP software was upgraded to Jetty v9. The upgrade introduces several security enhancements that might impact some deployments:
    • HCP no longer supports the SSL v1, v2, and v3 protocols.
    • HCP conforms more closely to RFC 7230 and no longer allows header folding.
  • HCP-33583 (introduced in HCP v8.2): HCP now requires that the x-amz-date header value is within 15 minutes of when HCP receives the Hitachi API for Amazon S3 request.
  • HCP-33672 (introduced in HCP v8.2): HCP now validates x-amz-date headers on appropriate Hitachi API for Amazon S3 requests.
  • HCP-35286 (introduced in HCP v8.1): HCP now sends the severity of event IDs/messages, such as NOTICE, WARNING, or ERROR, to syslog servers.
  • HCP-37063 (introduced in HCP v8.2): The use case of a namespace with SMTP enabled writing directly to an HCP S Series Node is no longer supported.
  • HCP-37858 (introduced in HCP v9.1): The use case of a namespace with SMTP enabled writing directly to an HCP S Series Node is no longer supported.
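The 15-minute x-amz-date window introduced by HCP-33583 can be checked client-side before a request is sent. A minimal sketch, assuming the header uses the ISO 8601 basic format (for example, 20240101T120000Z) that S3-style requests normally carry:

```python
from datetime import datetime, timedelta, timezone

ALLOWED_SKEW = timedelta(minutes=15)

def amz_date_acceptable(header_value, now=None):
    """Return True if an x-amz-date value is within 15 minutes of `now`."""
    ts = datetime.strptime(header_value, "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return abs(now - ts) <= ALLOWED_SKEW
```

Requests from hosts whose clocks drift beyond this window are rejected, so clients sending Hitachi API for Amazon S3 traffic should keep their clocks synchronized (for example, with NTP).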

Known issues

The following table lists the known issues in the current release of HCP. The issues are listed in order by reference number. Where applicable, the service request (SR) number is also shown.

  • HCP-43284: In some circumstances, an offline upgrade might fail because the HCP shutdown process cannot unmount an encrypted archive volume. If this failure occurs, consult Hitachi Vantara Support. An offline upgrade failure adds the following two lines to the HCP logs:

     Standard ERROR for 'dmsetup remove --force archive001-crypt':

       device-mapper: remove ioctl on archive001-crypt failed: Device or resource busy
  • HCP-41176 (SR 03131048, 03141801, 03171024): HCP running on a G11 server can raise a false-positive alert about the power supply, CPU, or disk drives.
  • HCP-40505: Manually started execution of a service is not persistent; it can be interrupted by the scheduled service or by a node event such as a reboot.
  • HCP-39876 (SR 02673882): In a SAN-attached HCP environment, the storage addition procedure may fail, reporting that a device-mapper device (for example, mpathb) cannot be formatted.
  • HCP-39798 (SR 02639142): Solr does not create proper indexes when a user ingests custom metadata in a format other than pretty-printed XML. As a result, annotations with a single line of XML are not parsed properly during phrase searches.
  • HCP-39465: Objects cannot be deleted using the Namespace Browser when logged in as an anonymous user. Log in as an authenticated user to delete objects when using the Namespace Browser.
  • HCP-39045: Space occupied by old object versions is not freed by the Garbage Collection service if the object is in a replicated namespace and the replication link is suspended. If feasible, delete the replication link or remove the namespace from replication to work around the issue.
  • HCP-38505: HCP appears to send the correct error code, but is inconsistent with AWS in that the size check should occur earlier than it does. As a result, HCP sends a 400 error code rather than a 200 error code during the keep-alive procedure.
  • HCP-38408 (SR 02155007): ntpd tries to bind to the usb0 network interface on HCP 9.x G11 systems, causing time synchronization issues.

    Workaround: On each node, prevent the driver from loading by adding the following lines to /etc/modprobe.d/aos.conf:

    blacklist cdc_ether
    blacklist usbnet
  • HCP-38155 (SR 02090989): Resetting advanced settings for an HCP S Series storage component does not work.
  • HCP-38048: The clearPolicyState service does not clear rows that have no matching external_file entries.
  • HCP-37935: While troubleshooting replication progress, the pending data shown for the replication link on the Overview page increases when a tenant is paused. The increase in pending data is similar to the total size of the paused namespaces.
  • HCP-37851: Starting with release HCP 8.2, all units of the systemd-tmpfiles service log error messages in /var/log/messages daily. The log messages are similar to the following:

    systemd-tmpfiles[29354]: [/usr/lib/tmpfiles.d/mdadm.conf:1] Line references path below legacy directory /var/run/, updating /var/run/mdadm → /run/mdadm; please update the tmpfiles.d/ drop-in file accordingly.

    Initial investigation suggests that these error messages cause no functional error symptoms in HCP.

  • HCP-37810: When provisioning rear-cage SSDs to the HCP cluster on a subset of nodes in a SAN-attached G10 or G11 configuration, the service procedure tries to add rear-cage SSDs on both nodes that comprise a Zero-Copy-Failover (ZCF) pair, even if one of those nodes has no rear-cage SSDs to be provisioned. This causes an error in the service procedure. As a workaround, provision rear-cage SSDs either for both nodes that comprise a ZCF pair, or simultaneously for all nodes in the cluster.
  • HCP-37778: After an upgrade of an HCP system completes, the System Management Console Hardware page may display Initializing status for some of the logical volumes. This is the result of the device SMART error log containing records of errors. Contact Hitachi Vantara technical support to identify the error condition and the corrective action to resolve the symptom.
  • HCP-37754: HCP installed in an ESXi environment may display the following FSTRIM error message on the System Management Console: "Failure encountered attempting to trim volumes on nodes:", and an error with event ID 2818 is listed in the error log under Major Events. Contact Hitachi Vantara customer support if you encounter this error message.

  • HCP-37753: The HCP system goes into a read-only state because of node rolls caused by the metadata manager not starting up. The system can even appear unstable.

    Workaround: Reboot the system.

  • HCP-37696 (SR 01612339): MQE shard/Solr core balancing does not function as desired for IPL=2 and causes incomplete query results.
  • ATR Finalize Migration fails with "No space left on device". This affects HCP500 systems that boot from SAN while replacing one storage system with another. To complete migration, arc-deploy tries to copy from LUN #0 to LUN #128; on older HCP500 systems, the /boot partition is only 128 MB in size.
  • HCP-37426: Attempting to perform DELETE and PUT-copy simultaneously on an object results in a "Non-replicating irreparable objects detected" error message in the HCP System Management Console.
  • HCP-37381: A race condition in the HS3 protocol allowed both a directory and a file object to be created with the same pathname. Unlike AWS, HCP has a concept of directories, so an upper-level directory cannot also be a file.
  • HCP-37342: An unexpected duplicate row in the per-object metadata table causes node outages until the duplicate row is removed.

  • The HCP product installation procedure may fail with the following error message if a USB drive or external DVD drive is connected to the system when running the installation wizard:

    umount: /dev/sr0: umount failed: Invalid argument.

    This may occur in both VM and appliance configurations. Disconnect all unnecessary USB drives and external DVD drives from the system, and retry the installation procedure.

  • HCP-37247: HCP systems running version 8.2 and later may experience network interface flapping and resetting of network adapters. This issue may be caused by a low-level defect in the kernel that causes a network interface to stop transmitting for several seconds, which leads to the interface resetting itself and self-recovering. In active-backup network interface configurations, this leads to a network interface failover within the corresponding front-end or back-end network bond. There is no noticeable impact to clients during this very short time interval.
  • SNMP returns an incorrect replication link name.

    Workaround: Use the HCP Management API to return the correct replication link name.

  • HCP-36744: In rare circumstances, when the HCP G11 operating system is installed on a node, the installation process may hang while making file systems. This has typically been observed in SAN-attached configurations. The symptom occurs when HCP G11 detects what appears to be an existing file system on the volume; the file system creation command then waits for user input, but the prompt output by that command is not displayed on the console. If you are certain that the file system formatting procedure can continue (that is, the volumes are mapped correctly and all data on the volume can be destroyed), type yes and press Enter, which should allow the procedure to continue.
  • HCP-36632 (SR 01547564): Multipart upload fails in the FileOpenForWriteIndex.suspendAndSwap function and returns an "Attempt to suspend and swap a multipart upload file handle" error.
  • HCP-36001 (SR 01410508): Node recovery during an online upgrade procedure targets a healthy node.
  • HCP-35089 (SR 01426836): Zero-copy failover failback might leave behind stale mount points.
  • HCP-35027 (SR 01415199): Migration finalization might time out and require a restart.



  • Policy state of over 1 million objects causes node reboots.
  • In the HCP Search Console UI, the login ID changes to null and a subsequent search returns "500 Error: Internal server error".

    When you open the Tenant Management Console from the System Management Console, initiate a search by logging in to the Search Console with your system-level credentials, and either refresh the page or click the search button, the following events occur:

    • You are returned to the login page.
    • The login ID changes to null.

    If you log in to the Search Console again with your tenant-level credentials and initiate a search, the query returns the following error message:

    500 Error: Internal server error

    Workaround: Depending on the circumstances that led to this error, complete the first or both of the following steps:

    1. On the Security page of the System Management Console and Tenant Management Console, keep the "Log users out if inactive for more than" value the same.
    2. If you initiated a search and then refreshed the page before the results were displayed, clear the cookies in your browser. Then log in to the Search Console again with your tenant-level credentials.

  • HCP-34764 (SR 01309564): After disabling CIFS on an HCP namespace, the Windows client connection remains active, and objects are written to the root (/) file system.



  • Overflowed, thin-provisioned block storage might cause data loss.

    Workaround: Do not overprovision dynamic pools.

  • HCP-34515 (SR 01312806): The majority of the capacity of the /var file system is consumed by log downloads.
  • When a zero-copy-failover partner node reboots after a failover, the metadata query engine does not recover.

    Workaround: Edit the following files:

    • In the /opt/arc/solr/solr/solr.xml file, add the shards that are on the standby volumes.
    • In the /opt/arc/solr/solr/cores file, create symlinks that point to the shards on the standby volumes.

  • HCP-34207: Faulty SSD drives can cause a failure when adding a new SSD volume to HCP.
  • HCP-34203: Capacity calculations and UI display are inconsistent between HCP and HCP S Series Nodes.
  • HCP-33980: Some metadata headers are processed inconsistently between AWS S3 and HCP.
  • HCP-33541: The active/passive replication link schedule does not adjust for systems located in different time zones.
  • HCP-32957: The metadata query engine with the sort option causes the Apache Solr Java Virtual Machine to run out of memory.

  • The Delete Old Database procedure hangs. When administering namespaces with 100,000 objects or more, the Delete Old Database procedure is known to run indefinitely and display #, even though the deletion has completed.
  • HCP-32555 (SR 00294339): The watchdog timer causes a premature soft lockup panic.
  • HCP-32486: The Active Directory allowlist filter is removed when the HCP System Management Console fails to update settings.
  • HCP-32164: The name of an HCP S Series component cannot be changed in the HCP System Management Console.
  • HCP-32018: Migration hangs and produces inconsistent status information.

  • System restart fails after changing the management network configuration. The HCP system should restart each time a change is made to the management network configuration, but after the management network is enabled for the first time, the HCP system does not restart for subsequent changes to the management network configuration.


  • Case sensitivity is inconsistent among the query parameters used with Hitachi API for Amazon S3 requests related to multipart uploads. For example, the uploadId query parameter used in requests to upload a part is not case sensitive, while the uploadId query parameter used in requests to list the parts of a multipart upload, or to complete or abort a multipart upload, is case sensitive.


System restart due to unavailable node not receiving management network IP address

If a node is unavailable when the management network is enabled, the node does not receive the management network IP address. If any other change is made to the management network, the HCP system shuts down so the node can receive the management network IP address.

Workaround: Only enable the management network when all nodes are available.


Links in a geo-protection replication topology can be added to a replication chain

Geo-protection replication chains are not supported. If a system in the geo-protection replication topology becomes unavailable, the geo-protected systems outside of the topology could experience data unavailability.


Tar gzip compressed objects fail MD5 check due to Firefox browser issue

Tar gzip compressed objects downloaded from HCP through the Firefox browser fail the MD5 check.

HCP-31112: Objects are left in "VALID, UNREPLICATABLE_OPEN" state and cannot be cleaned up by running garbage collection

DNS failover fails due to domain name change in active/passive replication link

If a system in an active/passive replication link has its domain name changed, the replica system does not receive the updated domain name, which causes DNS failover to fail.

Workaround: After you change the domain name for the primary system, update any setting on the tenant overview page to replicate the new domain name.


HS3 500 Internal Server Error due to double slash (//) in object name

If an object whose name contains a double slash (//) is ingested using HS3, HCP returns an HTTP 500 Internal Server Error.
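One way to avoid triggering the error is to normalize object names on the client before ingest. The collapsing rule below is an assumption for illustration, not part of HCP or the HS3 API:

```python
import re

def normalize_object_key(key: str) -> str:
    """Collapse runs of two or more slashes into a single slash so the
    object name never contains the double slash (//) that triggers the
    HTTP 500 error described above."""
    return re.sub(r"/{2,}", "/", key)
```

For example, `normalize_object_key("dir//sub///file.txt")` returns `"dir/sub/file.txt"`, while names without repeated slashes pass through unchanged.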


Namespace browser cannot load directory due to ASCII characters in object name

The namespace browser cannot display the contents of a directory that contains an object with any of the following ASCII characters (shown percent-encoded) in its name: %00-%0F, %10-%1F, or %20.
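Clients that depend on the namespace browser can screen names before ingest. A minimal sketch based on the ranges above (%00 through %1F are the ASCII control characters and %20 is the space character); the function name is illustrative:

```python
def browser_safe_name(name: str) -> bool:
    """Return True if the name contains none of the ASCII characters
    %00-%1F (control characters) or %20 (space) that prevent the
    namespace browser from displaying the containing directory."""
    return all(ord(ch) > 0x20 for ch in name)
```

For example, `browser_safe_name("report-2024.txt")` is true, while a name containing a space or a control character fails the check.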


AD falsely reports missing SPNs due to replication topology with tenants or namespaces on a custom network

In a replication topology where systems have full SSO support, HCP may incorrectly report missing SPN errors for replicating tenants and namespaces that are using a custom network with a non-default domain name.

HCP-29301: Database connections exhausted

On high-load HCP systems that are balancing metadata, nodes can restart due to exceeding the database connection limit.


While the Migration service is running, the migration status occasionally shows incorrect values

Occasionally while the Migration service is running, the migration status values for the total number of bytes being migrated and the total number of objects being migrated are incorrect. This occurs regardless of how many bytes or objects are actually migrated. Once the migration completes, the migration status values become accurate.


SNMP version 2c traps sent for version 3 traps

HCP can be configured to use SNMP version 3. However, when configured this way, HCP sends version 2c traps instead of the expected version 3 traps.

Workaround: To receive traps from HCP, have your SNMP application accept SNMP version 2c traps.


Shredding in SAIN systems

In SAIN systems, HCP may not effectively execute all three passes of the shredding algorithm when shredding objects because some storage systems make extensive use of disk caching. Depending on the particular hardware configuration and the current load on the system, some of the writes from the shredding algorithm may never make it from the cache to disk.


Displaying UTF-16-encoded objects

Objects with content that uses UTF-16 character encoding may not be displayed as expected due to the limitations of some browser and operating system combinations. Regardless of the appearance on the screen, the object content HCP returns is guaranteed to be identical to the data before it was stored.

Accessing product documentation

Product user documentation is available on the Hitachi Vantara Support Website. Check this site for the most current documentation, including important updates that may have been made after the release of the product.

Getting help

The Hitachi Vantara Support Website is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to the Hitachi Vantara Support Website for contact information.

Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to the Hitachi Vantara Community website, register, and complete your profile.