
Content Platform 9.3.3 Release notes - Customer

About this document

This document contains release notes for Hitachi Content Platform 9.3.3.

Note: The HCP v9.3.2 software was not released. As a result, the corresponding HCP v9.3.2 release notes were not published.

Release highlights

The HCP 9.3.3 release is a maintenance release that introduces support for new hardware and software and resolves several issues observed in previous HCP releases.

Support for new SSDs

The HCP 9.3.3 release adds support for the following SSDs:

  • 1.92 TB front-cage disks and rear-cage disks for G10 and G11: 1HDDZ9Z0316.P 1.92TB SATA 6Gbps 1.3DWPD SFF SSD PM893
  • 3.84 TB front-cage disks for G10 and G11: 1HDDZBZ009M.P 3.84TB SATA 6Gbps 1.3DWPD SFF SSD PM893

Support for Cisco Nexus 93180YC-FX 48x10GbE (SFP+) Switch

The HCP 9.3.3 release supports the Cisco Nexus 93180YC-FX 48x10GbE (SFP+) switch. The switch firmware must be version nxos.9.3.8. If the firmware is not at this version, see the Maintaining HCP System Hardware documentation: https://knowledge.hitachivantara.com/Documents/Storage/Content_Platform

Qualification of HCP VM with KVM on CentOS 7.X

The HCP 9.3.3 release qualifies HCP VM with KVM on CentOS 7.x. The manufacturer string must be overwritten in the VM's configuration. For instructions on how to perform this procedure, see the Deploying an HCP VM System on KVM documentation: https://knowledge.hitachivantara.com/Documents/Storage/Content_Platform
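With a libvirt-based KVM setup, overriding the manufacturer string is typically done in the domain XML through an SMBIOS sysinfo block. The fragment below is a sketch only; the placeholder value and the surrounding domain definition are assumptions, and the exact string HCP expects is given in the Deploying an HCP VM System on KVM documentation.

```xml
<!-- Illustrative libvirt domain fragment (not HCP-authoritative).      -->
<!-- The manufacturer value below is a placeholder; use the value from  -->
<!-- the Deploying an HCP VM System on KVM guide.                       -->
<domain type='kvm'>
  <os>
    <type arch='x86_64'>hvm</type>
    <!-- Expose the sysinfo block below to the guest through SMBIOS -->
    <smbios mode='sysinfo'/>
  </os>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>PLACEHOLDER-MANUFACTURER</entry>
    </system>
  </sysinfo>
</domain>
```

After editing the domain XML (for example, with `virsh edit`), the guest must be fully powered off and restarted for the SMBIOS change to take effect.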

Upgrade notes

Upgrades to HCP 9.3.3 fail if any namespace has the HSwift protocol enabled. Disable the HSwift protocol on the relevant namespaces before upgrading. Support for the HSwift protocol ended with HCP 9.3.0.

If you try to replicate namespaces that have unbalanced directory mode enabled to a cluster running a release earlier than HCP 9.3.0, the affected tenant on the replication link is paused. The other tenants on the replication link continue to replicate. It is recommended that you upgrade all clusters in the replication topology to HCP 9.3.0 before using the unbalanced directory mode feature.

Upgrades to version 9.2.1 or later fail if any service plans exist that have SMTP enabled and use direct write to HCP S Series Nodes as the primary ingest tier. Modify these service plans before upgrading to version 9.2.1 or later. For more information, contact your authorized HCP service provider.

If you attempt to replicate objects that contain a labeled retention hold to a cluster running a release earlier than version 9.1, the affected tenant (and all of its replicated namespaces) on the replication link is paused, while other tenants on the link continue to replicate. It is therefore recommended that you upgrade all clusters in the replication topology to version 9.1 before using the labeled retention hold feature.

You can upgrade an HCP system to version 9.x only from version 8.x. You cannot downgrade HCP to an earlier version.

You must have at least 32 GB of RAM per node to use new software features introduced in HCP version 9.x. While you can upgrade an HCP system to version 9.x with a minimum of 12 GB of RAM per node and receive the patches and bug fixes associated with the upgrade, the system cannot use the new software features in the release. Inadequate RAM causes performance degradation and can negatively affect system stability. If you have less than 32 GB RAM per node and would like to upgrade to this release, contact your Hitachi Vantara account team.

HCP upgrades can occur with the system either online or offline. During an online upgrade, the system remains available to users and applications. Offline upgrades are faster than online upgrades, but the system is unavailable while the upgrade is in progress.

Note: During an online upgrade, data outages might occur as each node is upgraded. Whether data users are affected by an outage depends on the ingest tier Data Protection Level (DPL) setting specified in the service plan that's assigned to the applicable namespace. No data is lost during a data outage, but users may experience some data-access interruptions.

Supported limits

HCP supports the limits listed below.

Hardware

Hardware support limits:

  • Maximum number of general access (G Series) Access Nodes: 80
  • Maximum number of HCP S Series Nodes: 80

Logical storage volumes

SAN-attached (SAIN) HDD systems:

  • Maximum number of SAN logical storage volumes per storage node: 63
  • Maximum logical volume size for SAN LUNs: 15.999 TB

Internal storage (RAIN) HDD systems:

  • Maximum number of logical storage volumes per storage node: 4
  • Maximum logical volume size on internal drives: HDD capacity dependent

All-SSD systems (internal storage or SAN-attached):

  • Number of SSDs per storage node: 12 (front-cage only)
  • Maximum logical volume size on internal drives: SSD capacity dependent
  • Maximum number of SAN logical storage volumes per storage node (when SAN is attached to the system): 63
  • Maximum logical volume size for SAN LUNs (when SAN is attached to the system): 15.999 TB

HCP VM systems — VMware ESXi:

  • Maximum number of logical storage volumes per VM storage node: 1 OS LUN, 59 data LUNs
  • Maximum logical volume size: 15.999 TB

HCP VM systems — KVM:

  • Maximum number of logical storage volumes per VM storage node: 1 OS LUN. Data LUNs are limited by the number of device slots available for LUNs in the VirtIO-blk para-virtualized storage back end, which depends on the number of other devices configured for the guest OS that also use the VirtIO-blk back end. In a typical HCP configuration, 17 slots are available.
  • Maximum logical volume size: 15.999 TB (OS LUN)
Data storage

  • Maximum active erasure coding topologies: 1
  • Maximum erasure coding topology size: 6 (5+1) sites
  • Minimum erasure coding topology size: 3 (2+1) sites
  • Maximum total erasure coding topologies: 5
  • Maximum number of objects per storage node:
    • Standard (non-SSD) disks for indexes: 800,000,000
    • SSDs for indexes: 1,250,000,000
  • Maximum number of objects per HCP system: 64,000,000,000 (80 nodes × 800,000,000 objects per node); if using 1.9 TB SSD drives: 100,000,000,000 (80 nodes × 1,250,000,000 objects per node)
  • Maximum number of directories per node if one or more namespaces are not optimized for cloud: 1,500,000
  • Maximum number of directories per node if all namespaces are optimized for cloud: 15,000,000
  • Maximum number of objects per directory, by namespace type:
    • HCP namespaces with the unbalanced directory setting: no restriction
    • HCP namespaces with the balanced directory setting: 30,000,000
  • Maximum object size by protocol:
    • HTTP: about 2 TB (2,194,719,883,008 bytes)
    • Hitachi API for Amazon S3 without multipart upload: about 2 TB (2,194,719,883,008 bytes)
    • Hitachi API for Amazon S3 with multipart upload: 5 TB
    • WebDAV: about 2 TB (2,194,719,883,008 bytes)
    • CIFS: 100 GB
    • NFS: 100 GB
  • Hitachi API for Amazon S3, minimum size for parts in a complete multipart upload request (except the last part): 1 MB
  • Hitachi API for Amazon S3, maximum part size for multipart upload: 5 GB
  • Hitachi API for Amazon S3, maximum number of parts per multipart upload: 10,000
  • Maximum number of replication links: 20 inbound, 5 outbound
  • Maximum number of tenants: 1,000
  • Maximum number of namespaces: 10,000
  • Maximum number of namespaces with the CIFS or NFS protocol enabled: 50
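For applications using the Hitachi API for Amazon S3, the three multipart-upload limits above (1 MB minimum part, 5 GB maximum part, 10,000 parts) can be combined into a part-size calculation. The helper below is an illustrative sketch, not an HCP API; the function name is invented here, and binary units (1 MB = 2**20 bytes) are an assumption about how HCP counts.

```python
# Sketch: pick a multipart-upload part size that satisfies the documented
# HCP limits. Helper name and binary-unit interpretation are assumptions.
import math

MIN_PART_SIZE = 1 * 1024 ** 2    # 1 MB minimum (except the last part)
MAX_PART_SIZE = 5 * 1024 ** 3    # 5 GB maximum part size
MAX_PARTS = 10_000               # maximum parts per multipart upload

def choose_part_size(object_size: int) -> int:
    """Return a part size that keeps the upload within all three limits."""
    if object_size <= 0:
        raise ValueError("object size must be positive")
    # Smallest part size that keeps the part count at or under 10,000.
    part_size = max(MIN_PART_SIZE, math.ceil(object_size / MAX_PARTS))
    if part_size > MAX_PART_SIZE:
        raise ValueError("object exceeds the multipart upload limits")
    return part_size
```

Note that HCP separately caps multipart objects at 5 TB, which sits well inside the 10,000 × 5 GB envelope, so in practice the 5 TB object-size limit is reached before the part-count limit becomes binding.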
User groups and accounts

  • Maximum number of system-level user accounts per HCP system: 10,000
  • Maximum number of system-level group accounts per HCP system: 100
  • Maximum number of tenant-level user accounts per tenant: 10,000
  • Maximum number of tenant-level group accounts per tenant: 100
  • Maximum number of users in a username mapping file (default tenants only): 1,000
  • Maximum number of SSO-enabled namespaces: ~1,200 (SPN limit in Active Directory)
Custom metadata

  • Maximum number of annotations per individual object: 10
  • Maximum non-default annotation size with XML checking enabled: 1 MB
  • Maximum default annotation size with XML checking enabled: 1 GB
  • Maximum annotation size (both default and non-default) with XML checking disabled: 1 GB
  • Maximum number of XML elements per annotation: 10,000
  • Maximum level of nested XML elements in an annotation: 100
  • Maximum number of characters in the name of a custom metadata annotation: 32
  • Maximum form size in a POST object upload: 1,000,000 bytes
  • Maximum custom metadata size in a POST object upload: 2 KB

Access control lists

  • Maximum size of access control entries per ACL: 1,000 MB
Metadata query engine

  • Maximum number of content classes per tenant: 25
  • Maximum number of content properties per content class: 100
  • Maximum number of concurrent metadata query API queries per node: 5

Network

  • Maximum number of user-defined networks (virtual networks) per HCP system: 200
  • Maximum number of downstream DNS servers: 32
  • Maximum number of certificates and CSRs per domain: 10

Storage tiering

  • Maximum number of storage components: 100
  • Maximum number of storage pools: 100
  • Maximum number of tiers in a service plan: 5

Miscellaneous

  • Maximum number of HTTP connections per node: 255
  • Maximum number of SMTP connections per node: 100
  • Maximum number of attachments per email for SMTP: 50
  • Maximum aggregate email attachment size for SMTP: 500 MB
  • Maximum number of access control entries in an ACL: 1,000
  • Maximum number of labeled retention holds per object: 100

Supported clients and platforms

The following sections list clients and platforms that are qualified for use with HCP.

Windows clients

These Microsoft® Windows® 32-bit or 64-bit clients are qualified for use with the HTTP v1.1, WebDAV, and CIFS protocols and with the Hitachi API for Amazon S3:

  • Windows 7
  • Windows 8
  • Windows 2012 R2 (Standard and Data Center editions)
  • Windows Server 2016 (Standard and Data Center editions)
  • Windows 10
Note: Using the WebDAV protocol to mount a namespace as a Windows share can have unexpected results and is, therefore, not recommended.

Unix clients

These Unix clients are qualified for use with the HTTP v1.1, WebDAV, and NFS v3 protocols and with the Hitachi API for Amazon S3:

  • HP-UX 11i v3 (11.31) on Itanium
  • HP-UX 11i v3 (11.31) on PA-RISC
  • AIX 7.1
  • Red Hat Enterprise Linux ES 6.10 and 7.0
Note: HCP does not support the NFS v4 protocol.

Browsers

The following web browsers are qualified for use with the HCP System Management, Tenant Management, and Search Consoles and with the Namespace Browser. Other browsers or versions may also work.

  • Internet Explorer® 11* (Windows)
    Note: Internet Explorer compatibility view mode may work, but is not supported by HCP.
  • Mozilla Firefox® (Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris)
  • Google Chrome® (Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris)

*The Consoles and Namespace Browser work in Internet Explorer only if ActiveX is enabled. Also, the Consoles work only if the security level is not set to high.

Note: To correctly display the System Management Console, Tenant Management Console, and Namespace Browser, the browser window must be at least 1,024 pixels wide by 768 pixels high.

Client operating systems for HCP Data Migrator

These client operating systems are qualified for use with HCP Data Migrator.

Note: HCP Data Migrator was deprecated in release 9.2.1. Support will be discontinued in a future release of HCP. In HCP 9.3.0, HCP Data Migrator has been removed from the Tenant Management Console. If you need HCP Data Migrator, contact Hitachi Vantara Support.
  • Microsoft 32-bit Windows:
    • Windows XP Professional
    • Windows 2003 R2 (Standard and Enterprise Server editions)
    • Windows 2008 R2 (Standard and Enterprise Server editions)
    • Windows 7
    • Windows 8
    • Windows 2012 (Standard and Datacenter editions)
  • HP-UX 11i v3 (11.31) on Itanium
  • HP-UX 11i v3 (11.31) on PA-RISC
  • IBM AIX 7.1
  • Red Hat Enterprise Linux ES 5 (32-bit)
  • Red Hat Enterprise Linux ES 6.10 and 7.0 (64-bit)
  • Sun Solaris 10 SPARC
  • Sun Solaris 11 SPARC
Note: The Oracle Java Runtime Environment (JRE) version 7 update 6 or later must be installed on the client.

Platforms for HCP VM

HCP VM runs on these platforms:

  • VMware ESXi 6.5 U1 and U2
  • VMware ESXi 6.7 U1, U2, and U3
  • VMware ESXi 7.0 (qualified on hardware version 17)
  • VMware vSAN 6.6
  • VMware vSAN 6.7
  • VMware vSAN 7.0
  • KVM — qualified on CentOS 7 and Fedora Core 29. For relevant support, configuration, installation, and usage information, see Deploying an HCP-VM System on KVM (MK-94HCP009-06).

Third-party integrations

The following third-party applications have been tested and proven to work with HCP. Hitachi Vantara does not endorse any of the applications listed below, nor does Hitachi Vantara perform ongoing qualification with subsequent releases of the applications or HCP. Use these and other third-party applications at your own risk.

Hitachi API for Amazon S3 tools

These tools are qualified for use with the Hitachi API for Amazon S3:

  • CloudBerry Explorer (does not support multipart upload)
  • CloudBerry Explorer PRO (for HCP multipart upload, requires using an Amazon S3 compatible account instead of an HCP account; for CloudBerry internal chunking, requires versioning to be enabled on the target bucket)
  • S3 Curl
  • S3 Browser

Mail servers

These mail servers are qualified for use with the SMTP protocol:

  • Microsoft Exchange 2010 (64 bit)
  • Microsoft Exchange 2013
  • Microsoft Exchange 2016

NDMP backup applications

These NDMP backup applications are qualified for use with HCP:

  • Hitachi Data Protection Suite 8.0 SP4 (CommVault® Simpana® 8.0)
  • Symantec® NetBackup® 7 — To use NetBackup with an HCP system:
    • Configure NDMP to require user authentication (that is, select either the Allow username/pwd authenticated operations or Allow digest authenticated operations option in the NDMP protocol panel for the default namespace in the Tenant Management Console).
    • Configure NetBackup to send the following directive with the list of backup paths:
      set TYPE=openPGP

Windows Active Directory

HCP is compatible with Active Directory on servers running Windows Server 2012 R2 or Windows Server 2016. In either case, all domain controllers in the forest that HCP uses for user authentication must be at the 2012 R2 functional level or higher.

RADIUS protocols

HCP supports the following RADIUS protocols:

  • CHAP
  • EAPMD5
  • MSCHAPv2
  • PAP

Supported hardware

The following sections list hardware that is supported for use in HCP systems.

Note: The lists of supported hardware are subject to change without notice. For the most recent information on supported hardware, contact your HCP sales representative.

Supported servers

These servers are supported for HCP systems with internal storage:

  • HCP G11 (D52BQ-2U)
  • HCP G10 (D51B-2U)

These servers are supported for HCP SAN-attached systems with internal storage:

  • HCP G11 (D52BQ-2U)
  • HCP G10 (D51B-2U)

Server memory

At least 32 GB of RAM per node is needed to use new software features introduced in HCP 9.x. An HCP system can be upgraded to version 9.x with a minimum of 12 GB of RAM per node, and receive the patches and bug fixes that come with the upgrade, but the system cannot use the new software features. Inadequate RAM causes performance degradation and can negatively affect system stability.

If you have less than 32 GB RAM per node and would like to upgrade to HCP 9.x, contact your Hitachi Vantara account team.

Supported storage platforms

These storage platforms are supported for HCP SAIN systems:

  • Hitachi Virtual Storage Platform
  • Hitachi Virtual Storage Platform G200
  • Hitachi Virtual Storage Platform G400
  • Hitachi Virtual Storage Platform G600
  • Hitachi Virtual Storage Platform G1000
  • Hitachi Virtual Storage Platform G1500
  • Hitachi Virtual Storage Platform E990
  • Hitachi Virtual Storage Platform 5000 series

Supported back-end network switches

The following back-end network switches are supported in HCP systems:

  • Alaxala AX2430
  • Arista 7020SR-24C2-R
  • Cisco® Nexus® 3K-C31128PQ-10GE
  • Cisco® Nexus® 3K-C31108PC-V
  • Cisco® Nexus® 5548UP
  • Cisco® Nexus® 5596UP
  • Dell PowerConnect 2824
  • ExtremeSwitching VDX® 6740
  • ExtremeSwitching 210
  • ExtremeSwitching 6720 - SAIN systems only
  • HP 4208VL
  • Ruckus ICX® 6430-24
  • Ruckus ICX® 6430-24P HPOE
  • Ruckus ICX® 6430-48

Supported Fibre Channel switches

The following Fibre Channel switches are supported for HCP SAIN systems:

  • Brocade 5120
  • Brocade 6510
  • Cisco MDS 9134
  • Cisco MDS 9148
  • Cisco MDS 9148S

Supported Fibre Channel host bus adapters

These Fibre Channel host bus adapters (HBAs) are supported for HCP SAIN systems:

  • Emulex® LPe 32002-M2-Lightpulse

    (for supported firmware and boot BIOS versions, refer to the G11 Hardware Tool set)

  • Emulex® LPe 11002-M4

    (firmware version 2.82a4, boot BIOS 2.02a1)

  • Emulex® LPe 12002-M8

    (firmware version 1.10a5, boot BIOS 2.02a2)

  • Emulex® LPe 12002-M8 (GQ-CC-7822-Y)

    (firmware version 1.10a5, boot BIOS 2.02a2)

  • Hitachi FIVE-EX 8Gbps

    (firmware version 10.00.05.04)

Issues resolved

Issues resolved in this release

The following issues are resolved in HCP 9.3.3. Where applicable, the service request (SR) number is shown with the reference number.

  • HCP-41421 (SR 02551671): When a standby device loses both fibre paths in a SAN-based HCP cluster with internal storage, the SNMP daemon no longer falsely detects failure in internal archive volumes archive92 and archive93.
  • HCP-41927 (SR 03086824): An aborted retry of a PUT request over the S3 gateway no longer fails.
  • HCP-41928 (SR 03045030): When performing a ListObjectsV2 operation over the S3 gateway, the response now correctly contains <EncodingType>url</EncodingType> instead of the hyphenated element name <Encoding-Type>url</Encoding-Type>.
  • HCP-41930 (SR 03045046): HCP now returns a valid XML response if the object name contains illegal XML characters.
  • HCP-41988 (SR 02927974): An HCP node no longer reboots if one of the fibre links is unstable.
  • HCP-41991 (SR 03091182, 03109035): A versioned object delete record no longer stalls replication progress.
  • HCP-42021 (SR 03209372): CVE-2021-44228 (the Apache Log4j2 vulnerability) is resolved.
  • HCP-42095 (SR 02538487): An additional setting improves the stability of the back-end network.

CVE Records resolved in this release

The HCP 9.3.3 release resolves the following CVE records, in addition to several other security weaknesses not associated with these CVEs:

  • CVE-2021-44228 (HCP-42021): A vulnerability in the Apache Log4j logging framework in versions earlier than 2.15.0 enables unauthenticated remote attackers who can control log messages or log message parameters to execute malicious code on systems that perform logging functions. If message lookup substitution is enabled on the Log4j library, an attacker can replace certain strings in log messages with arbitrary code strings sent from an LDAP server. When a message containing the replacement string is logged, the arbitrary code runs, potentially enabling the attacker to take control of the system that is performing the logging. For more information, see the official MITRE entry for CVE-2021-44228.

  • CVE-2021-44832 (HCP-42021): Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack in which an attacker with permission to modify the logging configuration file can construct a malicious configuration using a JDBC Appender with a data source referencing a JNDI URI, which can execute remote code. This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2. For more information, see the official MITRE entry for CVE-2021-44832.

  • CVE-2021-45046 (HCP-42021): The fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data using a JNDI Lookup pattern, resulting in an information leak and remote code execution in some environments and local code execution in all environments; remote code execution has been demonstrated on macOS, Fedora, Arch Linux, and Alpine Linux. For more information, see the official MITRE entry for CVE-2021-45046.

  • CVE-2021-45105 (HCP-42021): Apache Log4j2 versions 2.0-alpha1 through 2.16.0, excluding 2.12.3, did not protect against uncontrolled recursion from self-referential lookups. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data that contains a recursive lookup, resulting in a StackOverflowError that terminates the process. This is also known as a denial of service (DoS) attack. For more information, see the official MITRE entry for CVE-2021-45105.

Compatibility issues introduced in HCP 8.2 or later

The following compatibility issues were introduced in HCP v8.2 or later. The issues are listed in ascending order by reference number.

  • HCP-33074, HCP-35329 (introduced in HCP v8.2): In HCP v8.2, the HCP software was upgraded to Jetty v9. The upgrade introduces several security enhancements that might impact some deployments:
    • HCP no longer supports the SSL v1, v2, and v3 protocols.
    • HCP conforms more closely to RFC 7230 and no longer allows header folding.
  • HCP-33583 (introduced in HCP v8.2): HCP now requires that the x-amz-date header value be within 15 minutes of when HCP receives the Hitachi API for Amazon S3 request.
  • HCP-33672 (introduced in HCP v8.2): HCP now validates x-amz-date headers on appropriate Hitachi API for Amazon S3 requests.
  • HCP-35286 (introduced in HCP v8.1): HCP now sends the severity of EventID messages, such as NOTICE, WARNING, or ERROR, to syslog servers.
  • HCP-37063 (introduced in HCP v8.2): The use case of a namespace with SMTP enabled writing directly to an HCP S Series Node is no longer supported.
  • HCP-37858 (introduced in HCP v9.1): The use case of a namespace with SMTP enabled writing directly to an HCP S Series Node is no longer supported.
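Because of HCP-33583, clients must send an x-amz-date value within 15 minutes of when HCP receives the request, so the timestamp should be generated at request time from a clock synchronized with UTC. The sketch below builds and checks such a header; it assumes the AWS Signature Version 4 timestamp format (YYYYMMDD'T'HHMMSS'Z'), and the helper names are illustrative, not part of any HCP library.

```python
# Sketch: build and validate an x-amz-date header value. The Signature V4
# timestamp format and the helper names here are assumptions for illustration.
from datetime import datetime, timedelta, timezone

SKEW_LIMIT = timedelta(minutes=15)          # HCP's acceptance window
AMZ_DATE_FORMAT = "%Y%m%dT%H%M%SZ"          # e.g. 20240101T120000Z

def make_amz_date(now=None):
    """Format the current UTC time as an x-amz-date header value."""
    now = now or datetime.now(timezone.utc)
    return now.strftime(AMZ_DATE_FORMAT)

def within_skew(header_value, now=None):
    """Return True if the header timestamp is inside the 15-minute window."""
    now = now or datetime.now(timezone.utc)
    sent = datetime.strptime(header_value, AMZ_DATE_FORMAT).replace(
        tzinfo=timezone.utc
    )
    return abs(now - sent) <= SKEW_LIMIT
```

In practice, keeping client clocks NTP-synchronized avoids spurious rejections near the edge of the window.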

Known issues

The following known issues exist in the current release of HCP, listed in order by reference number. Where applicable, the service request (SR) number is also shown.

  • HCP-41176 (SR 03131048, 03141801, 03171024): HCP running on a G11 server can raise a false-positive alert about the power supply.
  • HCP-41690 (SR 03059516): An extensive security scan of HCP with a tool such as Qualys can trigger a node reboot.
  • HCP-41787: When HCP is connected to a Hitachi Virtual Storage Platform E series array (for example, a VSP E790) in a SAN-attached configuration, HCP does not correctly recognize the array, so HCP cannot be installed in a SAN-attached configuration.
  • HCP-42149 (SR 03208123): In certain environments, copying a large file to an SMB/CIFS share may produce connection reset errors, causing the upload to HCP to fail with an error message.

Accessing product documentation

Product user documentation is available on the Hitachi Vantara Support Website: https://knowledge.hitachivantara.com/Documents. Check this site for the most current documentation, including important updates that may have been made after the release of the product.

Getting help

The Hitachi Vantara Support Website is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to the Hitachi Vantara Support Website for contact information: https://support.hitachivantara.com/en_us/contact-us.html.

Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.