Hitachi Vantara Knowledge

Content Platform v9.0.0 Release Notes - Customer

About this document

This document contains release notes for Hitachi Content Platform v9.0. It describes new features, specifications, upgrade notes, and resolved and known issues.

Release highlights for v9.0

HCP v9.0 includes several new features, described next.

The release also fixes issues found in previous releases. For more information, see Issues resolved in this release.

HCP G11 platform support

HCP v9.0 introduces support for the next generation of HCP server platforms, the HCP G11. The HCP G11 server is the successor to the HCP G10.

The HCP G11 provides feature parity with the HCP G10, supporting SAN-attached, internal-disk, and all-flash (with or without SAN-attached) storage configurations.

The HCP G11 is based on the QuantaGrid™ D52BQ-2U model, offering twelve 3.5” drive slots in the front cage, and two 2.5” drive slots in the rear cage for acceleration in HDD internal-disk configurations. Unlike the HCP G10, the HCP G11 also features hardware RAID in the rear cage. The chassis design enables more efficient hardware servicing.

The HCP G11 networking options match those of the HCP G10: HCP continues to offer 1 Gb and 10 Gb Ethernet configurations, with Base-T and SFP+ options.

Additionally, the internal components of the HCP G11 server have been refreshed:

  • Intel® Xeon® Silver 4210 (Cascade Lake) CPU and DDR4 memory in four memory size options: 64 GB/node, 256 GB/node, 384 GB/node, and 768 GB/node (all-flash nodes).
  • QCT QS-3516B RAID controller, offering RAID hardware for both the front- and the rear-cage drives.
  • The HBA card has been upgraded to a Gen6 LPe32002 dual-port 32 Gb FC card, offering port-level MPIO capabilities.
  • Support for a larger SSD drive option in all-flash configurations, a 3.84 TB SSD drive, which brings the raw internal drive capacity to 46 TB in a RAID6 (10+2) configuration.

HCP v9.0 continues to support the same generations of servers as HCP 8.x, including HCP G10 generation systems, as well as Hitachi Compute Rack CR210H (HCP 500 and HCP 500XL architectures) and Hitachi Compute Rack CR220S (HCP 300 architecture) servers, up to the end-of-support life of these server generations. HCP G11 servers can be used as an Autonomic Technology Refresh (ATR) option for clusters that include Hitachi CR210H, Hitachi CR220S, or HCP G10 servers.

HCP v9.0 offers feature parity with HCP 8.x when deployed in VMware ESXi and Linux KVM based systems.

S Series Balancing service

HCP v9.0 introduces support for balancing of HCP S Series Nodes through the S Series Balancing service.

The S Series Balancing service is a configurable service that balances object data across S Series Nodes in the same storage pool. The service ensures that the percentage of space used across S Series Nodes in a pool remains roughly equivalent. The service is particularly useful when one or more S Series Nodes in the same storage pool are added to, removed from, or retired from an HCP system.

The S Series Balancing service is included in the HCP Default Schedule. However, it remains idle until storage pools are configured to take advantage of the service.

To ensure optimal system performance in deployments with S Series storage pools that include multiple S Series Nodes, you need to add the service to your active service schedule.
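The balancing goal described above reduces to keeping percent-of-space-used roughly equal across the nodes in a pool. As a rough illustration only (HCP's actual service logic is internal; the function and node names below are invented), the natural source and destination for a rebalancing pass are the fullest and emptiest nodes:

```python
# Illustrative sketch only, not HCP code. Given per-node percent-used figures
# for S Series Nodes in one storage pool, pick the natural source (fullest)
# and destination (emptiest) for moving object data during a balancing pass.

def most_and_least_used(pool):
    """pool maps node name -> percent of space used (0-100)."""
    fullest = max(pool, key=pool.get)
    emptiest = min(pool, key=pool.get)
    return fullest, emptiest

# Example with invented node names:
# most_and_least_used({"s11-a": 81.0, "s11-b": 42.5, "s11-c": 60.0})
# selects "s11-a" as the source and "s11-b" as the destination.
```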

Related documents

The following documents contain additional information about Hitachi Content Platform:

  • HCP System Management Help

    This Help system is a comprehensive guide to administering and using an HCP system. The Help contains complete instructions for configuring, managing, and maintaining HCP system-level and tenant-level features and functionality. The Help also describes the properties of objects stored in HCP namespaces and explains how to access those objects.

  • HCP Tenant Management Help

    This Help system contains complete instructions for configuring, managing, and maintaining HCP namespaces. The Help also describes the properties of objects stored in HCP namespaces and explains how to access those objects.

  • Managing the Default Tenant and Namespace

    This book contains complete information for managing the default tenant and namespace in an HCP system. The book provides instructions for changing tenant and namespace settings, configuring the protocols that allow access to the namespace, managing search and indexing, and downloading the installation files for HCP Data Migrator. The book also explains how to work with retention classes and the privileged delete functionality.

  • Using the Default Namespace

    This book describes the file system HCP uses to present the contents of the default namespace. This book provides instructions for using HCP-supported protocols to store, retrieve, and delete objects, as well as to change object metadata such as retention and shred settings.

  • Using HCP Data Migrator

    This book contains the information you need to install and use HCP Data Migrator (HCP-DM), a utility that works with HCP. This utility enables you to copy data between local file systems, namespaces in HCP, and earlier HCAP archives. It also supports bulk delete operations and bulk operations to change object metadata. Additionally, it supports associating custom metadata and ACLs with individual objects. The book describes both the interactive window-based interface and the set of command-line tools included in HCP-DM.

  • Installing an HCP System

    This book provides the information you need to install the software for a new HCP system. It explains what you need to know to successfully configure the system and contains step-by-step instructions for the installation procedure.

  • Deploying an HCP-VM System on ESXi

    This book contains all the information you need to install and configure an HCP-VM system. The book also includes requirements and guidelines for configuring the VMware® environment in which the system is installed.

  • Deploying an HCP-VM System on KVM

    This book contains all the information you need to install and configure an HCP-VM system. The book also includes requirements and guidelines for configuring the KVM environment in which the system is installed.

  • Installing an HCP SAIN System - Final On-site Setup

    This book contains instructions for deploying an assembled and configured single-rack HCP SAIN system at a customer site. It explains how to make the necessary physical connections and reconfigure the system for the customer computing environment. It also contains instructions for configuring Hitachi Remote Ops to monitor the nodes in an HCP system.

  • Installing an HCP RAIN System - Final On-site Setup

    This book contains instructions for deploying an assembled and configured single-rack HCP RAIN system at a customer site. It explains how to make the necessary physical connections and reconfigure the system for the customer computing environment. It also contains instructions for configuring Hitachi Remote Ops to monitor the nodes in an HCP system.

Upgrade notes

You can upgrade an HCP system to version 9.x only from version 8.x. You cannot downgrade HCP to an earlier version.

You must have at least 32 GB of RAM per node to use new software features introduced in HCP version 9.x. While you can upgrade an HCP system to version 9.x with a minimum of 12 GB of RAM per node and receive the patches and bug fixes associated with the upgrade, the system cannot use the new software features in the release. Inadequate RAM causes performance degradation and can negatively affect system stability. If you have less than 32 GB RAM per node and would like to upgrade to this release, contact your Hitachi Vantara account team.

When upgrading the HCP software, ensure that the new version is compatible with the currently installed version of the Appliance operating system. If it isn’t, upgrade the OS at the same time.

HCP upgrades can occur with the system either online or offline. During an online upgrade, the system remains available to users and applications. Offline upgrades are faster than online upgrades, but the system is unavailable while the upgrade is in progress. Determine which type of upgrade is better for your environment.

Note: During an online upgrade, data outages may occur as each node is upgraded. Whether data users are affected by an outage depends on the ingest tier DPL setting specified in the service plan that's assigned to the applicable namespace. No data is lost during a data outage, but users may experience some interruptions to data access.

Supported limits

HCP supports the limits listed in the following sections.

Hardware
  • Maximum number of general access, G Series Nodes

    80

  • Maximum number of HCP S Series Nodes

    80

Logical storage volumes
  • SAN-attached (SAIN) HDD systems

    • Maximum number of SAN logical storage volumes per storage node

      63

    • Maximum logical volume size for SAN LUNs

      15.999 TB

  • Internal storage (RAIN) HDD systems

    • Maximum number of logical storage volumes per storage node RAIN

      4

    • Maximum logical volume size on internal drives

      HDD capacity dependent

  • All-SSD systems (internal storage or SAN-attached)

    • Number of SSDs per storage node

      12 (front-cage only)

    • Maximum logical volume size on internal drives

      SSD capacity dependent

    • Maximum number of SAN logical storage volumes per storage node (when SAN is attached to system)

      63

    • Maximum logical volume size for SAN LUNs (when SAN is attached to system)

      15.999 TB

  • HCP-VM systems

    • Maximum number of logical volumes per VM storage node

      1 OS LUN, 59 Data LUNs

    • Maximum logical volume size for SAN LUNs (when SAN is attached to system)

      15.999 TB

Data storage
  • Maximum active erasure coding topologies

    1

  • Maximum erasure coding topology size

    6 (5+1) sites

  • Minimum erasure coding topology size

    3 (2+1) sites

  • Maximum total erasure coding topologies

    5

  • Maximum number of objects per storage node

    Standard non-SSD disks for indexes: 800,000,000

    SSD for indexes: 1,250,000,000

  • Maximum number of objects per HCP system

    64,000,000,000 (80 nodes times 800,000,000 objects per node)

    If using 1.9 TB SSD drives: 100,000,000,000 (80 nodes times 1,250,000,000 objects per node)

  • Maximum number of directories per node if one or more namespaces are not optimized for cloud

    1,500,000

  • Maximum number of directories per node if all namespaces are optimized for cloud

    15,000,000

  • Maximum number of objects per directory

    30,000,000

  • Maximum object size

    By protocol:

    • HTTP: About 2 TB (2,194,719,883,008 bytes)
    • Hitachi API for Amazon S3:
      • Without multipart upload: About 2 TB (2,194,719,883,008 bytes)
      • With multipart upload: 5 TB
    • HSwift: About 2 TB (2,194,719,883,008 bytes)
    • WebDAV: About 2 TB (2,194,719,883,008 bytes)
    • CIFS: 100 GB
    • NFS: 100 GB
  • Hitachi API for Amazon S3: Minimum size for parts in a complete multipart upload request (except the last part)

    1 MB

  • Hitachi API for Amazon S3: Maximum part size for multipart upload

    5 GB

  • Hitachi API for Amazon S3: Maximum number of parts per multipart upload

    10,000

  • Maximum number of replication links

    20 inbound, 5 outbound

  • Maximum number of tenants

    1,000

  • Maximum number of namespaces

    10,000

  • Maximum number of namespaces with the CIFS or NFS protocol enabled

    50

  • Maximum number of attachments per email for SMTP

    50

  • Maximum aggregate email attachment size for SMTP

    500 MB

  • Maximum number of SMTP connections per node

    100
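Several of the limits above constrain Hitachi API for Amazon S3 multipart uploads: a minimum of 1 MB for all parts except the last, a maximum of 5 GB per part, at most 10,000 parts, and a 5 TB maximum object size. A client can sanity-check a planned upload before sending the complete-multipart-upload request; a minimal sketch, assuming binary units (the helper function is illustrative, not part of the API):

```python
# Client-side sanity check of the multipart upload limits listed above.
# Illustrative only; the limits come from this document, the helper is invented.
# Sizes assume binary units (1 MB = 1024**2 bytes, and so on).

MIN_PART_SIZE = 1024**2          # 1 MB minimum, except for the last part
MAX_PART_SIZE = 5 * 1024**3      # 5 GB maximum per part
MAX_PARTS = 10_000               # maximum parts per multipart upload
MAX_OBJECT_SIZE = 5 * 1024**4    # 5 TB maximum object size via multipart upload

def validate_parts(part_sizes):
    """Return a list of limit violations for the proposed part sizes (bytes)."""
    problems = []
    if len(part_sizes) > MAX_PARTS:
        problems.append(f"too many parts: {len(part_sizes)} > {MAX_PARTS}")
    for i, size in enumerate(part_sizes):
        last = (i == len(part_sizes) - 1)
        if size < MIN_PART_SIZE and not last:
            problems.append(f"part {i + 1} is below the 1 MB minimum")
        if size > MAX_PART_SIZE:
            problems.append(f"part {i + 1} exceeds the 5 GB maximum")
    if sum(part_sizes) > MAX_OBJECT_SIZE:
        problems.append("total size exceeds the 5 TB multipart maximum")
    return problems
```

An empty return value means the proposed part layout stays within all of the listed limits.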

User and group accounts
  • Maximum number of system-level user accounts per HCP system

    10,000

  • Maximum number of system-level group accounts per HCP system

    100

  • Maximum number of tenant-level user accounts per tenant

    10,000

  • Maximum number of tenant-level group accounts per tenant

    100

  • Maximum number of users in a username mapping file (default tenants only)

    1,000

  • Maximum number of SSO-enabled namespaces

    ~1200 (SPN limit in Active Directory)

Custom metadata
  • Maximum number of annotations per individual object

    10

  • Maximum non-default annotation size with XML checking enabled

    1 MB

  • Maximum default annotation size with XML checking enabled

    1 GB

  • Maximum annotation size (both default and non-default) with XML checking disabled

    1 GB

  • Maximum number of XML elements per annotation

    10,000

  • Maximum level of nested XML elements in an annotation

    100

  • Maximum number of characters in the name of custom metadata annotation

    32

  • Maximum form size in POST object upload

    1,000,000 bytes

  • Maximum custom metadata size in POST object upload

    2 KB

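A client can verify the per-annotation XML limits above (at most 10,000 elements and at most 100 levels of nesting) before storing custom metadata. A hedged sketch using Python's standard library (the function is illustrative, not an HCP API):

```python
# Illustrative check of the per-annotation XML limits listed above;
# the limits come from this document, the function itself is invented.
import xml.etree.ElementTree as ET

MAX_ELEMENTS = 10_000   # maximum number of XML elements per annotation
MAX_DEPTH = 100         # maximum level of nested XML elements

def annotation_within_limits(xml_text):
    """Return True if the annotation XML respects the element and depth limits."""
    root = ET.fromstring(xml_text)

    def walk(elem, depth):
        # Count this element and track the deepest nesting level seen.
        count, deepest = 1, depth
        for child in elem:
            c, d = walk(child, depth + 1)
            count += c
            deepest = max(deepest, d)
        return count, deepest

    total, deepest = walk(root, 1)
    return total <= MAX_ELEMENTS and deepest <= MAX_DEPTH
```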

Access control lists
  • Maximum number of access control entries per ACL

    1,000

Metadata query engine
  • Maximum number of content classes per tenant

    25

  • Maximum number of content properties per content class

    100

  • Maximum number of concurrent metadata query API queries per node

    5

Network
  • Maximum number of user-defined networks (virtual networks) per HCP system

    200

  • Maximum downstream DNS servers

    32

  • Maximum certificates and CSR per domain

    10

Storage tiering
  • Maximum number of storage components

    100

  • Maximum number of storage pools

    100

  • Maximum number of tiers in a service plan

    5

Miscellaneous
  • Maximum number of HTTP connections per node

    255

  • Maximum number of access control entries in an ACL

    1,000

Supported clients and platforms

The following sections list clients and platforms that are qualified for use with HCP.

Windows clients

These Microsoft® Windows 32-bit or 64-bit clients are qualified for use with the HTTP v1.1, WebDAV, and CIFS protocols and with the Hitachi API for Amazon S3:

  • Windows 7
  • Windows 8
  • Windows 2012 R2 (Standard and Data Center editions)
  • Windows Server 2016 (Standard and Data Center editions)
  • Windows 10
Note: Using the WebDAV protocol to mount a namespace as a Windows share can have unexpected results and is therefore not recommended.

Unix clients

These Unix clients are qualified for use with the HTTP v1.1, WebDAV, and NFS v3 protocols and with the Hitachi API for Amazon S3:

  • HP-UX® 11i v3 (11.31) on Itanium®
  • HP-UX 11i v3 (11.31) on PA-RISC®
  • IBM AIX 7.1
  • Red Hat® Enterprise Linux ES 6.10 and 7.0
Note: HCP does not support the NFS v4 protocol.

Browsers

The following web browsers are qualified for use with the HCP System Management, Tenant Management, and Search Consoles and the Namespace Browser. Other browsers or versions may also work.

  • Internet Explorer® 11* (Windows)
  • Mozilla Firefox® (Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris)
  • Google Chrome® (Windows, HP-UX, IBM AIX, Red Hat Enterprise Linux, Sun Solaris)

*The Consoles and Namespace Browser work in Internet Explorer only if ActiveX is enabled. Also, the Consoles work only if the security level is not set to high.
Note: To correctly display the System Management Console, Tenant Management Console, and Namespace Browser, the browser window must be at least 1,024 pixels wide by 768 pixels high.
Note: Internet Explorer compatibility view mode may work but is not supported by HCP.

Client operating systems for HCP Data Migrator

These client operating systems are qualified for use with HCP Data Migrator:

  • Microsoft 32-bit Windows:
    • Windows XP Professional
    • Windows 2003 R2 (Standard and Enterprise Server editions)
    • Windows 2008 R2 (Standard and Enterprise Server editions)
    • Windows 7
    • Windows 8
    • Windows 2012 (Standard and Datacenter editions)
  • HP-UX 11i v3 (11.31) on Itanium
  • HP-UX 11i v3 (11.31) on PA-RISC
  • IBM AIX 7.1
  • Red Hat Enterprise Linux ES 5 (32-bit)
  • Red Hat Enterprise Linux ES 6.10 and 7.0 (64-bit)
  • Sun Solaris 10 SPARC
  • Sun Solaris 11 SPARC
Note: The Oracle Java Runtime Environment (JRE) version 7 update 6 or later must be installed on the client.

Platforms for HCP-VM

HCP-VM runs on these platforms:

  • VMware ESXi 6.5 U1 and U2
  • VMware ESXi 6.7 U1 and U2
  • VMware vSAN 6.6
  • VMware vSAN 6.7
  • KVM — qualified on Fedora 29 Core

Third-party integrations

The following third-party applications have been tested and proven to work with HCP. Hitachi Vantara does not endorse any of the applications listed below, nor does Hitachi Vantara perform ongoing qualification with subsequent releases of the applications or HCP. Use these and other third-party applications at your own risk.

Hitachi API for Amazon S3 tools

These tools are qualified for use with the Hitachi API for Amazon S3:

  • CloudBerry Explorer (does not support multipart upload)
  • CloudBerry Explorer PRO (for HCP multipart upload, requires using an Amazon S3 compatible account instead of a Hitachi account; for CloudBerry internal chunking, requires versioning to be enabled on the target bucket)
  • Cyberduck
  • DragonDisk
  • s3cmd
  • s3curl
  • s3fs-c (works only with versioning enabled on the target bucket)

Mail servers

These mail servers are qualified for use with the SMTP protocol:

  • Microsoft Exchange 2010 (64 bit)
  • Microsoft Exchange 2013
  • Microsoft Exchange 2016

NDMP backup applications

These NDMP backup applications are qualified for use with HCP:

  • Hitachi Data Protection Suite 8.0 SP4 (CommVault® Simpana® 8.0)
  • Symantec® NetBackup® 7 — To use NetBackup with an HCP system:
    • Configure NDMP to require user authentication (that is, select either the Allow username/pwd authenticated operations or Allow digest authenticated operations option in the NDMP protocol panel for the default namespace in the Tenant Management Console).
    • Configure NetBackup to send the following directive with the list of backup paths:
      set TYPE=openPGP

Windows Active Directory

HCP is compatible with Active Directory on servers running Windows Server 2012 R2 or Windows Server 2016. In either case, all domain controllers in the forest HCP uses for user authentication must minimally be at the 2012 R2 functional level.

RADIUS protocols

HCP supports the following RADIUS protocols:

  • CHAP
  • EAPMD5
  • MSCHAPv2
  • PAP

Supported hardware

The following sections list hardware that is supported for use in HCP systems.

Note: The lists of supported hardware are subject to change without notice. For the most recent information on supported hardware, contact your HCP sales representative.

Supported servers

These servers are supported for HCP systems with internal storage:

  • HCP G11 (D52BQ-2U)
  • HCP G10 (D51B-2U)
  • Hitachi CR220S

This server is supported for HCP SAN-attached systems without internal storage:

  • Hitachi CR210H

These servers are supported for HCP SAN-attached systems with internal storage:

  • HCP G11 (D52BQ-2U)
  • HCP G10 (D51B-2U)
  • Hitachi CR220S (with 1 Gb Ethernet)
  • Hitachi CR210H (with 10 Gb Ethernet)

Server memory

At least 32 GB of RAM per node is needed to use new software features introduced in HCP 9.x. An HCP system can be upgraded to version 9.x with a minimum of 12 GB of RAM per node, and receive the patches and bug fixes that come with the upgrade, but the system cannot use the new software features. Inadequate RAM causes performance degradation and can negatively affect system stability.

If you have less than 32 GB RAM per node and would like to upgrade to HCP 9.x, contact your Hitachi Vantara account team.

Supported storage platforms

These storage platforms are supported for HCP SAIN systems:

  • Hitachi Advanced Server 2100
  • Hitachi Advanced Server 2300
  • Hitachi Advanced Server 2500
  • Hitachi Unified Storage 110
  • Hitachi Unified Storage 130
  • Hitachi Unified Storage 150
  • Hitachi Unified Storage VM
  • Hitachi Unified Storage T3
  • Hitachi Virtual Storage Platform
  • Hitachi Virtual Storage Platform G200
  • Hitachi Virtual Storage Platform G400
  • Hitachi Virtual Storage Platform G600
  • Hitachi Virtual Storage Platform G1000
  • Hitachi Virtual Storage Platform G1500

Supported back-end network switches

The following backend network switches are supported in HCP systems:

  • Alaxala AX2430
  • Cisco® Nexus® 3K-C31128PQ-10GE
  • Cisco® Nexus® 3K-C31108PC-V
  • Cisco® Nexus® 5548UP
  • Cisco® 5596UP
  • Dell PowerConnect 2824
  • ExtremeSwitching VDX® 6740
  • ExtremeSwitching 210
  • ExtremeSwitching 6720 - SAIN systems only
  • HP 4208VL
  • Ruckus ICX® 6430-24
  • Ruckus ICX® 6430-24P HPOE
  • Ruckus ICX® 6430-48

Supported Fibre Channel switches

The following Fibre Channel switches are supported for HCP SAIN systems:

  • Brocade 5120
  • Brocade 6510
  • Cisco MDS 9134
  • Cisco MDS 9148
  • Cisco MDS 9148S

Supported Fibre Channel host bus adapters

These Fibre Channel host bus adapters (HBAs) are supported for HCP SAIN systems:

  • Emulex® LPe 32002-M2-Lightpulse

    (firmware version 12.4.243.17, boot BIOS 12.4.243.13)

  • Emulex® LPe 11002-M4

    (firmware version 2.82a4, boot BIOS 2.02a1)

  • Emulex® LPe 12002-M8

    (firmware version 1.10a5, boot BIOS 2.02a2)

  • Emulex® LPe 12002-M8 (GQ-CC-7822-Y)

    (firmware version 1.10a5, boot BIOS 2.02a2)

  • Hitachi FIVE-EX 8Gbps

    (firmware version 10.00.05.04)

Issues resolved in this release

The following issues are resolved in HCP v9.0, listed in ascending order by reference number. Where applicable, the associated service request (SR) numbers are shown.

  • HCP-33032 (SR 01381288): Resolved an issue that prevented zero-copy failover from triggering when both fibre paths were lost.
  • HCP-33199: Resolved an issue where an HCP node was unresponsive under heavy load when using the S3 gateway with AWS v2/v4 authentication.
  • HCP-33358 (SR 01258209): To prevent possible upgrade and node recovery failures, added logic to the upgrade and node recovery processes to verify the state of index volumes.
  • HCP-33382 (SR 00541835, 01130910): Resolved an issue where objects that were PUT and then DELETED before replication caused replication batch failures.
  • HCP-33526 (SR 00541835, 00746713, 00776503): Resolved an issue where replication spun on a metadata.NoSuchExternalFileException exception.
  • HCP-33845 (SR 00983351): Resolved an issue where, with advanced downstream DNS configuration mode set, nodes were successfully added to an HCP cluster but the DNS configuration was not updated.
  • HCP-33909 (SR 00800531, 01545968): Patched the Linux kernel to fix an LACP issue.
  • HCP-34095: Resolved an issue where the HTTP response code was wrong when invalid XML was sent as part of completing a multipart upload request.
  • HCP-34159, HCP-35429: Corrected the "Verifying DNS configuration" upgrade check to prevent false error reporting.
  • HCP-34438: Resolved an issue, found in HCP 8.2, where running the HCP management API commands to obtain node statistics could return a bad value (-1).
  • HCP-34542: Added support for ISO-8601 basic and extended date formats when setting object retention.
  • HCP-34795 (SR 00858915): Resolved an issue when scavenging object metadata from an HCP S Series Node.
  • HCP-35196 (SR 01570377): Updated the time zone data package for Fedora to tzdata-2019c.

    Note: Online upgrades can sometimes fail in regions where time zone rules have changed. As a workaround in such instances, either change the HCP system time to UTC or perform an offline upgrade.

  • HCP-35226 (SR 01425742): Resolved an issue where JVM rollbacks on HCP nodes resulted in a "Fatal exception while trying to execute DELETE" error.
  • HCP-35367: Fixed system logging to avoid spam between active/active replication and the Storage Tiering service.
  • HCP-35415: Resolved an error with the internal time server by reenabling mode 7 support in ntpd.
  • HCP-35457: Updated the HCP G10 Hardware Setup tool. BMC and LSI firmware are updated.
  • HCP-35616 (SR HDS02362606, HDS04005699, 01638405): Resolved an issue where large CPU utilization spikes from Java threads occurred after upgrading to HCP v8.2 or later.
  • HCP-35696 (SR 01234648): Added an object name to RepairSupport.increaseRedundancy error logging to aid in future triage efforts.
  • HCP-35764 (SR 01535226): Resolved an issue over the CIFS protocol that caused file operations on files larger than 512 MB to fail.
  • HCP-35952 (SR 01432556): Resolved an issue that prevented a replication link from resuming after it automatically paused due to duplicate namespaces on each side of the link.

    Note: You might need to click the Resume Link button twice to fully repair the replication link.

  • HCP-35966, HCP-36067 (SR 01560538, 01579992, 01643497): Resolved an issue with virtual and management networks on a different subnet than the [hcp_system] network.
  • HCP-35967: Resolved an issue where retention hold changes made by means of the HCP S3 extension were not being replicated. In an active/active replication configuration, this could result in deletion of a held object.
  • HCP-35975 (SR 01535745): Resolved an issue with X-Forwarded-Host header handling.
  • HCP-35985 (SR 01523962): Resolved an issue where HCP incorrectly required the x-amz-date header on signature V2 presigned URL requests when using the Hitachi API for Amazon S3.
  • HCP-35998 (SR 01545159): Resolved an issue with single sign-on (SSO) to the HCP Search Console.
  • HCP-36009 (SR 01354827): Resolved an issue where the metadata query engine was not working, resulting in an indexer state of PAUSED_BY_ERROR.
  • HCP-36594 (SR 01364446): Resolved an issue where nearly full S10 and S30 storage caused tiering issues, even when other S10 and S30 nodes in the pool were not full.
  • HCP-36630 (SR 01671178, 01755325): Resolved an issue, after upgrading to HCP v8.3 or later, related to monitoring hardware switches.
  • HCP-36686 (SR 01671454): Resolved an issue, reported in HCP 8.3, where attempting to modify a network alias while using the System Management Console returned an error.
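For HCP-34542 above, the ISO-8601 "basic" format omits separators (for example, 20190704) while the "extended" format includes them (2019-07-04). A sketch of accepting both for date-only values (the helper is illustrative; HCP's own retention parsing may accept additional forms):

```python
# Illustrative parser for ISO-8601 basic (20190704) and extended (2019-07-04)
# date formats; the function is invented for illustration, not HCP code.
from datetime import datetime

def parse_iso8601_date(value):
    for fmt in ("%Y%m%d", "%Y-%m-%d"):   # basic, then extended
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            pass
    raise ValueError(f"not an ISO-8601 basic or extended date: {value!r}")
```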

Compatibility issues introduced in HCP 8.2 or later

The following compatibility issues were introduced in HCP v8.2 or later. The issues are listed in ascending order by reference number, with the version in which each was introduced.

  • HCP-33074, HCP-35329 (introduced in HCP v8.2)

    In HCP v8.2, the HCP software was upgraded to Jetty v9. The upgrade introduces several security enhancements that might impact some deployments:

    • HCP no longer supports the SSL v1, v2, and v3 protocols.
    • HCP conforms more closely to RFC 7230 and no longer allows header folding.

  • HCP-33583 (introduced in HCP v8.2)

    HCP now requires that the x-amz-date header value is within 15 minutes of when HCP receives the Hitachi API for Amazon S3 request.

  • HCP-33672 (introduced in HCP v8.2)

    HCP now validates x-amz-date headers on appropriate Hitachi API for Amazon S3 requests.
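The 15-minute x-amz-date window described for HCP-33583 behaves like a clock-skew check: the timestamp the client signs into the x-amz-date header must be close to the time the server receives the request. A minimal sketch of such a freshness check (illustrative only, not HCP code):

```python
# Illustrative clock-skew check mirroring the 15-minute x-amz-date window
# described above; the function itself is invented, not HCP code.
from datetime import datetime, timedelta, timezone

# x-amz-date uses the ISO-8601 basic format, e.g. 20190704T120000Z.
AMZ_DATE_FORMAT = "%Y%m%dT%H%M%SZ"
MAX_SKEW = timedelta(minutes=15)

def is_fresh(amz_date, received_at):
    """Is the signed request timestamp within 15 minutes of receipt?"""
    sent_at = datetime.strptime(amz_date, AMZ_DATE_FORMAT).replace(tzinfo=timezone.utc)
    return abs(received_at - sent_at) <= MAX_SKEW
```

Clients whose clocks drift beyond this window would see their requests rejected, so keeping client clocks synchronized (for example, with NTP) matters after upgrading to v8.2 or later.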

Known issues

The following known issues exist in the current release of HCP. The issues are listed in order by reference number.
HCP-804

HCP Data Migrator can set the value of the hold parameter to true, but not to false

HCP Data Migrator can be used to place an object on hold by updating the system metadata for the object to set the hold parameter to true. However, you cannot use the HCP Data Migrator to remove a hold from an object because the HCP Data Migrator cannot set the value of the hold parameter to false.

HCP-5153

False log messages with lowest-numbered node addition

When a new node is added to an HCP system, a message about it is written to the system log. If the number of the new node is lower than that of any existing node, the same message is written for each existing node, as if it were newly added.
HCP-5179

Browser caching

When an object is added to a namespace, deleted, and then added again with the same name, it may appear to have the old content when viewed through a web browser.

Workaround: To see the new content, clear the browser cache. Be sure to use the applicable browser option to do this rather than restarting the computer.

HCP-7043

Displaying UTF-16-encoded objects

Objects with content that uses UTF-16 character encoding may not be displayed as expected due to the limitations of some browser and operating system combinations. Regardless of the appearance on the screen, the object content HCP returns is guaranteed to be identical to the data before it was stored.

HCP-7108

Node restart with cross-mapped storage

In SAIN systems, if a cross-mapped node restarts while one of its physical paths to the storage array is broken, the node remains unavailable.

Workaround: Fix the broken path and restart the node from the System Management Console.

HCP-8385

Exposed internal mechanism for dead properties for collections

HCP uses an internal mechanism for storing WebDAV dead properties for a collection. This mechanism entails the creation of a dummy object named .webdav_properties. This object is inappropriately:

  • Included in the count of objects in the namespace
  • Exposed through the HTTP, CIFS, and NFS protocols
  • Returned by searches for which it meets the search criteria

If you are storing dead properties for collections, do not delete any .webdav_properties objects.

HCP-8570

Missed log messages when no leader node

Normally, one node in an HCP system is responsible for writing messages to the system log. This node is called the leader node. Rarely, brief periods occur during which no leader node exists (for example, because the leader node has failed and a new leader node has not yet been established). During such periods, messages for which the leader node is responsible are not written to the log.

HCP-8665

Shredding in SAIN systems

In SAIN systems, HCP may not effectively execute all three passes of the shredding algorithm when shredding objects. This is because some storage arrays make extensive use of disk caching. Depending on the particular hardware configuration and the current load on the system, some of the writes from the shredding algorithm may not make it from the cache to disk.

HCP-9212

Log display skips messages

When you page through a display of log messages in the System Management Console or Tenant Management Console, some messages may be skipped. This happens because the Console retrieves the next or previous group of messages based on the message timestamps.

Each time you request a next page of messages, the Console starts the new page with the message that has the next later timestamp after the last message on the current page. If a page boundary falls between multiple messages with the same timestamp, retrieving messages starting with the next timestamp skips the messages that come after the page break. The equivalent happens when you request a previous page of messages.

As additional messages are added to the log, the page boundaries change, with the result that previously skipped messages reappear.

HCP-9360

Browser pages for large directories

You can view the contents of a namespace in a web browser through HTTP (default namespace only) or WebDAV. Some browsers, however, may not be able to successfully render pages for directories that contain a very large number of objects.

HCP-11317

Using NFS to delete objects open for read

Using NFS, if you try to delete an object that is currently open for read on the same client, HCP returns this error: Read-only file system.

HCP-11667

Appending to objects on unavailable nodes

If an object is open for append on a node that becomes unavailable, attempts to append to the object fail.

HCP-12089

Cannot ingest very large email attachments

HCP fails to ingest email attachments substantially greater than 400 MB. In such cases, the client receives a 221 return code.

HCP-13183

SNMP version 2c traps sent for version 3 traps

HCP can be configured to use SNMP version 3. However, when configured this way, HCP sends version 2c traps instead of the expected version 3 traps.

Workaround: To receive traps from HCP, have your SNMP application accept SNMP version 2c traps.

HCP-13574

WebDAV does not correctly list objects with custom metadata

Namespaces can be configured to store WebDAV dead properties in custom-metadata.xml files. If regular custom metadata is stored for one or more objects in a directory before this configuration is set, subsequent WebDAV requests for listings of that directory fail with an XML parsing error.

Workaround: Do not use custom-metadata.xml files to store WebDAV properties for an object if any objects in the same directory already have custom metadata.

HCP-16516

Using Internet Explorer, cannot log in to HCP as a local user

With Internet Explorer, if the Active Directory user account with which you're currently logged in to Windows is not an account recognized by HCP and any of the following applies, Internet Explorer displays a Connect window instead of the page with the link to the login page for the target interface:

  • You are trying to access the System Management Console, and support for Active Directory is enabled at the system level.
  • You are trying to access the Tenant Management Console for an HCP tenant, and Active Directory is enabled as an authentication type for the tenant.
  • You are trying to access the Namespace Browser for an HCP namespace, and Active Directory single sign-on is enabled for the namespace.

If you enter credentials for an HCP user account in the Connect window, Internet Explorer returns an error message.

Workaround: To access the target interface using an HCP user account, click on the Cancel button in the Connect window to display the page with the link to the login page for the target interface.

HCP-18233

Changed computer account not added to all applicable groups in Active Directory

When you enable HCP support for Active Directory, the HCP computer account you specify is automatically added to the groups in Active Directory that include the user account you specify. If you then remove the computer account from one or more of those groups and reconfigure Active Directory support with a new computer account, the new computer account is not automatically added to the groups from which the previous computer account was removed.

Workaround: Do not remove the old computer account from the groups in Active Directory until after you have changed the computer account in HCP. If you have already removed the old computer account from one or more groups, resubmit the Active Directory configuration in HCP without changing the computer account. This puts that computer account back in the groups from which it was removed. When you subsequently change the computer account in HCP, the new computer account will be added to all the groups that include the user account.

HCP-18352

HCP unresponsive after Active Directory cache cleared while Active Directory is unavailable

If you clear the Active Directory cache while HCP cannot communicate with Active Directory, the HCP system becomes unresponsive for up to ten minutes.

HCP-18654

No success or error message in response to action taken in Console

Occasionally, the System Management Console and Tenant Management Console do not display any success or error messages in response to an action that results in a fresh display of the page on which the action was taken.

HCP-19123

Objects incorrectly reported as irreparable or unavailable after data migration

During a data migration, the migration service may incorrectly report one or more objects as irreparable or unavailable. After the data migration is complete, you can run the Content Verification service to clear these errors.

HCP-19128

Downloads with HTTPS fail in Internet Explorer 9

With Internet Explorer 9, attempts to download files (such as chargeback reports and SSL certificates) from URLs that use SSL security (that is, URLs that start with HTTPS) fail.

Workaround: In Internet Explorer 9:

  1. On the Tools menu, select Internet Options.
  2. In the Internet Options window, click on the Advanced tab.
  3. On the Advanced page, under Security, deselect the Do not save encrypted pages to disk option.
  4. Click on OK.
HCP-20401

Node restart due to large element content in annotation

While XML checking of custom metadata is enabled, if an annotation is added to an object where the content of an element in the annotation is very large, a node may restart itself.

Workaround: Disable XML checking for the namespace that contains the object.

HCP-20706

DPL 2 object copies stored on same node

If a DPL 2 namespace service plan is configured so that the namespace stores one object copy on primary running storage and the other copy on an external storage volume, both copies can be stored on the same node, which can make the object unavailable if the node fails.

HCP-20827

Delayed read from replica when external storage unavailable

In a replicated namespace, if the only copy of the data for an object is in external storage and that storage is unavailable, NFS and WebDAV requests for the object may time out for several tries before HCP retrieves the object from the replica.

Workaround: Either bring the external storage back online, or retry the request in five minutes.

HCP-21365

Alert about Active Directory connection with online HCP upgrade

In an HCP system that’s configured to support Active Directory, during an online upgrade and for a short time after the upgrade is complete, the System Management Console may show an alert indicating a problem with support for Active Directory. This alert is most likely false and will go away on its own. If Active Directory authentication is working, the alert can be safely ignored.

HCP-21056

Network interface event upon MTU change

When you change the MTU for a network, the network interface may go down and then come back up on nodes that are Dell 1950 servers.

HCP-22241

Username mappings are applied to Active Directory users of HCP namespaces

For the default namespace, Active Directory user authentication is implemented through the use of a username mapping file that associates AD usernames with UIDs and GIDs. If an AD user included in the username mapping file also has access to an HCP namespace, the objects that the user stores in the HCP namespace have the UID and GID specified in the username mapping file.

As a result, a user using CIFS for authenticated access who is included in the username mapping file or a user using NFS has access to such an object only if one of these is true:

  • With CIFS, the UID for the user in the username mapping file matches the object UID.
  • With NFS, the user’s UID matches the object UID.
  • With CIFS, the user is included in the AD group identified by the object GID.
  • With NFS, the user is included in the group identified by the object GID.
  • With CIFS, the user has been granted access to the object by the object ACL.
  • The object ACL grants all users access to the object.
  • The minimum data access permissions for the namespace grant all users access to all objects in the namespace.

Users who access the namespace through HTTP, HS3, or WebDAV, users who use CIFS for authenticated access but are not included in the username mapping file, and users who use CIFS for anonymous access can access such an object regardless of the object UID and GID.
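The access rules above can be sketched as a decision function. This is a simplification with hypothetical field names; HCP's actual evaluation order is not documented here:

```python
# Simplified sketch of the access rules listed above (hypothetical field
# names; not HCP's actual implementation).

def user_can_access(protocol, user, obj, namespace):
    """Return True if the user may access obj under the rules above."""
    if protocol in ("HTTP", "HS3", "WebDAV"):
        return True                              # unaffected by the mapping file
    if protocol == "CIFS" and not user.get("in_mapping_file", False):
        return True                              # unmapped or anonymous CIFS user
    if user["uid"] == obj["uid"]:                # UID matches the object UID
        return True
    if obj["gid"] in user["gids"]:               # member of the group in the object GID
        return True
    if user["name"] in obj.get("acl", ()):       # object ACL grants this user access
        return True
    if obj.get("acl_all_users", False):          # object ACL grants all users access
        return True
    if namespace.get("min_access_all_users", False):
        return True                              # namespace minimum permissions
    return False

# Example: a mapped CIFS user with no UID, GID, or ACL match is denied.
user = {"in_mapping_file": True, "uid": 1001, "gids": [10], "name": "jdoe"}
obj = {"uid": 1002, "gid": 20, "acl": ()}
denied = user_can_access("CIFS", user, obj, {})  # False
```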

HCP-23012

Cannot use ssh -6 to connect to a node using a link-local IPv6 address

HCP does not support using SSH to connect to a node using its link-local IPv6 address.

This issue is caused by Red Hat bug 719178: Applications can't connect to IPv6 link-local addresses learned through nss-mdns and Avahi.

HCP-23070

False alerts for network with same name as deleted network

If you create a network with at least one node IP address, then delete the network, and then create a new network with the same name as the deleted network and no node IP addresses, the Overview, Hardware, Storage Node, and Networks pages in the System Management Console display alerts indicating that a network error exists. Additionally, HCP writes this message to the system log:

Network interface bond0.xxxx for network network-name is not functioning properly.

When you subsequently assign IP addresses for the network to one or more nodes, the alerts disappear.

HCP-23881

Nodes may fail with the error message “Max Connections hit: Could not get a connection, pool is exhausted”

Even if the supported limit of 200 connections is not reached, if too many clients attempt to connect to the same namespace at the same time, one or more nodes in the HCP system may fail with the error message, “Max Connections hit: Could not get a connection, pool is exhausted”.

Workaround: Upgrade to release 7.1 or later of HCP and increase the system RAM on all nodes. At least 32 GB of RAM needs to be added.

HCP-24155

When performing an add-drives procedure on an HCP node, an existing node sometimes issues a “barrierWait” message and then hangs

When performing an add-drives procedure on an HCP node, if one node fails, its partner node may issue a "Waiting for others at barrierWait" message and then hang.

Workaround: To get the existing node back into a working state, press Ctrl+C to cancel the drive addition procedure. You can then restart the procedure.

HCP-24156

Cannot use a domain name to connect to a namespace on an IPv6 or dual-mode HCP network

HCP-DM does not support the use of IPv6 addresses to connect to a namespace on an HCP system.

HCP-DM can use IPv4 addresses to connect to a namespace on a dual-mode HCP network. However, if HCP-DM tries to use a domain name to connect to a namespace on a dual-mode network, the DNS returns both IPv6 and IPv4 addresses for the network when it resolves the domain name. If HCP-DM then tries to use the IPv6 addresses to connect to the namespace, the connection fails.

Workaround: To ensure that HCP-DM can successfully connect to a namespace on a dual-mode HCP network, you need to configure HCP-DM to connect to that namespace using the IPv4 addresses for the network.

HCP-24436

Clearing the AD cache causes inconsistent directory permissions

After the AD cache is cleared on an HCP system that's accessing AD over CIFS, only the first user to access a given CIFS share after the cache clearing sees the original permissions on their files and folders. For all other users who connect to the CIFS share, the permissions on existing files and folders appear as root/root.

HCP-24472

When using AD for HCP authentication, if a user has an AD username that includes a % character, that user cannot access the HCP system

If you attempt to log into the HCP System Management Console or Tenant Management Console using an Active Directory username that includes a % (percent) character, the HCP user authentication fails.

HCP-24589

When attempting to update annotations for objects that have been tiered to one or more types of cloud storage, HCP sometimes returns a 503 error

When HCP attempts to update annotations on objects that have been tiered to cloud storage, the updates fail with a 503 error if HCP is unable to connect to the applicable cloud storage service endpoints or if HCP is unable to access the applicable cloud storage buckets, containers, or namespaces.

Workaround: Restore the connections between HCP and each applicable cloud storage service endpoint and make sure HCP can successfully access each applicable cloud storage bucket, container, and namespace. You should then be able to successfully update the annotations for any objects stored in each bucket, container, and namespace.

HCP-24864

Hitachi Device Manager cannot send updates to an HCP system with IPv6 only mode enabled

An HCP system with IPv6 only mode enabled can successfully connect to the Hitachi Device Manager server, but cannot receive Hitachi Device Manager updates.

HCP-24887

When performing the TrueCopy storage array replication procedure, the san_update command may fail

When using the TrueCopy procedure to replicate the HCP system OS and data LUNs to a different storage array, the san_update command may fail with an error that the file system on the source system differs from the file system on the second system.

HCP-25388

While HTTPS is enabled, HCP S Series Nodes fail to create storage components when added to the HCP system by virtual IP

An S Series Node cannot use HTTPS when being added to HCP by virtual IP. While HTTPS is enabled, HCP does not create storage components for S Series Nodes added by virtual IP address.

Workaround: On the System Management Console, when adding an S Series Node through virtual IP, go to the Connection tab of the Add Node wizard, deselect the Use HTTPS for management option and, under the Advanced panel, deselect the Use HTTPS for data access option before completing the add node procedure.

HCP-25595

Pausing or failing an NFS write operation may cause HCP system processes to hang

Pausing or failing an NFS write operation increases the chances of HCP system processes hanging.

HCP-25602

While the Migration service is running, the migration status occasionally shows incorrect values

Occasionally while the Migration service is running, the migration status values for the total number of bytes being migrated and the total number of objects being migrated are incorrect. This occurs regardless of how many bytes or objects are actually migrated. Once the migration completes, the migration status values become accurate.

HCP-25697

AD 100 Winbind error occasionally causes HCP nodes to restart

HCP system communication errors with AD may cause winbind to restart. If this happens more than 100 times, the HCP system restarts.

HCP-25731

Upgrade NTP to fix vulnerabilities

The NTP software that HCP currently uses was discovered to have some vulnerabilities. For information about these vulnerabilities, refer to the NTP project's security advisory documentation.

HCP-25761

If your HCP system has data tiered to public cloud, the upgrade process to version 7.1 of HCP is extended

If your HCP system has data tiered to public cloud, metrics need to be recomputed when upgrading to version 7.1 of HCP. This extends the upgrade time.

HCP-25997

Node recovery does not work for the HCP 500XL with new disks

Node recovery procedures fail with unformatted database drives.

HCP-26037

The Adding Logical Volumes service might fail if adding previously used, formatted LUNs

Occasionally during the add LUN service procedure, previously used, formatted LUNs might not be added to all nodes. If this occurs, the error message, "Failed to execute Partx" appears.

Workaround: Restart the service procedure.

HCP-26043

Incorrectly shutting down and restarting a replication link when updating the signed certificate causes the replication link to fail

If you incorrectly shut down and restart a replication link while uploading an SSL certificate, the replication link rejects the certificate and fails.

Workaround: Follow this certificate upload procedure:

  1. Upload a new certificate on the primary and replica systems.
  2. Remove the expired certificate from the primary and replica systems.
  3. Select the Shut down all links option from the Replication settings menu on the primary and replica systems.
  4. Select Start up all links on the primary and replica systems.
HCP-26058

Upgrading to HCP 7.2 or later prevents HCP from connecting to HCP Data Migrator

Release 7.2 and later of HCP use a different SSL cipher than previous releases. HCP Data Migrator does not support these ciphers if HCP Data Migrator is run with an outdated Java runtime.

HCP-26066

Log download fails under certain conditions

A log download initiated through the System Management Console can fail due to external issues such as network problems.

Workaround: Restart the log download.

HCP-26127

and

HCP-26128

HTTPS certificate errors appear during failover

Sending HTTPS requests to a system in a replication link that has failed over causes certificate errors to be reported because the Subject Common Name in the certificate does not match the domain name in the request.

Workaround: Add Subject Alternative Name entries to the certificates used by HCP for HTTPS.
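One common way to include Subject Alternative Name entries when generating a new certificate signing request is through an OpenSSL configuration file. The fragment below is a sketch; the domain names are hypothetical and should be replaced with every domain name clients use to reach either system in the link:

```
# Hypothetical openssl.cnf fragment for a CSR that carries SAN entries.
[ req ]
distinguished_name = req_distinguished_name
req_extensions     = v3_req

[ req_distinguished_name ]

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = hcp-a.example.com
DNS.2 = *.hcp-a.example.com
DNS.3 = hcp-b.example.com
DNS.4 = *.hcp-b.example.com
```

A CSR generated with `openssl req -new -key server.key -out server.csr -config openssl.cnf` then requests the SAN entries; the entries appear in the signed certificate provided the signing CA copies the requested extensions.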

HCP-26158

Under specific conditions, creating an active/active replication link between two systems causes nodes to reboot

Under specific conditions, creating a replication link between two systems running version 7.1.1 of HCP that have ingested objects and have multiple namespaces causes nodes to reboot.

HCP-26775

Cannot download certificates from HCP through Internet Explorer 8

When you try to download a certificate from HCP using Internet Explorer (IE) 8, you may receive the "Unable to download." error message. This is caused by a known IE 8 issue.

Workaround: For more information on this issue, see the Microsoft Knowledge Base article at https://support.microsoft.com/en-us/kb/323308.

HCP-27121

Cannot download HCP system logs during an online upgrade to release 7.2 of HCP

During an online upgrade to release 7.2 of HCP, the HCP system logs cannot be downloaded from the System Management Console.

Workaround: During the online upgrade, access the System Management Console by entering the IP address of a node that has already upgraded into your web browser. Perform the log download procedure through the targeted node.

HCP-27176

The Network page Advanced Settings tab appears blank when the HCP system is read only

When an HCP system is in a read only state, the Advanced Settings tab on the System Management Console Configuration Networks page appears blank.

HCP-27737

HCP system raises full capacity alarm if a single volume is over 95% full

If a single volume in an HCP system becomes 95% full, the full file system warning is triggered for the system.

HCP-27757

Active Directory node account not removed when node retired

Running the retire node procedure on a node with Active Directory enabled does not remove the node computer account from the domain controllers.

Workaround: Remove the node computer account from the domain controllers.

HCP-27810

When switching tabs during a replication schedule update, an incorrect error message is occasionally displayed

After you create an active/active replication link, click the Update Schedule button on the System Management Console Services Replication Schedule page, and switch between the local and remote schedule tabs, an error may appear even though the replication link is working properly.

HCP-27882

After upgrade to 7.2, some third party applications receive HTTP 401 error to PUT requests

With release 7.2 of HCP, SPNEGO changes make certain third party applications incompatible with HCP.

Workaround: Contact Hitachi Vantara Support to enable third-party compatibility.

HCP-29573

Changing HCP-VM network adapter from e1000 to VMXnet3 causes VLAN performance issues

On an HCP-VM with VLANs enabled, converting from an e1000 to VMXnet3 network adapter causes VLAN performance issues.

HCP-29612

and

HCP-29301

Database connections exhausted

On high-load HCP systems that are balancing metadata, nodes can restart due to exceeding the database connection limit.

HCP-29645

AD falsely reports missing SPNs due to replication topology with tenants or namespaces on a custom network

In a replication topology where systems have full SSO support, HCP may incorrectly report missing SPN errors for replicating tenants and namespaces that are using a custom network with a non-default domain name.

HCP-30018

Namespace browser cannot load directory due to ASCII characters in object name

The namespace browser cannot display the contents of a directory that contains an object with any of the following ASCII characters in its name: %00-%0F, %10-%1F, or %20.

HCP-30058

HS3 500 Internal Server Error due to double slash (//) in object name

If an object has a double slash (//) in its object name and the object is ingested using HS3, HCP returns a HTTP 500 internal server error.
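Until this is resolved, clients can avoid the error by normalizing object names before ingest. A minimal client-side sketch (the function is illustrative, not an HCP-provided utility):

```python
import re

# Illustrative client-side guard: collapse runs of slashes in an object
# name before sending it to HCP through HS3 (not an HCP-provided function).
def normalize_key(key: str) -> str:
    return re.sub(r"/{2,}", "/", key)

normalize_key("dir//subdir///object.txt")  # "dir/subdir/object.txt"
```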

HCP-30529

Irreparable objects appearing due to migrating objects ingested in HCP release 5.0 to new nodes in HCP release 7.0 or later

If objects with custom metadata are ingested in a system on HCP release version 5.X or earlier, the system is chain upgraded to release 7.0 or later, and the objects are migrated to new nodes, the objects become irreparable.

Workaround: Run the Content Verification service between each upgrade in the upgrade chain.

HCP-30649

CR220S system installation or upgrade fails in HCP release version 8.0 due to active processor set to one core in BIOS

If an HCP system uses CR220S servers and has the active processor setting set to one core in the BIOS, the installation or upgrade procedure fails for HCP release version 8.0.

Workaround: Before installing or upgrading an HCP system, set the active processor setting in the BIOS to max cores.

HCP-30765

Replication link shutdown error remains after replication resumes

If all replication links shut down, HCP shows a "Replication Links Shut Down - All activity on all links to and from this system has been stopped" error message on the System Management Console Services Links page. When the replication links resume, the error message does not go away.

Workaround: Once all replication links resume, create a new replication link to make the error message go away.

HCP-30958

DNS failover fails due to domain name change in active/passive replication link

If a system is in an active/passive replication link and has its domain name changed, the replica system does not receive the updated domain name, which causes DNS failover to fail.

Workaround: After you change the domain name for the primary system, update any setting on the tenant overview page to replicate the new domain name.

HCP-31061

HCP vulnerable to brute-force password detection attacks

With an HTTP-based interface (that is, the HTTP REST, HS3, HSwift, and management APIs), if you authenticate using an HCP user account, HCP does not lock out the user account after multiple failed attempts to access the system. Similarly, HCP does not lock out HCP user accounts after multiple failed attempts to change the account password.

Because accounts are not locked out under these circumstances, HCP is vulnerable to brute-force password detection attacks.

Note: With Active Directory authentication, AD lockout policies enforce account lockouts.
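As a mitigating control, lockout can be enforced in front of HCP, for example at a reverse proxy. A minimal sketch of the counter-based lockout that HCP itself does not apply to these interfaces (the thresholds are arbitrary, illustrative values):

```python
import time

# Illustrative failed-login lockout counter (arbitrary thresholds).
# HCP does not apply this to its HTTP-based interfaces; a front-end
# proxy or gateway could.
MAX_FAILURES = 5
LOCKOUT_SECONDS = 300

failures = {}  # username -> (failure_count, first_failure_time)

def allow_attempt(username, now=None):
    """Return False while the account is locked out, True otherwise."""
    now = time.time() if now is None else now
    count, first = failures.get(username, (0, now))
    if count >= MAX_FAILURES and now - first < LOCKOUT_SECONDS:
        return False                      # locked out
    if now - first >= LOCKOUT_SECONDS:
        failures.pop(username, None)      # window expired; reset the counter
    return True

def record_failure(username, now=None):
    now = time.time() if now is None else now
    count, first = failures.get(username, (0, now))
    failures[username] = (count + 1, first)
```

After MAX_FAILURES failed attempts, further attempts are refused until LOCKOUT_SECONDS have elapsed since the first failure.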

HCP-31082

Node restarts slowed due to configuring 32 or more LUNs on CR220 nodes with AMS 2500 arrays

For an HCP system with CR220 nodes and AMS2500 arrays, configuring 32 or more LUNs causes simultaneous node restarts to take longer than normal. The restarts take longer for every extra LUN after LUN 31.

HCP-31097

DNS failover not working for replication link converted from active/passive to active/active

After a replication link is converted from active/passive to active/active, DNS failover no longer works for that link.

Workaround: Delete the active/passive replication link and then recreate it as an active/active link.

HCP-31112

Objects left in "VALID, UNREPLICATABLE_OPEN" state and cannot be cleaned up by running garbage collection
HCP-31400

Tar gzip compressed objects fail MD5 check due to Firefox browser issue

Tar gzip compressed objects downloaded from HCP through the Firefox browser fail the MD5 check.

HCP-31431

Links in a geo-protection replication topology can be added to a replication chain

Geo-protection replication chains are not supported. If a system in the geo-replication topology becomes unavailable, the geo-protected systems outside of the topology could experience data unavailability.

HCP-31488

System restart due to unavailable node not receiving management network IP address

If a node is unavailable when the management network is enabled, the node does not receive the management network IP address. If any other change is made to the management network, the HCP system shuts down so the node can receive the management network IP address.

Workaround: Only enable the management network when all nodes are available.

HCP-31499

Inconsistent case sensitivity for Hitachi API for Amazon S3 multipart upload query parameters

Case sensitivity is inconsistent among the query parameters used with S3 compatible API requests related to multipart uploads. For example, the uploadId query parameter used in requests to upload a part is not case sensitive, while the uploadId query parameter used in requests to list the parts of a multipart upload or complete or abort a multipart upload is case sensitive.
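Until the inconsistency is resolved, clients should send the parameter with the exact casing uploadId, which both the tolerant and the case-sensitive request paths accept. A sketch of the request URLs involved (the bucket, key, and upload ID values are hypothetical):

```python
# Hypothetical bucket, key, and upload ID; always use the exact
# casing "uploadId" so both request paths accept the parameter.
BASE = "https://bucket.tenant.hcp.example.com/photos/2019/pic.jpg"
UPLOAD_ID = "97623521904126"

upload_part_url = f"{BASE}?partNumber=1&uploadId={UPLOAD_ID}"  # not case sensitive
list_parts_url = f"{BASE}?uploadId={UPLOAD_ID}"                # case sensitive
```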

HCP-31529

System restart fails after changing management network configuration

The HCP system should restart each time a change is made to the management network configuration. However, after the management network is enabled for the first time, the HCP system does not restart in response to subsequent changes to the management network configuration.

HCP-31721

and

HCP-29790

Duplicate elimination service cannot deduplicate compressed objects in S Series storage pools

When duplicate objects are compressed in an S Series pool, the compression process creates unique files that the duplicate elimination service cannot deduplicate.

HCP-31841

IPMI v2.0 password hash disclosure

A vulnerability regarding IPMI v2.0 puts password protection at risk. For more information on this, see https://nvd.nist.gov/vuln/detail/CVE-2013-4786.

HCP-31972

Only newly-added disks should be verified in HCP system prechecks

During the HCP system prechecks in the add drive procedure, the disk size of only the newly-added LUN should be verified by the HCP system.

HCP-32018

Migration hangs and produces inconsistent status information

HCP-32164

Unable to change the name of an S Series component in the HCP System Management Console

HCP-32417

System restart is required when adding, removing, and then re-adding a management port network adapter to an HCP-VM system

HCP-32486

The Active Directory whitelist filter is removed when the HCP System Management Console fails to update settings.

HCP-32555

00294339

Watchdog timer causes premature soft lockup panic
HCP-32818

Complete multipart upload operation failure due to generated ETAG

The ETAG generated by HCP on a complete multipart upload operation is based on the default namespace hash scheme. This causes complete multipart upload operations to fail with TSM.

Workaround: Use MD5 as the hash scheme for writing to a namespace with TSM.
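The failure mode can be illustrated by comparing digests. The data below is example-only; it shows why a client that validates against an MD5-based ETag fails when the ETag is derived from a different hash scheme:

```python
import hashlib

# Example data only: illustrates why an ETag derived from a non-MD5
# hash scheme fails validation in an MD5-expecting client such as TSM.
data = b"example object content"

md5_etag = hashlib.md5(data).hexdigest()        # what an MD5-expecting client assumes
sha256_etag = hashlib.sha256(data).hexdigest()  # an ETag based on another hash scheme

# The values differ in both content and length (32 vs 64 hex characters),
# so the client's comparison against the MD5 value fails.
```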

HCP-32819

AWS SDK failure due to invalid Content-Type

Specifying an invalid Content-Type request header causes the AWS SDK to fail.

HCP-32845

Incorrect information included in object configuration files

Certain object configuration files include incorrect information.

HCP-32848

Delete old database procedure hangs.

When administering namespaces with 100,000 objects or more, the Delete Old Database procedure is known to run indefinitely and display #, even though the deletion has completed.

HCP-32856

Search Security Deny List not working correctly

In the HCP System Management Console, the Deny List on the Search Security page does not deny access to the clients listed.

HCP-32900

In the HCP System Management Console, the Hardware page does not report the status of the management NIC when the NIC has been enabled and subsequently removed on the VM node

If the link status of the management port network connection fails on a VMware ESXi server or KVM host, HCP cannot detect the link failure and does not raise any corresponding alarms. There should not be complete isolation from HCP management access since there are at least three other nodes providing management port network connectivity.

HCP-32957

Metadata query engine with sort option causes Apache Solr Java Virtual Machine to run out of memory
HCP-33427

After upgrading to HCP 8.2, some third-party applications receive an HTTP 401 error to PUT requests

With release 7.2 of HCP, SPNEGO changes make certain third-party applications incompatible with HCP.

Workaround: Contact Hitachi Vantara Support to enable third-party compatibility.

HCP-33541

Active/passive replication link schedule does not adjust for systems located in different time zones

HCP-33980

Some metadata headers are processed inconsistently between AWS S3 and HCP

HCP-34203

Capacity calculations and UI display are inconsistent between HCP and HCP S Series Node

HCP-34207

Faulty SSD drives can cause a failure when adding a new SSD volume to HCP

HCP-34222

01219400

During an online upgrade, the event-based retention field is left empty for certain namespaces
HCP-34333

01247729

01247736

In an HCP cluster that contains an S Series storage component, when an outage in the cluster leader node occurs, communication with the storage component fails, and HCP reports an error
HCP-34388

01224371

When a zero-copy-failover partner node reboots after a failover, the metadata query engine does not recover

Workaround: Edit the following files:

  • In the /opt/arc/solr/solr/solr.xml file, add the shards that are on the standby volumes.
  • In the /opt/arc/solr/solr/cores file, create symlinks that point to the shards on the standby volumes.
HCP-34515

01312806

The majority of the /var file system capacity is consumed by log downloads
HCP-34516

01312806

01310161

Overflowed, thin-provisioned block storage might cause data loss

Workaround: Do not over provision dynamic pools.

HCP-34764

01309564

After disabling CIFS on an HCP namespace, the Windows client connection remains active, and objects are written to the root (/) file system
HCP-34982

In the HCP Search Console UI, the login ID changes to null and a subsequent search returns "500 Error: Internal server error"

When you open the Tenant Management Console from the System Management Console, initiate a search by logging in to the Search Console with your system-level credentials, and either refresh the page or click the search button, the following events occur:

  • You are returned to the login page.
  • The login ID changes to null.

If you log in to the Search Console again with your tenant-level credentials and initiate a search, the query returns the following error message:

500 Error: Internal server error

Workaround: Depending on the circumstances that led to this error, complete the first or both of the following steps:

  1. On the Security page of the System Management Console and Tenant Management Console, keep the "Log users out if inactive for more than" value the same.
  2. If you initiated a search and then refreshed the page before the results were displayed, clear cookies in your browser window. Then, log in to the Search Console again with your tenant-level credentials.
HCP-34993

01354829

01331997

Policy state of over 1 million objects causes node reboots
HCP-35027

01415199

Migration finalization might time out and require a restart

HCP-35089

01426836

Zero-copy failover failback might leave behind stale mount points
HCP-35550

00704591

00781620

When versioning is enabled, replication cannot continue if an object that exists on the destination cluster is pruned or empty
HCP-36001

01410508

Node recovery during an online upgrade procedure targets a healthy node

HCP-36632

01547564

Multipart upload fails in the FileOpenForWriteIndex.suspendAndSwap function and returns "Attempt to suspend and swap a multipart upload file handle" error

HCP-36798

01709881

SNMP returns the incorrect replication link name

Workaround: Use the HCP Management API to return the correct replication link name.

HCP-36808

01377407

01448466

HCP continues to report "Metadata query engine usage is at 100%" several days after the Maximum Allowed Size on Shared Volumes field value is increased

The metadata query engine index size is inconsistent with the actual space taken by the index. A symptom of this issue is that the metadata query engine frequently runs out of memory, and the following error is reported on the Overview page of the System Management Console: Metadata query engine ran out of memory for indexing

Accessing product documentation

Product user documentation is available on Hitachi Vantara Support Connect: https://knowledge.hitachivantara.com/Documents. Check this site for the most current documentation, including important updates that may have been made after the release of the product.

Getting help

Hitachi Vantara Support Connect is the destination for technical support of products and solutions sold by Hitachi Vantara. To contact technical support, log on to Hitachi Vantara Support Connect for contact information: https://support.hitachivantara.com/en_us/contact-us.html.

Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.

Comments

Please send us your comments on this document to doc.comments@hitachivantara.com. Include the document title and number, including the revision level (for example, -07), and refer to specific sections and paragraphs whenever possible. All comments become the property of Hitachi Vantara LLC.

Thank you!