
Hitachi NAS Platform 14.6.7520.04 Release Notes

About this document

This document (RN-92HNAS057-00, April 2023) provides late-breaking information about NAS Platform 14.6. It includes information that was not available at the time the technical documentation for this product was published, as well as a list of known problems and solutions.


This document is intended for customers and Hitachi Vantara partners who license and use NAS Platform.

Accessing product documentation

Product user documentation is available on the Hitachi Vantara Support Website. Check this site for the most current documentation, including important updates that may have been made after the release of the product.

Accessing product downloads

Product software, drivers, and firmware downloads are available on the Hitachi Vantara Support Website:

Log in and select Product Downloads to access the most current downloads, including important updates that may have been made after the release of the product.

About this release

This release is a minor release that adds features and resolves multiple known problems.

The specific build is server update (SU) 14.6.7520.04, and system management unit (SMU) 14.6.7520.04.

The NAS operating system, which includes server update 14.6.7520.04 and SMU 14.6.7520.04, supports the following models:

         Hitachi NAS Platform 5200, 5300

         Hitachi NAS Platform 4040, 4060, 4080, 4100

The topics in this document may also be relevant to VSP F/G series (running SVOS 7.4.0) and VSP N series (running SVOS 7.4.1); in those cases, take note of the NAS module version.

Note: When upgrading to 14.6, it is advisable to refer to the corresponding release notes of each intervening version to be aware of any new features, special notes and considerations.

Document history




Initial release of SU version 14.6.7520.04

New features

This section describes the key features in version 14.6, and other recently released features. Please refer to the NAS user guides for details on using these features.

For features introduced after the initial 14.6 release, which may not be covered in the published guides, documentation amendments can be found on the Additional Notes page, which is linked from the main NAS Platform documentation page.

File system packing

Updated in 14.6.7520.04

This feature, which was first available in 14.5.7413.01 as a technology preview, is now enabled by default on HNAS 5000 series servers for all newly formatted file systems.

This feature reduces the amount of disk space required for storing a particular class of file system metadata (specifically inodes) and small files. The amount of saved space is included as "Reduction" in the "df" CLI command output, in a column previously used by, and now shared with, deduplication.

It is not possible to mount a packed file system on any other shipping type of Hitachi NAS product, or on an HNAS 5000 series server running a release earlier than 14.5.


Native REST API

Updated in 14.6.7520.04

The HNAS native REST API version has been increased to v8.1, which adds the ability to manage more functional areas of the HNAS product. v7 of the native REST API is retained and still supported.

The legacy v4 and v7 REST APIs have been removed and can no longer be enabled. If the HNAS system has not yet been switched to the native REST API, then upon upgrade, the REST API will be disabled. To avoid this situation, either ensure the switch to native mode is made before the upgrade, or re-enable the REST server after the upgrade.

Support for 8-node clustering for HNAS 5300

First available in 14.3.7221.03

Support has now been added for 8-node clusters on HNAS 5300.

NDMP direct-attach tape support for HNAS 5200 / 5300

First available in 14.2.7117.05

HNAS 5200/5300 now supports directly attached tape drives for NDMP backup.

Hitachi NAS add-ons

There are several add-ins available for use with Hitachi NAS, as noted here.

The downloads can all be found by following section "Accessing product downloads" and navigating to "Hardware Download", "NAS Platform", and then selecting "Add-ons".

The documentation can be found on the "Solutions and Best Practices" page, which is linked from the main NAS Platform documentation page.

HNAS CSI Driver for Kubernetes

Version 1.2.0 (October 2022) - works with NAS 13.3 or later

The Hitachi NAS Container Storage Interface (CSI) Driver is a software component with libraries, settings, and commands that can be used to create persistent storage for containers. It enables stateful applications to persist and maintain data after the life cycle of the container has ended. The Hitachi NAS CSI Driver provides persistent volumes on Hitachi NAS server platforms (Hitachi NAS platform and NAS module) and is able to clone those volumes and take snapshots of them.

As the driver relies on the ability for containers/pods to access HNAS NFS exports, it can only be used on Linux based systems. This driver requires Kubernetes 1.20 or higher.
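For illustration, consuming the driver from Kubernetes might look like the following sketch. The manifest file names and PVC name are hypothetical; the actual StorageClass and provisioner names come from the HNAS CSI Driver documentation.

```shell
# Register a StorageClass backed by the HNAS CSI driver, then request a
# persistent volume through a PVC (manifest names are placeholders):
kubectl apply -f hnas-storageclass.yaml
kubectl apply -f app-pvc.yaml

# Verify the claim was bound to a dynamically provisioned HNAS volume:
kubectl get pvc app-pvc
```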

Version 1.00 (August 2020) still works and supports older Kubernetes versions, but contains less functionality.

Hitachi NAS Modules for Red Hat® Ansible®

Version 1.1.0 (September 2021) - works with NAS 13.5 or later

Hitachi NAS Modules for Red Hat Ansible allow IT and data center administrators to automate and manage some of the configuration of Hitachi NAS systems. An administrator can create playbooks together with logic and other Ansible modules to automate complex tasks. Administrators can filter, sort and group the information by piping the output from one module to another. Tasks are executed by running simple playbooks written in YAML syntax.

These modules require Ansible 2.9 or higher.

HNAS docker volume plugin

Version 1.00 (December 2019) - works with NAS 13.2 or later

The NAS server platform (Hitachi NAS platform and NAS module) can be used to provide remote storage for container images running within Docker.

As the plugin relies on the ability for containers to mount HNAS NFS exports, it can only be used on Linux-based systems.

The plugin is supported on Docker version 18 and newer and currently only on stand-alone systems, rather than clusters/docker swarm.

ELK integration for HNAS

Version 1.00 (September 2019)

The NAS server platform (Hitachi NAS platform and NAS module) can be integrated with Elasticsearch. Alert and audit logs can be collected, and then analyzed using Kibana, which helps to visualize data.

Elasticsearch is commonly deployed as part of the ELK stack (also known as the Elastic Stack), which refers to Elasticsearch and associated components that reliably and securely take data from any source, in any format, and search, analyze, and visualize it in real time.

Splunk add-on for HNAS

Version 1.00 (November 2018)

The NAS server platform (Hitachi NAS platform and NAS module) can be integrated with Splunk®. Splunk can be configured to collect alert logs, audit log events, and gather statistics about the NAS server system performance.

Special notes on current NAS releases


Update for CVE-2022-38023 "Netlogon Elevation of Privilege Vulnerability". First available in 14.6.7520.04

Added support for the sealing of secure RPC for NetLogon connections. Server/DC Netlogon connections are now secured using RPC sealing.

Preparation for Domain Controller Windows update is discussed in

Minimize the time spent with mismatched software versions

A change in version 14.6.7520.04 to the behavior of the nis-ldap-mode command, which is invoked programmatically in a number of situations, creates an opportunity for a cluster-wide management deadlock during a rolling upgrade. Minimize the time a cluster spends with one node running a version before 14.6.7520.04 and another node running 14.6.7520.04 or later.

For maximum safety, do not gather diagnostics or a performance-info-report (pir) during a rolling upgrade, and do not initiate a rolling upgrade when Hitachi Remote Ops (HRO) is liable to gather diagnostics or the SMU is liable to take a dailyshowall (00:08 local). If that cannot be avoided, use "evsmap autofb off" to disable automatic balancing of EVSs for the duration of the rolling upgrade, so that the Admin Service and serving EVSs remain on the same node as one another until all nodes are on the same version.

D154783 contains further details, including how to recognize that this problem has occurred and how to recover.
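A minimal console sketch of the workaround above; only "evsmap autofb off" comes from these notes, and "evsmap autofb on" is an assumption for restoring the default, so confirm the exact syntax against the evsmap man page:

```shell
# On the HNAS console, before starting the rolling upgrade:
evsmap autofb off    # suspend automatic EVS balancing

# ... perform the rolling upgrade until all nodes run the same version ...

# Afterwards, restore automatic balancing (syntax assumed - verify):
evsmap autofb on
```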

Virus scanning and SMB2 lease break interaction

In some circumstances, SMB2 clients will ignore lease break requests. When a lease break initiated by a virus scan is ignored, SMB2 clients can experience delays in opening files.

A configuration option has been provided to avoid these delays by preventing virus scanning appliances from causing some lease breaks.

To enable the newly implemented behaviour, set the configuration option "allow-virus-scanner-to-break-oplocks" to "false" in the appropriate EVS security context.

Access of files on the HNAS via SMB2 from the virus scanning appliance must be limited to the virus scanner service when the HNAS has this configuration enabled.

Note: When changing configuration options, it's often important to consider the EVS security context the change will apply to. Please review the "set" and "set-for-evs" man pages when considering such a change.
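As a sketch, using the "set" and "set-for-evs" commands referenced above; the EVS label is a placeholder and the exact argument order should be checked against the man pages:

```shell
# Apply the option in a per-EVS security context (EVS name is hypothetical):
set-for-evs my-evs allow-virus-scanner-to-break-oplocks false

# Or, where the global security context applies:
set allow-virus-scanner-to-break-oplocks false
```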

Configuring external migration targets

Not specific to this release, but reiterating the need for adequate backup planning.

Caution: Care should be taken when configuring systems with a single migration destination for both replication source and target (known as a triangular arrangement). Such arrangements should not be considered a good solution in any disaster recovery (DR) or backup scenario, as there is only a single copy of the user data pointed to by XVLs at each end of the replication policy.

Deduplication support for Object Replication Targets

Deduplication is supported on Object Replication target file systems, from release 13.6.

Note: If a filesystem was created to support dedupe before 13.6.6016.05 and was later used as a replication target, there are implications when upgrading to 13.6.6016.05 or later: deduplication of the replication target will start automatically, without any additional action on the user's part.

To avoid this, disable deduplication, per filesystem, before upgrading, and leave it off after the upgrade.

NFS over UDP

If NFS over UDP is enabled, frequent warning messages are displayed on the console and in the dblog. As a workaround, disable NFS over UDP. Note that the messages will persist until the clients are remounted.

Note: Using NFS over UDP has inherent risks and is not recommended.
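On a typical Linux client, remounting an export over TCP instead of UDP looks like the following; the server name, export path, and mount point are placeholders:

```shell
# Unmount, then remount the export forcing the TCP transport:
umount /mnt/hnas
mount -t nfs -o vers=3,proto=tcp hnas-server:/export/data /mnt/hnas
```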

Group Augmentation changes

A change in 13.5.5527.02 altered the format of the output that create-group-table-from-active-directory.rb presents to any customized massage-commands-for-managed-servers script.

If a customized massage-commands-for-managed-servers script is used to check the output against a whitelist, groups will likely be incorrectly excluded, and their old definitions will continue to be used by HNAS indefinitely. In this case, it is best to transform the whitelist to suit the new output format after the upgrade.

HDRS versions

Version 4.x - VSP F/G/Nx00 platforms only.

         CentOS Stream 8 will be supported by HDRS v4.3.

         A change in 13.8 necessitates that any instances of HDRS in use must be upgraded to at least v4.x.

         Please do not upgrade the SMU software to 13.9.6628 or later on the VSP F/G/Nx00 platforms, or install a net new GEfN solution on this platform until HDRS v4.2 or later is installed.

Version 5.x - HNAS 5000 series platforms only.

         CentOS Stream 8 is not supported.

         Existing deployments must be migrated to 6.x.

         See the HDRS 6.x Release Notes for more details on migration procedure.

Version 6.x - HNAS 5000 series platforms only.

         CentOS Stream 8 is required.

         For existing deployments, see HDRS 6.x Release Notes for more details on migration procedure.

         A change in SMU version 14.4 involves a Python library update which requires HDRS 6.1.1 to complete. HDRS 6.1.1 new installs require SMU version 14.4 or greater.

         In existing HDRS installs the recommended sequence is as follows: (1) update SMU version first, then (2) update HDRS to 6.1.1. Please read the HDRS 6.1.1 Release Notes for more information.

HNAS 5000 series GEfN



HNAS 5200/5300 clustering

There was a restriction for HNAS 5200/5300 in version 13.9.6420 to limit the cluster size to 2 nodes.

Version 13.9.6628.07 introduces support for 4-node clusters on HNAS 5200/5300.

Version 14.3.7221.03 introduces support for 8-node clusters on HNAS 5300.

Note: For data availability, clustering is required in production environments.

Script output on HNAS 5200/5300

Due to a change in operating system behaviour, on Debian 10 (Buster) based systems such as HNAS 5200/5300, some scripts' output on invocation might not be displayed on the current console. The output can still be found by reviewing the syslog or using the journalctl command.

DSA host keys for SSH access

Since 13.9.6918.02, the HNAS 3000 and 4000 series and the VSP-F/G platforms no longer allow the ssh-dss host key algorithm (i.e., use of the DSA host key).

SMU support for CentOS Stream 8

Since 13.9.6918.05, a virtual SMU can be deployed on the CentOS Stream 8 operating system. Use version 3.0 of the Hyper-V or VMware template in order to create a virtual SMU based on CentOS Stream 8.

From 14.4.7322.04 onwards, a regular warning event will be generated when using the older CentOS 6 SMU, prompting for an upgrade to CentOS Stream 8.

A standard upgrade of an earlier virtual SMU to version 13.9.6918.05 or later will not upgrade the operating system version. To upgrade an existing CentOS 6 SMU to run on CentOS Stream 8, while preserving the existing network address, it is necessary to deploy a new virtual SMU, using the CentOS Stream 8 OVA (SMU-OS-3.0.iso), and migrate the settings from the existing SMU to the new one by performing a backup and restore.

More details can be found in the Virtual SMU Administration Guide MK-92HNAS074.

Note: Both CentOS 6.2 and CentOS Stream 8 are supported in this version.

Note: CentOS Stream 8 is only supported on:

VSP F/G/Nx00 platforms: HDRS v4.3 onwards

HNAS 5000 series platforms: HDRS v6.1.1 onwards

Notes on installing, upgrading, and downgrading

Notes on this release include:

         NAS platform models 4040 / 4060 have cluster support for up to two nodes.

         NAS platform models 4080 / 5200 have cluster support for up to four nodes.

         NAS platform models 4100 / 5300 have cluster support for up to eight nodes.

         The NAS Manager for the SMU uses cookies and sessions to remember user selections on various pages. Therefore, open only one web browser window, or tab, to the SMU from any given workstation.

Note: When upgrading, remember to remove any avoidances already implemented for defects that have been fixed in intermediate releases (i.e., check for the presence and contents of the startup.scr file for old defects that have since been fixed).

Performing a rolling upgrade from older versions of HNAS

If upgrading from earlier versions of HNAS, note that there are critical steps which must be followed in a precise sequence to correctly upgrade to version 14.6. Refer to the corresponding release notes of each earlier version for details on rolling upgrades. Additionally, consult with a Hitachi Vantara representative for assistance in upgrading from earlier versions of HNAS.

Note: For Rolling Upgrades, upgrade to the latest version of the major code release before upgrading to any version in the following major code release.

As an example, a Rolling Upgrade should only be performed from the latest 13.x code release (v13.9) when moving to any version in the 14.x major code release.

Note: NVRAM mirroring will be suspended during the time that the cluster is on different models of servers.

Please refer to FE-92HNAS050 when planning a hardware rolling upgrade from HNAS 4xx0 to HNAS 5200 / 5300.

Note: When upgrading from a 4100 to a 5000 series, proper planning must be in place and followed, as there has been a change in capacity licensing.

Note: When using Hitachi Operations Center, the HNAS 5000 series cannot be on-boarded into Analyzer. This is not an HNAS product issue; HOC Analyzer will fully support the HNAS 5000 series in a future release. In the interim, please contact product support for any potential workaround until the HNAS 5000 series is fully supported in HOC Analyzer.


File-based replication between different HNAS software levels

The ability to replicate between systems is determined by the version of the software that is running on those systems. The model number of the server is not a factor for interoperability for replication purposes. If both the source and target servers are running the same major software version (for example, 13.x), replication as "managed servers" is fully supported, but not recommended, as this has repercussions when implementing HRO reporting. If the source and target servers are running different major software versions (for example, 13.x to 14.x), one of the servers is configured as an "unmanaged" server. Replication continues to be fully supported within the constraints of replication between managed and unmanaged servers.

Object-based replication between different HNAS software levels

Object replication was first introduced in HNAS software v8.0 and has been improved with each release. For example, version 10.1 was enhanced so that objects maintained their sparseness during incremental replication. Version 11.1 can preserve file clone states during replication. To ensure interoperability, feature flags are negotiated when object replication occurs between servers running at different version levels.

Object replication between servers is supported up to one major version away. For example, object replication between version 13.x and 14.x is supported.

Note: Object replication between servers with more than one major release difference may work (for example, between version 12.x and 14.x), but this is not supported.

Note: When set to transfer XVLs as links, both source and target systems involved in the replication relationship must be running HNAS release v13.4 or later.

Important considerations to read before installation

Please read the following sections before installing and using 14.6.

Special consideration should be taken when upgrading to the stated versions (or later) from an earlier version or planning a downgrade from the stated versions (or later) to an earlier version.

Changes in 13.0

         Support for WFS-1 is now completely removed. Before upgrading, the customer MUST migrate any WFS-1 filesystems to new WFS-2 filesystems, as WFS-1 filesystems cannot be mounted.

         NAS Storage Pools (spans) are now limited to 32 filesystems.

         12.7.4221.07 is the lowest version of code to which a system can safely downgrade.

Changes in 13.2

         Support was added for increasing the number of filesystems in a cluster; this must be considered when planning a downgrade to an earlier version if more than the previous default of 128 filesystems exist.

         Support for REST API v4 added while still supporting v3.

         13.2.4527.04 introduced a new command, krb5-nfs-principal-format. If the setting is changed to (the non-default value of) "only-primary" for any EVS, this must be considered when planning a downgrade to an earlier version.

Changes in 13.5

         Support for REST API v7 was added, while still supporting v4, and deprecating v3.

Changes in 14.5 / 14.6

         See notes on "File system packing" under "New features".

The number of filesystems per span limit

By default, the number of filesystems created in any span is limited to 32.

If an existing span has more than 32 filesystems, the span and filesystems are fully supported after upgrading to 13.0 or later. However, it is not possible to create any additional filesystems on the span until enough filesystems have been deleted to bring the total number below 32.

It is possible to exceed this default using the filesystem-create CLI command with the --exceed-safe-count option. This option must not be used when creating the first 32 filesystems; it must only be used when creating filesystems beyond the 32nd.

Note: This option is only available on the CLI. The NAS Manager does not permit the creation of more than 32 filesystems.
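A hypothetical sketch of the CLI invocation described above; only the --exceed-safe-count option comes from these notes, the remaining arguments are placeholders, and the full syntax should be checked with "man filesystem-create" on the console:

```shell
# Create a filesystem beyond the 32nd on a span (arguments are placeholders):
filesystem-create --exceed-safe-count <span-label> <filesystem-name> <size>
```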

For further information, see the File Services Administration Guide.

NFSv3 access during upgrade to 13.2 or later

When a cluster namespace (CNS) is used on an NFSv3 filesystem, a rolling upgrade to version 13.2 can cause longer transient delays for NFSv3 accesses than usual. Customers using ordinary filesystem exports or other protocols (including NFSv4) do not experience these additional delays.

Note: This issue only affects the upgrade from a pre-13.2 release to a 13.2-or-later release. Future upgrades will not experience any additional transient delays from this issue.

The technical issue

Typically, during a rolling upgrade, access to filesystems through NFSv3 and CNS is available while EVSs are migrated between cluster nodes so each node can be upgraded. Clients can connect to an EVS on a node running older software and access filesystems belonging to an EVS on a node running newer software (or the other way around) because the NAS server uses a stable message format when forwarding the requests.

Software version 13.2 supports an increased number of filesystems and, to provide this feature, modifies the message formats used to support CNS in a way that is incompatible with earlier releases.

During this rolling upgrade, clients cannot access filesystems that are hosted on a node running a different software version to the currently connected node. As soon as the EVSs are migrated onto nodes running the same software version, the clients regain access to those filesystems.


For 2-node clusters (including NAS Modules), follow the usual upgrade procedure. After the first node has been upgraded, and while EVSs are being migrated between the nodes, there is a longer interruption to client access than usual. The interruption ends as soon as all EVSs are migrated to the upgraded node. When the second node has been upgraded, the only disruption is from normal EVS migrations.

For clusters with three or more nodes, there could be a longer period when EVSs are hosted on nodes running different software versions. For these cases, use manual migrations to move all EVSs to nodes running the same software version. This minimizes the period during which the clients cannot access all filesystems.

For details of the manual migration process, or for upgrade procedures, please contact Customer Support.

SMU, server, and cluster compatibility

These release notes highlight SMU release version 14.6.7520.04.

The SMU version should always be equal to, or newer than, the version of the server / cluster being managed. In the rare situation where such an SMU build has not been released, use the closest available version.

Since SMU 13.9.6918.05, the following hypervisor images are supported for a virtual SMU:

         Hyper-V: Virtual SMU OS 2.2 or 3.0

         VMware: Virtual SMU OS OVA 2.1, 2.2 or 3.0

         Use the 3.0 version to deploy a virtual SMU on CentOS Stream 8 instead of on CentOS 6.

Note: In addition to the VMware player, the virtual SMU (vSMU) is also compatible with the free version of ESXi.

Note: VMware vSphere 6.5 and 6.7 have reached end of support (EOS) at VMware, so vSphere 7.0 is the recommended version to use.

From SMU 12.7, a virtual SMU can support up to ten (10) servers/clusters. To manage more than two (2) entities from a virtual SMU, the VM's resources must be increased: one (1) GB of memory and one (1) virtual CPU are required per entity. An entity is defined as a single node or a cluster of nodes.


New license keys are typically firmware-version specific. Upon upgrading the firmware to this release, all previous licenses on the system will remain in force.

Licensing as it pertains to node replacements

Clustered Node Replacement: Once the NAS cluster has been built, the Cluster MAC-ID will not change regardless of which node in the cluster needs to be replaced, so there will not be any reason to request new license keys when replacing a node in a cluster.

Single Node Replacement: When a single node must be replaced, the original license keys will not be valid on the new node. Contact TBkeys to transfer the license keys to the replacement node and issue a new permanent license. Provide TBkeys with the original node's MAC-ID and the replacement node's MAC-ID.

To request upgrade keys

When ordering license keys for new, licensed features, note that:

         The customer purchases new features at the sale price, per standard Hitachi Vantara channel policies and procedures.

         Non-sale feature requests will be routed based on server branding until the relicensing process has been fully integrated.

         Hitachi Vantara Server Request Routing

o   The emailed request shall include the following information:

-        Customer Name

-        MAC-ID of the HNAS unit (the MAC-ID format is XX-XX-XX-XX-XX-XX). The serial number is neither needed nor acceptable for issuing new keys.

-        If the standard upgrade procedures have not been followed, indicate details of the current situation and whether a new full set of keys is required. Also, if the server is part of a cluster, please indicate whether the MAC-ID is the "Primary" server of the cluster and how many units are in the cluster.

o   All permanent upgrade key requests will be handled by way of email. Turnaround time on all requests is targeted within 24 hours. Standard working hours for this distribution list (dlist) are 8am to 5pm Pacific Standard Time. See below for emergency situations.

o   Should emergency upgrade keys be required, email the request and contact the Hitachi Vantara Call Centers to escalate it.

o   An email should also be sent to receive updated permanent keys.

Fixes and enhancements in version 14.6

Note: When upgrading, remember to remove any avoidances already implemented for defects fixed in intermediate releases (i.e., check for the presence and contents of the startup.scr file for old defects that have since been corrected).

Version 14.6.7520.02






Compound NFSv4 write operations that are theoretically possible are now accepted.



Fixed a stability issue that could occur when modifying a filesystem-audit policy to external.



Fixed a bug that could prevent one or more Fibre Channel ports from connecting to directly attached storage on HNAS 3000/4000.



Legacy Metro and Puma REST API servers have now been removed from the HNAS update packages.



Fixed a memory issue that could occur when LDAP tracing is enabled.



Addressed a rare failure that could occur under heavy Fibre Channel link trauma.



Fixed an issue where HNAS 5000 series servers did not always perform a power-cycle when required.



Update for CVE-2022-38023 "Netlogon Elevation of Privilege Vulnerability". Added support for the sealing of secure RPC for NetLogon connections. Server/DC Netlogon connections are now secured using RPC sealing.

Preparation for Domain Controller Windows update is discussed in



Fixed an unlikely instability that could be caused by a false-positive indication of a deadlock.



Addressed an issue with diagnostic files and integration with Hitachi Remote Ops (HRO).



The memory assigned to the SMU webapp has been increased to cope with more classes being loaded in modern versions, especially important when used with HDRS.



The following OS vulnerabilities in the Debian Buster HNAS 5000 series have been patched:

libksba8 (DLA-3153-1) - Fixes an integer overflow flaw which could result in denial of service or the execution of arbitrary code (CVE-2022-3515)

libc-bin, libc-l10n, libc6, libc6-dbg, locales (DLA-3152-1) - Fixes for multiple vulnerabilities (CVE-2016-10228, CVE-2019-19126, CVE-2019-25013, CVE-2020-1752, CVE-2020-6096, CVE-2020-10029, CVE-2020-27618, CVE-2021-3326, CVE-2021-3999, CVE-2021-27645, CVE-2021-33574, CVE-2021-35942, CVE-2022-23218, CVE-2022-23219).

tzdata (DLA-3161-1) - Update includes the changes in tzdata 2022e

libexpat1 (DLA-3165-1) - Resolves a use-after free vulnerability (CVE-2022-43680)

libncurses6, libncursesw6, libtinfo6, ncurses-base, ncurses-bin, ncurses-term (DLA-3167-1) - Resolves an out-of-bounds read and segmentation violation in the terminfo library

distro-info-data (DLA-3171-1) - An update to the distro-info-data database

libxml2 - (DLA-3172-1) - Resolves integer overflow bugs (CVE-2022-40303, CVE-2022-40304)

libpython3.7, libpython3.7-minimal, libpython3.7-stdlib, python3.7, python3.7-minimal (DLA-3175-1) - Resolves a buffer overflow in the SHA-3 hashing function module used by hashlib in Python 3.7, that could potentially result in remote code execution (CVE-2022-37454)

vim, vim-common, vim-runtime, vim-tiny, xxd (DLA-3182-1) - Resolves multiple security vulnerabilities that included buffer overflows, out-of-bounds reads and use-after-free which could lead to a denial-of-service (CVE-2021-3927, CVE-2021-3928, CVE-2021-3974, CVE-2021-3984, CVE-2021-4019, CVE-2021-4069, CVE-2021-4192, CVE-2021-4193, CVE-2022-0213, CVE-2022-0261, CVE-2022-0319, CVE-2022-0351, CVE-2022-0359, CVE-2022-0361, CVE-2022-0368, CVE-2022-0408, CVE-2022-0413, CVE-2022-0417, CVE-2022-0443, CVE-2022-0554, CVE-2022-0572, CVE-2022-0685, CVE-2022-0714, CVE-2022-0729, CVE-2022-0943, CVE-2022-1154, CVE-2022-1616, CVE-2022-1720, CVE-2022-1851, CVE-2022-1898, CVE-2022-1968, CVE-2022-2285, CVE-2022-2304, CVE-2022-2598, CVE-2022-2946, CVE-2022-3099, CVE-2022-3134, CVE-2022-3234, CVE-2022-3324, CVE-2022-3705)

sysstat (DLA-3188-1) - Resolves multiple vulnerabilities in the sysstat package (CVE-2019-16167, CVE-2019-19725, CVE-2022-39377)



The SMU could run out of memory if it had to store statistics for many (hundreds of) short-lived file systems created and deleted within a few months. This issue has been fixed.



The following OS vulnerabilities in the CentOS-Stream 8 SMU have been patched:

rsync (CESA-2022:7793)- zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field (CVE-2022-37434)

freetype (CESA-2022:7745)- FreeType: Buffer overflow in sfnt_init_face (CVE-2022-27404)- FreeType: Segmentation violation via FNT_Size_Request (CVE-2022-27405)- Freetype: Segmentation violation via FT_Request_Size (CVE-2022-27406)

fribidi (CESA-2022:7514)- fribidi: Stack based buffer overflow (CVE-2022-25308)- fribidi: Heap-buffer-overflow in fribidi_cap_rtl_to_unicode (CVE-2022-25309)- fribidi: SEGV in fribidi_remove_bidi_marks (CVE-2022-25310)

e2fsprogs (CESA-2022:7720)- e2fsprogs: out-of-bounds read/write via crafted filesystem (CVE-2022-1304)

libxml2 (CESA-2022:7715)- libxml2: Incorrect server side include parsing can lead to XSS (CVE-2016-3709)

libldb (CESA-2022:7730)- samba: AD users can induce a use-after-free in the server process with an LDAP add or modify request (CVE-2022-32746)

grub2 (CESA-2022:2110)- grub2: Incorrect permission in grub.cfg allow unprivileged user to read the file content (CVE-2021-3981)



Made the NIS cache properly configurable via a cache TTL in the NIS configuration.



Debian DLA-3257-1 : emacs - LTS security update

Reference Information: CVE-2022-45939

Debian DLA-3190-1 : grub2 - LTS security update

Reference Information: CVE-2022-2601, CVE-2022-3775

Debian DLA-3213-1 : krb5 - LTS security update

Reference Information: CVE-2022-42898

Debian DLA-3248-1 : libksba - LTS security update

Reference Information: CVE-2022-47629

Debian DLA-3263-1 : libtasn1-6 - LTS security update

Reference Information: CVE-2021-46848

Debian DLA-3270-1 : net-snmp - LTS security update

Reference Information: CVE-2022-44792, CVE-2022-44793

Debian DLA-3272-1 : sudo - LTS security update

Reference Information: CVE-2023-22809

Debian DLA-3278-1 : tiff - LTS security update

Reference Information: CVE-2022-1354, CVE-2022-1355, CVE-2022-2056, CVE-2022-2057, CVE-2022-2058, CVE-2022-2867, CVE-2022-2868, CVE-2022-2869, CVE-2022-3570, CVE-2022-3597, CVE-2022-3598, CVE-2022-3599, CVE-2022-3626, CVE-2022-3627, CVE-2022-3970, CVE-2022-34526

Debian DLA-3204-1 : vim - LTS security update

Reference Information: CVE-2022-0318, CVE-2022-0392, CVE-2022-0629, CVE-2022-0696, CVE-2022-1619, CVE-2022-1621, CVE-2022-1785, CVE-2022-1897, CVE-2022-1942, CVE-2022-2000, CVE-2022-2129, CVE-2022-3235, CVE-2022-3256, CVE-2022-3352



Global MTU Settings are applied consistently to all cluster nodes when set from the SMU.



Per-interface MTU Settings are applied consistently to all cluster nodes.



Added the nis-cache-stats command to provide NIS cache statistics, for both the existing NIS/YP cache and the new NIS/LDAP cache.



In LDAP mode, HNAS no longer produces YP NIS traffic.



Changes in NIS LDAP configuration setup are now propagated faster to all the EVSs in global security mode.



SSL connections that are waiting to send or receive data now clean up faster when they are requested to disconnect.



Added new console command nis-cache-ttl, which displays or sets the NIS cache TTL.



Debian DLA-3288-1 : curl - LTS security update

Reference Information: 2022-A-0224, 2023-A-0008, 2022-A-0451-S, 2022-A-0350, CVE-2022-27774, CVE-2022-27782, CVE-2022-32221, CVE-2022-35252, CVE-2022-43552

Debian DLA-3297-1 : tiff - LTS security update

Reference Information: CVE-2022-48281

Debian DLA-3294-1 : libarchive - LTS security update

Reference Information: CVE-2022-36227

Debian DLA-3313-1 : wireshark - LTS security update

Reference Information: 2023-B-0004, CVE-2022-4345, CVE-2023-0411, CVE-2023-0412, CVE-2023-0413, CVE-2023-0415, CVE-2023-0417

Debian DLA-3321-1 : gnutls28 - LTS security update

Reference Information: CVE-2023-0361

Debian DLA-3323-1 : c-ares - LTS security update

Reference Information: CVE-2022-4904

Debian DLA-3325-1 : openssl - LTS security update

Reference Information: CVE-2022-2097, CVE-2022-4304, CVE-2022-4450, CVE-2023-0215, CVE-2023-0286



With TLS enabled, any response from an LDAP server larger than 16 KB used to cause a timeout of approximately 26 seconds. This issue has been fixed.



Repeated automatic SMU restarts no longer eventually cause the VSP-G/F standby SMU to become unavailable and unable to synchronize.



The SMU's OS time zone data has been updated to release 2022g.



Enhanced the nis-is-host-in-netgroup command to work from cluster nodes.



Minor improvements to the decoding of SFP details



Implemented caching for LDAP netgroup queries



Improved the output of the directory-services-config-dump console command to show more information in an error case.



The ldap-timeout CLI command now applies the setting globally.



Added new Bali command nis-netgroups-lookup-depth, deprecating the --depth and --verbose options of nis-netgroups-nobyhost-config. Improved the output of nis-netgroups-nobyhost-config for clarity.



A timestamp is now included with all PSU log entries.



Added options --verbose, --key, and --map to control the nis-cache-list command's output.



When a non-forwarded command run by ssrun fails and its syntax is shown, ssrun's own syntax is no longer shown as well.

New, modified, and deleted CLI commands

See the NAS man pages for details on the new commands.

New commands

nis-cache-stats :

A supervisor level command to provide NIS cache statistics - for both the existing NIS/YP cache and the new NIS/LDAP cache.

nis-cache-ttl :

A supervisor level command to display or set the NIS cache TTL. It replaces the deprecated "setnis -T" command.
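The TTL governs how long a cached NIS lookup remains valid before the server must query the directory again. As a minimal sketch of the general mechanism only (not the server's actual implementation; the class and keys below are invented for illustration), a TTL cache behaves like this:

```python
import time

class TTLCache:
    """Illustrative TTL cache: entries expire ttl seconds after insertion."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, insertion time)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stamp = entry
        if time.monotonic() - stamp > self.ttl:
            del self._store[key]  # stale entry: force a fresh lookup
            return None
        return value

cache = TTLCache(ttl_seconds=0.1)
cache.put("host1", "10.0.0.1")
assert cache.get("host1") == "10.0.0.1"  # fresh entry is served from cache
time.sleep(0.2)
assert cache.get("host1") is None        # expired entry triggers a re-lookup
```

A short TTL keeps cached answers close to the directory's current state at the cost of more lookups; a long TTL reduces lookup traffic but can serve stale results for up to the TTL.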

nis-netgroups-lookup-depth :

A supervisor level command to display or set the maximum netgroup lookup depth, for NIS/YP with nobyhost and for NIS/LDAP. It replaces the deprecated "nis-netgroups-nobyhost-config --depth" command option.
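Netgroups can contain other netgroups, so a resolver must bound how deep it follows nested groups to guard against loops and runaway chains; that depth is the quantity this setting caps. A hypothetical sketch of depth-limited netgroup expansion (the netgroup map below is invented for illustration):

```python
def expand_netgroup(name, netgroups, max_depth):
    """Return the set of host members of a netgroup, following nested
    netgroups no deeper than max_depth levels (illustrative only)."""
    hosts = set()

    def walk(group, depth):
        if depth > max_depth:
            return  # depth limit reached: stop following nested groups
        for member in netgroups.get(group, []):
            if member in netgroups:   # nested netgroup: recurse one level deeper
                walk(member, depth + 1)
            else:                     # leaf host entry
                hosts.add(member)

    walk(name, 1)
    return hosts

# Hypothetical netgroup map: "all" nests "web", which nests "db".
netgroups = {
    "all": ["web", "gateway"],
    "web": ["www1", "www2", "db"],
    "db":  ["db1"],
}
assert expand_netgroup("all", netgroups, max_depth=3) == {"gateway", "www1", "www2", "db1"}
assert expand_netgroup("all", netgroups, max_depth=1) == {"gateway"}  # nested groups skipped
```

With a depth limit of 1, only direct host members are returned; raising the limit lets the lookup reach members of nested netgroups.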

Modified commands

apropos :

The command now also displays the filter commands in the 'Matching man pages' section.

ldap-timeout :

The command now applies the timeout value globally on a cluster.

man :

The command now displays the filter commands in the SEE ALSO section. The all and --list-page options also display the filter commands.

nis-cache-list :

Added options --verbose, --key, and --map to control the command's output.

nis-is-host-in-netgroup :

With this change, if LDAP is configured on EVS 1 on Node 1 and the command is executed from Node 2, the command returns the expected results. The 'APPLIES TO' section of the man page has been changed from 'Cluster node' to 'EVS' to reflect this changed behavior.

nis-netgroups-for-host :

This command works only with YP NIS and runs on every EVS and every pnode as is. ConsoleManagement::NoNeeds was replaced with ConsoleManagement::VnodeLocal to make the command consistent with nis-is-host-in-netgroup, which shares the same man page. The 'APPLIES TO' section of the man page has been changed from 'Cluster node' to 'EVS'.

nis-netgroups-nobyhost-config :

Deprecated the "--depth" and "--verbose" options, and improved output for clarity.

Deleted commands


Copyrights and licenses

© 2023 Hitachi Vantara LLC. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or Hitachi Vantara LLC (collectively "Hitachi"). Licensee may make copies of the Materials provided that any such copy is: (i) created as an essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other copies of the Materials. "Materials" mean text, data, photographs, graphics, audio, video and documents.

Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials contain the most current information available at the time of publication.

Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Vantara LLC at

Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of Hitachi products is governed by the terms of your agreements with Hitachi Vantara LLC.

By using this software, you agree that you are responsible for:

1)    Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals; and

2)    Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.

Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or import the Document and any Compliant Products.

Hitachi and Lumada are trademarks or registered trademarks of Hitachi, Ltd., in the United States and other countries.

AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, GDPS, HyperSwap, IBM, Lotus, MVS, OS/390, PowerHA, PowerPC, RS/6000, S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z14, z/VM, and z/VSE are registered trademarks or trademarks of International Business Machines Corporation.

Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, Microsoft Edge, the Microsoft corporate logo, the Microsoft Edge logo, MS-DOS, Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo, Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.

All other trademarks, service marks, and company names in this document or website are properties of their respective owners.

Copyright and license information for third-party and open source software used in Hitachi Vantara products can be found in the product documentation, at or

