
Hitachi Dynamic Link Manager (for Solaris) 8.7.6-00 Release Notes

About this document

This document (RN-00HS273-57, December 2020) provides late-breaking information about Hitachi Dynamic Link Manager (for Solaris) 8.7.6-00. It includes information that was not available at the time the technical documentation for this product was published, as well as a list of known problems and solutions.

Intended audience

This document is intended for customers and Hitachi Vantara partners who license and use Hitachi Dynamic Link Manager (for Solaris).

Accessing product downloads

Product software, drivers, and firmware downloads are available on Hitachi Vantara Support Connect: https://support.hitachivantara.com/.

Log in and select Product Downloads to access the most current downloads, including important updates that may have been made after the release of the product.

About this release

This release is a major release that adds new features.

Product package contents

Medium: CD-ROM

Software                                     Revision   Release Type
Hitachi Dynamic Link Manager (for Solaris)   8.7.6-00   Full Package

New features and important enhancements

8.7.6-00 Additional Functions and Modifications

- The deletion of paths in the Offline(C) status by the delete -path command is now supported (see the sketch after this list).

- Configuration changes in a SAN boot environment are now supported.

- Hitachi Virtual Storage Platform E590 and E790 are now supported.
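For reference, the new delete -path operation might be used as in the following sketch. The -pathid parameter and the path ID 000001 are assumptions based on how other dlnkmgr operations select paths; check the User Guide for the exact delete -path syntax.

Identify the paths that are in the Offline(C) status:
# /opt/DynamicLinkManager/bin/dlnkmgr view -path
Delete one of the Offline(C) paths (hypothetical parameters):
# /opt/DynamicLinkManager/bin/dlnkmgr delete -path -pathid 000001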

System requirements

Refer to Chapter 3. Creating an HDLM Environment of the Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris).

Host

For details on supported hosts, refer to the following manual:

- Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) Chapter 3. Creating an HDLM Environment - HDLM System Requirements - Hosts and OSs Supported by HDLM

Host Bus Adapter (HBA)

For information on supported HBAs and drivers, refer to Appendix A - HBA Driver Support Matrix.

Storage

For details on supported storage systems, refer to the following manual:

- Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) Chapter 3. Creating an HDLM Environment - HDLM System Requirements - Storage Systems Supported by HDLM

When the Dynamic I/O Path Control function is enabled on the Hitachi AMS 2000 series, use microprogram version 08B8/D or later.
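For reference, the function is toggled with the dlnkmgr set operation. The one-line sketch below assumes a -dpc parameter corresponding to the "Dynamic I/O Path Control" item shown by dlnkmgr view -sys; verify the exact syntax in the User Guide before use.

# /opt/DynamicLinkManager/bin/dlnkmgr set -dpc on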

Requirements for using a HAM environment are as follows:

- HDLM supports the HAM functionality of the following storage systems:

- Hitachi Universal Storage Platform V/VM

- Hitachi Virtual Storage Platform

- HPE XP24000/XP20000

- HPE P9500

- Hitachi Unified Storage VM

The required microprogram versions are listed below:

Storage system                    Interface   Microprogram version           Remark
Universal Storage Platform V/VM   FC I/F      60-06-10-XX/XX or later        X: arbitrary number
Virtual Storage Platform          FC I/F      70-01-42-XX/XX or later (*1)   X: arbitrary number
XP24000/XP20000                   FC I/F      60-06-10-XX/XX or later        X: arbitrary number
P9500                             FC I/F      70-01-42-XX/XX or later (*1)   X: arbitrary number
Hitachi Unified Storage VM        FC I/F      73-03-0X-XX/XX or later        X: arbitrary number

*1: If you use the HAM functionality with USP V or XP24000, apply 70-03-00-XX/XX or later.

- When using HAM in a Solaris environment, set Host Mode Option 48. For details, see "Preventing Unnecessary Failover" in the High Availability Manager User's Guide.

Operating system requirements

For details on supported operating systems, refer to the following manual:

- Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) Chapter 3. Creating an HDLM Environment - HDLM System Requirements - Hosts and OSs Supported by HDLM

- When using HAM in a Solaris environment, HDLM supports only Solaris 10.

The JDK versions listed below are supported.

To link with Global Link Manager, make sure that one of the following JDK packages for Solaris is already installed on the host.

- JDK 8.0 (64-bit edition)
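As a quick check before linking with Global Link Manager, the installed JDK packages can be listed as follows; the grep pattern is an example, since actual package names vary by JDK distribution.

On Solaris 10 or earlier:
# pkginfo | grep -i jdk
On Solaris 11:
# pkg list | grep -i jdk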

Prerequisite programs

None.

Related Programs

For details on related programs, refer to the following manual:

- Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) Chapter 3. Creating an HDLM Environment - HDLM System Requirements - Cluster Software Supported by HDLM, Volume Manager Supported by HDLM, and Combinations of Cluster Software and Volume Managers Supported by HDLM

The following tables list the number of LUs and number of paths supported by HDLM, and the supported configuration.

This table lists the supported number of LUs and number of paths in a configuration where cluster software and virtualization software are not used:

OS           Number of LUs   Total number of paths   Supported configuration
Solaris 10   4096            8192                    Boot disk environment
Solaris 11   4096            8192                    Boot disk environment

This table lists the supported number of LUs and paths in a configuration where cluster software or virtualization software is used:

OS           Number of LUs   Total number of paths   Supported configuration
Solaris 10   4096            8192                    - Configurations using VCS cluster software
                                                     - Configurations using Oracle VM Server for SPARC (#1)
Solaris 10   256             4096                    - Configurations using cluster software other than VCS
                                                     - Configurations using virtualization software other than Oracle VM Server for SPARC
Solaris 11   256             4096                    - Configurations using cluster software
                                                     - Configurations using virtualization software

#1: The system limits the number of LUs that can be exported from control domains to guest domains.

Memory and disk space requirements

For details on memory and disk capacity requirements, refer to the following manual:

- Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) Chapter 3. Creating an HDLM Environment - HDLM System Requirements - Memory and disk capacity requirements

HDLM Supported Configurations

For details on the configurations that HDLM can manage, refer to the following manual:

- Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) Chapter 3. Creating an HDLM Environment - HDLM System Requirements - The Number of Paths Supported in HDLM

Resolved problems

None.

Known problems

- During a license update, if there is an error in the already installed license information, the messages below (which indicate a problem with the license key file) might be displayed even when you are using a correct license key file. If these messages are displayed and there is no problem with the license key file being used, execute the utility for collecting HDLM error information (DLMgetras) to acquire error information, and contact your HDLM vendor, or the maintenance company if there is a maintenance contract for HDLM.

KAPL09113-E There is no installable license key in the license key file. File name = /var/tmp/hdlm_license

KAPL01082-E There is no installable license key in the license key file. File name = /var/tmp/hdlm_license

- About operation when all paths are disconnected during intermittent error monitoring:

When I/O continues to an LU whose paths are all in the Offline(E), Online(E), or Offline(C) status (because, for example, all paths have been disconnected), the error count used by intermittent error monitoring (the IEP value shown when "dlnkmgr view -path -iem" is executed) might increase even though the automatic failback function has not recovered the paths. In this case, HDLM may assume an intermittent error and exclude paths from automatic failback even though no intermittent error occurred. After recovering from the failure, manually change the status of any path excluded from automatic failback to Online.
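For reference, recovery might look like the following sketch. The path ID 000001 is an example value, and the -pathid parameter is an assumption based on standard dlnkmgr usage.

Check the intermittent-error status (IEP column) of each path:
# /opt/DynamicLinkManager/bin/dlnkmgr view -path -iem
After recovering from the failure, bring an excluded path back online:
# /opt/DynamicLinkManager/bin/dlnkmgr online -pathid 000001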

- When installing HDLM on a Solaris server, the installation terminates and the following messages are output if a user named "install" is defined in the /etc/passwd file. Before installing HDLM, make sure that no user named "install" is defined in the /etc/passwd file.

  - When Solaris 8 is used and EZ Fibre 2.2.2 is installed:

showrev: get_env_var(IS8e8546a, SUNW_PATCHID)

:

KAPL09133-E The following patch(es) required for HDLM has not been applied:

  - When Solaris 8 is used and EZ Fibre 2.2.2 is not installed, or Solaris 9 or Solaris 10 is used:

mkdir: Failed to make directory "/var/opt/DynamicLinkManager"; Permission denied

mkdir: Failed to make directory "/var/opt/DynamicLinkManager/log"; No such file or directory

KAPL09091-E A fatal error occurred in HDLM. The system environment is invalid.

- The following notes apply to the SVM shared diskset function in a Solaris 10 environment that uses an HBA driver other than Oracle's (a driver other than qlc or emlxs):

  - When Solaris Cluster is used:

To use an HDLM management-target device in an SVM shared diskset, use the Solaris Cluster device ID (the logical device file under /dev/did/dsk). An HDLM logical device file name cannot be used in an SVM shared diskset.

  - When Solaris Cluster is not used:

An HDLM management-target device cannot be used in an SVM shared diskset.
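For reference, registering a device in a shared diskset by its DID name might look like the following sketch. The diskset name ds1, host name node1, and DID device d5 are all hypothetical values.

Create the shared diskset and add this host:
# metaset -s ds1 -a -h node1
Add the device by its Solaris Cluster DID device file, not by its HDLM device name:
# metaset -s ds1 -a /dev/did/rdsk/d5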

- If the I/O Fencing function is used and any of the following operations is performed, messages like the following may be output to the console and syslog. Ignore these messages:

  - Bringing a VCS disk group resource online, or importing a VxVM disk group.

  - Executing the vxfentsthdw command without the -r option.

  - Issuing I/O after removing a registration key or a reservation key from a disk with the vxfenadm command.

scsi: [ID 107833 kern.warning] WARNING: /pci@1f,2000/SUNW,emlxs@1/fp@0,0/ssd@w50060e8005271760,6 (ssd40):

Error for Command: read(10)                Error Level: Retryable

scsi: [ID 107833 kern.notice]     Requested Block: 304                       Error Block: 304

scsi: [ID 107833 kern.notice]     Vendor: HITACHI                            Serial Number: 50 02717006B

scsi: [ID 107833 kern.notice]     Sense Key: Unit Attention

scsi: [ID 107833 kern.notice]     ASC: 0x2a (registrations preempted), ASCQ: 0x5, FRU: 0x0

- Notes on executing the DLMgetras utility:

If you specify a directory under an NFS mount point as the output destination and then execute the DLMgetras utility, an empty directory named "DLMgetras_tmpdir.xxxx/the_specified_directory_name" may be created in the output destination directory ("xxxx" is an arbitrary numeric value). If this empty directory still exists after the DLMgetras utility finishes, delete it.
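For reference, a run and cleanup might look like the following sketch. It assumes the utility resides under /opt/DynamicLinkManager/bin and takes the output directory as its argument; /mnt/nfs/hdlm_info is a hypothetical NFS-mounted path.

Collect error information into the NFS directory:
# /opt/DynamicLinkManager/bin/DLMgetras /mnt/nfs/hdlm_info
If an empty temporary directory remains afterward, delete it:
# rm -r /mnt/nfs/hdlm_info/DLMgetras_tmpdir.*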

- The dynamic LU deletion function cannot be used in a configuration that uses Solaris Cluster.

- Notes on environments in which SCSI-2 Reserve is issued:

In an environment in which SCSI-2 Reserve is issued, if a path status change switches the owner and non-owner paths, I/O may be issued to a non-owner path even though the owner path's status is Online. Offline processing can cause I/O intended for an owner path to be issued to a non-owner path.

- When a volume of VSP G1500, VSP F1500, or VSP F400, F600, or F800 is virtualized as a volume of a different storage system model, an incorrect model ID might be displayed for the physical volume that corresponds to the virtual volume.

- For VSP G1500, "VSP_G1000" might be displayed instead of "VSP_G1500".

- For VSP F1500, "VSP_G1000" might be displayed instead of "VSP_F1500".

- For VSP F400, F600, or F800, "VSP_Gx00" might be displayed instead of "VSP_Fx00".

The model ID of a physical volume is displayed by the following operations:

Function                 Operation                                                                 Displayed item
HDLM command (dlnkmgr)   dlnkmgr view -path -item phys -vstv                                       Physical-LDEV
                         dlnkmgr view -lu -item phys -vstv                                         Physical-LDEV
                         dlnkmgr view -path -stname -pstv                                          DskName
                         dlnkmgr view -lu -pstv                                                    Product
HGLM                     Displaying the Paths tabbed page in the host-name subwindow               Storage system name of Physical Information
                         Displaying the Multipath LUs tabbed page in the host-name subwindow      Storage system name of Physical Information
                         Displaying the Storage systems subwindow (physical storage information)   Name

Installation precautions

For details on HDLM installation, refer to the following:

- "Installing HDLM" in "Chapter 3. Creating an HDLM Environment" in the manual Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris)

Usage precautions

For details on usage precautions when using HDLM, refer to the following:

- "Notes on Creating an HDLM Environment" in "Chapter 3. Creating an HDLM Environment" in the manual Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris)

- "Notes on Using the Hitachi Network Objectplaza Trace Library" in "Setting up Integrated Traces" in "Chapter 3. Creating an HDLM Environment" in the manual Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris)

- "Notes on Using HDLM" in "Chapter 4. HDLM Operation" in the manual Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris)

- "Notes on Using Commands" in "HDLM Operations Using Commands" in "Chapter 4. HDLM Operation" in the manual Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris)

- "Precautions Regarding Changes to the Configuration of an HDLM Operating Environment" in "Changing the Configuration of the HDLM Operating Environment" in "Chapter 4. HDLM Operation" in the manual Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris)

Additional Usage Precautions

If you use Oracle RAC 12c, specify the following settings so that the sum of the HBA timeout values is 70 or less. After setting the parameters, restart the host.

- Add the following line to the /kernel/drv/fp.conf file:

    fp_offline_ticker=<timeout-value-of-the-fp-driver>;

    Example of the setting:

        fp_offline_ticker=50;

- Add the following line to the /kernel/drv/fcp.conf file:

    fcp_offline_delay=<timeout-value-of-the-fcp-driver>;

    Example of the setting:

        fcp_offline_delay=20;
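To double-check the settings before restarting the host, the configured values and their sum can be verified as follows; the output lines reflect the example values above (50 + 20 = 70).

# grep fp_offline_ticker /kernel/drv/fp.conf
fp_offline_ticker=50;
# grep fcp_offline_delay /kernel/drv/fcp.conf
fcp_offline_delay=20;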

Verified Boot, introduced in Solaris 11.2, is not supported. If you enable Verified Boot, the system will output a warning message when the HDLM driver is loaded, or the HDLM driver will fail to load. Do not enable Verified Boot.

Boot pools, introduced in Solaris 11.3, are not supported in boot disk environments where HDLM devices are used.

Version numbers are displayed as follows after this version of HDLM is installed.


Function                                       Item                Version number
HDLM command (dlnkmgr)                         HDLM Version        8.7.6-00
                                               HDLM Manager        8.7.6-00
                                               HDLM Alert Driver   8.7.6-00
                                               HDLM Driver         8.7.6-00
"pkginfo -l" command (Solaris 10 or earlier)   HDLM Version        08.7.6.0000
"pkg info" command (Solaris 11)                HDLM Version        8.7.6.0

The following example shows the text displayed when dlnkmgr view -sys is executed.

# /opt/DynamicLinkManager/bin/dlnkmgr view -sys
HDLM Version                 : 8.7.6-00
Service Pack Version         :
Load Balance                 : on(extended lio)
Support Cluster              :
Elog Level                   : 3
Elog File Size (KB)          : 9900
Number Of Elog Files         : 2
Trace Level                  : 0
Trace File Size (KB)         : 1000
Number Of Trace Files        : 4
Path Health Checking         : on(30)
Auto Failback                : off
Intermittent Error Monitor   : off
Dynamic I/O Path Control     : off(10)
HDLM Manager Ver        WakeupTime
Alive        8.7.6-00   2020/07/21 15:53:29
HDLM Alert Driver Ver        WakeupTime          ElogMem Size
Alive             8.7.6-00   2020/07/21 15:53:24 4096
HDLM Driver Ver        WakeupTime
Alive        8.7.6-00   2020/07/21 15:53:24
License Type Expiration
Permanent    -
KAPL01001-I The HDLM command completed normally. Operation name = view, completion time = 2020/07/21 15:54:12

The following example shows the text displayed when the pkginfo command is executed on Solaris 10 or earlier.

# pkginfo -l
   PKGINST:  DLManager
      NAME:  Dynamic Link Manager
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  08.7.6.0000
   BASEDIR:  /
    VENDOR:
... ...

The following example shows the text displayed when the pkg info command is executed on Solaris 11.

# pkg info DLManager
          Name: DLManager
       Summary: Dynamic Link Manager
         State: Installed
     Publisher: Hitachi
       Version: 8.7.6.0
 Build Release: 5.11
        Branch: 0
Packaging Date: Tue Jul 21 07:20:24 2020
          Size: 23.24 MB
          FMRI: pkg://Hitachi/DLManager@8.7.6.0,5.11...200721T072024Z

Default value of the load balancing algorithm

- In HDLM 8.7.6-00, the load balancing function is on by default, and the default algorithm is Extended Least I/Os.

If an upgrade installation of HDLM is not performed when upgrading from Solaris 11.3 to Solaris 11.4, the HDLM device will not be configured correctly when the OS starts. In such cases, messages like the following are output to the console and syslog, indicating that a problem has occurred.

/kernel/drv/sparcv9/dlmfdrv: use of symbol '_depends_on[]' is deprecated: "misc/scsi"

devfsadm: dlopen failed: /usr/lib/devfsadm/linkmod/HIT_hdlm_link.so: ld.so.1: devfsadm: /usr/lib/devfsadm/linkmod/HIT_hdlm_link.so: wrong ELF class: ELFCLASS32

When this problem occurs, the HDLM device might have been configured with an incorrect name, in which case the HDLM device name output by the dlnkmgr view command differs from the one output by the format command. Note that even if the two commands output the same HDLM device name, this does not guarantee that the HDLM device is correctly configured.

To resolve this problem, reboot the OS in the boot environment of Solaris 11.3, and then refer to Performing an upgrade installation of HDLM when upgrading from Solaris 11.3 to Solaris 11.4 in the Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) (3021-9-083-F0).

Because you will need to perform the upgrade of Solaris again, be sure to first perform the procedure described in Note in (2) Upgrade installation of HDLM in the User Guide.

If a boot disk is created in an environment where an HDLM physical device is specified, attempts to install or upgrade OS packages, or to activate the boot environment (BE), will fail.

Migrate the boot device from a physical device to a logical device.

For details, see "Migrating from an environment where a physical device is specified, to an environment where a logical device is specified" in the Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris) (MK-92DLM114-49), or the SD-EN-HDLM-234 documentation.

If an upgrade installation of HDLM 8.6.5 or later is performed in a SAN boot environment and the SAN boot environment has a problematic configuration, the following messages might be output to the syslog. For details on the messages, see "Messages" in the User Guide.

KAPL13296-E The boot disk environment was configured on a physical device. Refer to the HDLM User's Guide and migrate the boot disk environment to a logical device.

KAPL13297-E The boot disk environment was not configured according to the correct procedures. Refer to the HDLM User's Guide and reconfigure the boot disk environment.

KAPL13298-E The boot disk is not managed by HDLM. To use the boot disk as a SCSI device (single path) as is, remove the boot disk from the HDLM management targets. To configure the boot disk on an HDLM device (multipath), refer to the HDLM User's Guide and reconfigure the boot disk environment.

In a SAN boot environment, the configuration of paths cannot be changed (paths cannot be added or deleted) for the boot disk.

Execution time of the dlmcfgmgr utility

The execution time of the dlmcfgmgr utility depends on the number of LUs and paths that are already configured.

This table lists the approximate execution time, using the following environment as an example:

Number of LUs already configured   Execution time of dlmcfgmgr (*1)
512 LUs                            5 minutes
1024 LUs                           10 minutes
2048 LUs                           20 minutes
4096 LUs                           40 minutes

*1: The execution time differs depending on the performance and load of the server.

The execution times above were measured in the following environment:

Item          Details
Server name   SPARC T5-2
CPU           16-core 3.6 GHz SPARC T5 processor (2 CPUs)
Memory        256 GB

The effect of uninstalling JDK

- If the KAPL09142-E message is output with ErrorCode=31,2 during uninstallation or re-installation of HDLM, perform the following operations. If the result of the ls command is "No such file or directory", JDK is not installed; install JDK, and then uninstall or re-install HDLM.

# cat /opt/HDVM/HBaseAgent/agent/config/server.properties | grep JRE
server.agent.JRE.location=<JDK-installation-destination-directory>
# ls -l <JDK-installation-destination-directory>
<JDK-installation-destination-directory>: No such file or directory

Notes on HAM environments

- HAM does not support cluster software.

- When displaying LU information, HAM information is not output if the "all" parameter value is specified for the HDLM command. Specify the "ha" or "hastat" parameter value instead.

- If an online operation is performed on an owner path, a non-owner path's status may change to Offline(E). After performing an online operation on an owner path, use the HDLM command to confirm that the non-owner path's status is Online. If the non-owner path's status is Offline(E), change the status of the HAM pair to PAIR, and then perform an online operation on the Offline(E) path again.

- When you set up a HAM pair to be managed by HDLM, make sure that the host recognizes paths to the MCU (Primary VOL) and RCU (Secondary VOL) after the HAM pair is created.

Execute the dlnkmgr view -lu -item hastat operation (see the sketch below). If ha is not displayed in the HaStat column, the corresponding LU is not recognized as being in a HAM configuration.

If the host recognizes the paths to the MCU and RCU before the HAM pair is created, restart the host after the HAM pair is created. Execute the dlmsetconf utility after the HAM pair is created, and then restart the host with the reconfiguration option specified.
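For reference, the check might look like the following; the exact output layout depends on the environment, so see the User Guide for details.

# /opt/DynamicLinkManager/bin/dlnkmgr view -lu -item hastat
Confirm that "ha" is displayed in the HaStat column for each LU that belongs to the HAM pair.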

- If you release a HAM pair to recover the system after a HAM volume failure, do not restart a host that is connected to the MCU and RCU while the HAM pair is released.

If you need to restart the host while the HAM pair is released, disconnect all paths to the MCU and RCU, restart the host, re-create the HAM pair, and then reconnect the paths.

If you restart a host that is connected to the MCU and RCU while the HAM pair is released, the RCU volume will be recognized as a volume other than an MCU volume. If this occurs, restart the host after the HAM pair is re-created.

Execute the dlnkmgr view -lu -item hastat operation, and then confirm that ha is displayed in the HaStat column.

- To install and operate HDLM, the server must have 2 GiB or more of physical memory.

- In a HAM environment, if HDLM is configured, a HAM pair is released, and then the system is restarted, the path status of the S-VOL will change to Offline(E).

If you want to continue using the LUs that made up the HAM pair, reconfigure the HAM pair, and then execute the online command to change the S-VOL status to Online.

If you do not want to continue using the LUs that made up the HAM pair, execute the dlmsetconf command, and then restart the affected host.

- Follow the Installing Software section in the High Availability Manager User's Guide to install HDLM. For this procedure, follow the HDLM User's Guide up to the step where you make sure that the logical device file of the sd or ssd device is backed up. Also, make sure that the host OS (Solaris) can recognize the HAM pair before executing the dlmsetconf utility.

After the host OS recognizes the HAM pair, continue from the step that executes the dlmsetconf utility.

- If all of the following conditions are met and the dlnkmgr online -hapath command is executed, the path status will change to Online(S) instead of Online:

- The status of the HAM P-VOL is PSUS.

- The status of the HAM S-VOL is SSWS.

- The path status is Online(S), and a physical failure has been recovered from.

- If you execute the zpool import command to collect information about disks that can be imported into a ZFS file system, the secondary volume (S-VOL) in the HAM environment might enter the Offline(E) or Online(E) status. Likewise, if you mistakenly use a command such as dd or mount on a slice that has no allocated area, the S-VOL might enter the Offline(E) or Online(E) status. If either problem occurs, execute the dlnkmgr online command to restore the path status to Online. If the primary volume (P-VOL) is suspended, I/O is processed even if the path is not restored to the Online status; however, if you continue operating in this state, the system cannot operate as a multipath environment.
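As a rough sketch of the recovery, with the path ID 000002 as an example value and the -pathid parameter assumed from standard dlnkmgr usage:

Identify the Offline(E) or Online(E) paths of the S-VOL:
# /opt/DynamicLinkManager/bin/dlnkmgr view -path
Restore each affected path to the Online status:
# /opt/DynamicLinkManager/bin/dlnkmgr online -pathid 000002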

Documentation

Available documents

Document name                                                         Document number   Issue date
Hitachi Command Suite Dynamic Link Manager User Guide (for Solaris)   MK-92DLM114-49    October 2020

Documentation errata

Location to be corrected:

Creating an HDLM environment > HDLM system requirements > Storage systems supported by HDLM > Storage systems

Corrections:

Before

The following shows the storage systems that HDLM supports.

- Hitachi Virtual Storage Platform

- HPE StorageWorks P9500 Disk Array

- Hitachi Virtual Storage Platform 5100

- Hitachi Virtual Storage Platform 5500

- Hitachi Virtual Storage Platform 5100H

- Hitachi Virtual Storage Platform 5500H

- Hitachi Virtual Storage Platform G1000

- HPE XP8 Storage

- HPE XP7 Storage

- Hitachi Virtual Storage Platform G1500

- Hitachi Virtual Storage Platform F1500

- Hitachi Virtual Storage Platform E990

- Hitachi Virtual Storage Platform G200

- Hitachi Virtual Storage Platform G350

- Hitachi Virtual Storage Platform G370

- Hitachi Virtual Storage Platform G400

- Hitachi Virtual Storage Platform G600

- Hitachi Virtual Storage Platform G700

- Hitachi Virtual Storage Platform G800

- Hitachi Virtual Storage Platform G900

- Hitachi Virtual Storage Platform F350

- Hitachi Virtual Storage Platform F370

- Hitachi Virtual Storage Platform F400

- Hitachi Virtual Storage Platform F600

- Hitachi Virtual Storage Platform F700

- Hitachi Virtual Storage Platform F800

- Hitachi Virtual Storage Platform F900

- Hitachi Virtual Storage Platform N400

- Hitachi Virtual Storage Platform N600

- Hitachi Virtual Storage Platform N800

- HUS100 series

- HUS VM

After

The following shows the storage systems that HDLM supports.

- Hitachi Virtual Storage Platform

- HPE StorageWorks P9500 Disk Array

- Hitachi Virtual Storage Platform 5100

- Hitachi Virtual Storage Platform 5500

- Hitachi Virtual Storage Platform 5100H

- Hitachi Virtual Storage Platform 5500H

- Hitachi Virtual Storage Platform G1000

- HPE XP8 Storage

- HPE XP7 Storage

- Hitachi Virtual Storage Platform G1500

- Hitachi Virtual Storage Platform F1500

- Hitachi Virtual Storage Platform E590

- Hitachi Virtual Storage Platform E790

- Hitachi Virtual Storage Platform E990

- Hitachi Virtual Storage Platform G200

- Hitachi Virtual Storage Platform G350

- Hitachi Virtual Storage Platform G370

- Hitachi Virtual Storage Platform G400

- Hitachi Virtual Storage Platform G600

- Hitachi Virtual Storage Platform G700

- Hitachi Virtual Storage Platform G800

- Hitachi Virtual Storage Platform G900

- Hitachi Virtual Storage Platform F350

- Hitachi Virtual Storage Platform F370

- Hitachi Virtual Storage Platform F400

- Hitachi Virtual Storage Platform F600

- Hitachi Virtual Storage Platform F700

- Hitachi Virtual Storage Platform F800

- Hitachi Virtual Storage Platform F900

- Hitachi Virtual Storage Platform N400

- Hitachi Virtual Storage Platform N600

- Hitachi Virtual Storage Platform N800

- HUS100 series

- HUS VM

Appendix A

HBA Driver Support Matrix

Use the HBA drivers listed below. When HDLM manages the paths of a boot disk, use an HBA driver marked [bootable].

Note the following points regarding HBA configuration and settings.

- When using two or more HBAs in one server, use the same type of HBA.

- When using a cluster system or the SDS (SVM) shared diskset function, use the same type of HBA in all nodes. If you combine different types of HBAs, HDLM may not be able to switch paths when an error occurs, and failover of the operating program between nodes may not be possible.

- Before installing HDLM, you must set the binding between the target ID and the storage port on HBAs where such settings are possible (for example, TID-WWPN or TID-WWNN). This prevents HDLM from incorrectly detecting the target ID of an sd or ssd device, because the target ID can change when the server or host boots. In HBA documentation, this is called the "Binding" or "Persistent Binding" feature.

- When HDLM manages the paths of a boot disk, refer to the following documents for how to obtain the name of the boot device that is specified in the HBA settings and the boot command.

- When using an Oracle HBA:

Refer to the manual "Hitachi Dynamic Link Manager User's Guide for Solaris Systems", Chapter 3. Creating an HDLM Environment - Configuring a Boot Disk Environment.

- When using a non-Oracle HBA:

Refer to the documentation for the HBA.

- If the HBA configuration is changed, the HDLM configuration may also need to be changed. For details, refer to the manual "Hitachi Dynamic Link Manager User's Guide for Solaris Systems", Chapter 4. HDLM Operation - Changing the configuration of the HDLM operating environment.

Vendor (Driver)          Applicable OS and HBA driver
                         Solaris 10                                           Solaris 11
Oracle (FC I/F) (*1)     Solaris attachment driver [bootable] (*4)(*7)        Solaris attachment driver [bootable] (*4)
Oracle (FCoE I/F) (*1)   Solaris attachment driver [bootable] (*4)(*7)(*8)    -
Emulex (FC I/F) (*2)     6.02f                                                -
                         6.02h [bootable]
                         6.11c [bootable]
                         6.11cx2 [bootable]
                         6.21g [bootable]
QLogic (FC I/F)          5.03 [bootable] (*3)                                 -
                         5.04 [bootable] (*3)
Fujitsu (FC I/F)         3.0 Update1                                          -
                         4.0 [bootable] (*6)
                         4.0 Update1 [bootable] (*6)
                         4.0 Update2 [bootable] (*6)
Brocade (FC I/F)         bfa 1.1.0.4 (*1)(*5)                                 -
                         bfa 2.1.0.1 (*1)(*5)
Brocade (FCoE I/F)       bfa 2.3.0.6 (*1)(*5)                                 -

Note:

*1: If the server is started with a disconnected path, and the path is then connected and recovered, execute the "cfgadm -c configure" command before the "dlnkmgr online" command so that Solaris recognizes the storage. In a Solaris 10 environment, the host might not recognize the storage even after the "cfgadm -c configure" command is executed. If this happens, reboot the host after the path is recovered so that it recognizes the storage.

*2: Edit and set the "/kernel/drv/lpfc.conf" file as follows:

- no-device-delay=0

- nodev-holdio=0

- nodev-tmo: Set the default value (30) or more.

- When connecting to the storage system directly or via an FC hub (Loop mode only): topology=4

- When connecting to the storage system via an FC switching hub (point-to-point mode only): topology=2

Any value can be used for the other parameters.

*3: Edit and set the "/kernel/drv/qla2200.conf" file or the "/kernel/drv/qla2300.conf" file as follows:

- hbaX-link-down-error=1

- hbaX-fast-error-reporting=1 (set this only for HBA driver versions that support this parameter)

"X" is the instance number of the HBA driver.

*4: The HBA driver is bundled with the Solaris installation media.

*5: Apply the following patches:

119130-33 or later, SunOS 5.10: Sun Fibre Channel Device Drivers

119974-09 or later, SunOS 5.10: fp plug-in for cfgadm

120346-09 or later, SunOS 5.10: Common Fibre Channel HBA API and Host Bus Adapter Libraries

*6: Edit and set the "/kernel/drv/fjpfca.conf" file as follows:

- failover_function=1

*7: Apply the following patches:

The latest revisions of successor patches are recommended.

For the following Sun HBAs:
X6727A, X6748A, X6757A, X6799A, SG-XPCI1FC-QF2<X6767A>, SG-XPCI2FC-QF2<X6768A>, SG-XPCI2FC-QF2-Z, SG-XPCI1FC-QL2, SG-XPCI1FC-QF4, SG-XPCI2FC-QF4, SG-XPCIE1FC-QF4, SG-XPCIE2FC-QF4
and the following QLogic HBAs:
QLA2300F, QLA2310F, QLA2332, QLA2340, QLA2342, QLA2344, QLA2460, QLA2462, QLE2460, QLE2462, QLE2464, QCP2332, QCP2330, QCP2340, QCP2342

apply these patches:

- 119130-22 or later, SunOS 5.10: Sun Fibre Channel Device Drivers
- 119974-04 or later, SunOS 5.10: fp plug-in for cfgadm
- 120182-02 or later, SunOS 5.10: Sun Fibre Channel Host Bus Adapter Library
- 120346-04 or later, SunOS 5.10: Common Fibre Channel HBA API Library

If patch 119130-22 or later is not applied, the following problems may occur:

- The I/O process stops without a path failover when a path error occurs.
- The problem indicated in Sun Alert ID 102130.

For the following Sun HBAs:
SG-XPCI1FC-EM2, SG-XPCI2FC-EM2, SG-XPCI1FC-EM4-Z, SG-XPCI2FC-EM4-Z, SG-XPCIE1FC-EM4, SG-XPCIE2FC-EM4
and the following Emulex HBAs:
LP9002, LP9802, LP10000, LP10000DC, LP11000, LP11002, LPe11000, LPe11002

apply these patches:

- 119130-22 or later, SunOS 5.10: Sun Fibre Channel Device Drivers
- 119974-04 or later, SunOS 5.10: fp plug-in for cfgadm
- 120182-02 or later, SunOS 5.10: Sun Fibre Channel Host Bus Adapter Library
- 120222-11 or later, SunOS 5.10: Emulex-Sun LightPulse Fibre Channel Adapter driver
- 120346-04 or later, SunOS 5.10: Common Fibre Channel HBA API Library

If patch 119130-22 or later is not applied, the following problems may occur:

- The I/O process stops without a path failover when a path error occurs.
- The problem indicated in Sun Alert ID 102130.

For the following Sun HBAs:
SG-XPCIE1FC-QF8-Z, SG-XPCIE2FC-QF8-Z, SG-XPCIE2FC-QB4-Z
and the following QLogic HBAs:
QLE2560, QLE2562, QEM2462

apply these patches:

- 119130-33 or later, SunOS 5.10: Sun Fibre Channel Device Drivers
- 119974-09 or later, SunOS 5.10: fp plug-in for cfgadm
- 120346-09 or later, SunOS 5.10: Common Fibre Channel HBA API and Host Bus Adapter Libraries
- 125166-10 or later, SunOS 5.10: Qlogic ISP Fibre Channel Device Driver

For the following Sun HBAs:
SG-XPCIE1FC-EM8-Z, SG-XPCIE2FC-EM8-Z, SG-XPCIE2FC-EB4-Z
and the following Emulex HBAs:
LPe12000, LPe12002

apply these patches:

- 119130-33 or later, SunOS 5.10: Sun Fibre Channel Device Drivers
- 119974-09 or later, SunOS 5.10: fp plug-in for cfgadm
- 120222-27 or later, SunOS 5.10: Emulex-Sun LightPulse Fibre Channel Adapter driver
- 120346-09 or later, SunOS 5.10: Common Fibre Channel HBA API and Host Bus Adapter Libraries

For the following Sun HBA:
SG-XPCIE2FCGBE-Q-Z

apply these patches:

- 119130-33 or later, SunOS 5.10: Sun Fibre Channel Device Drivers
- 119974-09 or later, SunOS 5.10: fp plug-in for cfgadm
- 120346-09 or later, SunOS 5.10: Common Fibre Channel HBA API and Host Bus Adapter Libraries
- 125166-12 or later, SunOS 5.10: Qlogic ISP Fibre Channel Device Driver

For the following Sun HBA:
SG-XPCIE2FCGBE-E-Z

apply these patches:

- 119130-33 or later, SunOS 5.10: Sun Fibre Channel Device Drivers
- 119974-09 or later, SunOS 5.10: fp plug-in for cfgadm
- 120222-29 or later, SunOS 5.10: Emulex-Sun LightPulse Fibre Channel Adapter driver
- 120346-09 or later, SunOS 5.10: Common Fibre Channel HBA API and Host Bus Adapter Libraries

For the following Emulex CNAs:
LP21000, LP21002, OCe10102-F, OCe11102

apply these patches:

- 145096-03 or later, SunOS 5.10: oce driver patch
- 145098-04 or later, SunOS 5.10: emlxs driver patch

For the following QLogic CNAs:
QLE8140, QLE8142

apply this patch:

- 143957-05 or later, SunOS 5.10: qlc patch

*8: A boot disk environment configured with Emulex CNAs is not supported.

Copyrights and licenses

© 2020 Hitachi, Ltd. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or Hitachi Vantara LLC (collectively "Hitachi"). Licensee may make copies of the Materials provided that any such copy is: (i) created as an essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other copies of the Materials. "Materials" mean text, data, photographs, graphics, audio, video and documents.

Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials contain the most current information available at the time of publication.

Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Vantara LLC at https://support.hitachivantara.com/e...ontact-us.html.

Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of Hitachi products is governed by the terms of your agreements with Hitachi Vantara LLC.

By using this software, you agree that you are responsible for:

1)     Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals; and

2)     Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.

Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or import the Document and any Compliant Products.

Hitachi and Lumada are trademarks or registered trademarks of Hitachi, Ltd., in the United States and other countries.

AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000, S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business Machines Corporation.

Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS, Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo, Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.

All other trademarks, service marks, and company names in this document or website are properties of their respective owners.

Copyright and license information for third-party and open source software used in Hitachi Vantara products can be found at https://www.hitachivantara.com/en-us...any/legal.html.