
Hitachi Data Instance Director v6.7.8 Release Notes

 About this document

This document (RN-93HDID018-18, May 2019) provides late-breaking information about Hitachi Data Instance Director (HDID) Version 6.7.8. It includes information that was not available at the time the technical documentation for this product was published, as well as a list of known problems and solutions.

Intended audience

This document is intended for customers and Hitachi Vantara partners who license and use Hitachi Data Instance Director.

Accessing product downloads

Product software, drivers, and firmware downloads are available on Hitachi Vantara Support Connect:  https://support.hitachivantara.com/.

Log in and select Product Downloads to access the most current downloads, including important updates that may have been made after the release of the product.

About this release

This is a maintenance release that resolves known problems. All installers have been incremented to the 6.7.8 release.

Upgrading the software

IMPORTANT: Before upgrading, refer to the upgrade section of the user guide for guidance.

IMPORTANT: Due to a change in the certificate used to sign Windows binaries, it is not possible to push-upgrade HDID on a Windows machine until that machine has been upgraded to HDID version 6.7.7. All Windows machines will need to be manually upgraded to HDID 6.7.7 or later. Subsequent upgrades from 6.7.7 (or later) to any later version will support push upgrades on all supported operating systems.

Some features that were in version 5.5.x are currently unavailable in version 6.7.8. Check that all features being used are available prior to upgrade; refer to the user guide for details.

Only versions 6.0.x and later can be upgraded to version 6.7.8. If running an earlier version, please upgrade to version 6.0.0 or later prior to upgrading to 6.7.8.

All HDID nodes must be upgraded to at least 6.7.0. It is possible to have a mix of 6.7.x and 6.7.8 nodes, but ideally all nodes should be on the same version. For this release, the Master node and ALL ISM nodes should be upgraded.

Before upgrading, unmount any mounted snapshots.

DO NOT upgrade while replications are in the process of pairing. Any active replications must be in the paired state before the upgrade is carried out.

After you complete the upgrade installation on the master, upgrade all other nodes.

For full details on upgrade paths refer to the upgrade path matrix by clicking on “View Full Specifications” here: https://www.hitachivantara.com/en-us/products/data-protection/data-instance-director.html#tech-specifications

System requirements

For information about the supported operating systems and hardware requirements, refer to:

https://www.hitachivantara.com/en-us/products/data-protection/data-instance-director.html#tech-specifications

Resolved Problems in 6.7.8

The following issues have been resolved in this release:

ZEN-32100       Improved log messages when HDID is unable to discover a VM.

ZEN-32047       Do not wait for PAIR status on replication resume.

ZEN-32036       Renamed the 'Progress' column on the replication pairs table to '%' and added a tooltip stating: 'The percentage value returned by pairdisplay'.

ZEN-32035       Implemented a two-step teardown mechanism to prevent accidental teardown of replications. See the User Guide Addendum and the Breaking API change sections for more information.

ZEN-32024       Added the ability to filter items displayed in the replication pairs screen. Pairs are now displayed in a default sort order of Remote Volume ID.

ZEN-32020       Stopped continuous replication sequences from evaluating on service start.

ZEN-31967       Resolved an issue preventing the HDID services from starting properly on AIX version 7.1.4.34.

ZEN-31840       Prevented a UI timeout from occurring if the pre-processing of the rules activation exceeds the UI timeout setting.

ZEN-31831       Allow adoption operations if the storage pool is above the high-water threshold, as long as no new S-VOLs are required.

User Guide Addendum (Two-step teardown)

The following describes the new two-step teardown feature introduced in HDID version 6.7.8. This information will be included in the Hitachi Data Instance Director User Guide in the upcoming 6.8 release.

Concepts > Data Flow Concepts > About Data Flows > About two-step teardown

Two-step teardown reduces the possibility of inadvertently tearing down Block Storage replications due to accidental deactivation of a dataflow, or activation of an erroneous dataflow.

With two-step teardown, on deactivation of a dataflow in which a replication operation appears, or reactivation of a dataflow from which a replication operation has been removed, the replication is flagged as eligible for teardown in the Restore and Storage Inventories. When a replication operation is deactivated, the underlying replication on the hardware continues to operate as normal, except that no further batch resynchronizations will be scheduled.

The final teardown operation must be explicitly initiated by the user, via the storage screen.

If a user-initiated teardown operation fails, it is not automatically retried, so it must be re-initiated by the user. Teardown failure may occur in the case of GAD 3DC dataflows, where teardowns must be performed in a specific order. Prior to the introduction of two-step teardown, automatic retries were performed indefinitely until successful.

Note: A dataflow that is deactivated and then subsequently reactivated, without the replication operation having been removed, re-instantiated, or manually torn down before reactivation, will in effect be re-adopted. This re-adoption removes the eligible-for-teardown flag from the replication, as if the dataflow had never been deactivated.

It is possible to disable two-step teardown on a per-ISM basis via a config file (see the end of the Breaking API change section), so different teardown policies can be implemented within the same environment.

Data Protection Workflows > Hitachi Block Workflows

The following workflows describe how to complete the teardown of deactivated Block replication operations and how to recover from an unintended deactivation.

How to remove a replication from a data flow and tear down the S-VOL(s)

A Hitachi block replication that was defined within or adopted by Data Instance Director must be explicitly removed from the underlying hardware.

Before you begin

Either:

The corresponding replication operation must be removed from the dataflow where it is defined, and that dataflow must be reactivated.

Or:

The dataflow defining the replication operation must be permanently deactivated.

Procedure

Locate the replication record (corresponding to the replication operation that has been removed) in the Storage Inventory > Hitachi Block Device Details > Hitachi Block Replications Inventory.

Replications that are eligible for teardown are marked with an eligible-for-teardown icon in the top right corner of the tile.

Select the replication record to tear down, then click Teardown from the context menu.

The Teardown Hitachi Block Replication Dialog is displayed. If you are sure you want to proceed, type the word 'TEARDOWN', then click Teardown.

Go to the Jobs Inventory to ensure that a teardown job has been initiated and wait for it to complete. The replication entry is not removed from the replications inventory until the teardown operation is completed successfully. If the teardown is unsuccessful, review the logs to find out why. The teardown operation must be reinitiated by the user once the problem is resolved.

How to reactivate a replication operation that has been accidentally deactivated

A Hitachi block replication that was defined within or adopted by Data Instance Director may have been accidentally deactivated. One of the following scenarios will apply:

Case 1: Replication operation has not been removed but the data flow is deactivated

HDID considers a replication to have been removed from a dataflow only if the link between the source and destination has been removed or the source and/or destination node has been removed.

It is even possible to edit the replication parameters, as long as any changes are supported by the hardware for that replication type. HDID will, in this case, consider the replication instance to be the same.

Procedure

If none of the above have occurred, then the data flow can simply be reactivated by the user via the Data Flows Inventory.

Because the replication has not been torn down, HDID will effectively re-adopt the corresponding replication from the storage hardware.

Case 2: Replication operation has been removed and the data flow has been reactivated

HDID considers a replication to have been removed from a dataflow if the link between the source and destination has been removed or the source and/or destination node has been removed.

Procedure

If this is the case, then the data flow must have a new replication operation added back in and then be reactivated by the user via the Data Flows Inventory.

Because HDID considers this new replication operation as an entirely new instance, the replication pair must be created from scratch on the storage array. The old replication becomes a static copy.

Case 3: The data flow containing the replication has been deleted

HDID considers a replication to have been removed.

Procedure

If this is the case, then a new data flow must be created containing a new replication operation and then be reactivated by the user via the Data Flows Inventory.

Because HDID considers this new replication operation as an entirely new instance, the replication pair must be created from scratch on the storage array. The old replication becomes a static copy.

User interface reference

The following changes have been made to the HDID user interface.

Restore Dashboard > Hitachi Block Restore Inventory


Control: Block Replication Tile
Description: Replications that are eligible for teardown are marked with an eligible-for-teardown icon in the top right corner of the tile.

Control: Filter on Eligible for Teardown
Description: Filters the displayed results based on eligibility for teardown.

Restore Dashboard > Hitachi Block Restore Inventory > Hitachi Block Replication Details (Restore)


Control: Manage
Description: Opens the Block Device Replication Details (Storage) for a replication, to help find and manually tear down replications for a given dataflow.

Storage Inventory > Hitachi Block Device Details > Hitachi Block Replications Inventory


Control: Block Replication Tile
Description: Replications that are eligible for teardown are marked with an eligible-for-teardown icon in the top right corner of the tile.

Control: Teardown
Description: Enabled only if one or more replications are selected. Initiates the teardown of the selected replications. The Teardown Hitachi Block Replication Dialog is displayed to confirm the teardown.

Storage Inventory > Hitachi Block Device Details > Hitachi Block Replications Inventory > Block Device Replication Details (Storage)


Control: Teardown
Description: Initiates the teardown of the replication. The Teardown Hitachi Block Replication Dialog is displayed to confirm the teardown.

Teardown Hitachi Block Replication Dialog


Control: Confirmation Word
Description: The word TEARDOWN must be typed in explicitly to confirm the teardown operation.

Breaking API change

As a result of implementing the two-step teardown mechanism, there has been a breaking change to the API.

Prior to this change, the process to tear down a replication through the API was as follows:

Deactivate Dataflow:

Once a dataflow is deactivated, its associated replications are automatically torn down.

Request URL: https://localhost/HDID/master/RulesM...ctivate/invoke

Request Method: PUT

Request Payload:

{
    "ids": ["1ef60d3e-d401-4c35-a534-5de5750b9061"]
}

“ids”: An array of strings representing the IDs of the dataflows to be deactivated

After the changes in this release, the process to tear down a replication through the API is as follows:

Deactivate Dataflow:

Once a dataflow is deactivated, its replications are marked as eligible for teardown.

Request URL: https://localhost/HDID/master/RulesM...ctivate/invoke

Request Method: PUT

Request Payload:

{
    "ids": ["1ef60d3e-d401-4c35-a534-5de5750b9061"]
}

“ids”: An array of strings representing the IDs of the dataflows to be deactivated
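For illustration only, a minimal Python sketch of this call is shown below. It uses the third-party requests library; the endpoint URL is a placeholder because the path is abbreviated in this note, and the TLS setting (verify=False) is an assumption for a localhost deployment with a self-signed certificate.

import requests

# Placeholder URL: substitute the full Request URL shown above; the path
# is abbreviated ("...") in this release note.
DEACTIVATE_URL = "https://localhost/HDID/master/RulesM...ctivate/invoke"

# "ids": the dataflows to deactivate; with two-step teardown, their
# replications are marked as eligible for teardown rather than torn down.
payload = {"ids": ["1ef60d3e-d401-4c35-a534-5de5750b9061"]}

response = requests.put(DEACTIVATE_URL, json=payload, verify=False)
response.raise_for_status()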

Get Dataflow Name:

To find replications for a given dataflow, you need the dataflow name, as the metadatastorehandler does not currently support querying by dataflow ID. To get the dataflow name if you do not already have it, use the following query:

URL Parameters:

<Dataflow ID> - The ID of the dataflow to retrieve 

Request URL Format: https://localhost/HDID/master/DataFl...cts/DataFlows/<Dataflow ID>/

Example Request URL: https://localhost/HDID/master/DataFl...-5de5750b9061/

Request Method: GET

Status Code: 200 OK

Example Result:

{
    "id": "1ef60d3e-d401-4c35-a534-5de5750b9061",
    "version": 2,
    "data": {
        "name": "My Dataflow"
    }
}

Required Result Data:

“data.name”: The name of a returned dataflow, used for querying replication record IDs
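A minimal Python sketch of this lookup, under the same assumptions (placeholder URL since the path is abbreviated above; authentication handling omitted):

import requests

dataflow_id = "1ef60d3e-d401-4c35-a534-5de5750b9061"

# Placeholder URL: substitute the full Request URL Format shown above,
# with <Dataflow ID> replaced; the path is abbreviated ("...") in this
# release note.
url = f"https://localhost/HDID/master/DataFl...cts/DataFlows/{dataflow_id}/"

response = requests.get(url, verify=False)
response.raise_for_status()

# The dataflow name is needed for the metadata store query that follows.
dataflow_name = response.json()["data"]["name"]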

Get Replication Record IDs and Storage Node IDs for teardown:

Once you have the dataflow name, the metadata store (MDS) can be queried for all records that are associated with that dataflow name and are eligible for teardown.

URL Parameters:

<Dataflow Name> - The name of the dataflow for which you wish to retrieve associated records

<Eligible for Teardown> - A Boolean value determining whether to return records that are or are not eligible for teardown 

Request URL Format: https://localhost/HDID/master/MetaDa...operties.name=<Dataflow Name>)+AND+(vsp.managedStorageActions.teardown=<Eligible for Teardown>)&count=1000&offset=0&order-by=captureDate+DESC

Example Request URL: https://localhost/HDID/master/MetaDa...ptureDate+DESC

Request Method: GET

Status Code: 200 OK

Example Result:

{
    "fullRecoverableData": [
        {
            "moverType": "eMOVER_BATCH",
            "sourceNodeType": "HardwareNodeBlock",
            "sourceOsType": "eOS_UNKNOWN",
            "id": "{ed7092cb-a636-441a-b8c6-175731a97f0f}",
            "storageNodeId": "Portland@00-EF35F0-F23BDD-43399A-178E3E[1-1-2]"
        }
    ]
}

Required Result Data:

“id”: The ID of a record associated with the requested dataflow that is also eligible for teardown

“storageNodeId”: The node ID to which the teardown request should be sent
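A minimal Python sketch of this query and of extracting the two required fields from the example result above (placeholder URL, since the path is abbreviated in this note):

import requests

# Placeholder URL: build the full Request URL Format shown above, filling
# in <Dataflow Name> and setting <Eligible for Teardown> to true; the
# path is abbreviated ("...") in this release note.
url = "https://localhost/HDID/master/MetaDa..."

response = requests.get(url, verify=False)
response.raise_for_status()

# Collect (replication record ID, storage node ID) pairs for every
# record that is eligible for teardown.
targets = [
    (record["id"], record["storageNodeId"])
    for record in response.json()["fullRecoverableData"]
]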

Teardown Replication:

For each replication you wish to tear down, send a teardown request to the relevant storage node, specifying the ID of the replication record to tear down. This results in a teardown job similar to the jobs generated automatically by the old teardown process.

URL Parameters:

<Storage Node ID> - The node ID of the relevant storage node

<Replication Record ID> - The ID of the replication record to tear down

Request URL Format:

https://localhost/HDID/<Storage Node ID>/VirtualStoragePlatformHandler/objects/Replications/<Replication Record ID>/actions/Teardown/invoke

Example Request URL:

https://localhost/HDID/Portland%4000...eardown/invoke

Request Method: PUT

Status Code: 202 Accepted
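Combining the previous steps, a minimal Python sketch of issuing the teardown requests is shown below. The URL follows the Request URL Format above; the percent-encoding of the node and record IDs is an assumption based on the encoded example URL ('Portland%40...').

import urllib.parse

import requests

# (replication record ID, storage node ID) pairs, as returned by the
# metadata store query in the previous step.
targets = [
    ("{ed7092cb-a636-441a-b8c6-175731a97f0f}",
     "Portland@00-EF35F0-F23BDD-43399A-178E3E[1-1-2]"),
]

for record_id, storage_node_id in targets:
    # Node and record IDs contain characters such as '@', '[' and '{',
    # so they are percent-encoded for the URL path (an assumption based
    # on the encoded example URL above).
    node = urllib.parse.quote(storage_node_id, safe="")
    rec = urllib.parse.quote(record_id, safe="")
    url = (f"https://localhost/HDID/{node}"
           f"/VirtualStoragePlatformHandler/objects/Replications/{rec}"
           "/actions/Teardown/invoke")

    # A 202 Accepted response indicates a teardown job has been queued;
    # monitor its progress in the Jobs Inventory.
    response = requests.put(url, verify=False)
    response.raise_for_status()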

The two-step teardown behavior can be disabled (thus reverting to the previous behavior) by adding the following config entry to the intelligentstoragemanager.cfg file in <install_path>\db\config:

<item name="TwoStepTeardown" argtype="single">

    <value type="bool">false</value>

</item>

Features and Enhancements Added in HDID 6.7.0

VMware SRM

VMware's Site Recovery Manager (SRM) manages replication between two sites and allows failover of VMs and datastores between sites, all managed from vCenter.

SRM can use its own software replication, but it can also take advantage of array-based replication if the datastores are hosted on capable hardware such as Hitachi arrays. To enable this, we have provided a Site Recovery Adaptor (SRA) for HDID that uses the HDID API to perform the required actions.

SRA for HDID is shipped alongside HDID and has its own installer and release note. VMware will also host the adaptor on its website for download.

VMware vRO Integration

vRealize Orchestrator is a VMware tool that allows tasks to be automated using workflows. We have produced workflows that automate some HDID tasks such as backing up or restoring a VM. These workflows can be invoked from vSphere allowing a user to invoke HDID tasks straight from the VMware interface. These workflows will also be used by the Hitachi Enterprise Cloud (HEC) product offering data protection as a service.

The HDID vRO integration is shipped alongside HDID in the form of a plugin that is installed on the vRO server. It also has its own release note.

GAD 2DC Swap

HDID can now orchestrate swapping GAD replication on demand.

On demand LUN allocation

HDID is now able to add S-VOLs it creates into host groups without having to perform a mount operation. When defining the dataflow, additional host groups can be specified. It is also possible to retrospectively add S-VOLs into host groups from the storage screen.

Local Replication Revert

HDID can now use reverse copy to revert local replications (ShadowImage and refreshed Thin Image).

Online user guides

The user guides in HTML form are now part of the master installation. These user guides can be accessed from the user interface via the help menu, and are fully indexed and searchable.

Application Handler Improvements

Information about application assets such as databases is now cached on the master to provide a better user experience. The data will be refreshed automatically twice a day or when requested by a user.

This change does not affect backups that use wildcards, as these are still evaluated at the beginning of the backup to get the latest information.

In addition to this, the application handler API has been reworked, adding consistency across applications and allowing API users to discover application details using the ApplicationNodeHandlers.

Improved Oracle database point-in-time recovery

Oracle database point-in-time recovery has been improved based on feedback collected from the field. Here are some of the improvements available in this version:

- RMAN catalog credentials can now be specified directly in the recovery dialog

- A new option has been introduced that allows you to specify whether the database should be opened at the end of the recovery. This allows the database administrator to make manual adjustments before the database is opened

- It is now possible to reset the database's memory settings on restore, allowing restores of large production databases to less powerful hardware

- More powerful prechecks for recovery

- More supported scenarios for different backup types

Microsoft SQL improvements

It is now possible to perform restores of host-based Microsoft SQL databases in a state that allows further recovery. Equivalent to what was already possible with hardware-based backups, the databases will be registered with an existing SQL Server instance and placed in the restoring state (analogous to RESTORE DATABASE ... WITH NO RECOVERY). This allows the Microsoft SQL administrator to manually apply transaction logs.

Reworked Microsoft Exchange connectivity

The way HDID communicates with Microsoft Exchange environments has been reworked. The new logic is more stable and honors permissions and roles as they are defined in Exchange.

The new logic requires a valid Exchange user that is a member of the "View Organization Management" role.

NOTE: After the upgrade, you must specify the credentials for your Microsoft Exchange application nodes before you can compile and distribute your Exchange-related data flows. Until this step is performed, updated clients will not be able to create new backups.

Currently Unsupported Features

The following features are available in 5.x releases but are not currently available in HDID 6.x:

Archive to Azure

CIFS backup

File Indexing

Hyper-V backup

Software Mirror

Software snapshots

The following features are available in 5.x releases but will no longer be available in HDID moving forward. They are supported only while 5.x versions of HDID meet the standard Hitachi Vantara support criteria.

NDMP

Tape

File Stubbing

Bare Metal Recovery

File Versioning

Block and Track

Exchange Mailbox archiving

Known Problems

Applications

ZEN-26764       When migrating an SRM setup, using the existing Hitachi SRA, to a new setup using the HDID SRA, SRM will crash if the same replication is controlled by both SRAs at the same time. To avoid this, it is necessary to set up separate SRM servers for the new HDID SRA. Once the new SRM is operational, the existing SRM servers can be decommissioned. This approach provides continuous SRM protection during the changeover process.

ZEN-24360       After an upgrade of the master, while existing policies and nodes will remain functional, it will not be possible to change or create application nodes or application policies for systems running an old version of the client. Once clients are upgraded, the application information will be refreshed automatically on a regular basis. If the data displayed is not correct or nodes cannot be created, either use the refresh or rediscover functionality or wait until the scheduled data refresh is complete.

ZEN-21490       Mounting a continuous replication target for an application (e.g. Exchange, SQL, Oracle or SAP) will fail and should not be attempted. Instead, use a batch in-system replication (TI or SI) or a TI snapshot for the mount.

Exchange

ZEN-23397       When using multiple Microsoft Exchange DAG environments from the same proxy, it is not possible to protect the best passive copy.

ZEN-22059       Exchange DAG databases must have at least a single passive copy in order for them to be protected when backing up all active databases in a DAG environment.

MS SQL

ZEN-23425       Intermittently, an SQL node may fail to be created. If this occurs, please try again.

ZEN-18320       It is only possible to automatically mount/revert and bring up Microsoft SQL Server databases if there is only a single database in a backup.

VMware

ZEN-24709       The backup of VMware templates by tag may fail. Backing up by name is a workaround.

ZEN-22063       A VMware backup job may report success when it has only partially succeeded. Review the logs for warnings and errors to identify individual VMs that may have failed.

General

ZEN-25412       In the nodes inventory screen a Windows 2019 server will be labeled incorrectly as a Windows 2016 server.

ZEN-25165       If a directory contains a large number of files, the user may not be able to see the individual files they wish to restore. If this is the case, the whole directory will need to be restored.

ZEN-19353       If a user doesn't have access to all nodes involved in a job, the user may not be able to see the job in the jobs screen, even though logs for the job will be visible.

ZEN-18284       HDID may fail to install its filter driver on Windows 2016. If this happens, it will not be possible to use this node to perform live backups/CDP to the repository. Batch backup can be used as an alternative.

ZEN-17674       When creating a node, there is an opportunity to add the node to an RBAC resource group. It is, however, possible to add a node to multiple resource groups; to do this, go to the RBAC screens.

Hardware Orchestration

ZEN-24744       Block Nodes should not be removed from the system until all associated backup records have been retired. If the node is removed, the backup records will fail to be retired.

ZEN-24701       Triggering a snapshot on the destination side of a replication from the RPO report will fail with 'No operations triggered. Parameters do not match any existing operation.'

ZEN-24599       When a replication is in a paused or swapped state, the dataflow will incorrectly show a warning symbol.

ZEN-24227       When naming an LDEV using the variables %ORIGIN_LDEV_NAME% or %PRIMARY_LDEV_NAME%, the call to name the LDEV will fail if the variable resolves to a name that contains special characters other than: space, ',' (comma), '-' (dash), '.' (period), '/' (forward slash), ':' (colon), '@' (at sign), '\' (backslash), '_' (underscore).

ZEN-23513       When renaming an LDEV with a name that contains double underscores, the UI displays the name incorrectly.

ZEN-21914       When creating a Hitachi Block Device node for a VSP G/F 130, 150, 350, 370, 700, 900, CCI version 01-46-03/02 or later should be used. If an older version of CCI is used, the node will not describe itself correctly. This may result in the node being restricted by the license when it should not be.

ZEN-21706       A live replication will not automatically detect new P-VOLs. If new P-VOLs are added, the user needs to trigger the replication from the monitor screen.

ZEN-19569       HDID will not delete GAD SVOLs if the user has added additional host groups to them through HDID (e.g. for cross-path) or if additional host groups have been added outside of HDID. Additional LUN paths must be removed prior to deleting the record from HDID.

ZEN-18055       HDID will attempt to provision to mainframe-reserved LDEV IDs. Ensure the defined HDID LDEV range avoids these IDs.

ZEN-17556       Attempting to revert a snapshot for an application node will fail if the data flow containing the definition has been deactivated.

ZEN-8881         If the source machine is unavailable, it is not possible to revert a snapshot using HDID. In the case of a hardware path classification, the source machine does not need to be available, but it is important to unmount the volume before performing the revert.

Installation/Upgrade

ZEN-21493       If an HDID master node is installed in a clustered environment, please contact customer support for licensing help.

User Interface

ZEN-25569       The dashboard job count incorrectly includes internal jobs that are not visible in the jobs screen.         

ZEN-23564       It is possible to rename a floating snapshot from the user interface, but this has no effect.

ZEN-21864       Changes to a node group's contents are reflected in the monitor screen before rules are redistributed. The monitor screen should show the nodes from the last rules distribution.

Copyrights and licenses

© 2019 Hitachi, Ltd. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or Hitachi Vantara Corporation (collectively "Hitachi"). Licensee may make copies of the Materials provided that any such copy is (i) created as an essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other copies of the Materials. "Materials" mean text, data, photographs, graphics, audio, video and documents.

Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials contain the most current information available at the time of publication.

Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Vantara Corporation at https://support.hitachivantara.com/e...ontact-us.html.

Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of Hitachi products is governed by the terms of your agreements with Hitachi Vantara Corporation.

By using this software, you agree that you are responsible for:

1)      Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals; and

2)      Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.

Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or import the Document and any Compliant Products.

Hitachi and Lumada are trademarks or registered trademarks of Hitachi, Ltd., in the United States and other countries.

AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000, S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business Machines Corporation.

Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS, Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo, Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.

All other trademarks, service marks, and company names in this document or website are properties of their respective owners.

Copyright and license information for third-party and open source software used in Hitachi Vantara products can be found at https://www.hitachivantara.com/en-us...any/legal.html.