Hitachi VSP Gx00 models and VSP Fx00 models are powered by Hitachi Storage Virtualization Operating System (SVOS) and supported by Hitachi storage management software to enable you to effectively manage and centralize your software-defined infrastructure.
Hitachi Storage Virtualization Operating System (SVOS)
Hitachi Storage Virtualization Operating System (SVOS) is the standard operating system for Hitachi VSP G series and VSP F series storage systems. An integrated software system, SVOS works with the virtualization capabilities of the storage systems and provides the foundation for global storage virtualization. SVOS delivers software-defined storage by abstracting and managing heterogeneous storage to provide a unified virtual storage layer, resource pooling, and automation. SVOS also offers self-optimization, automation, centralized management, and increased operational efficiency for improved performance and storage utilization.
SVOS provides the following base functionality for Hitachi VSP G series and VSP F series storage systems:
Dynamic Provisioning provides thin provisioning for simplified provisioning operations, automatic performance optimization, and storage space savings.
Data reduction functions include pattern detection and removal, accelerated compression provided by Hitachi Accelerated Flash, and selectable controller-based data deduplication and compression.
Global storage virtualization capability
Global storage virtualization enables an active-active clustering environment spanning multiple matched Virtual Storage Platform family storage systems (including supported externally attached storage).
External storage virtualization
Enables virtualization of external heterogeneous storage using Universal Volume Manager.
Resource Partition Manager supports secure administrative partitions for multitenancy requirements.
Virtual Partition Manager supports up to 32 cache partitions.
Multipathing and failover
Dynamic Link Manager Advanced provides advanced SAN multipathing with centralized management.
Performance Monitor provides an intuitive, graphical interface to assist with performance configuration planning, workload balancing, and analyzing and optimizing storage system performance.
Storage system-based utilities
Storage system-based utilities include LUN Manager, customized volume sizes, Data Retention Utility, quality of service controls, an audit log, and the Volume Shredder feature.
Standard management interface support
Management interface support includes an SMI-S provider, an SNMP agent, and a REST API.
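As one hedged illustration of the REST interface, a client might list the LDEVs of a managed storage system. The base URL, port, device ID, and endpoint path below are assumptions patterned on common Configuration Manager REST API conventions; verify them against the REST API reference for your system before use.

```python
import urllib.request

# Hypothetical values -- substitute your management server's address and
# the storage device ID reported for your array.
BASE = "https://mgmt.example.com:23451/ConfigurationManager/v1"

def build_ldev_request(device_id: str, token: str) -> urllib.request.Request:
    """Build a GET request for the LDEV list of one storage system.

    The session token comes from a prior POST to a sessions endpoint and
    is passed in the Authorization header as 'Session <token>'.
    """
    url = f"{BASE}/objects/storages/{device_id}/ldevs"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Session {token}",
                 "Accept": "application/json"})

req = build_ldev_request("800000012345", "abc123")
print(req.full_url)
```

The request object is only constructed here; issuing it with `urllib.request.urlopen` would return the LDEV inventory as JSON.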
Optimized storage for virtualized server infrastructure
A wide range of plug-ins and adapters is available to enhance virtual server infrastructure performance and administrator productivity. SVOS features integration with VMware interfaces (including VAAI, VASA, VAMP, VADP, SRA, and VVols) and Microsoft Windows interfaces (including VSS and ODX).
Hitachi Storage Virtualization Operating System is designed to deliver superior adaptive data reduction and operational efficiency. To improve return on investment and allow greater VM consolidation, SVOS adaptive data reduction intelligence is optimized for the highest system throughput and consistent response times. With multithreaded capabilities and quality of service (QoS) control, SVOS adaptive intelligence can slow down or pause data reduction processing. It takes this action if the system reaches a high processor utilization level, or if data in all-flash array (AFA) cache that is ready to be written experiences elongated wait times. SVOS also manages the data reduction services in use based on the configuration: if flash modules are detected, FMD compression is used; if encryption is required, SVOS compression is used.
Data reduction capabilities include:
- Pattern detection and removal
Pattern detection identifies pre-defined repetitive binary patterns, including zeros, prior to compressing and identifying duplicates. This process reduces the volume of data to be processed by the compression and deduplication engine.
- Accelerated compression
The accelerated compression feature of Dynamic Provisioning (SVOS 6.4 and later) delivers a data compression capability that enables you to realize more virtual capacity in a parity group than its actual usable capacity, providing improved storage optimization. You can enable accelerated compression at the parity-group level on Hitachi Accelerated Flash (HAF) flash module drives (FMD DC2, FMD HD). When accelerated compression is enabled, the capacity of a parity group can be expanded to several times its physical capacity. LDEVs created from an expanded-capacity parity group are used as Dynamic Provisioning pool volumes to create or expand a pool, and the data on these LDEVs is compressed before it is stored on the drives.
Implementation of accelerated compression requires careful planning, detailed calculations, and monitoring to verify the desired results. When accelerated compression is in use, both the used pool capacity and the used pool capacity reserved for writing must be monitored. Threshold values are set so that SIMs are reported when threshold values are exceeded, enabling you to expand the pool capacity or delete unwanted data before an error condition occurs (for example, pool full). For details about implementing accelerated compression, see the Provisioning Guide.
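The threshold monitoring described above can be sketched as follows. The percentages are illustrative placeholders, not system defaults; actual warning and depletion thresholds are configured per pool on the storage system.

```python
# Illustrative threshold values only -- real thresholds are set per pool.
WARNING_PCT = 70
DEPLETION_PCT = 80

def pool_alerts(used_gb: float, total_gb: float) -> list[str]:
    """Return the SIM-style alerts that would fire for a pool's usage."""
    pct = 100 * used_gb / total_gb
    alerts = []
    if pct >= DEPLETION_PCT:
        alerts.append("depletion threshold exceeded")
    elif pct >= WARNING_PCT:
        alerts.append("warning threshold exceeded")
    return alerts

# A pool at 75% usage trips only the warning threshold:
print(pool_alerts(used_gb=75, total_gb=100))
```

In practice the storage system raises these SIMs itself; the point of the sketch is simply that exceeding a threshold is the trigger to expand the pool or delete unwanted data before a pool-full condition occurs.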
- Capacity saving
The capacity saving function includes data deduplication and compression. When the capacity saving function is in use, the storage system controller performs deduplication and compression to reduce the size of the data to be stored, thereby reducing your bit cost for the stored data. Capacity saving can be enabled on DP-VOLs in Dynamic Provisioning pools. You can use the capacity saving function on internal flash drives only, including data stored on encrypted flash drives.
The data deduplication function deletes duplicate copies of data written to different addresses in the same pool and maintains only a single copy of the data at one address. The deduplication function is enabled on a Dynamic Provisioning pool and then on the desired DP-VOLs in the pool. When deduplication is enabled, data that has multiple copies between DP-VOLs assigned to that pool is removed.
When you enable deduplication on a pool, the deduplication system data volume (DSD volume) for that pool is created. The deduplication system data volume is used exclusively by the storage system to manage the deduplication function. A search table in the deduplication system data volume is used to locate redundant data in the pool.
The data compression function utilizes the LZ4 compression algorithm to compress the data. The compression function is also enabled per DP-VOL.
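Conceptually, pool-level deduplication and compression cooperate as in this toy sketch: a hash-keyed table plays the role of the search table on the DSD volume, and zlib stands in for the LZ4 algorithm the storage system actually uses. None of this reflects the real on-array data layout.

```python
import hashlib
import zlib

class DedupStore:
    """Toy model of pool-level deduplication plus compression."""

    def __init__(self):
        self.table = {}   # content hash -> single compressed copy
        self.refs = {}    # content hash -> reference count

    def write(self, block: bytes) -> str:
        """Store a block; duplicates add a reference, not a new copy."""
        key = hashlib.sha256(block).hexdigest()
        if key not in self.table:              # new content: compress and store
            self.table[key] = zlib.compress(block)
        self.refs[key] = self.refs.get(key, 0) + 1
        return key                             # address handed back to the pool

    def read(self, key: str) -> bytes:
        return zlib.decompress(self.table[key])

store = DedupStore()
a = store.write(b"hello" * 100)
b = store.write(b"hello" * 100)   # duplicate write: only one copy is kept
assert a == b and len(store.table) == 1
```

The second write of identical data lands on the same table entry, which mirrors how only a single copy of duplicated data is maintained in the pool.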
SVOS for NAS
SVOS for NAS is specifically designed for the VSP Gx00 and Fx00 models with embedded NAS modules, for NFS, SMB, and iSCSI protocols. SVOS for NAS includes native file deduplication, snapshots, two enterprise virtual NAS server licenses, NDMP, virtual server security, anti-virus, read caching, and a tiered file system for efficient unified storage management.
The NAS server can support data on an external server using Hitachi Universal Volume Manager (UVM). UVM presents storage on external storage arrays to the server as if the storage is local. To subsequently migrate data from the external storage onto the local storage, the server also supports Hitachi Tiered Storage Manager (HTSM). Using UVM instead of Universal Migrator enables the NAS server to preserve snapshots, quotas, and ACLs. UVM also has the ability to replicate a whole span (storage pool) in a single operation.
Key capabilities of the SVOS for NAS software include:
- Continuous availability
- Zero RTO and RPO for sites in case of a node, storage, or site failure.
- Flexibility for environments with sites up to 100 km apart.
- Support for VMware VVOL
- Increases storage efficiency through VM-centric storage allocation.
- Automated provisioning of VMs delivers quicker adjustment to business changes through Hitachi policy-driven management.
- Support for mapping individual VMs to virtual machine disks (VMDKs) delivers increased granularity and resource utilization rates.
- Enables independent enterprise virtual servers (EVSs).
- Supports hosting multiple assignments on one Hitachi NAS Platform on the same IP address, delivering true separation.
- Superior capacity efficiency
- Support for 1-PB file system.
- Primary storage deduplication to eliminate copies of redundant data.
- Support for FMD-based compression for the VSP F series.
- Intelligent file tiering
- Policy-based hierarchical storage management feature spans Hitachi NAS Platform and Hitachi Content Platform.
- Enhanced high availability
- Active-active clustering with cluster read caching for scalable, read-intensive NFS workload, incremental block replication (IBR), Hitachi NAS Replication high-speed replication, and synchronous disaster recovery service.
- Optimized file system pre-mount checks and improved NVRAM replay time for faster cluster failover.
- Nondisruptive cluster upgrades to streamline updates and reduce downtime.
- Virtualization services
- VMware vStorage APIs for Array Integration (VAAI) adapter offloads storage operations from VMware vSphere to Hitachi NAS.
- Virtual volumes, virtual servers, and cluster namespace unify the directory structure while simplifying storage capacity management tasks.
- Optional Hitachi Virtual Infrastructure Integrator simplifies backup, restore, and cloning operation from VMware vSphere to Hitachi NAS.
- Data management services
- Centralized GUI management, pointer-based snapshots, Hitachi NAS replication, writable snapshots, quick file restore, hard and soft quotas (volume, group, or user), NAS data migrator feature, scalable file systems, storage pools, policy-based management, and transparent data migration and relocation.
- Protocols supported
  Hitachi NAS can support various protocols, including:
- Internet Content Adaptation Protocol support for virus scanning.
- IPv6 support: Connect using an IPv6 address or a host name resolving to an IPv6 address through the external system management unit (SMU) software or SMU command line interface (CLI).
- Complete network protocol support
- Server Message Block (SMB) 1.0, 2.0, and 3.0, including SMB 3.0 encryption support; Network File System (NFS) v2 and v3 over UDP, or v2, v3, and v4 over TCP; NDMP v2, v3, and v4; File Transfer Protocol (FTP); Secure File Transfer Protocol (SFTP); File Transfer Protocol Secure (FTPS); and iSCSI. SMB 2.1 signing and SMB secure negotiation are also supported.
- Management and other protocols
- HTTP, SSL, SSH, SNMP v3, NIS, DNS, WINS, NTP, and email alerts.
Management of SVOS systems
SVOS includes the following software products to manage SVOS systems:
- Hitachi Storage Advisor (HSA)
To simplify operations, SVOS systems are managed by Hitachi Storage Advisor, a wizard-driven management application that lets IT staff configure resources in minutes and monitor the health of SVOS-managed storage at a glance.
- Hitachi Command Suite (HCS)
Hitachi Command Suite provides single-point management for all Hitachi physical and virtualized storage and is the interface for integration with other Command Suite software.
- Command Control Interface (CCI)
For more complex storage environments, CCI provides powerful command-line control and advanced functionality for Hitachi VSP G series and F series.
Overview of Storage Advisor
Hitachi Storage Advisor is a unified software management tool that reduces the complexity of managing storage systems by simplifying the setup, management, and maintenance of storage resources.
Storage Advisor reduces infrastructure management complexities and enables a new simplified approach to managing storage infrastructures. It provides intuitive graphical user interfaces and recommended configuration practices to streamline system configurations and storage management operations. You can leverage Storage Advisor to easily provision new storage capacity for business applications without requiring in-depth knowledge of the underlying infrastructure resource details. It provides centralized management while reducing the number of steps to configure, optimize, and deploy new infrastructure resources.
Some of the key Storage Advisor capabilities include:
- Simplified user experience for managing infrastructure resources. Visual aids enable easy viewing and interpretation of key management information, such as used and available capacity, and guided features help you quickly determine the appropriate next steps for a given management task.
- Recommended system configurations to speed initial storage system setup and accelerate new infrastructure resource deployments.
- Integrated configuration workflows with Hitachi recommended practices to streamline storage provisioning and data protection tasks.
- Common, centralized management for supported storage systems.
- A REST-based API to provide full management programmability and control in addition to unified file-based management support.
- Automated SAN zoning during volume attach and detach. Optional auto-zoning eliminates the need to perform repetitive zoning tasks on the switch.
Hitachi Command Suite
Hitachi Command Suite (HCS) is an application-centric storage management solution that simplifies administration of a common pool of multivendor storage. The software offers comprehensive management, control, and discovery for file, object, and block storage services, reducing complexity, costs, and risk in the storage infrastructure.
The base HCS product consists of Hitachi Device Manager, which provides centralized management of multiple Hitachi storage systems. By providing a single console for managing complex storage environments, Device Manager software unifies and simplifies storage management. Featuring an intuitive GUI, Device Manager supports multiple management views for primary and secondary storage, including physical, logical, host, NAS, and virtual server views, for provisioning and storage pooling.
HCS comprises the following optional components, each of which is licensed separately:
- Hitachi Tiered Storage Manager: Supports storage tiers of differing performance characteristics so that volume data storage costs and performance can be optimized.
- Hitachi Replication Manager: Adds remote replication capabilities and supports backup and disaster recovery.
- Hitachi Tuning Manager: Supports optimizing the performance of storage resources.
- Hitachi Compute Systems Manager: Supports centralized monitoring and management of hosts, including rebooting and power management.
- Hitachi Dynamic Link Manager: Supports the use of multiple paths between resources such as hosts and storage for path failover and load balancing.
- Hitachi Global Link Manager: Supports centralized management of multipath software between resources such as hosts and storage.
- Hitachi Automation Director: Provides tools to automate and simplify the end-to-end storage provisioning process for storage and data center administrators.
At a minimum, you must license Device Manager. Additional licensing can be added as needed for other storage management products. The related functionality becomes available in the HCS user interface in the form of activated menu choices and new or updated tabs, screens, and buttons.
Hitachi Command Suite offers the following benefits:
- Common administrative framework consolidates asset management across all virtualized storage resources for operational efficiency to increase storage return on investment
- Common management console to discover, configure, monitor, and report on all tiers and virtualized storage resources
- Dashboard highlights system-wide capacity usage, top consumers, and system alerts
- Logical group constructs to easily align storage resources with business applications
- Integrated management framework enables automation, mobility, service-level management, and data protection
The common management framework consolidates storage provisioning for both structured and unstructured data:
- Centrally configure storage pools for block, file, and object consumers
- Centrally manage data security, mobility, performance, and replication
- Simplified provisioning with contextual workflows
- Reduce operational expenses; manage more with less effort
- Automatically align with business applications, define tiers, and set policies by application workload for maximum performance
- Automate optimal data placement to increase storage utilization by up to 50%
- Automatically move inactive data to lower-cost storage
- Automatically move active data to highest-performing tier
- Define tiers and set policies to optimize cost
Hitachi Tuning Manager provides comprehensive storage system health monitoring and troubleshooting to deliver the operational efficiencies required to optimize shared Hitachi storage resources.
Advanced SAN multipathing
Hitachi Dynamic Link Manager offers robust multipath SAN connections between servers and storage systems. It provides fault-tolerant failover, failback, load balancing, and centralized path management, for improved information access, usability, and availability. Automatic workload balancing helps to maintain outstanding system performance across all available paths. If one path fails, Dynamic Link Manager automatically switches the I/O to an alternate path, ensuring that an active route to data is always available.
Dynamic Link Manager offers the following benefits:
- Improves system performance by spreading I/O request workload across available paths to ensure that no single path is overworked or underutilized
- Provides a high level of data availability through automatic path failover and failback, ensuring continuous access to application data, improved application performance, and reduced risk of financial loss due to failures of critical applications
- Improves availability and data access on storage systems in SAN environments, with path failover and I/O balancing over multiple HBAs
- With its health-check facility, monitors online path status at specified intervals, and places a failed path offline when an error is detected
- Provides a centralized facility for managing path failover, automatic failback, and selection of I/O balancing techniques through integration with Hitachi Global Link Manager
- Eases installation and use through the auto-discovery function, which automatically detects all available paths for failover and load balancing
- Provides one path-management tool for all your operating systems
- Includes a command line interface (CLI) that allows administrators the most flexibility in managing paths across the network
- Provides manual and automatic failover and failback support
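The behaviors above can be illustrated with a toy path router: round-robin balancing across the available paths, plus automatic rerouting when a health check marks a path offline. This is a conceptual stand-in, not HDLM's implementation, and the path names are made up.

```python
class MultipathRouter:
    """Toy round-robin load balancer with failover across SAN paths."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.offline = set()
        self._next = 0

    def fail_path(self, path):
        """Mark a path offline, as a health check would after an error."""
        self.offline.add(path)

    def pick_path(self):
        """Round-robin over online paths, skipping failed ones."""
        online = [p for p in self.paths if p not in self.offline]
        if not online:
            raise RuntimeError("no path to device")
        path = online[self._next % len(online)]
        self._next += 1
        return path

router = MultipathRouter(["hba0-port1A", "hba1-port2A"])
router.pick_path()                      # I/O alternates across both paths
router.fail_path("hba0-port1A")         # health check detects an error
print(router.pick_path())               # I/O reroutes to the surviving path
```

After the simulated failure, every subsequent I/O lands on the surviving path, which is the essence of automatic failover with continued data access.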
Hitachi Replication Manager provides management capabilities to configure, manage, and monitor Hitachi replication products for local and remote sites. Replication Manager provides support for multiple data centers and multiple storage systems at each data center. It simplifies and optimizes configuration, operation, task management, automation, and monitoring of the critical applications and storage components of your replication infrastructure. The following figure shows the Replication Manager interface.
Replication Manager offers the following benefits:
Replication Manager can be used to manage storage systems and hosts at different sites. The status of copy pairs, the progress of copy operations, and performance information (such as data transfer delays between copy pairs and buffer usage when copying volumes) can be centrally managed from a single console.
Replication Manager supports creating backups of databases. Called application replicas, these backups are managed as a series of secondary volumes that are rotated on a scheduled basis. Replication Manager manages the relationships between backup objects and their associated logical units within storage devices, the relationships between primary and secondary volumes, and the backup history. Replicas can be mounted and dumped to tape using scripts executed through Replication Manager.
Replication Manager provides a centralized workspace where you can visually check the structure of copy pairs configured across multiple storage systems. Host and storage system relationships and copy pair definitions can be visualized using functional views. Copy pairs in complex configurations such as multitarget configurations and cascade configurations can be viewed as lists.
Replication Manager provides capabilities to specify monitoring conditions for designated copy pairs and sidefiles. Alerts can be automatically generated when the conditions are satisfied. You can continue monitoring the system even when not logged in to Replication Manager because alerts can be reported in the form of email messages or SNMP traps. The status of application replicas is tracked and reflected in summary form so that you know to what extent the application databases are protected. These monitoring features allow you to work out advance strategies to handle potential problems such as the deterioration of transfer performance due to insufficient network capacity or blocked pairs caused by buffer overflows.
Replication Manager provides capabilities to configure additional copy pairs as business operations expand and improve performance by expanding buffer capacity for copying volumes. You can also change pair states manually after error recovery. Using the wizards provided in the GUI, you can set up pairs while visually keeping track of complex replication structures.
When using Universal Replicator, you can check copy performance visually and perform root cause analysis using the Replication tab of the Hitachi Command Suite GUI.
Command Control Interface
Command Control Interface (CCI) CLI software provides powerful command-line control for Hitachi Virtual Storage Platform family storage systems, enabling you to perform storage system configuration and data management operations by issuing commands to the storage systems.
CCI provides command-line control and advanced functionality for local and remote replication operations, including ShadowImage, Thin Image, TrueCopy, Universal Replicator, and global-active device. CCI commands can be used interactively or in scripts to automate and standardize storage administration functions, thereby simplifying the job of the storage administrator and reducing administration costs. For remote replication operations, CCI interfaces with the system software and high-availability (HA) software on the host as well as the software on the storage system. CCI provides failover operation commands that support mutual hot standby in conjunction with industry-standard failover products. Using CCI scripting, you can set up and execute a large number of commands in a short period of time while integrating host-based high-availability control over copy operations.
For VSP G series and VSP F series, CCI provides command-line access to the same provisioning operations that are available in Hitachi Device Manager - Storage Navigator. Because some provisioning operations can take time to process, CCI provides two ways to execute the configuration setting command: synchronously or asynchronously. Asynchronous command processing is used for operations that take time to process on the storage system. Once an asynchronous command has been issued, you can execute additional commands without having to wait for the asynchronous command to complete, and you can also monitor the completion status of asynchronous commands.
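The asynchronous pattern can be sketched generically: issue the command, receive a request ID, then poll for completion. With CCI, completion is checked with a command such as `raidcom get command_status`; the helper below is a generic illustration of the polling loop, not CCI syntax, and the status strings are assumptions.

```python
import time

def wait_for_async(get_status, request_id, timeout_s=60.0, poll_s=1.0):
    """Poll an async command's status until it completes or times out.

    get_status is any callable mapping a request ID to a status string;
    with CCI you would shell out to the status-checking command instead.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status(request_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_s)               # keep working; check again later
    raise TimeoutError(f"request {request_id} still running")

# Fake status source standing in for the storage system:
states = iter(["processing", "processing", "completed"])
print(wait_for_async(lambda rid: next(states), "REQ-1", poll_s=0.01))
```

Because the caller only polls, other commands can be issued while the asynchronous operation is still being processed on the storage system, which is the behavior the text describes.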
Advanced global storage virtualization and software bundles
Optional SVOS features include best-in-class local and remote replication technologies as well as active-active metro clustering to provide rapid recovery from system and site-level events that could disrupt access to data. SVOS business continuity solutions are designed for maximum flexibility, enabling organizations to build a recovery strategy that spans multiple data centers and delivers to their specific SLAs.
Optional software products and packages for SVOS systems include:
- Hitachi Data Mobility package increases storage performance and lowers costs with automated data placement.
- Global-active device feature license enables active-active storage clusters that span data centers for business continuity and superior data sharing.
- Nondisruptive migration delivers large-scale migration capabilities that require less time and effort to execute, delivering continuous operations while ensuring application quality of service and maintaining data protection.
- Hitachi Local Replication package quickly creates space-efficient, point-in-time snapshots, eliminating the need for a traditional backup window and enabling fast recovery.
- Hitachi Remote Replication package includes synchronous and asynchronous replication providing zero RPO and near-zero RTO capabilities across three or even four geographically dispersed locations.
- Data-at-Rest Encryption software protects data at rest on internal storage media for enhanced data privacy and compliance.
Hitachi Data Mobility software
By simplifying tiered storage management, Hitachi Data Mobility software delivers the highest storage performance for the most frequently accessed data while at the same time lowering costs by automatically optimizing data placement.
Hitachi Data Mobility software provides complete data movement capabilities. It combines two leading data mobility technologies with Hitachi Dynamic Tiering and Hitachi Tiered Storage Manager software. The combination enables intelligent placement of data within virtualized Hitachi storage environments while optimizing business application service levels.
- Hitachi Dynamic Tiering automates data placement and access in a tiered storage environment. It dynamically moves the most active data to the highest-performing storage tiers while moving less frequently accessed data to lower tiers. Hitachi Dynamic Tiering active-flash mode moves suddenly active data via synchronous promotion to higher-performing tiers in real time. In seconds to subseconds, active flash responds to workload demands based on current I/O activity. Active flash proactively preserves flash endurance by monitoring and demoting pages that exceed thresholds for heavy write I/O.
- Hitachi Tiered Storage Manager enables administrators to proactively match business application price, performance, and availability characteristics to storage resource attributes. Administrators can proactively create and pool different storage classes to maximize operational and cost efficiency and easily align them to specific business application needs. As storage service levels change over time, Tiered Storage Manager facilitates nondisruptive data migration between storage tiers and externally virtualized storage resources to match new application requirements. Through custom data management policies, Tiered Storage Manager helps you to properly monitor and control the automated and active behavior of Hitachi Dynamic Tiering.
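The promote/demote decision at the heart of tiering can be sketched as a ranking of pages by recent I/O activity, with the busiest pages placed on the limited high-performance tier. This is a toy illustration, not the Dynamic Tiering algorithm; page IDs and counts are invented.

```python
def place_pages(io_counts: dict[str, int], high_tier_slots: int):
    """Return (high_tier, low_tier) page IDs ranked by I/O activity."""
    ranked = sorted(io_counts, key=io_counts.get, reverse=True)
    return ranked[:high_tier_slots], ranked[high_tier_slots:]

# The two busiest pages are promoted; the cold page is demoted.
high, low = place_pages({"p1": 500, "p2": 20, "p3": 900}, high_tier_slots=2)
print(high)  # -> ['p3', 'p1']
```

Re-running the placement as counts change models the ongoing movement of suddenly active data up and of less frequently accessed data down.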
High availability with global-active device
Global-active device (GAD) uses volume replication to provide a high-availability environment for hosts across storage systems and sites. Global-active device provides data protection and minimizes data-access disruptions for host applications due to site or storage system failures, ensuring continuous, simplified operations in distributed environments. Efficient and scalable active-active design gives you continuous application availability for both traditional and cloud storage. Active-active stretched clusters over local and metro distances allow application access to replicated data from the shortest path, for the highest performance. Global-active device works seamlessly with other advanced capabilities of SVOS to simplify and improve disaster recovery operations and dramatically reduce return-to-operations time, enabling customers to meet strict service-level agreements for zero or near-zero recovery point objective (RPO) and recovery time objective (RTO).
Establishing a global-active device pair has the following benefits:
- If a primary volume becomes unavailable, the host continues to transparently access the secondary volume.
- You do not need to perform storage system tasks such as suspension or resynchronization of a global-active device pair due to a host failure.
- Virtual machine integration: If a virtual machine is creating a high load at one site, you can move the load to the other site, eliminating the need for data migration.
A GAD pair consists of a primary data volume and a synchronous, remote copy on Hitachi VSP G series storage systems at the primary and secondary sites. A virtual storage machine is set up in the secondary VSP G series storage system using the physical information from the primary system. The GAD primary and secondary volumes are assigned the same virtual LDEV number in the virtual storage machine. As a result, the host treats the paired volumes as a single volume on a single storage system, with both volumes receiving the same data from the host.
The following figure shows an example GAD configuration.
GAD pair volumes are monitored by a quorum disk (preferably located at a third site). The quorum disk acts as a heartbeat for the GAD pair, with the primary and secondary storage systems accessing the quorum disk periodically to check on the other storage system. In the event of a communication or hardware failure, the quorum disk determines which storage system is still accessible, allowing operations to continue without interruption.
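The arbitration idea can be sketched as a freshness check on heartbeats written to the quorum disk: a system whose heartbeat goes stale is treated as unreachable, and the survivor continues serving I/O. The staleness window here is an illustrative assumption, not the system's actual check interval.

```python
# Illustrative staleness window only -- the real check interval differs.
STALE_AFTER_S = 5.0

def survivor(heartbeats: dict[str, float], now: float):
    """Return the systems whose quorum-disk heartbeats are still fresh.

    heartbeats maps a system name to the timestamp of its last write
    to the quorum disk; anything older than the window is fenced.
    """
    return [sys for sys, ts in heartbeats.items()
            if now - ts <= STALE_AFTER_S]

# The secondary last wrote 9 seconds ago, so only the primary is fresh:
alive = survivor({"primary": 100.0, "secondary": 92.0}, now=101.0)
print(alive)
```

In the real configuration both storage systems perform this check against each other via the quorum disk, so operations continue on whichever side remains accessible.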
The SAN multipathing software on the host runs in an active-active configuration. If the primary volume (P-VOL) or secondary volume (S-VOL) cannot be accessed, host I/O is automatically redirected to an alternative path. Native multipath software operates at campus distances using cross-site paths (as shown in the previous diagram). At metro distances, Hitachi Dynamic Link Manager (HDLM) offers increased performance using preferred paths (shortest possible route).
Global-active device requires three storage systems: primary, secondary, and an external system used for the quorum disk. The configuration can be set up across one, two, or three sites.
- In a three-site configuration (recommended), each storage system is located at a separate site. This configuration provides maximum protection against system failures and site failures.
- In a two-site configuration, both the primary storage system and the quorum storage system are located at the primary site. This configuration provides a moderate level of protection against system and site failures.
- In a one-site configuration (not shown), all storage systems are located at the same site. This configuration protects against system failures but not site-wide failures.
For details about GAD configurations, requirements, and setup, see the following documentation:
- Global-Active Device User Guide
- Hitachi Command Suite User Guide
- Hitachi Command Suite Dynamic Link Manager documentation
In a GAD system, the server accesses the primary site and the secondary site simultaneously and shares the same data between the two sites (at campus distance). If a failure occurs at one site, you can continue operations at the other site. However, if a failure occurs at both sites, for example due to a large-scale disaster, you cannot continue operations with the data redundancy provided by only global-active device.
To manage this situation, you can implement a 3-data-center (3DC) configuration by combining GAD and Universal Replicator (UR). This is called a GAD 3DC delta resync (GAD+UR) configuration. If a failure occurs at both the primary site and the GAD secondary site, the GAD+UR configuration enables you to continue operations using the UR secondary site (at metro distance).
For more information about GAD 3DC delta resync operations, see the following documents:
- Global-Active Device User Guide
- Hitachi Universal Replicator User Guide
- Hitachi Command Suite User Guide
- Hitachi Command Suite Replication Manager User Guide
GAD Enhanced for NAS takes advantage of the GAD feature to cluster two VSP Gx00 or Fx00 systems with NAS modules across two sites. This synchronous disaster recovery configuration, also referred to as a stretched cluster, creates a four-node cluster stretched across two sites within 100 km of each other.
For more information about this special configuration, contact your Hitachi Vantara representative.
When the paths connecting a server and a storage system in a GAD configuration contain a short-distance straight path and a long-distance cross path, I/O performance varies depending on the path. Using Asymmetric Logical Unit Access (ALUA), you can set the short-distance straight path as the preferred I/O path and the inefficient long-distance cross path as the nonpreferred path to improve overall system performance.
To use ALUA to set the preferred and nonpreferred paths for GAD pairs in a cross-path configuration, you first enable the ALUA mode on the storage system, which sets all paths as preferred paths, and then you set the asymmetric access status of the cross path as a nonpreferred path. For details and instructions, see the Global-Active Device User Guide.
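The effect of these settings on the host side can be sketched with a simple path-selection model. This is a conceptual illustration of how a multipath initiator can honor ALUA access states, with hypothetical path names; it is not the HDLM or storage-side implementation.

```python
# Conceptual sketch: a host multipath layer using ALUA asymmetric access
# states to prefer the short straight path over the long cross path.
# Illustrative only; path names and structure are hypothetical.

PREFERRED = "active/optimized"         # straight path (short distance)
NONPREFERRED = "active/non-optimized"  # cross path (long distance)

def select_path(paths):
    """Pick a healthy preferred path; fall back to a non-preferred one."""
    healthy = [p for p in paths if p["healthy"]]
    for p in healthy:
        if p["alua_state"] == PREFERRED:
            return p["name"]
    # All preferred paths failed: the cross path keeps I/O running,
    # at the cost of higher latency.
    return healthy[0]["name"] if healthy else None

paths = [
    {"name": "straight-CL1-A", "alua_state": PREFERRED, "healthy": True},
    {"name": "cross-CL2-B", "alua_state": NONPREFERRED, "healthy": True},
]
assert select_path(paths) == "straight-CL1-A"   # preferred path wins
paths[0]["healthy"] = False                     # straight path fails
assert select_path(paths) == "cross-CL2-B"      # cross path keeps I/O alive
```

The key point is that the nonpreferred cross path is never used while a preferred path is healthy, but it remains available for transparent failover.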
One of the biggest challenges during technology refresh cycles is to eliminate downtime and service disruption when the data used by the host is copied to a new volume on the new storage system and the host is reconfigured to access the new volume. Nondisruptive migration makes it possible to relocate data from existing storage systems to newer storage systems without interrupting access by hosts. Data migration is accomplished using the global storage virtualization technology of the target storage systems. Resources on the source storage system are virtualized on the target storage system. From the perspective of the host, I/O requests continue to be serviced by the source storage system during the migration process.
The following storage system combinations are supported:
| Source storage system | Target storage system |
| :--- | :--- |
| Hitachi Universal Storage Platform V/VM | VSP G1000, VSP G1500, and VSP F1500 |
| Hitachi Universal Storage Platform V/VM | VSP Gx00 models |
| Hitachi Virtual Storage Platform | VSP G1000, VSP G1500, and VSP F1500 |
| Hitachi Virtual Storage Platform | VSP Gx00 models |
| Hitachi Unified Storage VM | VSP G1000, VSP G1500, and VSP F1500 |
| Hitachi Unified Storage VM | VSP Gx00 models |
Nondisruptive migration offers these benefits:
- Data is migrated between storage systems without interrupting host applications.
- You can maintain data replication throughout the migration process by allowing the target storage system to inherit pair configurations before migrating the actual data.
- You can reduce the overall migration effort by importing configuration definition files instead of having to reconfigure pairs on the target storage system.
- The migration process is designed to be carried out in stages to reduce demands on network bandwidth.
- You can easily monitor the progress and status of migration projects and jobs by reviewing both numerical and graphical data, which includes an estimate of how long the migration is likely to take.
- Up to seven source storage systems can be consolidated into a single target storage system.
The following workflow summarizes the stages of the migration process.
- A virtual storage machine, a representation of the source storage system that behaves exactly like its physical counterpart (with the same name and serial number), is created in the target storage system.
- The source volume is mapped within the virtual storage machine as a virtual device (with the same LDEV ID as the source volume). This is known as the target volume.
The HCS nondisruptive migration workflow prompts you to perform the following operations manually:
- Initiate I/O between the target storage system and the host.
- Disable I/O between the source storage system and the host.
Perform these operations using path management software (such as Dynamic Link Manager), native OS multipath functions, or by changing the zoning configuration. After you confirm that the switch was successful, the I/O path change is complete.
Initially, read and write requests continue to be processed by the source storage system. This is known as cache through mode, and is in effect while the volume on the source storage system remains connected to the host.
To prevent the host from accessing the source volume through the source storage system, the HCS nondisruptive migration workflow reminds you to delete the LUN path between the source volumes and the host before continuing.
When you disable the connection between the host and the volume on the source storage system, the cache is switched to write sync mode. Thereafter, all read and write requests are handled by the target storage system, and data is written to both the source and target volumes.
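The two cache modes can be sketched with a small model. This is an illustrative simplification of the routing behavior described above, not the product's internal implementation; class and attribute names are hypothetical.

```python
# Sketch of the two cache modes used during nondisruptive migration.
# Conceptual model only; names and structure are illustrative.

class MigrationVolume:
    def __init__(self):
        self.mode = "cache-through"   # host still connected to the source
        self.source = {}              # data blocks on the source volume
        self.target = {}              # data blocks on the target volume

    def disconnect_source_host_path(self):
        # Deleting the host's LUN path to the source switches the mode.
        self.mode = "write-sync"

    def write(self, block, data):
        if self.mode == "cache-through":
            # Reads and writes are still processed by the source system.
            self.source[block] = data
        else:
            # The target services I/O, but writes go to BOTH volumes so
            # the source stays consistent until the final copy completes.
            self.source[block] = data
            self.target[block] = data

vol = MigrationVolume()
vol.write("blk0", "old")
assert "blk0" not in vol.target       # cache-through: source only
vol.disconnect_source_host_path()
vol.write("blk1", "new")
assert vol.source["blk1"] == vol.target["blk1"] == "new"  # write sync
```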
If you plan to migrate secondary volumes, the HCS nondisruptive migration workflow leads you through the process of re-creating the source secondary volumes on the target storage system.
In this stage, the data is copied to its final destination on the target storage system.
The following figure shows a nondisruptive migration configuration with secondary volumes and multiple servers. The term backup server is used because this server is responsible for running the scripts that copy the data from the primary to the secondary volumes.
For a complete description of the nondisruptive migration feature, including requirements and setup, see the Nondisruptive Migration User Guide and the Hitachi Command Suite User Guide.
Hitachi Local Replication software
Hitachi Local Replication software combines Hitachi ShadowImage®, Hitachi Thin Image, and Hitachi Replication Manager to deliver convenient and cost-effective full-volume data cloning for fast, point-in-time data copies. Hitachi Local Replication ensures rapid restart-and-recovery times by combining local mirroring of full volumes with fast, space-efficient snapshots.
- High-speed, nondisruptive local mirroring technology of Hitachi ShadowImage® replication software rapidly creates multiple copies of mission-critical information within all Hitachi storage systems. ShadowImage software keeps data RAID-protected and fully recoverable, without affecting service or performance levels. Replicated data volumes can then be split from the host applications and used for system backups, application testing, and data mining applications while the business continues to run at full capacity.
- The high-speed, nondisruptive snapshot technology of Hitachi Thin Image snapshot software rapidly creates up to one million point-in-time copies of mission-critical information within any Hitachi storage system or virtualized storage pool, without impacting host service or performance levels. Because snapshots store only the changed data, the storage capacity required for each snapshot copy is substantially smaller than the source volume. As a result, Thin Image can provide significant savings over full volume cloning methods. Thin Image snapshot copies are fully read/write compatible with other hosts and can be used for system backups, application testing, and data mining applications while the business continues to run at full capacity.
- Part of Hitachi Command Suite, Replication Manager software configures, monitors, and manages Hitachi local and remote replication products.
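The space savings of changed-data snapshots described above can be illustrated with a small copy-on-write model. This is a conceptual sketch only, with hypothetical names; Thin Image's internal snapshot mechanism is not reproduced here.

```python
# Sketch of why snapshots that store only changed data use far less
# capacity than full clones. Conceptual model, not Thin Image internals.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.snapshots = []           # each snapshot holds only old blocks

    def snapshot(self):
        self.snapshots.append({})     # empty until blocks change
        return len(self.snapshots) - 1

    def write(self, block, data):
        # Preserve the pre-write data in every snapshot that lacks it.
        for snap in self.snapshots:
            if block not in snap:
                snap[block] = self.blocks[block]
        self.blocks[block] = data

    def read_snapshot(self, snap_id, block):
        # Unchanged blocks are read from the live volume.
        return self.snapshots[snap_id].get(block, self.blocks[block])

vol = Volume({f"b{i}": "orig" for i in range(1000)})
s0 = vol.snapshot()
vol.write("b0", "changed")
assert vol.read_snapshot(s0, "b0") == "orig"   # point-in-time view kept
assert len(vol.snapshots[s0]) == 1             # 1 block stored, not 1000
```

A full clone of this 1000-block volume would consume 1000 blocks; the snapshot consumes one, which is why snapshot capacity scales with the change rate rather than the volume size.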
Application-consistent ShadowImage clones and Thin Image snapshots can be orchestrated using Hitachi Data Instance Director (HDID) software. HDID supports Microsoft® Exchange and SQL Server® as well as Oracle databases on Linux operating systems. These clones and snapshots can be easily created as part of a complete data protection workflow, using HDID's unique whiteboard-like interface. HDID can also trigger a ShadowImage clone or Thin Image snapshot on the remote side of a distance replication pair.
Hitachi Vantara Global Services Solutions provides Implementation Services for Hitachi ShadowImage® and Hitachi Thin Image software. These services help organizations improve testing and application deployment operations with high-speed, problem-free data duplication. Consultants tailor the configuration and integration of the local replication software to serve an organization's backup and recovery application requirements.
Hitachi Remote Replication software
Hitachi Remote Replication software combines Hitachi TrueCopy®, Hitachi Universal Replicator, and Hitachi Replication Manager solutions to enable remote data protection at up to four data centers. Providing continuous, nondisruptive, host-independent data replication, Hitachi Remote Replication software ensures the highest levels of data integrity for local or metropolitan areas. Copies generated by Hitachi Remote Replication software products can be used for the rapid recovery or restart of production systems on primary or secondary (disaster recovery) systems following an outage. They can also be used for nondisruptive test and development, data warehousing, data mining, data backup, or data migration applications.
- Hitachi TrueCopy® enables synchronous remote replication of mission-critical data from a primary data center to a secondary data center at distances up to 300 km. TrueCopy delivers a zero recovery point objective (RPO) and automated failover capabilities.
- Hitachi Universal Replicator features journal disk caching to achieve tight recovery point objectives (RPOs), even in the event of a network outage. Universal Replicator provides asynchronous remote copy, over any distance, for Hitachi VSP G series and VSP F series storage. Deployed implementations can be configured with or without delta resync, which ensures replication consistency for the highest level of remote copy data integrity at any distance.
- Part of Hitachi Command Suite, Replication Manager software configures, monitors, and manages Hitachi local and remote replication products.
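The journal-based asynchronous model can be sketched as follows. This is an illustrative simplification under assumed names, not Universal Replicator's actual implementation: host writes complete locally and are queued in an ordered journal, which is drained to the remote site as bandwidth allows.

```python
# Sketch of journal-based asynchronous replication. Conceptual only;
# not Universal Replicator's actual implementation.

from collections import deque

class AsyncReplicator:
    def __init__(self):
        self.journal = deque()        # ordered entries, cached on journal disks
        self.remote = {}

    def write(self, block, data):
        # Host I/O completes immediately; replication happens later,
        # so distance adds no write latency.
        self.journal.append((block, data))

    def drain(self, max_items=None):
        # Journal entries are applied to the remote copy in write order,
        # preserving consistency. A network outage simply lets the
        # journal grow; draining resumes when the link returns.
        count = 0
        while self.journal and (max_items is None or count < max_items):
            block, data = self.journal.popleft()
            self.remote[block] = data
            count += 1

rep = AsyncReplicator()
rep.write("b1", "v1")
rep.write("b2", "v2")
assert rep.remote == {}               # backlog = the current RPO exposure
rep.drain()
assert rep.remote == {"b1": "v1", "b2": "v2"}
```

The journal backlog at any moment corresponds to the data that would be lost if the primary site failed, which is why journal capacity and drain rate govern the achievable RPO.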
TrueCopy and Universal Replicator can also be automated as part of an end-to-end, unified data protection, retention, and recovery management solution within Hitachi Data Instance Director (HDID) software. HDID can also automatically trigger Thin Image snapshots and ShadowImage clones from the remote copy of the data.
From remote copy planning to advanced implementation services, Hitachi Vantara Global Services Solutions can support the successful and timely deployment of the most resilient data protection infrastructures. Services to support TrueCopy and Universal Replicator software and other business continuity and disaster recovery solutions from Hitachi Vantara are available.
Hitachi VSP G series and VSP F series storage systems provide a performance-friendly AES-256-XTS encryption capability on the back-end I/O module. This capability protects data at rest on internal storage media attached to those modules. Encrypting data prevents information leakage when the storage system, or the drives in it, are replaced. Similarly, the encryption capability provides an extra measure of protection and confidentiality for lost, stolen, or misplaced media that may contain sensitive information.
The data-at-rest encryption feature has two components: the encrypting back-end director hardware component and the Encryption software license. Encryption can be applied to some or all of the internal drives with no throughput or latency impacts for data I/O and little or no disruption to existing applications and infrastructure. Data-at-rest encryption includes integrated key management functionality that is both simple and safe to use, providing a unique encryption key for each individual piece of media internal to the array.
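The idea of a unique key per piece of media can be illustrated with a simple key-derivation sketch. This is a generic conceptual example using Python's standard library; the master key, serial numbers, and derivation scheme are hypothetical, and Hitachi's integrated key management and AES-256-XTS hardware path work differently.

```python
# Conceptual sketch: deriving a distinct data-encryption key per piece
# of media from a master key. Illustrative only; not Hitachi's actual
# key management implementation. All values are hypothetical.

import hashlib

MASTER_KEY = b"example-master-key-material"   # hypothetical secret

def media_key(media_serial: str) -> bytes:
    # One distinct 256-bit key per drive/media serial number.
    return hashlib.pbkdf2_hmac(
        "sha256", MASTER_KEY, media_serial.encode(), 100_000, dklen=32
    )

k1 = media_key("HDD-0001")
k2 = media_key("HDD-0002")
assert k1 != k2                       # each media gets its own key
assert k1 == media_key("HDD-0001")    # derivation is deterministic
assert len(k1) == 32                  # 256-bit key, as for AES-256
```

Because each drive has its own key, destroying or rotating that one key renders only that drive's data unreadable, which is what makes media replacement and decommissioning safe.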
Data-at-rest encryption is configured and monitored through the Hitachi Command Suite and Device Manager - Storage Navigator management software, providing role-based access control (RBAC) for the separation of duties including enabling/disabling encryption as well as archiving encryption keys.
The Hitachi approach to software-defined solutions enables you to effectively manage your IT infrastructure to align storage resources to rapidly changing business demands, achieve superior returns on infrastructure investments, and minimize operational costs. Hitachi's suite of management software delivers higher storage availability, mobility, and optimization for key business applications, automating storage management operations with integrated best practices to accelerate new resource deployments. Using Hitachi's storage management software, administrators are able to manage more storage capacity with less effort and ensure service levels for business-critical applications are met while increasing utilization and performance of virtualized storage assets.
Management software for Hitachi VSP G series and VSP F series includes:
- Hitachi Data Instance Director (HDID)
- Hitachi Data Center Analytics (HDCA)
- Hitachi Automation Director (HAD)
- Hitachi Infrastructure Analytics Advisor (HIAA)
Hitachi Data Instance Director
Hitachi Data Instance Director (HDID) provides business-defined data protection, which simplifies the creation and management of complex, business-defined policies to meet service level objectives for availability.
HDID supports the Hitachi VSP G series storage systems, offering an orchestration layer for remote replication supporting Hitachi TrueCopy® and Hitachi Universal Replicator, local and remote snapshots and clones with Hitachi Thin Image and Hitachi ShadowImage®, continuous data protection, and incremental backup.
HDID provides the benefits described in the following sections.
HDID offers two approaches to meeting operational recovery requirements, depending on whether the data being protected is stored on Hitachi storage.
- Storage-based operational recovery
HDID configures, automates, and orchestrates local application-consistent snapshot and clone copies using the local replication capabilities of the Hitachi Virtual Storage Platform (VSP) family, Hitachi Unified Storage VM (HUS VM), and Hitachi NAS Platform (HNAS).
This integration provides the ability to create fast, frequent copies of production data, with no impact on the performance of the production system. Very aggressive recovery point objectives (RPO) can be easily achieved for Microsoft® Exchange and Microsoft SQL Server® on Microsoft Windows® platforms, for Oracle database environments on Linux, AIX, and Solaris, and for SAP HANA environments. HDID is integrated with Hitachi Virtual Infrastructure Integrator (V2I) to provide storage-based protection of VMware vSphere® environments. Other applications can also be integrated using the simple scripting interface.
These snapshots and clones can be mounted and unmounted automatically as part of an HDID policy workflow. They can facilitate access to a current copy of production data for secondary purposes such as test and development, or backup to a target device such as a purpose-built backup appliance (PBBA) or tape. HDID administrators can also view and restore storage-based snapshots created in VMware environments by Hitachi Virtual Infrastructure Integrator.
- Host-based operational recovery
HDID includes several storage-agnostic technologies for protection of application and file system data. Continuous data protection (CDP) and live backup support Windows environments, with application-specific support for Exchange and SQL Server. Batch mode backup is supported on Windows, Linux, and IBM® AIX® systems.
HDID provides storage-based and software-based choices for restoring operations at, or from, another location following a site-level outage.
- Storage-based disaster recovery
HDID configures and automates Hitachi TrueCopy synchronous remote replication software and Hitachi Universal Replicator software on block-based systems, and file replication on HNAS, to provide a copy of data in another location. HDID can also orchestrate application-aware snapshots of these remote replicas.
- Host-based disaster recovery
The backup data stored locally by HDID can be asynchronously replicated, on a scheduled basis, to another location. It does not require specific storage for either the primary or disaster recovery copy.
With HDID, moving Microsoft Exchange and Windows file data to Hitachi Content Platform (HCP) for archiving enables your administrators to reduce the amount of data in their production systems and meet corporate and regulatory data retention requirements.
You can leave the archived file on the source system, delete it, or replace it with a stub file as a pointer. HDID archives files as individual objects that can be easily viewed, retrieved, or audited with standard HCP tools. No special software is needed to unpack or decode the archived files.
One of the many benefits of Hitachi Data Instance Director is its single-footprint platform. It enables you to layer, combine, and orchestrate backup, CDP, snapshots, replication, and archive to achieve the specific service levels of data recovery and retention that each application requires.
The unique graphical user interface (GUI) incorporates a powerful policy builder that resembles laying out business processes on a whiteboard. Easily create and change policies as needed, visualize data protection processes, and align them with management processes.
Additional features of HDID include:
- Block-level, incremental-forever data capture dramatically reduces the storage capacity needed for copy data, as compared to traditional full + incremental methods.
- To further reduce downtime, bare metal recovery images can be created using standard backup processes. The operating system volume and application volumes can be recovered in a single operation.
- HDID supports a range of storage repositories, including block, file, object, Microsoft Azure and tape storage.
- HDID scales seamlessly to manage hundreds of terabytes of data.
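The capacity advantage of block-level incremental-forever capture can be shown with back-of-envelope arithmetic. The volume size, change rate, and retention figures below are illustrative assumptions, not measured HDID results.

```python
# Back-of-envelope comparison of copy-data capacity: traditional weekly
# full + daily incrementals vs. block-level incremental-forever.
# All figures are illustrative assumptions, not product measurements.

def full_plus_incremental(volume_tb, daily_change, weeks, retained_fulls):
    # Each retained weekly full stores the whole volume, plus six
    # daily incrementals per week at the daily change rate.
    fulls = retained_fulls * volume_tb
    incrementals = weeks * 6 * volume_tb * daily_change
    return fulls + incrementals

def incremental_forever(volume_tb, daily_change, weeks):
    # One initial baseline copy, then only changed blocks every day.
    baseline = volume_tb
    changes = weeks * 7 * volume_tb * daily_change
    return baseline + changes

vol_tb, change = 100.0, 0.02          # 100 TB volume, 2% daily change
a = full_plus_incremental(vol_tb, change, weeks=4, retained_fulls=4)
b = incremental_forever(vol_tb, change, weeks=4)
assert a == 448.0                     # 4 fulls + 24 incrementals
assert b == 156.0                     # baseline + 28 days of changes
assert b < a                          # incremental-forever needs far less
```

Under these assumptions the incremental-forever approach needs roughly a third of the copy-data capacity over the same four-week window.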
Hitachi Automation Director
Hitachi Automation Director is a software solution that provides tools to automate and simplify the end-to-end storage provisioning process for storage and data center administrators. The building blocks of the product are prepackaged automation templates known as service templates. These templates can be customized to your specific environment and processes, creating services that automate complex tasks such as resource provisioning. When Automation Director is configured, it integrates with existing Hitachi Command Suite applications, including Hitachi Device Manager and Hitachi Tuning Manager, to automate common infrastructure management tasks by using your existing infrastructure services.
Some of the key features of Automation Director are:
- Automation services for intelligent provisioning of volumes from different storage classes.
- Preconfigured service templates that help you create customized automation services.
- Role-based access to defined services.
- Intelligent pool selection based on an algorithm that chooses the best pools in terms of performance and capacity.
- Common service management attributes that can be assigned and shared across all automation services.
- A REST API for application integration.
- The ability to create infrastructure groups based on customer needs and environment.
Hitachi Automation Director offers the following benefits:
- Provisioning is simplified through use of service templates that can automate workflow, resulting in additional OPEX savings.
- Service customization can be performed by skilled storage administrators, increasing the efficiency of resource usage and reducing human error.
- Simplified infrastructure management, including classification of storage systems and high-level grouping of resources, significantly improves storage management and provides efficient utilization of resources.
- The ability to customize predefined service templates by using the Service Builder tool to address an organization's changing needs.
- The REST API facilitates integration of Automation Director with relevant IT automation processes.
Hitachi Data Center Analytics
Hitachi Data Center Analytics (HDCA) is a storage performance analytics application that includes a highly scalable data repository and analytics engine for historical performance and capacity trending across the data center. HDCA provides deep and granular performance monitoring and reporting to help users identify infrastructure bottlenecks and trends and optimize both application and storage system performance. This software enables a common, centralized storage analytics solution for Hitachi and third-party multivendor storage environments, reducing the need for vendor-specific performance analytics tools.
Hitachi Infrastructure Analytics Advisor
Hitachi Infrastructure Analytics Advisor (HIAA) is data center management software that monitors, reports on, and correlates end-to-end performance from server to storage. HIAA supports monitoring of Hitachi VSP G series and VSP F series storage systems. With Infrastructure Analytics Advisor, you can define and monitor storage service-level objectives (SLOs) for resource performance. You can identify and analyze historical performance trends to optimize storage system performance and plan for capacity growth. When a performance hot spot is identified or a service-level threshold is exceeded, the integrated diagnostic engine aids in diagnosing, troubleshooting, and finding the root cause of performance bottlenecks.
Using Infrastructure Analytics Advisor, you register resources (storage systems, hosts, servers, and volumes) and set service-level thresholds. You are alerted to threshold violations and possible performance problems (bottlenecks). Using analytics tools, you find which resource has a problem and analyze its cause to help solve the problem.
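The threshold-violation alerting described above can be sketched in a few lines. This is a simplified illustration with hypothetical sample data, not HIAA's diagnostic engine.

```python
# Sketch of SLO threshold monitoring: compare observed response times
# against a threshold and flag violations as alerts. Simplified
# illustration only; sample values are hypothetical.

def check_slo(samples_ms, threshold_ms):
    """Return the (timestamp, value) samples that violate the SLO."""
    return [(t, v) for t, v in samples_ms if v > threshold_ms]

# Hypothetical volume response-time samples (milliseconds).
samples = [("09:00", 4.1), ("09:05", 12.7), ("09:10", 3.8)]
alerts = check_slo(samples, threshold_ms=10.0)
assert alerts == [("09:05", 12.7)]    # one violation flagged for analysis
```

In practice the flagged resource then becomes the starting point for root-cause analysis with the analytics tools.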
The following figure describes how the Infrastructure Analytics Advisor ensures the performance of your storage environment based on real-time service level objectives (SLOs).
The system administrator uses Hitachi Infrastructure Analytics Advisor (HIAA) to manage and monitor the IT infrastructure based on SLOs, which match the service-implementation guidelines that are negotiated under a service level agreement (SLA) with consumers.
Infrastructure Analytics Advisor monitors the health of the IT infrastructure using performance indicators and generates alerts when SLOs are at risk.
The service administrator, drawing on data center expertise, uses Infrastructure Analytics Advisor to assign resources, such as VMs and storage capacity from registered storage systems, to consumer applications. Doing so helps manage critical SLO violations and ensures that service performance meets the service level agreements.