Software components and features

Virtual Storage Platform 5000 series (VSP 5000 series) is powered by Hitachi's Storage Virtualization Operating System RF (SVOS RF) and supported by Hitachi storage management software, enabling you to effectively manage, centralize, and control your software-defined infrastructure while at the same time reducing complexity, costs, and risk.

Storage Virtualization Operating System RF

SVOS RF delivers best-in-class business continuity and data availability and simplifies storage management. Flash performance is optimized with a patented flash-aware I/O stack to further accelerate data access. Adaptive inline data reduction increases storage efficiency while enabling a balance of data efficiency and application performance. Industry-leading storage virtualization allows SVOS RF to use third-party all-flash and hybrid arrays as storage capacity, consolidating resources and extending the life of storage investments.

SVOS RF works with the virtualization capabilities of the Hitachi VSP storage systems to provide the foundation for global storage virtualization. SVOS RF delivers software-defined storage by abstracting and managing heterogeneous storage to provide a unified virtual storage layer, resource pooling, and automation. SVOS RF also offers self-optimization, automation, centralized management, and increased operational efficiency for improved performance and storage utilization. Optimized for flash storage, SVOS RF provides adaptive inline data reduction to keep response times low as data levels grow, and selectable services enable data-reduction technologies to be activated based on workload benefit.

SVOS RF integrates with Hitachi’s Base and Advanced software packages to deliver superior availability and operational efficiency. You gain active-active clustering, data-at-rest encryption, insights via machine learning, and policy-defined data protection with local and remote replication.

  • Base software package

    The Base software package, which comes standard on all VSP 5000 series systems, delivers software that simplifies management and protection of your data and includes best-in-class analytics software to improve uptime and ROI of IT operations.

    The Base software package includes:

    • SVOS RF core functionality, including Universal Volume Manager for storage virtualization
    • Hitachi Ops Center Administrator for simple, GUI system management
    • Local replication for cloning and snapshots
    • Hitachi Data Instance Director for copy management and data protection
    • Hitachi Ops Center Analyzer for data-center-wide, AI-powered insights
    • Data Mobility for tiering between storage arrays and media types
  • Advanced software package

    When business continuity is critical, or when you need to automate and accelerate delivery of IT resources, you can upgrade to the Advanced software package. Remote replication and metroclustering software enables delivery of continuous, scalable data access. Intelligent automation software simplifies and enhances provisioning of resources to reduce operational overhead and avoid misconfigurations.

    The Advanced software package includes:

    • All Base package products and features
    • Remote replication (Universal Replicator and TrueCopy) for disaster recovery
    • Global-active device (GAD) for business continuity and metro clustering
    • Hitachi Ops Center Automator for data-center-wide workflow automation and orchestration

In-System Replication software

Hitachi's In-System Replication software for VSP 5000 series ensures rapid restart-and-recovery times by combining local mirroring of full volumes with fast, space-efficient snapshots.

  • High-speed, nondisruptive in-system mirroring technology of Hitachi ShadowImage® rapidly creates multiple copies of mission-critical information within the storage system in mainframe and open-systems environments. ShadowImage keeps data RAID-protected and fully recoverable, without affecting service or performance levels. Replicated data volumes can then be split from the host applications and used for system backups, application testing, and data mining applications, while business continues to run at full capacity.
  • The high-speed, nondisruptive snapshot technology of Hitachi Thin Image snapshot software rapidly creates copies of mission-critical information within the storage system or virtualized storage pool without impacting host service or performance levels. Because snapshots store only the changed data, the storage capacity required for each snapshot copy is substantially less than the capacity of the source volume. As a result, Thin Image can provide significant savings over full-volume cloning methods. Thin Image snapshot copies are fully read/write compatible with other hosts and can be used for system backups, application testing, and data mining applications. A scripted example of this snapshot workflow follows this list.
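
The minimal sketch below drives CCI from a Python script to create and store a Thin Image snapshot. It assumes a configured CCI/HORCM environment with raidcom already authenticated; the LDEV IDs, pool name, and snapshot group name are placeholders rather than values from this document.

    import subprocess

    def cci(*args):
        """Run a CCI raidcom command and return its output, raising on failure."""
        return subprocess.run(["raidcom", *args], check=True,
                              capture_output=True, text=True).stdout

    # Associate primary LDEV 1000 with snapshot LDEV 1001 in pool "snap_pool"
    # (all placeholder values), then store a point-in-time copy for the group.
    cci("add", "snapshot", "-ldev_id", "1000", "1001",
        "-pool", "snap_pool", "-snapshotgroup", "db_snaps")
    cci("modify", "snapshot", "-snapshotgroup", "db_snaps",
        "-snapshot_data", "create")
    print(cci("get", "snapshot", "-ldev_id", "1000"))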

Application-consistent ShadowImage clones and Thin Image snapshots can be orchestrated using Hitachi Data Instance Director (HDID) software. HDID supports Microsoft® Exchange and SQL Server® as well as Oracle databases on Linux operating systems. These clones and snapshots can be easily created as part of a complete data protection workflow. HDID can also trigger a ShadowImage clone or Thin Image snapshot on the remote side of a distance replication pair.

Hitachi Vantara Global Services Solutions provides Implementation Services for in-system replication software. These services improve testing and application deployment operations with high-speed, problem-free data duplication. Hitachi Vantara consultants tailor the configuration and integration of the in-system replication software to meet your backup and recovery application requirements.

Remote Replication software

Hitachi's Remote Replication software for VSP 5000 series combines Hitachi TrueCopy® and Universal Replicator solutions to enable remote data protection at up to four data centers. Providing continuous, nondisruptive, host-independent data replication, Hitachi Remote Replication software ensures the highest levels of data integrity for local or metropolitan areas. Copies generated by Hitachi Remote Replication software products can be used for the rapid recovery or restart of production systems on primary or secondary (disaster recovery) systems following an outage. They can also be used for nondisruptive test and development, data warehousing, data mining, data backup, and data migration applications.

SVOS RF business continuity solutions are designed for maximum flexibility, enabling organizations to build a recovery strategy that spans multiple data centers and delivers to their specific SLAs.

  • Hitachi TrueCopy® enables synchronous remote replication of mission-critical data from a primary data center to a secondary data center. TrueCopy delivers a zero recovery point objective (RPO) and automated failover capabilities and is compatible with open-systems and mainframe environments.
  • Universal Replicator provides asynchronous remote copy over any distance, using disk-based journals to buffer updates and maintain tight RPOs even in the event of a network outage, and is compatible with open-systems and mainframe environments. Deployments can be configured with or without delta resync, which ensures replication consistency for the highest level of remote copy data integrity at any distance. A minimal pair-creation sketch follows this list.
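
The sketch below, assuming HORCM configuration files that already define placeholder device groups TC_GROUP and UR_GROUP on paired CCI instances, shows how TrueCopy and Universal Replicator pairs might be created from a Python script; the options and group names are illustrative only.

    import subprocess

    def cci(cmd):
        """Run a CCI command, raising if it fails."""
        subprocess.run(cmd, check=True)

    # TrueCopy: synchronous pair for placeholder group TC_GROUP, fence level "never".
    cci(["paircreate", "-g", "TC_GROUP", "-f", "never", "-vl"])

    # Universal Replicator: asynchronous pair for placeholder group UR_GROUP,
    # using journal IDs 0 (primary) and 1 (secondary); IDs are placeholders.
    cci(["paircreate", "-g", "UR_GROUP", "-f", "async", "-vl", "-jp", "0", "-js", "1"])

    # Check pair status.
    cci(["pairdisplay", "-g", "UR_GROUP", "-fcx"])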

TrueCopy and Universal Replicator can also be automated as part of an end-to-end, unified data protection, retention, and recovery management solution within Hitachi Data Instance Director (HDID) software. HDID can also automatically trigger Thin Image snapshots and ShadowImage clones from the remote copy of the data.

From remote copy planning to advanced implementation services, Hitachi Vantara Global Services Solutions can support the successful and timely deployment of the most resilient data protection infrastructures. Services to support TrueCopy and Universal Replicator software and other business continuity and disaster recovery solutions from Hitachi Vantara are available.

High availability with global-active device

Global-active device (GAD) simplifies and automates high availability to ensure continuous operations for mission-critical data and applications. GAD provides full metroclustering between data centers that can be up to 500 km apart. Supporting read/write copies of the same data in two places at the same time, GAD's active-active design implements cross-mirrored storage volumes between matched VSP storage systems to protect data and minimize data-access disruptions for host applications due to site or storage system failures. GAD ensures that up-to-date data is always available and enables production workloads on both systems, while maintaining full data consistency and protection.

Figure: High-level GAD architecture
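
Assuming the quorum disk, virtual storage machine, and HORCM device group definitions are already in place, a GAD pair might be created from a script as in the following minimal sketch; the group name and quorum disk ID are placeholders.

    import subprocess

    # Create a GAD pair for the placeholder device group GAD_GROUP, using quorum
    # disk ID 0 (placeholder), then display its status.
    subprocess.run(["paircreate", "-g", "GAD_GROUP", "-f", "never", "-vl", "-jq", "0"],
                   check=True)
    subprocess.run(["pairdisplay", "-g", "GAD_GROUP", "-fcx"], check=True)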

Global-active device volume pairs have the following benefits:

  • Continuous I/O: If a primary volume becomes unavailable, the host continues to transparently access the secondary volume.
  • Clustered failover: You do not need to perform storage system tasks such as suspension or resynchronization of GAD pairs due to a host failure.
  • Virtual machine integration: If a virtual machine is creating a high load at one site, you can move the load to the other site.
  • High performance: Multipath software allows applications to access mirrored data over the shortest path for the highest performance.
  • Workload mobility: The concurrent data mirroring capability of global-active device makes data immediately available to servers at a second site (over metro distances).
  • Nondisruptive data migration: Data volumes can be migrated between storage systems without disruption to normal operations.

Data Mobility software

By simplifying tiered storage management, Hitachi's Data Mobility software delivers the highest storage performance for the most frequently accessed data while at the same time lowering costs by automatically optimizing data placement.

Hitachi Data Mobility software automatically and transparently moves data across tiers of storage, maximizing business application service levels while minimizing costs. Support for a broad range of storage media, configurations, and virtualized third-party arrays facilitates seamless data migration from older to newer Hitachi storage.

  • Dynamic Tiering automates data placement and access in a tiered storage environment, dynamically moving the most active data to the highest-performing storage tiers while moving less frequently accessed data to lower tiers. An additional active-flash mode moves suddenly active data to higher-performing tiers in real time. Active flash responds to workload demands within seconds or less based on current I/O activity, and it proactively preserves flash endurance by monitoring and demoting pages that exceed thresholds for heavy write I/O.
  • Nondisruptive data migration is accomplished using the global storage virtualization technology of the Hitachi VSP storage systems. Resources on the migration source storage system are virtualized on the target storage system. From the perspective of the host, I/O requests continue to be serviced by the source storage system during the migration process.

Data-at-rest encryption

The Encryption License Key feature of VSP 5000 series protects your sensitive data against breaches associated with storage media (for example, loss or theft). Encryption License Key includes a controller-based encryption implementation as well as integrated key management functionality that can leverage third-party key management solutions via the OASIS Key Management Interoperability Protocol (KMIP).

The data-at-rest encryption (DARE) functionality is implemented using cryptographic chips included as part of the encryption hardware. The encryption hardware encrypts and decrypts data as it is being written to and read from the physical drives. The key management functionality controls the full key life cycle, including the generation, distribution, storage, backup/recovery, rekeying, and destruction of keys. In addition, the design of this key management functionality includes protections against key corruption (for example, integrity checks on keys) as well as key backups (both primary and secondary).

The Encryption License Key feature provides the following benefits:

  • Hardware-based Advanced Encryption Standard (AES) encryption, using 256-bit keys in the XTS mode of operation, is provided for open and mainframe systems.
  • Encryption can be applied to some or all supported internal drives.
  • Each encrypted internal drive is protected with a unique data encryption key.
  • Encryption has negligible effects on I/O throughput and latency.
  • Encryption requires little to no disruption of existing applications and infrastructure.
  • Cryptographic erasure (media sanitization) of data is performed when an internal encrypted drive is removed from the storage system.

CLI and API integration

Command-line interfaces (CLIs) and REST APIs are available for more advanced management of your VSP 5000 series storage environment.

The Command Control Interface (CCI) software provides powerful command-line control for VSP 5000 series. CCI enables you to configure your storage system and perform data management operations by issuing commands directly to the storage system. CCI commands can be used interactively or in scripts to automate and standardize storage administration functions, thereby simplifying storage administration tasks and reducing administration costs. CCI also provides enhanced control and functionality for SVOS RF in-system and remote replication operations, including ShadowImage, Thin Image, TrueCopy, Universal Replicator, and global-active device. For remote replication operations, CCI interfaces with the system software and high-availability (HA) software on the hosts as well as the software on the storage systems to provide failover operation commands that support mutual hot standby in conjunction with industry-standard failover products.
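
For example, the minimal sketch below scripts CCI from Python to carve a new volume and map it to a port. It assumes a configured HORCM instance; the credentials, pool ID, LDEV ID, and port name are placeholders rather than values from this document.

    import subprocess

    def cci(*args):
        """Run a CCI raidcom command, raising if it fails."""
        subprocess.run(["raidcom", *args], check=True)

    cci("-login", "maintenance", "password")    # placeholder credentials
    cci("add", "ldev", "-pool", "0", "-ldev_id", "2000", "-capacity", "10g")
    cci("add", "lun", "-port", "CL1-A-0", "-ldev_id", "2000")
    cci("get", "ldev", "-ldev_id", "2000")
    cci("-logout")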

REST-based APIs for VSP 5000 series extend operations, enabling integration with existing toolsets and automation templates to further simplify and consolidate management tasks. For details about API integration solutions for VSP 5000 series, contact your Hitachi Vantara representative.
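
As one illustration, the following minimal sketch follows the Configuration Manager REST API pattern to open a session and list a few volumes; the host, port, credentials, and storage device ID are placeholders, and the base URL depends on how the REST API server is deployed in your environment.

    import requests

    BASE = "https://192.0.2.10:23451/ConfigurationManager/v1"   # placeholder host and port
    DEVICE = "800000012345"                                     # placeholder storage device ID
    AUTH = ("maintenance", "password")                          # placeholder credentials

    # Open a session (token-based authentication); verify=False is only for lab
    # environments with self-signed certificates.
    session = requests.post(f"{BASE}/objects/storages/{DEVICE}/sessions",
                            auth=AUTH, verify=False)
    session.raise_for_status()
    headers = {"Authorization": f"Session {session.json()['token']}"}

    # List the first ten LDEVs on the storage system.
    ldevs = requests.get(f"{BASE}/objects/storages/{DEVICE}/ldevs",
                         params={"count": 10}, headers=headers, verify=False)
    print(ldevs.json())

In practice you would verify the server certificate and discover the storage device ID from the storages resource rather than hard-coding it.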

Storage management software

The Hitachi approach to software-defined solutions enables you to effectively manage your IT infrastructure to align storage resources to rapidly changing business demands, achieve superior returns on infrastructure investments, and minimize operational costs. Hitachi Ops Center, Hitachi's suite of management software for VSP 5000 series, delivers higher storage availability, mobility, and optimization for key business applications and automates storage management operations with integrated best practices to accelerate new resource deployments. Hitachi storage management software enables you to manage more storage capacity with less effort and ensure that service levels for business-critical applications are met while increasing utilization and performance of virtualized storage assets.

The Hitachi Ops Center storage management software for VSP 5000 series includes:

  • Hitachi Ops Center Administrator
  • Hitachi Ops Center Analyzer
  • Hitachi Ops Center Automator
  • Hitachi Data Instance Director

Overview of Ops Center Administrator

Hitachi Ops Center Administrator is a unified software management tool that reduces the complexity of managing storage systems by simplifying the setup, management, and maintenance of storage resources.

Ops Center Administrator reduces infrastructure management complexities and enables a new simplified approach to managing storage infrastructures. It provides intuitive graphical user interfaces and recommended configuration practices to streamline system configurations and storage management operations. You can leverage Ops Center Administrator to easily provision new storage capacity for business applications without requiring in-depth knowledge of the underlying infrastructure resource details. It provides centralized management while reducing the number of steps to configure, optimize, and deploy new infrastructure resources.

Some of the key Ops Center Administrator capabilities include:

  • Simplified user experience for managing infrastructure resources. Visual aids enable easy viewing and interpretation of key management information, such as used and available capacity, and guided features help you quickly determine appropriate next steps for a given management task.
  • Recommended system configurations to speed initial storage system setup and accelerate new infrastructure resource deployments.
  • Integrated configuration workflows with Hitachi recommended practices to streamline storage provisioning and data protection tasks.
  • Common, centralized management for supported storage systems.
  • A REST-based API to provide full management programmability and control (see the sketch after this list).
  • Automated SAN zoning during volume attach and detach operations. Optional auto-zoning eliminates the need to perform repetitive zoning tasks on the switch.
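
As a sketch of that REST-based programmability, the request below lists storage systems registered with Ops Center Administrator. The host, credentials, and resource path are assumptions for illustration; consult the Ops Center Administrator REST API reference for the exact endpoints and authentication scheme in your release.

    import requests

    # Placeholder host, credentials, and resource path; verify=False is only for
    # lab environments with self-signed certificates.
    resp = requests.get("https://ops-center.example.com/v1/storage-systems",
                        auth=("sysadmin", "password"), verify=False)
    resp.raise_for_status()
    print(resp.json())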

Hitachi Ops Center Analyzer

Hitachi Ops Center Analyzer is data center management software that monitors, reports, and correlates end-to-end performance from server to storage. With Hitachi Ops Center Analyzer, you can define and monitor storage service-level objectives (SLOs) for resource performance. You can identify and analyze historical performance trends to optimize storage system performance and plan for capacity growth. When a performance hot spot is identified or a service-level threshold is exceeded, the integrated diagnostic engine aids in diagnosing, troubleshooting, and finding the root cause of performance bottlenecks.

Using Ops Center Analyzer, you register resources (storage systems, hosts, servers, and volumes) and set service-level thresholds. You are alerted to threshold violations and possible performance problems (bottlenecks). Using analytics tools, you find which resource has a problem and analyze its cause to help solve the problem.

The following figure shows how Ops Center Analyzer ensures the performance of your storage environment based on real-time SLOs.

The system administrator uses Ops Center Analyzer to manage and monitor the IT infrastructure based on SLOs, which match the service-implementation guidelines that are negotiated under a service-level agreement (SLA) with consumers.

Ops Center Analyzer monitors the health of the IT infrastructure using performance indicators and generates alerts when SLOs are at risk.

The service administrator, who has data center expertise, uses Ops Center Analyzer to assign resources, such as VMs and storage capacity from registered storage systems, to consumer applications. This helps resolve critical SLO violations and ensures that service performance meets the SLAs.

Analyzer detail view

Analyzer detail view is the storage performance analytics module for Hitachi Ops Center Analyzer that includes a highly scalable data repository and analytics engine for historical performance and capacity trending across the data center. Analyzer detail view provides deep and granular performance monitoring and reporting to help users identify infrastructure bottlenecks and trends in order to optimize both application and storage system performance. This software enables a common, centralized storage analytics solution for Hitachi and multi-vendor storage environments that reduces the need for vendor-specific performance analytic tools.

Analyzer viewpoint

Analyzer viewpoint is a new add-on module for central enterprise visibility that complements Hitachi Ops Center Analyzer. Analyzer viewpoint periodically collects information about all resources from Ops Center Analyzer servers running at multiple data centers. Using Analyzer viewpoint, you can then easily display and check the comprehensive operational status of data centers around the world in a single window.

Analyzer viewpoint enables you to:

  • Check the overall status of multiple data centers

    Analyzer viewpoint enables you to collectively display and view information about supported resources in the data centers, including large-scale systems consisting of multiple data centers.

  • Analyze problems related to resources

    The Analyzer viewpoint UI displays information about resources in a specific data center in a drill-down view that allows you to easily identify errors. You can then launch the Ops Center Analyzer UI from Analyzer viewpoint, enabling you to quickly perform the tasks needed to resolve the error condition.

Hitachi Ops Center Automator

Hitachi Ops Center Automator provides tools to automate and simplify end-to-end processes, such as storage provisioning, for storage and data center administrators. The building blocks of Ops Center Automator are prepackaged automation templates that you can customize to your specific environment and processes to create services that automate complex tasks such as resource provisioning. Ops Center Automator integrates with other existing Hitachi Ops Center applications, including Hitachi Ops Center API Configuration Manager and Hitachi Ops Center Analyzer, to automate common infrastructure management tasks by using your existing infrastructure services.

The key features of Ops Center Automator include:

  • Automation services for intelligent provisioning of volumes from different storage classes
  • Preconfigured service templates that help you create customized automation services
  • Role-based access to defined services
  • Intelligent pool selection based on an algorithm that chooses the best pools in terms of performance and capacity
  • Assignment of common service management attributes that can be assigned and shared across all automation services
  • Application integration by using a REST API*
  • Infrastructure group creation based on customer needs and environment

* To increase operational simplicity, Hitachi Ops Center deploys a REST API that applications can call to have Ops Center Automator execute tasks for them. The REST API enables third-party tools to integrate with Automator, reducing both effort and risk of errors. Automator is fully integrated with Hitachi Ops Center Analyzer to simplify the monitoring of telemetric data and the required corrections. Automation is accomplished by Analyzer using the REST API to call services created in Automator to address specific issues.
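
As a rough illustration of that integration pattern, the sketch below lists Automator services and submits one over the REST API. The host, port, service ID, request payload, and resource paths are assumptions for illustration only; consult the Ops Center Automator REST API reference for the exact paths and payloads in your release.

    import requests

    BASE = "https://automator.example.com:22016/Automation/v1/objects"  # placeholder host and port
    AUTH = ("automation_user", "password")                              # placeholder credentials

    # List available automation services (resource path assumed for illustration).
    services = requests.get(f"{BASE}/Services", auth=AUTH, verify=False)
    services.raise_for_status()
    print(services.json())

    # Submit one service with an illustrative request body (payload structure assumed).
    submit = requests.post(f"{BASE}/Services/1/actions/submit/invoke",
                           json={"name": "Provision volume for app01",
                                 "serviceParameters": {}},
                           auth=AUTH, verify=False)
    print(submit.status_code)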

Hitachi Data Instance Director

The enterprise copy data management platform enabled by Hitachi Data Instance Director (HDID) provides business-defined data protection to simplify the creation and management of complex policies to meet service-level objectives for data availability, recoverability, and retention.

HDID provides an orchestration layer for remote replication supporting global-active device, TrueCopy, and Universal Replicator; local and remote snapshots and clones with Thin Image and ShadowImage; continuous data protection; and incremental-forever backup.

HDID provides the following benefits:

Operational recovery

HDID offers multiple approaches to meeting operational recovery requirements, allowing business service-level objectives for recovery to be met at optimal cost for differing criticality of data.

  • Storage replication-based operational recovery

    HDID configures, automates, and orchestrates local application-consistent snapshot and clone copies using the local replication capabilities of the Hitachi Virtual Storage Platform family storage systems.

    This integration provides the ability to create fast, frequent copies of production data, with no impact on the performance of the production system. Very aggressive recovery point objectives (RPO) can be easily achieved for Microsoft Windows platforms for Microsoft Exchange and Microsoft SQL Server, for Oracle database environments on Linux, IBM® AIX®, and Solaris, and for SAP HANA environments on Linux. HDID also provides storage-based protection of VMware vSphere environments, natively for Hitachi block storage systems and via Hitachi Virtual Infrastructure Integrator for Hitachi NAS Platform. Both types of vSphere integration allow vSphere administrators to apply protection policies without leaving the vSphere management interfaces. Other applications can also be integrated using the simple scripting interface.

    Storage data snapshots and clones can be mounted and unmounted automatically as part of an HDID policy workflow. They can facilitate access to a current copy of production data for testing and development purposes, or back up to a target device such as a purpose-built backup appliance (PBBA) or tape.

  • Host-based operational recovery

    HDID includes several storage-agnostic technologies for protection of application and file system data. Continuous data protection (CDP) and live backup support Windows environments, with application-specific support for Microsoft Exchange and SQL Server. Batch mode backup is supported on Windows, Linux, IBM® AIX®, and Oracle Solaris systems.

Disaster recovery

HDID provides choices for restoring operations at, or from, another location following a site-level outage.

  • Storage-based disaster recovery

    HDID configures and automates global-active device active-active storage clustering, Hitachi TrueCopy® synchronous remote replication, and Universal Replicator on block-based systems, as well as file replication on Hitachi NAS Platform, to provide a copy of data in another location. HDID can also orchestrate application-aware snapshots of these remote replicas.

Long-term retention

The governance copy services allow you to back up file data to Hitachi Content Platform (HCP) for Windows, Linux, IBM® AIX®, and Oracle Solaris systems. Unlike other data protection products, HDID places data in its original format, meaning that it can be read without HDID, which allows you to support corporate and regulatory data retention requirements. Because the data is readable, it is indexable with tools such as Hitachi Content Intelligence and can be used for analytics with tools such as Pentaho Data Integration.

Unified management

One of the many benefits of HDID is its single-footprint platform. It enables you to layer, combine, and orchestrate backup, CDP, snapshots, and replication, along with access control and retention policies, to achieve the specific workflows and service levels each application requires.

The simple and easy-to-use graphical user interface (UI) incorporates a powerful policy builder that resembles laying out business processes on a whiteboard. Using the UI, you can easily create and change policies as needed, visualize data copy and movement processes, and align them with business management processes.

Additional features of HDID include:

  • Block-level, incremental-forever data capture, which dramatically reduces the storage capacity needed for copy data compared to traditional full and incremental methods
  • Support for a range of storage repositories, including block, file, and object
  • Seamless scaling to manage petabytes of data