Software solution examples

The management software for the Hitachi VSP storage systems enables you to increase operational efficiency, optimize availability, and meet critical business requirements.

Enabling simple and efficient storage provisioning and unified management with Command Suite

Today, financial institutions provide a wide array of services to their customers. These services must support both structured data (online and ATM transactions, such as withdrawing or depositing checks and cash) and unstructured data (such as email messages, SMS text messages, customer feedback, bank statements, and electronic forms). To meet the ever-increasing need for customer access to the services, the institutions must have a solution that meets the following needs:

  • Ability to process customer transactions quickly and accurately while also providing access to online reports (such as account statements) and forms (such as those for opening a new bank account or applying for a mortgage).
  • Flexibility to accommodate structured and unstructured data, and ability to access services no matter where the storage system resides.
  • Centralized management of all storage repositories to reduce storage management costs and total cost of ownership.

Overall, financial institutions require a platform with the breadth and flexibility to provide services wherever, whenever, and however customers need them.


Hitachi Command Suite (HCS) software consolidates block and file storage arrays to unify the management of all types of data, and provides a single, integrated view for all customers.


HCS natively discovers Hitachi storage systems, Hitachi NAS systems, and Hitachi Data Ingestor file appliance-based systems, displaying the correlation of File Module system drives with back-end physical volumes and File Module storage pools.


HCS discovers and displays related file systems, mount points, and share information for CIFS, and export information for NFS systems. It unifies block, file, and content data across all Hitachi storage and manages all virtualized heterogeneous storage assets.

HCS natively provisions storage to an HNAS cluster the same way as to a physical or hypervisor server, such as the VMware ESX server. It creates and manages file systems, CIFS shares, and NFS exports using the unified, common GUI. Reaching across file, block, content, and application environments, HCS improves business application availability and performance, and expedites access to critical data.

Ensuring optimal storage performance and business application service levels with analytics

Banks offer several incentives to their customers, and online banking is one that customers have come to prefer. Banks recognize the growing importance of creating an excellent experience for their online customers. They must provide quick, 24/7 access to online banking services, and must do so across the many devices and platforms that customers use. Customers expect access to these services anytime and from anywhere. If the service is not fast, not available 24/7, or not consistent, customer loyalty can suffer, resulting in bank account closures.

ATMs provide another critical service to bank customers, and ATM transactions have become an essential component of the banking industry. Problems arise when ATMs are not functioning.

Banks strive to keep their business-critical services available for customers, but often find the following problems still exist:

  • Lack of performance baselines or benchmarks to analyze response time for online banking and ATM applications
  • Ineffective root cause analysis (RCA) techniques that cannot look deeply into application performance problems
  • Absence of real-time monitoring capability and analysis of all elements in the customer environment
  • No tools to help storage administrators analyze application performance or to determine if the storage is at fault
  • Lack of custom reporting capabilities to obtain detailed storage capacity and performance metrics to gain insight into key storage system performance indicators
  • Uncertainty whether critical business applications are meeting required storage service levels

Use Hitachi Command Suite Analytics to monitor performance and meet storage service-level needs.

  • To determine how well their online banking service is performing, banks must know the current level of performance and benchmark it against industry best practices. Storage downtime affects system availability for online transactions. One of the best ways to avoid bottlenecks is through regular monitoring, system feedback, and on-demand customizable reporting based on user-defined parameters.

    The parameters can be based on storage or files, such as EVS, FS, and VVOL utilization, and on capacity reporting, such as on tiers, users, and groups. Instead of reacting to bottlenecks after they occur, administrators can get alerts from HCS Analytics about potential bottlenecks before they occur. Administrators can identify problem performance trends at an earlier stage to avoid system downtime.


    HCS Analytics performs end-to-end performance monitoring along the application's entire data path to quickly determine if storage is the source of application-performance degradation. With this monitoring information, storage administrators can take appropriate measures to remove upcoming bottlenecks and to improve storage (and ultimately application) performance.

  • To ensure that critical business applications comply with storage service-level requirements, storage administrators can use HCS Analytics to accurately monitor application storage levels and quickly resolve problems. Applications have varying service-level objectives (SLOs) based on their business criticality. For important applications, such as online banking and ATM transactions, storage administrators can use HCS Analytics to provide the applications with appropriate storage resources in compliance with defined SLO requirements.
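
The alert-driven monitoring described above can be sketched as a simple threshold check over utilization metrics. This is an illustrative sketch only; the metric names and threshold values below are assumptions, not actual HCS Analytics parameters.

```python
# Hypothetical warning thresholds for utilization metrics (illustrative values).
THRESHOLDS = {
    "fs_utilization_pct": 85.0,    # file system utilization
    "vvol_utilization_pct": 90.0,  # virtual volume utilization
}

def check_thresholds(samples):
    """Return an alert message for each metric that meets or exceeds its threshold."""
    alerts = []
    for metric, value in samples.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value >= limit:
            alerts.append(f"{metric} at {value:.1f}% (threshold {limit:.1f}%)")
    return alerts

alerts = check_thresholds({"fs_utilization_pct": 92.3, "vvol_utilization_pct": 70.0})
# one alert, for fs_utilization_pct only
```

Raising the alert when a threshold is crossed, rather than waiting for an outage, is what lets administrators act on a trend before it becomes a bottleneck.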
Management software

To ensure business application performance and predictive growth, Hitachi Command Suite Analytics provides all the necessary capabilities to find storage resource trouble spots, identify the actual affected storage resources, and help determine the root cause of problems.

HCS Analytics features Hitachi Tuning Manager, which provides the comprehensive storage performance monitoring required to maximize both business application and Hitachi storage system performance. It provides integrated performance analytics that can quickly identify and isolate possible causes of performance bottlenecks. Within the HCS central management console, the integrated analytics capabilities provide the necessary first step to quickly address performance problems associated with Hitachi storage environments.

If additional performance details or diagnosis is required, Tuning Manager includes a web-based interface to provide deeper performance monitoring across a comprehensive range of performance and capacity metrics, with historical trending and custom reporting capabilities.

Maximizing business application performance and availability with data mobility

Customer service is a top priority for major commercial and retail banks. They strive to maintain good relationships with current customers, to retain them, and to attract new ones. They also want faster response times for customer transactions involving personal banking or credit cards, and for potential customers inquiring about their services.

In addition to ensuring the timeliness of critical transactions, banks must provide customers with effective processing of mortgage applications from inception to closing.

Banks must optimize the cost of maintaining data gathered from numerous mortgage applications. While users can tolerate response times slightly slower than those required for transactional systems, they are quickly frustrated by consistently slow responses. In a fast-paced business, older and closed mortgage applications lose business relevance quickly, so it does not make sense to store them on fast storage. A lower tier of storage can be used for effective, long-term archiving of inactive data (such as closed or inactive mortgage applications that companies maintain largely in response to legal requirements).


A Hitachi Dynamic Tiering (HDT) pool is added to a storage system to support mortgage applications. Using Hitachi Command Suite Mobility, a custom policy is applied to the volumes in the HDT pool that supports the mortgage applications.

The policy is set to ensure that infrequently or never accessed mortgage applications are placed on the lowest cost storage, reducing the total cost of ownership. Conversely, the newest and still-active mortgage applications are promoted to the fastest tier and get the fastest response time.

Management software

To optimize data access and application Quality of Service, Hitachi Command Suite Data Mobility software places data wherever and whenever it is needed. HCS Data Mobility features Dynamic Tiering, Tiered Storage Manager, and the file-tiering capabilities of the storage system.

  • Hitachi Dynamic Tiering automates data lifecycle management at a low cost while delivering top-tier performance to the information most frequently accessed by the business. HDT manages the tiering dynamically. It monitors and manages space utilization at the page level rather than at the file or dataset level. This means that only frequently referenced parts of a file or dataset reside on the highest tier of storage, minimizing the amount of tier 0 storage required for the highly referenced data.

    HDT identifies hot spots of frequent access and moves them to the highest tier of storage to improve storage performance. It also moves less frequently referenced pages to lower tiers of storage. All of this occurs with complete transparency to the application.

  • Hitachi Tiered Storage Manager (HTSM) proactively matches application performance and availability needs to storage attributes for optimal placement.
  • Intelligent file tiering improves performance in file-sharing environments by automatically separating metadata from user data, placing metadata on the fastest storage tier for improved response times, while keeping user data on less expensive storage tiers.
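
The page-level tiering behavior described above can be sketched as a simple placement policy: the most frequently accessed pages go to tier 0, and everything else is demoted. This is an illustrative sketch under stated assumptions; the tier capacity and access counters are invented, not HDT internals.

```python
def assign_tiers(page_access_counts, tier0_capacity_pages):
    """Place the most frequently accessed pages in tier 0; demote the rest to tier 1.

    page_access_counts maps a page ID to its access count over a monitoring cycle
    (an assumed stand-in for HDT's page-level monitoring).
    """
    ranked = sorted(page_access_counts, key=page_access_counts.get, reverse=True)
    hot = set(ranked[:tier0_capacity_pages])
    return {page: 0 if page in hot else 1 for page in page_access_counts}

counts = {"p1": 500, "p2": 3, "p3": 120, "p4": 0}
tiers = assign_tiers(counts, tier0_capacity_pages=2)
# p1 and p3 (the hot spots) land in tier 0; p2 and p4 go to tier 1
```

Because placement is decided per page rather than per file, only the hot pages of a large dataset need to occupy tier 0 capacity, which is the cost advantage the text describes.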

Delivering storage infrastructure as a service through automated workflows

Financial institutions must provide services 24/7, with almost zero tolerance for outages and for inaccessibility of data and information. Storage provisioning plays an integral part in data management. Organizations need to control the complexities associated with storage management while maintaining operational efficiency. A positive customer experience depends on how the data center is controlled and managed and on the ability to deliver applications consistently and in a timely manner. To achieve this objective, customers require a solution that alleviates these pain points:

  • Manual storage provisioning processes, which can lead to human errors. Studies show that more than 40% of outages in a storage environment are caused by human error.
  • Time-consuming operational inefficiencies
  • Cost-inefficient storage provisioning, which can waste storage resources
  • A requirement to know infrastructure and environmental details, with no layer of abstraction
  • A requirement to manually analyze performance and capacity without any built-in intelligence or automation

Hitachi Automation Director automates manual storage provisioning processes and provides application-based provisioning services that require minimal user input and that intelligently leverage infrastructure resources. Hitachi Automation Director provides the following solutions to alleviate the pain points that customers experience in the current environment:

  • Implements intelligent automation workflows to streamline the storage provisioning process.
  • Provides a catalog of predefined service templates and plugin components that incorporate Hitachi best practices in storage provisioning and that minimize human error.
  • Provides customizable storage service templates requiring minimal input that administrative users can use to increase operational efficiency.
  • Optimizes storage configurations for common business applications, such as Oracle, Microsoft Exchange, and Microsoft SQL Server, and for hypervisors, such as Microsoft Hyper-V and VMware.
  • Analyzes current storage pool capacity utilization and performance to automatically determine the optimized location for new storage capacity requests and to make storage provisioning more cost-efficient.
Management software

Hitachi Automation Director offers a web-based portal and includes a catalog of predefined workflows that are based on best practices for various applications. These workflows take into account the infrastructure requirements of specific applications, including the appropriate storage tier. Because a workflow captures the provisioning process with predefined requirements, a storage administrator can repeatedly provision infrastructure with simple requests.

After information for provisioning is submitted, the Automation Director intelligent engine matches the request with the appropriate infrastructure based on performance and capacity analysis. Hitachi Automation Director expedites the provisioning process and enables smarter data center management. It provides a REST-based API to integrate provisioning workflows into existing IT management applications.
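
As a sketch of how a provisioning workflow might be submitted through a REST-based API like the one mentioned above: the endpoint path, payload fields, host name, and service-template name below are all assumptions for illustration, not the actual Automation Director API.

```python
import json
import urllib.request

def build_provision_request(base_url, service, params):
    """Build (but do not send) an HTTP POST that submits a service template.

    The '/services/submit' path and the payload shape are hypothetical.
    """
    payload = json.dumps({"serviceName": service, "parameters": params}).encode()
    return urllib.request.Request(
        f"{base_url}/services/submit",        # assumed endpoint path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_provision_request(
    "https://automation.example.com/api",     # placeholder host
    "OracleProvisioning",                     # hypothetical template name
    {"capacityGB": 500, "tier": "gold"},      # hypothetical parameters
)
```

The point of the sketch is the shape of the integration: a small, parameterized request against a catalog of service templates, which is what lets other IT management tools drive provisioning without knowing infrastructure details.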

Hitachi Automation Director includes a comprehensive tool, Service Builder, for creating new workflows and plug-in components, and for modifying existing ones, to automate the storage management tasks for a given operating environment.

Hitachi Automation Director supports all native block storage systems, and supports third-party storage systems through virtualization technology.

Data protection for business-critical Oracle databases

Data protection and recovery operations are cited by most customers as one of their top three IT-related challenges. Meanwhile, traditional solutions cannot keep up with rampant data growth, increasing complexity, and distributed infrastructure. Tighter data availability service-level requirements (backup window, recovery point objective, and recovery time objective) create an impossible situation for line-of-business owners.

The simple truth is that backup is broken in certain highly important areas, including critical 24x7 applications with large databases.

The business demands that critical data is protected with little or no data loss and with minimal or no performance or availability impact while the data protection occurs.


Hitachi Thin Image (HTI) provides fast copies of the production data and Hitachi Universal Replicator (HUR) ensures that there is an asynchronous copy of the data on another storage system in a distant location. Hitachi Data Instance Director (HDID) orchestrates the HTI and HUR data protection activities through a business-objective-driven, whiteboard-like graphical interface, and ensures application consistency for both local and remote snapshots.

The HDID policy is defined in terms of recovery point objectives (RPO) and retention so that new application-aware snapshots are taken to meet each RPO and deleted after the retention period.
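
The RPO-and-retention behavior just described can be sketched as follows: snapshots are taken every RPO interval and deleted once they are older than the retention period. The intervals are illustrative; this is a minimal sketch, not HDID's actual scheduling engine.

```python
from datetime import datetime, timedelta

def snapshot_schedule(start, rpo, retention, now):
    """Return the snapshot timestamps that should still exist at time 'now'.

    A snapshot is taken every 'rpo' interval and expires after 'retention'.
    """
    snaps = []
    t = start
    while t <= now:
        snaps.append(t)
        t += rpo
    return [s for s in snaps if now - s <= retention]

live = snapshot_schedule(
    start=datetime(2024, 1, 1, 0, 0),
    rpo=timedelta(hours=1),        # take a snapshot every hour
    retention=timedelta(hours=4),  # keep each snapshot for 4 hours
    now=datetime(2024, 1, 1, 12, 0),
)
# snapshots from 08:00 through 12:00 remain; earlier ones have expired
```

Expressing the policy in terms of RPO and retention, rather than individual jobs, is what lets the engine derive the schedule and the cleanup automatically.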

Management software

Hitachi Data Instance Director (HDID) combines modern data protection with business-defined copy data management, simplifying the creation and management of complex data protection and retention workflows.

For simplified management, HDID provides a powerful, easy-to-use, workflow-based policy engine that lets you define a data protection workflow within 10 minutes:

  • Service Level Agreement (SLA)-driven Policy enables administrators to define the data classification (such as SQL Server or Oracle), data protection operations, and required SLAs (RPO, data retention).
  • Whiteboard-style Data Flow enables the administrator to define the copy destinations and assign policies to them using drag-and-drop operations. The topological view helps the administrator to visualize the data protection processes and align them with the management requirements.

HDID whiteboard-style data flow

You can use different methods to back up data across multiple sites, as described in the following table and figure.

Method: Identical snapshots and clones
    Provide identical RPO and data retention regardless of location. Keeping identical backups provides identical recovery options and procedures during a site failover, which simplifies the entire restore process.

Method: Unique snapshots and clones
    Provide flexible RPO and data retention based on differing business requirements between normal operation and a site failover. Keeping independent backups enables shorter RPOs and lower retention on the local site for quick recovery, while protecting data longer on the remote site.

RPO snapshots and clones

End-to-end performance troubleshooting using Infrastructure Analytics Advisor

Infrastructure Analytics Advisor provides analytical diagnostics to quickly identify, isolate, and determine the root cause of problems.

The traditional approach to troubleshooting performance problems in a unified infrastructure poses several challenges. For example, it can be difficult to identify a performance problem in a storage infrastructure environment that includes various virtual machines, servers, networks, and storage systems. Customers are challenged to accurately monitor storage performance and ensure that service-level objectives are met, to reduce the effort needed to troubleshoot performance hot spots, and to report efficiently across a heterogeneous storage environment.


Infrastructure Analytics Advisor offers an out-of-the-box analytics solution that lets you identify and troubleshoot performance problems at the component level. The topology view provides a graphical representation of the infrastructure components and their dependencies, which is crucial for troubleshooting infrastructure performance problems. The troubleshooting aids help with efficient root cause analysis. The analytics workflow is as follows:


Workflow tasks
  • Detect performance problems

    You can view the threshold violations using the Dashboard tab and Events tab. You can configure the system to send email notifications when the threshold values are exceeded. You can also use the search feature in the Analytics tab to find the target resources for performance analysis.

  • Identify performance bottleneck

    Performance degradation in user resources is caused by a performance bottleneck in the server, network, or storage components.

    You can identify the resources causing the bottleneck in any of the following views:

    • E2E view: The E2E topology view provides the detailed configuration of the infrastructure resources and lets you view the relationships between the infrastructure components. You can manually analyze the dependencies between the components in your environment and identify the resource causing performance problems. Using the topology maps, you can easily monitor and manage resources across your data center, from applications, virtual machines, servers, and networks to storage. In the topology view, if a resource has an alert associated with it, error indicators appear on the resource icons. The color of the indicator corresponds to the severity of the alert.

    • Sparkline view: In the Sparkline view, you can analyze the performance trend graphs of the target resource and the related resources. The Sparkline view displays performance trends for multiple resources in the same pane to enable a quick comparison between different resources. You can show trends of performance metrics of each resource and find the correlation with other resources.
  • Analyze the root cause of the bottleneck

    Infrastructure Analytics Advisor integrated troubleshooting aids provide guidance on finding the root cause of performance problems. The root cause can be resource contention in the shared infrastructure or configuration changes in the environment.

    • Identify affected resources: You can identify the consumers, hosts, VMs, and volumes that use the bottleneck candidate, and verify the status of each resource. Based on the severity level displayed, you can troubleshoot the performance problems associated with the resources.

    • Analyze shared resources: A performance problem arises in the shared infrastructure when an application or a resource uses the majority of the available resources, degrading performance for other resources in the shared infrastructure. Infrastructure Analytics Advisor supports efficient optimization of the shared infrastructure by quickly identifying resource contention problems.
    • Analyze related changes: Configuration changes can sometimes be the source of a performance problem in your environment. Infrastructure Analytics Advisor supports tracking of infrastructure configuration changes. Analyzing these changes and correlating them with the performance data lets you determine the effects of configuration changes on system performance and behavior.
    • Check recovery plans: You can view the system-generated recovery plans for processor and cache performance bottlenecks.

    For details, see the Infrastructure Analytics Advisor User Guide.
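
The bottleneck-identification step above can be sketched as a comparison of per-component response times between a baseline window and the problem window: the component whose response time degraded the most is the strongest bottleneck candidate. The component names and sample values are invented for illustration; the actual product's diagnostics are more sophisticated.

```python
def rank_bottlenecks(baseline, problem):
    """Rank data-path components by increase in average response time (largest first).

    'baseline' and 'problem' map component name -> list of response-time samples (ms).
    """
    def avg(xs):
        return sum(xs) / len(xs)
    deltas = {c: avg(problem[c]) - avg(baseline[c]) for c in baseline}
    return sorted(deltas, key=deltas.get, reverse=True)

baseline = {"vm": [2.0, 2.1], "switch": [0.5, 0.5], "storage": [3.0, 3.1]}
problem  = {"vm": [2.2, 2.3], "switch": [0.6, 0.5], "storage": [9.5, 10.0]}
suspects = rank_bottlenecks(baseline, problem)
# 'storage' ranks first: its response time degraded far more than the others
```

Ranking by degradation rather than by absolute response time matters: a component can be slow by design (for example, capacity-tier storage) without being the cause of a new problem.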

Management software

Infrastructure Analytics Advisor provides the comprehensive storage performance management needed to optimize both business applications and storage systems: it monitors, reports on, and correlates end-to-end performance from applications through to shared storage resources. Its reporting capabilities enable you to monitor the infrastructure resources and assess their current performance and utilization. Reporting data gives you the information you need to make informed business decisions and plan for future growth. The advanced diagnostic engine aids in rapidly diagnosing, troubleshooting, and finding the root cause of performance bottlenecks.


Flexible reporting and analysis using Data Center Analytics

In the fast-paced world of online transactions, many companies with global operations have invested in a sophisticated IT infrastructure that provides them a competitive edge. Monitoring and reporting features enable organizations to watch applications closely and continuously, proactively identifying problems before they grow into something more severe that requires immediate attention. Whether you are an IT manager for a bank, a health care provider, or a government agency, proactive monitoring and reporting are useful for determining the performance trend of your system and finding ways to improve customer service interactions ahead of customer feedback. Doing this thoroughly requires a tool that can track the health of your system at all hours and instantly display the relevant metrics in a report that you can share with your organization for assessment.

Hitachi Infrastructure Analytics Advisor integrates with Data Center Analytics to provide advanced reporting capability to continuously measure and analyze performance of your monitored resources. The up-to-date visual representation of your system's health enables you to share reports with others. You can create three types of reports:

  • Predefined reports provide high-level details at the application level and also a granular report that shows component-level performance data.
  • Ad-hoc reports enable you to combine related and unrelated metrics of any monitored resource in one report to review the overall performance impact.
  • Custom reports, which you create with a report builder.

All reports are included in the Reports dock and are available when you select any storage system object in the storage systems hierarchy. The predefined reports differ based on the storage system object you select. Interactive charts and resource filtering enable you to view every detail in any report. You can also filter reports to display the most relevant data, and you can print a report, create a PDF, or export a report to a CSV file.

Overall and granular reporting using predefined reports

Each node in the tree has predefined reports that cover important attributes of a metric to help you analyze the resource. If you expand and click a node, for example, 609315f7 under Pools in the tree, the performance report for that node displays. In this case, the Pool IOPS Vs. Response Time report displays, showing metrics data only for 609315f7. No data for other pools appears in the report.


Compare nodes and metrics with ad-hoc reports

In the reports, nodes are resources, such as RAID Storage 302c7d0 and RAID Storage 302c6d6, and metrics include cache usage and write pending rate. You can compare any nodes, or compare metrics of a single node or of different nodes. In Add Report, type the report name in the field, then add specific metrics by dragging a node from the tree and dropping it on either axis section, Y/Left or Y1/Right. The left and right axis boxes display the list of available resources, for example, virtual machines and hosts.

If, for example, you want to see a pattern for a storage node between two time periods, you can compare the reports on Storage IOPS in one view. Each graph line is color-coded, and you can zoom in on reports to get a better view.

You can also compare how one metric affects the other metrics. For example, you can create an ad-hoc report that compares IOPS with Response Time. This most commonly used report shows whether an increasing load on the system (IOPS) affects the performance (response time).
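
The IOPS-versus-response-time comparison just described amounts to checking how strongly the two series move together, for example with a Pearson correlation. This is an illustrative sketch; the sample values are invented.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

iops        = [1000, 2000, 3000, 4000, 5000]   # load samples (invented)
response_ms = [1.0, 1.4, 2.1, 3.5, 6.0]        # matching response times (invented)
r = pearson(iops, response_ms)
# r near 1 indicates that response time rises with load
```

A high correlation suggests the system is load-bound; a flat response time as IOPS climbs suggests headroom remains and the cause of any slowness lies elsewhere.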


To create ad-hoc reports, you combine related and unrelated resource metrics by dragging the metrics into the report from the specific instances in the tree. For example, you can see the metrics for ports and volumes in one chart at any time. Attributes that are directly related, for example, IOPS and Response Time, usually have a built-in report in the Reports dock. Sometimes the attributes are unrelated (or only indirectly related); for example, the file system transfer rate on a host can drive up storage system cache usage when that host consumes most of the storage in the array. You can add unrelated metrics and create a comparison chart.

Custom reports

If the predefined charts and ad-hoc reports are not sufficient, you can create custom reports by building your own query. The Custom Reports feature is based on the Data Center Analytics query language, a regex-based, expressive query language that retrieves and filters the data in the Data Center Analytics database.

The Data Center Analytics query language allows complex analysis of the data in real time with consistent run times. The syntax makes it possible to traverse relations, identify patterns in the data, and establish comparisons between metrics of a single component or of multiple nodes.
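
As an illustration of regex-driven metric filtering in plain Python (this is not the actual Data Center Analytics query syntax), a query can select every metric series whose name matches a node pattern:

```python
import re

# Illustrative metric store; node IDs echo the RAID Storage examples above.
METRICS = {
    "raid_302c7d0.cache_usage":   [61, 64, 70],
    "raid_302c6d6.cache_usage":   [40, 42, 41],
    "raid_302c7d0.write_pending": [5, 9, 12],
}

def query(pattern):
    """Return the metric series whose names match the given regex."""
    rx = re.compile(pattern)
    return {name: series for name, series in METRICS.items() if rx.search(name)}

result = query(r"302c7d0\.")   # every metric recorded for node raid_302c7d0
# matches cache_usage and write_pending for that node only
```

This captures the appeal of a regex-based query language: one short pattern can pull together all metrics for a node, or one metric across many nodes, without enumerating them by hand.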

The Data Center Analytics UI helps you build your custom query in the following three ways:

  • Start with a predefined query and customize it as required.

  • Build the query using the Build Query feature.

  • Write the query directly using Data Center Analytics query language.