System management
As an administrator, you play a role in ensuring the continued accessibility and performance of the system. You can use the System Management application, command line, or REST API to manage the system.
Your responsibilities for administering the system include:
•Managing and monitoring system performance and resource usage by configuring how instances are deployed in your infrastructure. For more information, see System scaling.
•Expanding functionality by writing and installing plugins. For information, see Plugins.
•Setting up email notifications. For information, see Creating email notification rules.
•Upgrading the system. For information, see Updating the system.
Setting the system hostname
After installing your system, you need to configure it with the hostname that you've assigned to it in your corporate DNS environment.
System Management application instructions
1.Click on the Configuration panel.
2.Click on Security.
3.On the Settings tab, specify the hostname in the Cluster Hostname field.
4.Click on the Update button.
Related CLI command(s)
editSecuritySettings
For information on running CLI commands, see CLI reference.
Related REST API method(s)
PUT /security/settings
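For example, a minimal curl sketch of this call. The port, path prefix, authorization header, and JSON field name are assumptions for illustration; check the REST API reference for the exact request format:
curl -k -X PUT "https://<system-hostname>:8000/api/admin/security/settings" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"clusterHostname": "cluster.example.com"}'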
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
System scaling
You manage how the system scales by adding instances to or removing instances from the system and by specifying which services run on those instances.
Instances
An instance is a server or virtual machine on which the software is running. A system can have either a single instance or multiple instances. Multi-instance systems have a minimum of four instances.
A system with multiple instances maintains higher availability in case of instance failures. Additionally, a system with more instances can run tasks concurrently and can typically process tasks faster than a system with fewer or only one instance.
A multi-instance system has two types of instances: master instances, which run an essential set of services, and non-master instances, which are called workers.
For more information, see Instances.
Services
Each instance runs a configurable set of services, each of which performs a specific function.
In a multi-instance system, services can be distributed across all instances in the system. In a single-instance system, that instance runs all services.
For more information, see Services.
Networking
This topic describes the network usage and requirements for both system instances and services.
Note: You can configure the network settings for each service when you install the system. You cannot change these settings after the system is up and running.
Note: If your networking environment changes such that the system can no longer function with its current networking configuration, you need to reinstall the system. See Handling network changes.
All instance IP addresses must be static. This includes both internal and external network IP addresses, if applicable to your system.
Important: If the IP address of any instance changes, see Handling network changes.
Each of the product services can bind to one type of network, either internal or external, for receiving incoming traffic. If your network infrastructure supports two networks, you may want to isolate the traffic for most system services to a secured internal network that has limited access. You can then expose only the System Management application on your external network for user access.
You can use either a single network type for all services or a mix of both types. If you want to use both types, every instance in your system must be addressable by two IP addresses: one on your internal network and one on your external network. If you use only one network type, each instance needs only one IP address.
Regardless of whether you're using a single network type or a mix of types, you need to configure your network environment to ensure that all instances have outgoing access to the external resources you want to use, such as:
•The data sources where your data is stored
•Identity providers for user authentication
•Email servers that you want to use for sending email notifications
Each service binds to a number of ports for receiving incoming traffic.
Before installing the system, you can configure the services to use different ports or keep the default values shown in the tables below.
External ports
The following table contains information about the service ports that users use to interact with the system.
On every instance in the system, each of these ports:
•Must be accessible from any network that requires administrative or search access to the system
•Must be accessible from every other instance in the system
Note: Debugging ports are accessible only when debug is set to true in /<installation-directory>/config/cluster.config.
Default Port Value | Used by Service | Purpose |
---|---|---|
80 | S3 Gateway | HTTP communication |
443 (S3 port) | S3 Gateway | HTTPS communication |
8000 | System Management application | System Management application GUI |
9099 | MAPI Gateway | Object Storage Management application GUI |
9190 | OAuth | OAuth port |
12500 | Metadata Gateway Raft | RPC communication |
12501 | Metadata Gateway | RPC communication |
12510 | Metadata Coordination | RPC communication |
14268 | Tracing Collector | HTTP port |
16686 | Tracing Query | HTTP port (APIs and user interface) |
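For example, if your instances use firewalld, a hedged sketch for opening the default external ports listed above on one instance. Adjust the list if you configured different port values during installation:
# Open the default external service ports, then reload the firewall
for port in 80 443 8000 9099 9190 12500 12501 12510 14268 16686; do
  sudo firewall-cmd --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload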
Internal ports
This table lists the ports used for intrasystem communication by the services. On every instance in the system, each of these ports:
•Must be accessible from every other instance in the system
•Should not be accessible from outside the system
You can find more information about how these ports are used in the documentation for the third-party software underlying each service. See hcpcsServiceTypes.
Default Port Value | Used By | Purpose |
---|---|---|
2181 | Synchronization | Primary port used to communicate with the service |
2888 | Synchronization | Server-server communication |
3888 | Synchronization | Leader elections |
5000 | Synchronization | Debugging |
5001 | System Management application | Debugging |
5004 | Watchdog | Debugging |
5007 | Sentinel | Debugging |
5050 | Cluster Coordination | Primary port used to communicate with the service |
5051 | Cluster Worker | Primary port used to communicate with the service |
5555 | Watchdog | Primary port used for inter-service communication |
5778 | Tracing Agent | Agent HTTP port |
6831 | Tracing Agent | UDP port |
7000 | Cassandra | TCP port for commands and data |
7199 | Cassandra | Used for JMX connections |
7203 | Kafka | Used for JMX connections |
8005 | System Management application | Tomcat shutdown port |
8007 | Sentinel | Tomcat shutdown port |
8022 | Watchdog | SSH |
8080 | Service Deployment | Primary port used to communicate with the service |
8081 | Chronos | Primary port used to communicate with the service |
8889 | Sentinel | Primary port used to communicate with the service |
9042 | Cassandra | Primary port used to communicate with the service |
9091 | Network Proxy | Primary port used to communicate with the service |
9092 | Kafka | Primary port used to communicate with the service |
9191 | Metrics | Primary port used to communicate with the service |
9200 | Elasticsearch | Used to communicate with Elasticsearch cluster |
9201 | Elasticsearch | Used to communicate with Elasticsearch nodes |
9301 | Elasticsearch | Elasticsearch intercluster communication |
9600 | Logstash | Primary port used to communicate with the service |
9601 | Logstash | Port used to listen for syslog connections |
9750 | S3 Gateway | Support |
9751 | Metadata Gateway | Support |
9752 | MAPI Gateway | Support |
9753 | Metadata Cache | Support |
9758 | Metadata Policy Engine | Support |
9760 | Metadata Coordination | Support |
9990 | S3 Gateway | Remote monitoring |
9991 | Metadata Gateway | Monitoring |
9992 | MAPI Gateway | Monitoring |
9993 | Metadata Cache | Monitoring |
9998 | Metadata Policy Engine | Monitoring |
10000 | Metadata Coordination | Monitoring |
12000 | S3 Gateway | Debugging |
12001 | Metadata Gateway | Debugging |
12002 | MAPI Gateway | Debugging |
12003 | Metadata Cache | Debugging |
12004 | Metrics | Debugging |
12005 | Tracing Collector | Debugging |
12006 | Tracing Query | Debugging |
12007 | Tracing Agent | Debugging |
12008 | Metadata Policy Engine | Debugging |
12010 | Metadata Coordination | Debugging |
13300 | Metadata Cache | Cache TCP discovery |
13370 | S3 Gateway | Cache TCP communication |
13371 | Metadata Gateway | Cache TCP communication |
13372 | MAPI Gateway | Cache TCP communication |
13373 | Metadata Cache | Cache TCP communication |
13378 | Metadata Policy Engine | Cache TCP communication |
13380 | Metadata Coordination | Cache TCP communication |
13453 | Metadata Cache | Cache TCP communication |
13500 | S3 Gateway | Cache client connector |
13501 | Metadata Gateway | Cache client connector |
13502 | MAPI Gateway | Cache client connector |
13503 | Metadata Cache | Cache client connector |
13508 | Metadata Policy Engine | Cache client connector |
13510 | Metadata Coordination | Cache client connector |
14267 | Tracing Collector | Collecting thrift spans from tracing agents |
15050 | Cluster Coordination | Local port to which the service directly binds |
18000 | System Management application | Local port to which the service directly binds |
18080 | Service Deployment | Local port to which the service directly binds |
18889 | Sentinel | Local port to which the service directly binds |
31000 to 34000 | Service Deployment | Port range used by both Service Deployment and Docker for running containers |
47000 | Cache | TCP cache communication |
47008 | Metadata Policy Engine | TCP cache communication |
47500 | Cache | TCP cache discovery |
48000 | Cache | TCP connector |
48500 | Cache | Client connector |
48508 | Metadata Policy Engine | Client connector |
Handling network changes
Once your system is deployed, its network infrastructure and configuration should not change. Specifically:
•Instance IP addresses should not change
•All services should continue to use the same ports
•All services and instances should continue to use the same network types
If any of the above change, you will need to reinstall the system.
If you need to change the IP addresses for one or more instances in the system, use this procedure to manually change the IP addresses without risk of data loss.
For each instance whose IP address you need to change:
1.Move all services off of the instance. Distribute those services among all the other instances. For information, see Moving and scaling services.
2.On the instance from step 1, stop the run script using whatever tool or process you used to run it. For example, with systemd, run:
systemctl stop <service-name>
3.Remove the instance from the system. For information, see Removing instances.
4.Delete the installation directory from the instance.
5.Add the instance back to the system. For information, see Adding new instances.
If a network infrastructure or configuration change occurs that prevents your system from functioning with its current network settings, you need to reinstall all instances in the system.
1.If the System Management application is accessible, back up your system components by exporting a package. For information, see Exporting packages.
2.On each instance in the system:
a.Navigate to the installation directory.
b.Stop the run script using whatever tool or process you used to run it. For example, with systemd, run:
systemctl stop <service-name>
c.Run bin/stop
d.Run the setup script, including the list of master instances:
sudo bin/setup -i <ip-address-for-this-instance> -m <comma-separated-list-of-master-instance-IP-addresses>
e.Run the run script using whatever methods you usually use to run scripts.
3.Log into the System Management application and use the wizard to set up the system.
4.After the system has been set up, upload your package. For information, see Importing packages.
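As an illustration only, the per-instance commands in step 2 might look like this on an instance that uses systemd to run the script. The service name, installation path, and IP addresses are placeholders, not values from your system:
cd /<installation-directory>
sudo systemctl stop <service-name>    # stop the run script
sudo bin/stop                         # stop the system containers
sudo bin/setup -i 192.0.2.4 -m 192.0.2.1,192.0.2.2,192.0.2.3
sudo systemctl start <service-name>   # start the run script again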
Volumes
Volumes are properties of services and job types that specify where and how services and individual jobs store their data.
You can use volumes to configure jobs and services to store their data in external storage systems, outside of the system instances. This allows data to be more easily backed up or migrated.
Volumes can also allow services or jobs to store different types of data in different locations. For example, a service may use two separate volumes, one for storing its logs and the other for storing all other data.
Note: Some functions described here are not used with HCP for cloud scale. They are either not visible in the System Management application or have no effect when used.
For example, service A running on instance 101 might use two volumes: a Log volume that stores data in a directory on the system instance and a Data volume that stores data in an NFS mount.
Volumes are separated into these groups, depending on how they are created and managed:
•System-managed volumes are created and managed by the system. When you deploy the system, you can specify the volume driver and options that the system should use when creating these volumes.
Once the system is deployed, you cannot change the configuration settings for these volumes.
•User-managed volumes can be added to services and job types after the system has been deployed. These are volumes that you manage; you need to create them on your system instances before you can configure a job or service to use them.
Note: As of release 1.3.0, none of the built-in services support adding user-managed volumes.
For more information, see Adding volumes to jobs.
When configuring a volume, you specify the volume driver that it should use. The volume driver determines how and where data is stored.
Because services and jobs run in Docker containers on instances in the system, volume drivers are provided by Docker and other third-party developers, not by the system itself. For information on volume drivers you can use, see the applicable Docker or third-party developer's documentation.
By default, services and jobs do not use volume drivers; instead, they use the bind-mount setting. With this setting, data for each service or job is stored within the system installation directory on each instance where the service or job runs.
For more information on volume drivers, see the Docker documentation.
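For illustration, a hedged sketch of creating a user-managed volume with the Docker local driver backed by an NFS export. The driver options and the NFS server path are assumptions; consult your volume driver's documentation for the options it actually supports:
sudo docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=nfs.example.com,rw \
  --opt device=:/exports/service-data \
  service-a-data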
For more information:
•On Jobs, see Jobs.
•On services, see Services.
Viewing volumes
The System Management application shows this information about the Docker volumes used by jobs and services:
•Name — The unique identifier for the volume.
•Type — Either of these:
oSystem — The volume is managed automatically for you by the system.
oUser — You need to manage the volume yourself.
•Capacity — Total storage space available in the volume.
•Used — Space used by the job or service.
•Pool — The volume category, as defined by the service or job that uses the volume.
Note: Some functions described here are not used with HCP for cloud scale. They are either not visible in the System Management application or have no effect when used.
For each volume, you can also view this information about the volume driver that controls how the volume stores data:
•Volume driver — The name of the volume driver.
•Option/Value — The command-line options used to create the volume, and their corresponding values. The available options and valid values for those options are determined by the volume driver.
To view the volumes being used by a job:
1.In the System Management application, click on the Jobs panel.
2.On the Job Type page, click on the job you want.
3.Click on the Volumes tab.
To view the volumes being used by a service:
1.In the System Management application, click on the Services panel.
2.Click on the service you want.
3.Click on the Volumes tab.
Instances
A system is made up of one or more instances of the software. This section includes information on adding instances to and removing instances from the system.
For more information on instances, see System scaling.
About master and worker instances
Master instances are special instances that run an essential set of services, including:
•System Management application service
•Cluster Coordination service
•Synchronization service
•Service Deployment service
Non-master instances are called workers. Workers can run any services except for those listed above. For information on services, see Services.
Single-instance systems have one master instance while multi-instance systems have either one or three master instances.
Important: You cannot add master instances to a system after it's installed. You can, however, add any number of worker instances.
Single-instance systems versus multi-instance systems
A system can have a single instance or can have multiple instances (four or more).
Note: Every instance must meet the minimum RAM, CPU, and disk space requirements. For information, see Hardware resources.
One instance
A single-instance system is useful for testing and demonstration purposes. It requires only a single server or virtual machine and can perform all product functionality.
However, a single-instance system has these drawbacks:
•It has a single point of failure. If the instance hardware fails, you lose access to the system.
•With no additional instances, you cannot choose where to run services. All services run on that one instance.
Multiple instances
A multi-instance system is suitable for use in a production environment because it offers these advantages over a single-instance system:
•You can control how services are distributed across the multiple instances, providing improved service redundancy, scale out, and availability.
For more information, see Service list.
• A multi-instance system can survive instance outages. For example, with a four-instance system running the default distribution of services, the system can lose one instance and still remain available.
•Performance is improved as work can be performed in parallel across instances.
•You can add additional instances to the system at any time.
Note: You cannot change a single-instance system into a production-ready multi-instance system by adding new instances. This is because you cannot add master instances. Master instances are special instances that run a particular set of system services. Single-instance systems have one master instance. Multi-instance systems have three. By adding additional instances to a single-instance system, your system still has only one master instance, meaning there is a single point of failure for the essential services that only a master instance can run. For more information, see Adding new instances.
Three-instance system considerations
Three-instance systems should have only a single master instance. If you deploy a three-instance system where all three instances are masters, the system may not have enough resources to do much beyond running the master services. For information on master instances, see About master and worker instances.
Requirements for running system instances
This section lists the hardware and operating system requirements for running system instances. Also see Networking for information on network requirements for both instances and services.
Hardware resources
To install the system on on-premises hardware for production use, you must provision at least four instances (nodes) with sufficient CPU, RAM, disk space, and networking capabilities. This table shows the minimum and recommended hardware requirements for each instance in the system.
Resource | Minimum | Recommended |
---|---|---|
RAM | 32 GB | 128 GB |
CPU | 8-core | 24-core |
Available disk space | 500 GB 10k SAS RAID | 2000 GB 15k SAS RAID |
Network interface controller (NIC) | (1) 10 Gb Ethernet | (2) 10 Gb Ethernet |
IP addresses | (1) static | (2) static |
Firewall Port Access | Port 443 for S3 API; port 8000 for System Management application GUI; port 9084 for MAPI and Storage Management App GUI | Same |
Internal IP Ports | See Networking | Same |
Network Time | IP address of time service (NTP) | Same |
Important: Each instance uses all available RAM and CPU resources on the server or virtual machine on which it's installed.
Operating system and Docker requirements
To be a system instance, each server or virtual machine you provide:
•Must run a 64-bit Linux distribution
•Must have Docker version 1.13.1 or later installed
Important: Install the current Docker version suggested by your operating system, unless that version is earlier than 1.13.1. The system cannot run with Docker versions prior to 1.13.1.
This table shows the operating systems and the Docker and SELinux configurations with which the system has been qualified. For more information, see Docker considerations and SELinux considerations.
Operating System | Docker Version | Docker Storage Configuration | SELinux setting |
---|---|---|---|
Fedora 27 | Docker 1.13.1-58.git87f2fab.el7.x86_64 | direct-lvm | Enforcing |
Red Hat Enterprise Linux 7.4 | Docker 1.13.1-58.git87f2fab.el7.x86_64 | direct-lvm | Enforcing |
Ubuntu 16.04-LTS | Docker 17.03.0-ce | aufs | N/A |
CentOS 7.4 | Docker 18.03.1-ce | overlay2 | Enforcing |
Docker considerations
•The Docker installation directory on each instance must have at least 20 GB available for storing the system Docker images.
•Make sure that the Docker storage driver is configured correctly on each instance before installing the system.
After installing the system, changing the Docker storage driver requires a reinstallation of the system.
To view the current Docker storage driver on an instance, run:
docker info
•If you want to enable SELinux on the system instances, you need to use a Docker storage driver that supports it. The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
•If you are using the Docker devicemapper storage driver:
oMake sure that there's at least 40 MB of Docker metadata storage space available on each instance. The system requires 20 MB to install successfully and an additional 20 MB to successfully update to a later version.
To view Docker metadata storage usage on an instance, run:
docker info
oOn a production system, do not run devicemapper in loop-lvm mode. This can cause slow performance or, on certain Linux distributions, the system may not have enough space to run.
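To check the storage driver and, with devicemapper, the metadata space figures mentioned above, you can filter the docker info output. This assumes a shell with grep available:
docker info 2>/dev/null | grep -iE 'storage driver|metadata space'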
SELinux considerations
•You should decide whether you want to run SELinux on the new instance. Then, enable or disable it before installing the system on the instance.
Enabling or disabling SELinux on an instance requires you to reboot the instance.
To view whether SELinux is enabled on an instance, run:
sestatus
•If you want to enable SELinux on the system instances, you need to use a Docker storage driver that supports it.
The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
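If you decide to disable SELinux before installing, the following is a hedged sketch for a Red Hat-family instance; the configuration file location may differ on other distributions, and the instance must be rebooted afterward as noted above:
# Persistently disable SELinux, then reboot for the change to take effect
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo reboot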
Supported browsers
•Google Chrome latest
•Mozilla Firefox latest
Time source requirements
If you are installing a multi-instance system, each instance should run NTP (network time protocol) and use the same external time source. For information, see support.ntp.org.
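For example, a hedged sketch of pointing an instance's ntpd at a shared time source and verifying synchronization. The server name is a placeholder, and your environment may use chrony or another NTP client instead:
# /etc/ntp.conf - use the same time source on every instance
server time.example.com iburst

# Verify that the instance is synchronizing
ntpq -p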
Adding new instances
You may want to add additional instances to the system if:
•You want to improve system performance.
•You are running out of disk space on one or more instances.
Important: You cannot add new master instances, only new worker instances.
However, these situations may also be improved by adding additional CPU, RAM, or disks to the instances you already have. For guidance, see Hardware resources.
•Obtain the product installation file. When adding an instance, you unpack and deploy this file on a bare-metal server or a pre-existing Linux virtual machine.
•Record the IP address(es) of at least one of the master instances in the system.
If your system uses internal and external networks, you need to record both the internal and external IP addresses for the master instances.
You can view instance IP addresses on the Instances page in the System Management application.
•Ensure that the new instances you are adding meet the minimum hardware, OS, and networking requirements. For information, see Requirements for running system instances.
•Record the Docker volume drivers currently used by services and jobs across all existing instances. You need to install all of these volume drivers on the new instance that you're adding.
To find the volume drivers currently in use by your system, run this command on each system instance:
docker volume ls
Take note of each value for the DRIVER field.
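For example, one way to list just the driver names in use, assuming your Docker version supports the --format templating option:
docker volume ls --format '{{.Driver}}' | sort -u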
On each server or virtual machine that is to be a system instance:
1.In a terminal window, check whether Docker 1.13.1 or later is installed:
docker --version
2.If Docker is not installed or if you have a version prior to 1.13.1, install the current Docker version suggested by your operating system.
The installation method you use depends on your operating system. See the Docker website for instructions.
Configure Docker with settings suitable for your environment. For guidance on configuring and running Docker, see the applicable Docker documentation.
Important:
•The Docker installation directory on each instance must have at least 20 GB available for storing the system Docker images.
•Make sure that the Docker storage driver is configured correctly on each instance before installing the system. After installing the system, changing the Docker storage driver requires a reinstallation of the system. To view the current Docker storage driver on an instance, run:
docker info
•If you want to enable SELinux on the system instances, you need to use a Docker storage driver that supports it. The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
•If you are using the Docker devicemapper storage driver:
oMake sure that there's at least 40 MB of Docker metadata storage space available on each instance. The system requires 20 MB to install successfully and an additional 20 MB to successfully update to a later version. To view Docker metadata storage usage on an instance, run:
docker info
oOn a production system, do not run devicemapper in loop-lvm mode. This can cause slow performance or, on certain Linux distributions, the system may not have enough space to run.
If any services or jobs on your system are using Docker volume drivers (that is, not the bind-mount setting) for storing data, you need to install those volume drivers on the new instance that you are adding. If you don't, jobs and services may fail to run on the new instance.
Volume drivers are provided by Docker and other third-party developers, not by the system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.
For more information, see Volumes.
On each server or virtual machine that is to be a system instance, add this line to the /etc/sysctl.conf file:
vm.max_map_count = 262144
If the line already exists, ensure that the value is greater than or equal to 262144.
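For example, a hedged sketch that appends the setting and applies it immediately, without waiting for a reboot:
echo 'vm.max_map_count = 262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p              # reload settings from /etc/sysctl.conf
sysctl vm.max_map_count     # verify the running value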
You should decide whether you want to run SELinux on the new instance. Then, enable or disable SELinux on that instance before installing the system.
Enabling or disabling SELinux on an instance requires you to reboot the instance.
For information on running the system with SELinux enabled, see Operating system and Docker requirements and SELinux considerations.
1.Determine the port values currently used by your system. To do this, on any instance, view the <installation-directory>/config/network.config file.
2.On each server or virtual machine that is to be a system instance:
a.Edit the firewall rules to allow communication over all network ports that your system currently uses. You do this using a tool such as firewalld.
b.Restart the server or virtual machine.
Install NTP (Network Time Protocol) on the new server or virtual machine and configure it to use the same time source as the other system instances. For information, see support.ntp.org.
On each server or virtual machine that is to be a system instance, you need to start Docker and keep it running.
You can use whatever tools you typically use for keeping services running in your environment.
For example, to run Docker using systemd:
1.To check if Docker is running, run this command:
systemctl status docker
2.If Docker is not running, start the docker service:
sudo systemctl start docker
3.Optionally, configure the docker service to start automatically when you restart the server or virtual machine:
sudo systemctl enable docker
On each server or virtual machine that is to be a system instance:
1.Retrieve the product installation file and store it in a directory on the server or virtual machine.
2.In the largest disk partition, create a directory. This is the product installation directory.
3.Move the installation package to the product installation directory:
mv <path>/<installation-file> /<path>/<installation-directory>
4.Navigate to the installation directory:
cd /<path>/<installation-directory>
5.Unpack the installation package:
tar -zxvf <installation-file>
6.Run the install script in the version-specific directory:
sudo ./cluster/<version-number>/bin/install
For example:
sudo ./cluster/1.4.0.123/bin/install
Notes:
•Don't change directories after running the install script. The following steps are performed in your current directory, not under the version-specific directory.
•The install script can be run only once on each instance. You cannot rerun this script to try to repair or upgrade a system instance.
On each server or virtual machine that is to be a system instance, edit the <installation-directory>/config/network.config file to be identical to the copies of the same file on the existing system instances.
On each server or virtual machine that is to be a system instance, run the setup script with the applicable options to match the network configuration of the existing system:
•-i — The external network IP address for the instance that you're adding.
•-I — The internal network IP address for the instance that you're adding.
•-m — The external IP address for an existing master instance.
•-M — The internal IP address for an existing master instance.
For example, if you're adding an instance to a system that uses both internal and external networks, use this syntax:
sudo bin/setup -i <external-ip-for-new-instance> -I <internal-ip-for-new-instance> -m <list-of-external-master-ips> -M <list-of-internal-master-ips>
Such as:
sudo bin/setup -i 192.0.2.4 -I 10.236.1.0 -m 192.0.2.0,192.0.2.1,192.0.2.3 -M 10.236.1.1,10.236.1.2,10.236.1.3
Important: For the -m and -M options:
•Do not specify the IP address for the instance you are trying to add. You cannot add new master instances to an existing system.
•You can optionally specify multiple master IP addresses. If you do, separate them with commas, not spaces. For example:
sudo bin/setup -i 192.0.2.4 -m 192.0.2.0,192.0.2.1,192.0.2.3
Example
This table shows the example commands used to create a four-instance system. Each command is entered on a different server or virtual machine that is to be a system instance.
The resulting system in this example contains three master instances, one worker instance, and uses both internal and external networks.
Instance external IP | Instance internal IP | Master or worker | Command |
---|---|---|---|
192.0.2.1 | 10.236.1.1 | Master | sudo bin/setup -i 192.0.2.1 -I 10.236.1.1 -m 192.0.2.1,192.0.2.2,192.0.2.3 |
192.0.2.2 | 10.236.1.2 | Master | sudo bin/setup -i 192.0.2.2 -I 10.236.1.2 -m 192.0.2.1,192.0.2.2,192.0.2.3 |
192.0.2.3 | 10.236.1.3 | Master | sudo bin/setup -i 192.0.2.3 -I 10.236.1.3 -m 192.0.2.1,192.0.2.2,192.0.2.3 |
192.0.2.4 | 10.236.1.4 | Worker | sudo bin/setup -i 192.0.2.4 -I 10.236.1.4 -m 192.0.2.1,192.0.2.2,192.0.2.3 |
On each server or virtual machine that is to be a system instance, start the run script using whatever methods you usually use to run scripts.
Important: Ensure that the method you use can keep the run script running and can automatically restart it in case of a server reboot or other availability event.
Here are some examples of how you can start the script:
•Example 1 — You could run the script in the foreground:
sudo bin/run
When you run the run script this way, the script does not automatically complete. It remains running in the foreground.
•Example 2 — You could run the script as a service using systemd:
a.Copy the <product-name>.service file to the appropriate location for your OS. For example:
cp /<file-path>/bin/<product-name>.service /etc/systemd/system
b.Enable and start the service:
sudo systemctl enable <product-name>.service
sudo systemctl start <product-name>.service
Note: When you enable the service, systemctl may display this message:
The unit files have no [Install] section. They are not meant to be enabled using systemctl. Possible reasons for having this kind of units are: 1) A unit may be statically enabled by being symlinked from another unit's .wants/ or .requires/ directory. 2) A unit's purpose may be to act as a helper for some other unit which has a requirement dependency on it. 3) A unit may be started when needed via activation (socket, path, timer, D-Bus, udev, scripted systemctl call, ...).
Depending on your OS, the service may or may not have successfully been enabled. To avoid this, make sure that you moved the <product-name>.service file to the appropriate location, typically /etc/systemd/system.
Once the service starts, the server or virtual machine automatically joins the system as a new instance.
Also, depending on how your jobs are configured, jobs may not run on the new instances that you've added. You need to manually configure jobs to run on them. For information, see Configuring where jobs run.
Viewing instances
You can use the System Management application, CLI, and REST API to view a list of all instances in the system.
System Management application instructions
To view all instances, in the System Management application, click on Dashboard > Instances.
The page shows all instances in the system. Each instance is identified by its IP address.
This table describes the information shown for each instance.
Property | Description |
---|---|
State | One of these: •Up — The instance is reachable by other instances in the system. •Down — The instance cannot be reached by other instances in the system. |
Services | The number of services running on the instance. |
Service Units | The total number of service units for all services and job types running on the instance, out of the recommended service unit limit for the instance. An instance with a higher number of service units is likely to be more heavily utilized by the system than an instance with a lower number of service units. The Instances page displays a blue bar for instances running less than the recommended service unit limit and a red bar for instances running more than the recommended service unit limit. For more information on service units and service unit recommendations, see Service units. |
Load Average | The load averages for the instance for the past one, five, and fifteen minutes. |
CPU | The sum of the percentage utilization for each CPU core in the instance. |
Memory Allocated | This section shows both: •The amount of RAM on the instance that's allocated to all services running on that instance. •The percentage of the instance's total RAM that this allocated amount represents. |
Memory Total | The total amount of RAM for the instance. |
Disk Used | The current amount of disk space that your system is using in the partition on which it is installed. |
Disk Free | The amount of free disk space in the partition in which your system is installed. |
To view the services running on an individual instance, in the System Management application:
1.Click on Dashboard > Instances.
2.Click on the instance you want.
The page lists all services running on the instance.
For each service, the page shows:
•The service name
•The service state. One of these:
oHealthy — The service is running normally.
oUnconfigured — The service has yet to be configured and deployed.
oDeploying — The system is currently starting or restarting the service. This can happen when:
–You move the service to run on a completely different set of instances.
–You repair a service.
For information on viewing the status of service operations, see Monitoring service operations.
oBalancing — The service is running normally, but performing some background maintenance operations.
oUnder-protected — In a multi-instance system, one or more of the instances on which a service is configured to run are offline.
oFailed — The service is not running or the system cannot communicate with the service.
•CPU Usage — The current percentage CPU usage for the service across all instances on which it's running.
•Memory — The current RAM usage for the service across all instances on which it's running.
•Disk Used — The current total amount of disk space that the service is using across all instances on which it's running.
Related CLI command(s)
getInstance
listInstances
For information on running CLI commands, see CLI reference.
Related REST API method(s)
GET /instances
GET /instances/{uuid}
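For example, a minimal curl sketch for listing instances. The port, path prefix, and authorization header are assumptions for illustration; check the REST API reference for the exact request format:
curl -k -H "Authorization: Bearer <token>" \
  "https://<system-hostname>:8000/api/admin/instances"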
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Removing instances
You would typically remove an instance from your system in these situations:
•You are retiring the hardware on which the instance runs
•The instance is in the Down state and cannot be recovered
•You want to run a system with fewer instances
If the instance has already shut down as a result of a failure, the instance is in the Down state. Your system automatically attempts to move all services from that instance to other instances in the system. After all services have been moved, the instance is eligible for removal, and you can remove it by following the System Management application instructions below.
If the instance that you want to remove is in the Up state, you need to shut it down yourself before you can remove it from the system.
Procedure:
1.Move all the services that the instance is currently running to the other instances in the system. For information, see Moving and scaling services.
Important: Shutting down an instance without first moving its services can cause data loss.
2.If the system has jobs configured to run on only the instance that you're removing, configure those jobs to run on other instances. For information, see Configuring where jobs run.
3.Stop the run script from running. You do this using whatever method you're currently using to run the script.
4.Run this command to stop all system Docker containers on the instance:
sudo <installation-directory>/bin/stop
5.Delete the system Docker containers:
a.List all Docker containers:
sudo docker ps
b.Note the container IDs for all containers that use a com.hds.ensemble or com.hitachi.aspen image.
c.Delete each of those containers:
sudo docker rm <container-id>
6.Delete the system Docker images:
a.List all Docker images:
sudo docker images
b.Note the image IDs for all images that use a com.hds.ensemble or com.hitachi.aspen repository.
c.Delete each of those images:
sudo docker rmi <image-id>
7.Delete the system installation directory:
rm -rf /<installation-directory>
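As an illustration only, the container and image cleanup in steps 5 and 6 above can be scripted like this, assuming GNU grep, awk, and xargs are available on the instance:
# Remove containers whose image matches either repository prefix
sudo docker ps -a --format '{{.ID}} {{.Image}}' \
  | grep -E 'com\.hds\.ensemble|com\.hitachi\.aspen' \
  | awk '{print $1}' | xargs -r sudo docker rm

# Remove the matching images
sudo docker images --format '{{.ID}} {{.Repository}}' \
  | grep -E 'com\.hds\.ensemble|com\.hitachi\.aspen' \
  | awk '{print $1}' | xargs -r sudo docker rmi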
System Management application instructions
1.Click on the Instances panel.
2.Click on the instance you want to remove.
3.Click on Remove Instance.
Related REST API method(s)
DELETE /instances/{uuid}
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Replacing a failed instance
If an instance suffers an unrecoverable failure, you need to replace that instance with a new one.
Steps:
1.In the System Management application, view the Instances page to determine whether the failed instance was a master instance.
2.Select a new server or virtual machine to add as a new instance to the system. For information on instance requirements, see Requirements for running system instances.
3.Remove the failed instance from the system. For information, see Removing instances.
WARNING! If the failed instance was a master, after removing it, you have only two master instances remaining. If any other instance fails while you are in this state, the system becomes completely unavailable until you add a third master back to the system by completing this procedure.
4.Add the replacement instance to the system. For information, see Adding new instances.
Important: If the instance you are replacing was a master instance, when you run setup on the replacement instance, the list of masters that you specify for the -m option needs to include:
•The IP addresses of the two remaining healthy master instances.
•The IP address of the new instance that you're adding.
For example, in a system with master instance IPs ranging from 192.0.2.1 to 192.0.2.3, if you are replacing instance 192.0.2.3 with 192.0.2.5, run setup with these options:
sudo bin/setup -i 192.0.2.5 -m 192.0.2.1,192.0.2.2,192.0.2.5
This does not apply when you're replacing a worker instance. In that case, specify the IP addresses of the three existing masters.
Plugins
Plugins are modular pieces of code that allow your system to perform specific activities.
Plugins are organized in groups called plugin bundles. When adding or removing plugins from your system, you work with plugin bundles, not individual plugins.
Viewing installed plugins
Use the System Management application, REST API, and CLI to view all plugin bundles and individual plugins that have been installed. You can view all individual plugins at the same time or filter the list based on plugin type.
System Management application instructions
1.Click on the Configuration panel.
2.Click on Plugins.
The Plugin Bundles tab shows all installed plugin bundles.
3.To view all individual plugins, click on the All Plugins tab.
Related REST API method(s)
GET /plugins
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Upgrading plugin bundles
To upgrade plugins, you upload a new version of the bundle that contains those plugins.
You can select which version of the plugin bundle is the active one (that is, the one that connectors or stages will use). If you select the new version, all connectors and stages immediately begin using the new versions of the plugins in the bundle.
You can change the active plugin bundle version at any time. For information on doing this, see Setting the active plugin bundle version.
System Management application instructions
To upgrade a plugin bundle:
1.Click on the Configuration panel.
2.Click on Plugins.
3.Click on the Upload Bundle button.
4.In the Upload Plugins window, drag and drop the new version of the plugin bundle.
5.In the list of plugin bundles, click on the row for the plugin bundle version that you want.
If the bundle you uploaded isn't listed, click on the Reload Plugins button.
6.Click on the Set Active button.
Related CLI command(s)
uploadPlugin
setPluginBundleActive
Related REST API method(s)
POST /plugins/upload
POST /plugins/bundles/{name}/{bundleVersion}/active
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Setting the active plugin bundle version
If you've uploaded multiple versions of a plugin bundle, only one version can be active at a time. The active plugin bundle version is the one that the system uses.
System Management application instructions
To set the active plugin version:
1.Click on the Configuration panel.
2.Click on Plugins.
3.Click on the row for the plugin bundle version that you want.
4.Click on the Set Active button.
Related CLI command(s)
setPluginBundleActive
For information on running CLI commands, see CLI reference.
Related REST API method(s)
POST /plugins/bundles/{name}/{bundleVersion}/active
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Deleting plugin bundles
To delete plugins from your system, you delete plugin bundles from the system. You cannot delete individual plugins.
You cannot delete a plugin bundle, or any of its versions, if any of that bundle's plugins are currently in use by the system.
System Management application instructions
To delete a plugin bundle:
1.Click on the Configuration panel.
2.Click on Plugins.
3.Click on the delete icon for the plugin bundle you want to remove.
Related CLI command(s)
deletePluginBundle
For information on running CLI commands, see CLI reference.
Related REST API method(s)
DELETE /plugins/bundles/{name}/{bundleVersion}
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Packages
You can back up your system configuration by exporting packages. You can store these package files and use them to restore your configurations in case of a system failure.
Exporting packages
You can export the configurations for system components as package files. You can back up these package files and use them to restore your configurations in case of a system failure.
After exporting a package, you can store it in one of your data sources. When you want to import the package, your system can retrieve it directly from the data source.
For information on importing packages, see Importing packages.
System Management application instructions
1.Click on the Configuration panel.
2.Click on Packages.
3.Click on Export.
4.Under Customize Package Description, give your package a name and an optional description.
5.Under Configuration, select any configuration items to export.
6.Under Plugins, select any plugin bundles to export.
7.Under Components, select any available components to export.
If you select one component but not the components it depends on, the System Management application prompts you to add those missing components to the package.
8.Under Validate, make sure your package is valid and click on the Download Package button.
9.Once your package downloads, click on the Download Package button to download it again, or click on the Finish button to exit.
Related CLI command(s)
buildPackage
downloadPackage
For information on running CLI commands, see CLI reference.
Related REST API method(s)
POST /package/build
POST /package/download
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Importing packages
To import a package, you can upload it from your computer or have your system retrieve it from one of your data sources. After you import the package, your system runs a system task to synchronize the package components across all instances in your system.
The system can have only one imported package at a time.
Notes:
•Importing a component that already exists on your system may cause conflicts and should be avoided.
•You need to manually resolve conflicts with Components, while conflicts with Configuration are handled automatically by the system.
System Management application instructions
1.Click on the Configuration panel.
2.Click on Packages.
3.Click on Import.
4.Do one of these:
oIf the package you want to import is stored on your computer, click and drag the package file into the Upload Package panel.
oIf the package you want to import is stored in one of your data sources, click on the Click to Upload panel. Then, browse for the package file.
5.Under Package Description, review the description and click on the Continue button.
6.Under Configuration, select any configuration items to import.
7.Under Plugins, select any plugin bundles to import.
8.Under Components, select any available components to import.
9.Under Validate, make sure your package is valid and click on the Install Package button.
Your system starts a system task to install the package components on all instances in the system.
You can monitor the task from the current page or from the Processes page.
10.Once the task has completed and all package components have been installed, click on the Complete Install button to delete the package from the system.
Related CLI command(s)
uploadPackage
loadPackage — (Loads a package from a data connection)
installPackage
getPackageStatus
deletePackage
For information on running CLI commands, see CLI reference.
Related REST API method(s)
POST /package — (Uploads a package)
POST /package/load — (Loads a package from a data connection)
POST /package/install
GET /package — (Gets the status of the imported package)
DELETE /package
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Setting a login welcome message
You can use the System Management application, REST API, and CLI to set a welcome message for the System Management application. The message appears on the app's login page.
System Management application instructions
1.Click on the Configuration panel.
2.Click on Security.
3.On the Settings tab, type a message in the Single Sign-on Welcome Message field.
4.Click on the Update button.
Related REST API method(s)
PUT /security/settings
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Updating the system
You can update system software by uploading new update packages.
Important: Hitachi Vantara does not provide updates or security fixes for the host operating systems running on system instances.
In order for a system to be updated:
•All instances and services must be healthy.
•Each service must be running on its recommended number of instances. See Service list.
•Each instance must have enough disk space for the update.
•All required network ports must be available on each instance.
•There can be no in-progress package uploads or installations. See Packages.
•All running jobs must be paused.
•System availability considerations:
oInstances shut down and restart one at a time during the upgrade. Other instances remain online and able to service requests.
oThe System Management application remains available but is in a read-only state. You can monitor the progress of the update, but you cannot make any other changes to the system.
Note: Systems with two instances are more susceptible to availability outages during an update than systems with three or more instances.
As an update runs, you can view its progress on the Configuration > Update page. Also on this page, you can view all system events related to system updates.
After an update, the system runs a new version of the software. Additionally:
•If any of the built-in plugins were updated, your system automatically uses the latest versions of those plugins.
•If an existing service is replaced with a new service, the system automatically runs that new, replacement service.
•If any new services were added, you may need to manually configure those services to run on the system instances.
For information on:
•Plugins, see Plugins
•Configuring where services run, see Moving and scaling services.
If errors occur during an update, the Update page displays information about each error and also displays a Retry button for starting the update over again. Some errors may not be resolved by restarting the update.
If you encounter errors during an update, contact your authorized service provider.
For information on viewing the update history for the system, see Viewing update history.
A system update may add new services or plugins. You need to manually configure your system to start using these new components; your system does not start using them automatically.
System Management application instructions
To update the system:
1.Click on the Configuration panel.
2.Click on Update.
3.Click on the Install tab.
4.Click and drag the file into the Upload panel.
The update file is uploaded, and the system checks that the file is valid. This may take several minutes.
5.On the Update page, click on the View button in the Update Status panel.
6.The Verify & Apply Update page displays information about the contents of the update.
7.To start the update, click on the Apply Update button.
Related CLI command(s)
getUpdateStatus
installUpdate
deleteUpdate
loadUpdate
uploadUpdate
For information on running CLI commands, see CLI reference.
Related REST API method(s)
GET /update
POST /update/install
DELETE /update/package
POST /update/package
POST /update/package/load — (Retrieves update package from a data connection)
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Viewing update history
You can view a list of all updates that have previously been applied to your system. For each update, you can view the corresponding version number and the date on which it was installed.
System Management application instructions
1.Click on the Configuration panel.
2.Click on Update.
The History tab lists previously installed versions and when each was installed.
Related REST API method(s)
GET /update/history
For information on specific REST API methods, in the System Management application, click on the help icon. Then:
•To view the administrative REST API methods, click on REST API - Admin.
For general information about the administrative REST API, see REST API reference.
Uninstalling the system
To completely uninstall your system, do the following on all instances:
1.Stop the run script from running. You do this using whatever method you're currently using to run the script.
2.Run this command to stop all system Docker containers on the instance:
sudo <installation-directory>/bin/stop
3.Delete the system Docker containers:
a.List all Docker containers:
sudo docker ps
b.Note the container IDs for all containers that use a com.hds.ensemble or com.hitachi.aspen image.
c.Delete each of those containers:
sudo docker rm <container-id>
4.Delete the system Docker images:
a.List all Docker images:
sudo docker images
b.Note the image IDs for all images that use a com.hds.ensemble or com.hitachi.aspen repository.
c.Delete each of those images:
sudo docker rmi <image-id>
5.Delete the system installation directory:
rm -rf /<installation-directory>