
System management

As an administrator, you play a role in ensuring the continued accessibility and performance of the system. You can use the Admin App, CLI commands, or REST API methods to manage the system.

Your responsibilities for administering the system include:

  • Managing and monitoring system performance and resource usage by configuring how instances are deployed in your infrastructure.
  • Expanding functionality by writing and installing plugins.
  • Setting up email notifications.
  • Upgrading the system.

Setting host name

After installing your system, you need to configure it with the host name assigned to it in your corporate DNS environment.

The host name must be a fully qualified domain name (FQDN) using lowercase letters.

Procedure

  1. Select Dashboard > Configuration.

  2. Click Security.

  3. On the Settings tab, specify the system or cluster host name in the Cluster Hostname field. Type a lowercase ASCII FQDN of up to 255 characters, using only the characters a-z, 0-9, hyphen (-), period (.), underscore (_), and tilde (~).

  4. Click Update.
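
Before entering the value, you can sanity-check it from a shell. The following one-liner is only a convenience sketch (the fqdn value is a placeholder for your host name); it confirms that the name uses only the allowed characters and is at most 255 characters long:

  fqdn=cluster1.example.com
  echo "$fqdn" | grep -Eq '^[a-z0-9._~-]{1,255}$' && echo "allowed" || echo "not allowed"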

Changing host name

If you change the system or cluster host name, you must update the system certificate and restart the S3 Gateway and MAPI Gateway services for the change to take effect.

Procedure

  1. Select Dashboard > Configuration.

  2. Click Security.

  3. On the Settings tab, change the system or cluster host name in the Cluster Hostname field.

    Type a lowercase ASCII FQDN of up to 255 characters, using only the characters a-z, 0-9, hyphen (-), period (.), underscore (_), and tilde (~).
  4. Click Update.

Next steps

After changing the host name, do the following:
  1. Update the system certificate. This applies to the default self-signed certificate as well.
  2. Restart (repair) the S3 Gateway and MAPI Gateway services.
  3. If encryption is enabled, restart (repair) the Key Management Server (KMS) service and unseal the vault.

Related CLI commands

editSecuritySettings

Related REST API methods

PUT /security/settings

You can get help on specific REST API methods for the Admin App at REST API - Admin.

System scaling

You manage how the system scales by adding instances to or removing instances from the system and by specifying which services run on those instances.

Instances

An instance is a server or virtual machine on which the software is running. A system can have either a single instance or multiple instances. Multi-instance systems have a minimum of four instances.

A system with multiple instances maintains higher availability if instances fail. Additionally, a system with more instances can run tasks concurrently and typically processes tasks faster than a system with fewer instances or a single instance.

A multi-instance system has two types of instances: master instances, which run an essential set of services, and non-master instances, which are called workers.

Services

Each instance runs a configurable set of services, each of which performs a specific function. For example, the Metadata Gateway service stores metadata persistently.

In a single-instance system, that instance runs all services. In a multi-instance system, services can be distributed across all instances.

Networking

This topic describes the network usage by, and requirements for, both system instances and services.

Note
  • You can configure the network settings for each service when you install the system. You cannot change these settings after the system is up and running.
  • If the networking environment changes such that the system can no longer function with its current networking configuration, you must reinstall the system.
Cluster host name

The HCP for cloud scale cluster host name is configured during installation. The cluster host name is required for access to both the HCP for cloud scale user interface and the S3 API.

Instance IP address requirements

All instance IP addresses must be static, including both internal and external network IP addresses if applicable to the system. If you replace an instance, you can reuse its IP address. By doing so you don't have to change DNS entries and you conserve the address.

Network types

Each of the HCP for cloud scale services can bind to one type of network, either internal or external, for receiving incoming traffic. If the network infrastructure supports having two networks, you might want to isolate the traffic for most system services to a secured internal network that has limited access. You can then leave the following services on the external network for user access:

  • Admin-App
  • Message Queue
  • Metadata-Cache
  • Metadata-Coordination
  • Metadata-Gateway
  • Policy-Engine
  • Metrics
  • S3-Gateway
  • Tracing-Agent
  • Tracing-Collector
  • Tracing-Query
  • MAPI-Gateway

You can use either a single network type for all services or a mix of both types. To use both types, every instance in the system must be addressable by two IP addresses, one on the internal network and one on the external network. If you use only one network type, each instance needs only one IP address.

Allowing access to external resources

Regardless of whether you're using a single network type or a mix of types, you must configure the network environment to ensure that all instances have outgoing access to the external resources you want to use, such as:

  • The storage components where the object data is stored
  • Identity providers for user authentication
  • Email servers that you want to use for sending email notifications
Ports

Each service binds to a number of ports for receiving incoming traffic. Port mapping is visible from the Network tab for each service.

Before installing HCP for cloud scale, you can configure services to use different ports, or use the default values shown in the following tables.

The following services must be deployed with their default port values:

  • Message Queue
  • Metadata Cache
  • Tracing Agent
  • Tracing Collector
  • Tracing Query
External ports

The following table contains information about the service ports that users use to interact with the system.

On every instance in the system, each of these ports:

  • Must be accessible from any network that needs administrative or data access to the system
  • Must be accessible from every other instance in the system
Default port value | Used by service | Purpose
80 (S3 HTTP port, if enabled) | S3 Gateway | Object persistence and access
443 (S3 HTTPS port) | S3 Gateway, S3 Console application | Object persistence and access; proxied by Network Proxy
8000 | Admin App | System Management application GUI
8443 (S3 HTTPS port) | S3 Gateway | Object persistence and access; not proxied by Network Proxy, used by external load balancer
9099 | MAPI Gateway | Object Storage Management application GUI

Load balancing

The supported options for load balancing S3 traffic affect performance.

The S3 Gateway service processes S3 traffic and can serve as an SSL termination point. It can listen on port 80, port 443 (the standard SSL port) or port 8443. The Network Proxy service balances the flow of S3 traffic to S3 Gateway instances. The Network Proxy service listens only on port 443. By default, Network Proxy passes S3 SSL traffic through to the S3 Gateway service.

To improve performance, you can configure an external load balancer and bypass Network Proxy. If your load balancer supports SSL termination, you can configure S3 Gateway instances to accept HTTP traffic on port 80.

If you want your load balancer to pass through SSL S3 traffic and your firewall rules permit traffic on port 8443, configure your load balancer to point to port 8443.

If you want your load balancer to pass through SSL S3 traffic but your firewall rules block traffic on port 8443, you can use iptables to redirect traffic arriving on port 443 to port 8443.

HCP for cloud scale provides scripts to enable and disable iptables redirection of S3 traffic. An additional script lists the IP addresses of affected instances.
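
The enable and disable scripts described below add and remove a NAT PREROUTING rule roughly equivalent to the following manual command (a sketch only; run as root on each instance running the S3 Gateway service):

  iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443

This corresponds to the REDIRECT entry shown in the example script output below.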

Script to enable S3 traffic redirection

A script is included to enable redirection of S3 traffic from port 443 to port 8443.

The script is written in Python and located in the folder install_path/product/bin (for example, /opt/hcpcs/bin).

The script redirects S3 traffic from port 443 to port 8443 using iptables.

Note: The script requires Python 2.
Syntax
enable_s3_redirect.py
Options and parameters

None

Example
$ enable_s3_redirect.py

This example can produce the following output:

*** PREROUTING chain in NAT table before adding Redirect
Chain PREROUTING (policy ACCEPT 135 packets, 8100 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1      14M  845M DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

-------------------------------------------------------------------------
*** PREROUTING chain in NAT table after adding Redirect
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1      14M  845M DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
2        0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 redir ports 8443

Script to disable S3 traffic redirection

A script is included to disable redirection of S3 traffic from port 443 to port 8443.

The script is written in Python and located in the folder install_path/product/bin (for example, /opt/hcpcs/bin).

The script removes the iptables rule that redirects S3 traffic from port 443 to port 8443.

Note: The script requires Python 2.
Syntax
disable_s3_redirect.py
Options and parameters

None

Example
$ disable_s3_redirect.py

This example can produce the following output:

*** PREROUTING chain in NAT table before deleting Redirect
Chain PREROUTING (policy ACCEPT 3227 packets, 195K bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1      14M  845M DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
2        0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 redir ports 8443

-------------------------------------------------------------------------
*** PREROUTING chain in NAT table after deleting Redirect
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1      14M  845M DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Script to list S3 traffic redirection

A script is included to list the instances affected by S3 traffic redirection.

The script is written in Python and located in the folder install_path/product/bin (for example, /opt/hcpcs/bin).

The script lists the IP addresses of the instances running the S3 Gateway service and writes the output to the file s3NodeIPs.txt.

Syntax
list_s3_node_ips.py username password
Options and parameters
  • username

    User name of an HCP for cloud scale user with administrative privileges.

  • password

    Password for the administrative user.

Example
$ list_s3_node_ips.py username password

This example can produce the following output:

INSTALL_DIR: /opt/hcpcs
ADMIN_CLI: /opt/hcpcs/cli/admin/admincli
IPs of S3 Gateway nodes:
172.10.24.195
172.10.24.196
172.10.24.197
172.10.24.198
172.10.24.199
Output file is located at /opt/hcpcs/s3NodeIPs.txt

Handling network changes

After your system is deployed, its network infrastructure and configuration should not change. Specifically:

  • All instance IP addresses should not change.
  • All services should continue to use the same ports.
  • All services and instances should continue to use the same network types.

If any of these things change, you will need to reinstall the system.

Safely changing an instance IP address

If you need to change the IP addresses for one or more instances in the system, use this procedure to manually change the IP addresses without risk of data loss.
Note: You can reuse the IP addresses of retired nodes for new nodes.

For each instance whose IP address you need to change:

Procedure

  1. Move all services off of the instance. Distribute those services among all the other instances.

  2. On the instance from step 1, stop the run script using whatever tool or process you used to run it.

    For example, with systemd, run: systemctl stop hcpcs.service
  3. Remove the instance from the system.

  4. Delete the installation folder from the instance.

  5. Add the instance back to the system.

After a network change

If a network infrastructure or configuration change occurs that prevents your system from functioning with its current network settings, you need to reinstall all instances in the system.

Procedure

  1. If the Admin App is accessible, back up your system components by exporting a package.

  2. On each instance in the system:

    1. Navigate to the installation folder.

    2. Stop the run script using whatever tool or process you used to run it. For example, with systemd, run:

      systemctl stop <service-name>
    3. Run bin/stop

    4. Run the setup script, including the list of master instances:

      sudo bin/setup -i <ip-address-for-this-instance> -m
            <comma-separated-list-of-master-instance-IP-addresses>
    5. Run the run script using whatever methods you usually use to run scripts.

  3. Log in to the Admin App and use the wizard to set up the system.

  4. After the system has been set up, upload your package.

Volumes

Volumes are properties of services that specify where and how a service stores its data.

You can use volumes to configure services to store their data in external storage systems, outside of the system instances. This allows data to be more easily backed up or migrated.

Volumes can also allow services to store different types of data in different locations. For example, a service might use two separate volumes, one for storing its logs and the other for storing all other data.

Note: Some functions described here are not used with HCP for cloud scale. They are not visible in the System Management application, or have no effect when used.
Example

In this example, service A runs on instance 101. The service's Log volume stores data in a folder on the system instance and the service's Data volume stores data in an NFS mount.


Creating and managing volumes

Volumes are separated into these groups, depending on how they are created and managed:

  • System-managed volumes are created and managed by the system. When you deploy the system, you can specify the volume driver and options that the system should use when creating these volumes.

    After the system is deployed, you cannot change the configuration settings for these volumes.

  • User-managed volumes can be added to services and job types after the system has been deployed. These are volumes that you manage; you need to create them on your system instances before you can configure a service or job to use them.
    Note: The built-in services don't support adding user-managed volumes.
Volume drivers

When configuring a volume, you specify the volume driver that it uses. The volume driver determines how and where data is stored.

Because services run in Docker containers on instances in the system, volume drivers are provided by Docker and other third-party developers, not by the system itself. For information about volume drivers you can use, see the applicable Docker or third-party developer's documentation.

By default, all services do not use volume drivers but instead use the bind-mount setting. With this setting, data for each service is stored within the system installation folder on each instance where the service runs.

For more information on volume drivers, see the Docker documentation.

Viewing volumes

The System Management application shows this information about the Docker volumes used by jobs and services:

  • Name: The unique identifier for the volume.
  • Type: Either of these:
    • System: The volume is managed automatically for you by the system.
    • User: You need to manage the volume yourself.
  • Capacity: Total storage space free in the volume.
  • Used: Space used by the job or service.
  • Pool: The volume category, as defined by the service or job that uses the volume.
Note: Some functions described here are not used with HCP for cloud scale. They are not visible in the System Management application, or have no effect when used.

For each volume, you can also view this information about the volume driver that controls how the volume stores data:

  • Volume driver: The name of the volume driver.
  • Option/Value: The command-line options used to create the volume and their corresponding values. The available options and valid values for those options are determined by the volume driver.

Viewing service volumes

Procedure

  1. Select Dashboard > Services.

  2. Click the service you want.

  3. Click the Volumes tab.

Instances

A system is made up of one or more instances of the software. This section includes information on adding instances to and removing instances from the system.

About master and worker instances

Master instances are special instances that run an essential set of services, including:

  • Admin-App service
  • Cluster-Coordination service
  • Synchronization service
  • Service-Deployment service

Non-master instances are called workers. Workers can run any services except for those listed previously.

Single-instance systems have one master instance while multi-instance systems have either one or three master instances.

Important: You cannot add master instances to a system after it's installed. You can, however, add any number of worker instances.

Single-instance systems versus multi-instance systems

A system can have a single instance or can have multiple instances (four or more).

Note: Every instance must meet the minimum RAM, CPU, and disk space requirements.
Single instance

A single-instance system is useful for testing and demonstration purposes. A single-instance system requires only a single server or virtual machine and can perform all product functionality.

However, a single-instance system has these drawbacks:

  • It has a single point of failure. If the instance hardware fails, you lose access to the system.

  • With no additional instances, you cannot choose where to run services. All services run on the single instance.

Therefore, a single-instance system is unsuitable for use in a production environment.

Multiple instances

A multi-instance system is suitable for use in a production environment because it offers these advantages over a single-instance system:

  • You can control how services are distributed across the multiple instances, providing improved service redundancy, scale out, and availability.
  • A multi-instance system can survive instance outages. For example, with a four-instance system running the default distribution of services, the system can lose one instance and still remain available.
  • Performance is improved as work can be performed in parallel across instances.
  • You can add additional instances to the system at any time.
Note: You cannot change a single-instance system into a production-ready multi-instance system by adding new instances. This is because you cannot add master instances. Master instances are special instances that run a particular set of HCP for cloud scale services. Single-instance systems have one master instance. Multi-instance systems have at least three.

If you add instances to a single-instance system, the system still has only one master instance, meaning there is still a single point of failure for the essential services that only a master instance can run.

Three-instance system considerations

Three-instance systems should have only a single master instance. If you deploy a three-instance system where all three instances are masters, the system may not have enough resources to do much beyond running the master services.

Requirements for running system instances

This section lists the hardware and operating system requirements for running system instances.

Hardware requirements

To install HCP for cloud scale on on-premises hardware for production use, you must provision at least four instances (nodes) with sufficient CPU, RAM, disk space, and networking capabilities. This table shows the hardware resources required for each instance of an HCP for cloud scale system for a minimum qualified configuration and a standard qualified configuration.

Resource | Minimum configuration | Standard configuration
CPU | Single CPU, 10-core | Dual CPU, 20+-core
RAM | 128 GB | 256 GB
Available disk space | (4) 1.92 TB SSD, RAID10 | (8) 1.92 TB SSD, RAID10
Network interface controller (NIC) | (2) 10 Gb Ethernet NICs | (2) 25 Gb Ethernet NICs or (4) 10 Gb Ethernet NICs

Important: Each instance uses all available RAM and CPU resources on the server or virtual machine on which it's installed.

Software requirements

The following table shows the minimum requirements and best-practice software configurations for each instance in an HCP for cloud scale system.

Resource | Minimum | Best
IP addresses | (1) static | (2) static
Firewall port access | Port 443 for SSL traffic, port 8000 for System Management App GUI, port 8888 for Content Search App GUI | Same
Network time | IP address of time service (NTP) | Same

Operating system and Docker minimum requirements

Each server or virtual machine you provide must have the following:

  • 64-bit Linux distribution
  • Docker version installed: Docker Community Edition 18.09.0 or later
  • IP and DNS addresses configured

Additionally, you should install all relevant patches on the operating system and perform appropriate security hardening tasks.

Important: The system cannot run with Docker versions before 1.13.1.

To execute scripts provided with the product on RHEL, you should install Python.

Operating system and Docker qualified versions

This table shows the operating system, Docker, and SELinux configurations with which the HCP for cloud scale system has been qualified.

Important: An issue in Docker Enterprise Edition 19.03.15, resolved in 20.10.5, prevented HCP for cloud scale deployment. Do not install any version of Docker Enterprise Edition above 19.03.14 and below 20.10.5.
Operating system | Docker version | Docker storage configuration | SELinux setting
Red Hat Enterprise Linux 8.4 | Docker Community Edition 19.03.12 or later | overlay2 | Enforcing

If you are installing on Amazon Linux, before deployment, edit the file /etc/security/limits.conf on every node to add the following two lines:

*  hard  nofile  65535
*  soft  nofile  65535
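
One way to append these lines on each node (a convenience sketch; you can also edit the file directly with a text editor):

  printf '*  hard  nofile  65535\n*  soft  nofile  65535\n' | sudo tee -a /etc/security/limits.conf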

Docker considerations

The Docker installation folder on each instance must have at least 20 GB available for storing the Docker images.

Make sure that the Docker storage driver is configured correctly on each instance before installing the product. After you install the product, to change the Docker storage driver you must reinstall the product. To view the current Docker storage driver on an instance, run:

docker info
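
If you want only the storage driver rather than the full docker info report, the --format option can extract it. For example:

  docker info --format '{{.Driver}}'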

Core dumps can fill a host's file system, which can result in host or container instability. Also, if your system uses the data at rest encryption (DARE) feature, encryption keys are written to the dump file. It's best to disable core dumps.

To enable SELinux on the system instances, you need to use a Docker storage driver that SELinux supports. The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.

If you are using the Docker devicemapper storage driver:

  • Make sure that there's at least 40 GB of Docker metadata storage space available on each instance. The product needs 20 GB to install successfully and an additional 20 GB to successfully update to a later version.

    To view Docker metadata storage usage on an instance, run:

    docker info

  • On a production system, do not run devicemapper in loop-lvm mode. This can cause slow performance or, on certain Linux distributions, the product might not have enough space to run.

SELinux considerations

  • You should decide whether you want to run SELinux on system instances and enable or disable it before installing additional software on the instance.

    Enabling or disabling SELinux on an instance requires restarting the instance.

    To view whether SELinux is enabled on an instance, run: sestatus

  • To enable SELinux on the system instances, you need to use a Docker storage driver that SELinux supports.

    The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
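
For example, here is a minimal sketch of checking the current mode and enabling SELinux persistently; the configuration file path and reboot step are standard for RHEL-family distributions, so adjust as needed for your distribution:

  sestatus                                                       # or: getenforce
  sudo sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
  sudo reboot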

Supported browsers

The HCP for cloud scale web applications support these web browsers:

  • Google Chrome latest
  • Mozilla Firefox latest

Time source

If you are installing a multi-instance system, each instance should run NTP (network time protocol) and use the same external time source. For information, see support.ntp.org.

Adding new instances

You might want to add additional instances to the system if:

  • You want to improve system performance.
  • You are running out of disk space on one or more instances.
Important: You cannot add new master instances, only new worker instances.

However, these situations might also be improved by adding additional CPU, RAM, or disks to the instances you already have.

Before adding a new instance

  • Obtain the product installation file. When adding an instance, you unpack and deploy this file on a bare-metal server or a pre-existing Linux virtual machine.
  • Record the IP address of at least one of the master instances in the system.

    If your system uses internal and external networks, you need to record both the internal and external IP addresses for the master instances.

    You can view instance IP addresses on the Instances page in the Admin App.

  • Ensure that the new instances you are adding meet the minimum hardware, OS, and networking requirements.
  • Record the Docker volume drivers currently used by services and jobs across all existing instances. You need to install all of these volume drivers on the new instance that you're adding.

    To find the volume drivers currently in use by your system, run this command on each system instance:

    docker volume ls

    Take note of each value for the DRIVER field.
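
    For example, to print only the distinct volume driver names in use on an instance (a sketch using the --format option of docker volume ls):

    docker volume ls --format '{{.Driver}}' | sort -u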

Install Docker on each server or virtual machine

On each server or virtual machine that is to be an HCP for cloud scale instance:

Procedure

  1. In a terminal window, verify whether Docker 1.13.1 or later is installed:

    docker --version
  2. If Docker is not installed or if you have a version before 1.13.1, install the current Docker version suggested by your operating system.

    The installation method you use depends on your operating system. See the Docker website for instructions.

Configure Docker on each server or virtual machine

Before installing the product, configure Docker with settings suitable for your environment. For guidance on configuring and running Docker, see the applicable Docker documentation.

Procedure

  1. Ensure that the Docker installation folder on each instance has at least 20 GB available for storing the product Docker images.

  2. Ensure that the Docker storage driver is configured correctly on each instance. After installation, changing the Docker storage driver requires reinstalling the product.

    To view the current Docker storage driver on an instance, run: docker info
  3. To enable SELinux on the system instances, use a Docker storage driver that SELinux supports.

    The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
  4. If you are using the Docker devicemapper storage driver, ensure that there's at least 40 GB of Docker metadata storage space available on each instance.

    The product needs 20 GB to install successfully and an additional 20 GB to update successfully to a later version. To view Docker metadata storage usage on an instance, run: docker info

Next steps

On a production system, do not run devicemapper in loop-lvm mode. This can cause slow performance or, on certain Linux distributions, the product might not have enough space to run.

(Optional) Configure Docker volume drivers

If any services or jobs on your system are using Docker volume drivers (that is, not the bind-mount setting) for storing data, you need to install those volume drivers on the new instance that you are adding. If you don't, jobs and services might fail to run on the new instance.

Volume drivers are provided by Docker and other third-party developers, not by the system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.

Configure maximum map count setting

You need to configure a value in the file sysctl.conf.

Procedure

  1. On each server or virtual machine that is to be a system instance, open the file /etc/sysctl.conf.

  2. Append this line: vm.max_map_count = 262144

    If the line already exists, ensure that the value is greater than or equal to 262144.
  3. Save and close the file.
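
For example, here is a sketch of the same change made from a shell, followed by applying and verifying it without a restart:

  echo 'vm.max_map_count = 262144' | sudo tee -a /etc/sysctl.conf
  sudo sysctl -p                  # reload settings from /etc/sysctl.conf
  sysctl vm.max_map_count         # verify the running value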

Optional: Enable or disable SELinux on each server or virtual machine

You should decide whether you want to run SELinux on system instances before installation.

Procedure

  1. Enable or disable SELinux on each instance.

  2. Restart the instance.

Configure the firewall rules on each server or virtual machine

Before you begin

Determine the port values currently used by your system. To do this, on any instance, view the file install_path/config/network.config.
On each server or virtual machine that is to be a system instance:

Procedure

  1. Edit the firewall rules to allow communication over all network ports that you want your system to use. You do this using a firewall management tool such as firewalld.

  2. Restart the server or virtual machine.
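
For example, here is a minimal firewalld sketch that opens the default external ports listed earlier in this section; substitute the port values from your own network.config:

  sudo firewall-cmd --permanent --add-port=443/tcp
  sudo firewall-cmd --permanent --add-port=8000/tcp
  sudo firewall-cmd --permanent --add-port=8443/tcp
  sudo firewall-cmd --permanent --add-port=9099/tcp
  sudo firewall-cmd --reload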

Install and configure NTP

Install NTP (Network Time Protocol) on the new server or virtual machine and configure it to use the same time source as the other system instances. For information, see http://support.ntp.org.
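
For example, here is a sketch that assumes chrony, the default NTP implementation on recent RHEL releases; the time source shown is a placeholder for the one used by your other instances:

  echo 'server ntp.example.com iburst' | sudo tee -a /etc/chrony.conf
  sudo systemctl enable --now chronyd
  chronyc sources                 # confirm the expected time source is in use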

Run Docker on each server or virtual machine

On each server or virtual machine that is to be a system instance, you need to start Docker and keep it running. You can use whatever tools you typically use for keeping services running in your environment.

For example, to run Docker using systemd:

Procedure

  1. Verify that Docker is running:

    systemctl status docker
  2. If Docker is not running, start the docker service:

    sudo systemctl start docker
  3. (Optional) Configure the Docker service to start automatically when you restart the server or virtual machine:

    sudo systemctl enable docker

Unpack the installation package

On each server or virtual machine that is to be a system instance:

Procedure

  1. Download the installation package hcpcs-version_number.tgz and the MD5 checksum file hcpcs-version_number.tgz.md5 and store them in a folder on the server or virtual machine.

  2. Verify the integrity of the installation package. For example:

    md5sum -c hcpcs-version_number.tgz.md5

    If the package integrity is verified, the command displays OK.
  3. In the largest disk partition on the server or virtual machine, create a folder named install_path/hcpcs. For example:

    mkdir /opt/hcpcs
  4. Move the installation package from the folder where you stored it to install_path/hcpcs. For example:

    mv hcpcs-version_number.tgz /opt/hcpcs/hcpcs-version_number.tgz
  5. Navigate to the installation folder. For example:

    cd /opt/hcpcs
  6. Unpack the installation package. For example:

    tar -zxf hcpcs-version_number.tgz

    A number of directories are created within the installation folder.
    Note

    If you encounter problems unpacking the installation file (for example, the error message "tar: This does not look like a tar archive"), the file might have been packed multiple times during download. Use the following commands to fully extract the file:

    $ gunzip hcpcs-version_number.tgz

    $ mv hcpcs-version_number.tar hcpcs-version_number.tgz

    $ tar -zxf hcpcs-version_number.tgz

  7. Run the installation script install:

    ./install
    Note
    • Don't change directories after running the installation script. The following tasks are performed in your current folder.
    • The installation script can be run only one time on each instance. You cannot rerun this script to try to repair or upgrade a system instance.

Set up networking

On each server or virtual machine that is to be a system instance, edit the file installation-folder/config/network.config to be identical to the copies of the same file on the existing system instances.
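
Rather than editing the file by hand, you can copy it from an existing instance. For example (a sketch; the installation path /opt/hcpcs and the host name existing-instance are placeholders):

  scp existing-instance:/opt/hcpcs/config/network.config /opt/hcpcs/config/network.config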

Run the setup script on each server or virtual machine

Before you begin

Note
  • When installing a multi-instance system, make sure you specify the same list of master instance IP addresses on every instance that you are installing.
  • When entering IP address lists, do not separate IP addresses with spaces. For example, the following is correct:

    sudo install_path/hcpcs/bin/setup -i 192.0.2.4 -m 192.0.2.0,192.0.2.1,192.0.2.3

On each server or virtual machine that is to be a system instance:

Procedure

  1. Run the script setup with the applicable options:

    Option | Description
    -i | The external network IP address for the instance on which you're running the script.
    -I | The internal network IP address for the instance on which you're running the script.
    -m | Comma-separated list of external network IP addresses of each master instance.
    -M | Comma-separated list of internal network IP addresses of each master instance.

    Use the following table to determine which options to use:

    Number of instances in the system | Network type usage | Options to use
    Multiple | Single network type for all services | Either -i and -m, or -I and -M
    Multiple | Internal for some services, external for others | All of these: -i, -I, -m, -M
    Single | Single network type for all services | Either -i or -I
    Single | Internal for some services, external for others | Both -i and -I

Results

Note: If the terminal displays Docker errors when you run the setup script, ensure that Docker is running.
The following example sets up a single-instance system that uses only one network type for all services:

sudo install_path/hcpcs/bin/setup -i 192.0.2.4

To set up a multi-instance system that uses both internal and external networks, type the command in this format:

sudo install_path/hcpcs/bin/setup -i external_instance_ip -I internal_instance_ip -m external_master_ips_list -M internal_master_ips_list

For example:

sudo install_path/hcpcs/bin/setup -i 192.0.2.4 -I 10.236.1.0 -m 192.0.2.0,192.0.2.1,192.0.2.3 -M 10.236.1.1,10.236.1.2,10.236.1.3

The following table shows sample commands to create a four-instance system. Each command is entered on a different server or virtual machine that is to be a system instance. The resulting system contains three master instances and one worker instance and uses both internal and external networks.

Instance internal IP | Instance external IP | Master or worker | Command
192.0.2.1 | 10.236.1.1 | Master | sudo install_path/hcpcs/bin/setup -I 192.0.2.1 -i 10.236.1.1 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.2 | 10.236.1.2 | Master | sudo install_path/hcpcs/bin/setup -I 192.0.2.2 -i 10.236.1.2 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.3 | 10.236.1.3 | Master | sudo install_path/hcpcs/bin/setup -I 192.0.2.3 -i 10.236.1.3 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.4 | 10.236.1.4 | Worker | sudo install_path/hcpcs/bin/setup -I 192.0.2.4 -i 10.236.1.4 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3

Start the application on each server or virtual machine

On each server or virtual machine that is to be a system instance:

Procedure

  1. Start the run script using whatever method you usually use to run scripts.

    Important: Ensure that the method you use can keep the run script running and can automatically restart it if a server restarts or there is another availability event.

Results

When the service starts, the server or virtual machine automatically joins the system as a new instance.

Here are some examples of how you can start the script:

  • You can run the script in the foreground:

    sudo install_path/product/bin/run

    When you run the run script this way, the script does not automatically complete, but instead remains running in the foreground.

  • You can run the script as a service using systemd:
    1. Copy the product .service file to the appropriate location for your OS. For example:

      cp install_path/product/bin/product.service /etc/systemd/system

    2. Enable and start the product.service service:
      sudo systemctl enable product.service
      sudo systemctl start product.service

Configure services and jobs on the new instances

The system does not automatically begin running services on the instances you've added. You need to manually configure services to run on those new instances.

Also, depending on how your jobs are configured, jobs might not run on the new instances that you've added. You need to manually configure jobs to run on the instances.

Viewing instances

You can use the Admin App, CLI, and REST API to view a list of all instances in the system.

Viewing all instances

To view all instances, in the Admin App, click Dashboard > Instances.

The page shows all instances in the system. Each instance is identified by its IP address.

GUID-F6C9E700-DA8E-4C87-9084-8BD9DA87D8B1-low.png

This table describes the information shown for each instance.

Property | Description
State | Up: The instance is reachable by other instances in the system. Down: The instance cannot be reached by other instances in the system.
Services | The number of services running on the instance.
Service Units | The total number of service units for all services and job types running on the instance, out of the best-practice service unit limit for the instance. An instance with a higher number of service units is likely to be more heavily used by the system than an instance with a lower number of service units. The Instances page displays a blue bar for instances running less than the best-practice service unit limit and a red bar for instances running more than the limit.
Load Average | The load averages for the instance for the past one, five, and ten minutes.
CPU | The sum of the percentage utilization for each CPU core in the instance.
Memory Allocated | The amount of RAM on the instance that's allocated to all services running on that instance, and the percentage of this allocated RAM to the total RAM for the instance.
Memory Total | The total amount of RAM for the instance.
Disk Used | The current amount of disk space that your system is using in the partition on which it is installed.
Disk Free | The amount of free disk space in the partition in which your system is installed.

Viewing the services running on an instance

To view the services running on an individual instance, in the Admin App:

Procedure

  1. Click Dashboard > Instances.

  2. Select the instance you want.

    The page lists all services running on the instance.

    For each service, the page shows:

    • The service name
    • The service state:
      • Healthy: The service is running normally.
      • Unconfigured: The service has yet to be configured and deployed.
      • Deploying: The system is currently starting or restarting the service. This can happen when:
        • You move the service to run on a completely different set of instances.
        • You repair a service.
      • Balancing: The service is running normally, but performing background maintenance.
      • Under-protected: In a multi-instance system, one or more of the instances on which a service is configured to run are offline.
      • Failed: The service is not running or the system cannot communicate with the service.
    • CPU Usage: The current percentage CPU usage for the service across all instances on which it's running.
    • Memory: The current RAM usage for the service across all instances on which it's running.
    • Disk Used: The current total amount of disk space that the service is using across all instances on which it's running.

Related CLI commands

getInstance

listInstances

Related REST API methods

GET /instances

GET /instances/{uuid}

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Removing instances

You typically remove an instance from your system in these situations:

  • You are retiring the hardware on which the instance runs.
  • The instance is in the Down state and cannot be recovered.
  • You want to run a system with fewer instances.

(Optional) Shut down the instance you want to remove

If the instance has already shut down because of a failure, the instance is in the Down state. Your system automatically tries to move all services from that instance to other instances in the system. After all services have been moved, the instance is eligible for removal. Continue to the next step.

If the instance that you want to remove is in the Up state, you need to shut the instance down yourself before you can remove it from the system.

Procedure

  1. Move all the services that the instance is currently running to the other instances in the system.

    Important: Shutting down an instance without first moving its services can cause data loss.
  2. If the system has jobs configured to run only on the instance you are shutting down, configure those jobs to run on other instances.

  3. Stop the run script from running. You do this using whatever method you're currently using to run the script.

  4. Run this command to stop all system Docker containers on the instance:

    sudo <installation-folder>/bin/stop
  5. Delete the system Docker containers:

    1. List all Docker containers:

      sudo docker ps
    2. Note the container IDs for all containers that use a com.hds.ensemble or com.hitachi.aspen image.

    3. Delete each of those containers:

      sudo docker rm <container-id>
  6. Delete the system Docker images:

    1. List all Docker images:

      sudo docker images
    2. Note the image IDs for all images that use a com.hds.ensemble or com.hitachi.aspen repository.

    3. Delete each of those images:

      sudo docker rmi <image-id>
  7. Delete the system installation folder:

    rm -rf /<installation-folder>
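
If you prefer to script steps 5 and 6, the following sketch removes the matching containers and images in one pass; it assumes GNU grep, awk, and xargs are available, so review the matches before deleting:

    sudo docker ps -a | grep -E 'com.hds.ensemble|com.hitachi.aspen' | awk '{print $1}' | xargs -r sudo docker rm
    sudo docker images | grep -E 'com.hds.ensemble|com.hitachi.aspen' | awk '{print $3}' | xargs -r sudo docker rmi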

Remove the shut-down instance from the system

Admin App instructions

Procedure

  1. Select Dashboard > Instances.

  2. Click the instance you want to remove.

  3. Click Remove Instance.

Related CLI commands

deleteInstance

Related REST API methods

DELETE /instances/{uuid}

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Replacing a failed instance

If an instance suffers an unrecoverable failure, you need to replace that instance with a new one.

Procedure

  1. In the Admin App, view the Instances page to determine whether the failed instance was a master instance.

  2. Select a new server or virtual machine to add as a new instance to the system.

  3. Remove the failed instance from the system.

    WARNING: If the failed instance was a master, after you remove the instance, you have only two master instances remaining. If any other instance fails while you are in this state, the system becomes completely unavailable until you add a third master back to the system by completing this procedure.
  4. Add the replacement instance to the system.

    Important: If the instance you are replacing was a master instance, when you run setup on the replacement instance, the list of masters that you specify for the -m option needs to include:
    • The IP addresses of the two remaining healthy master instances.
    • The IP address of the new instance that you're adding.

    For example, in a system with master instance IPs ranging from 192.0.2.1 to 192.0.2.3, if you are replacing instance 192.0.2.3 with 192.0.2.5, run setup with these options:

    sudo bin/setup -i 192.0.2.5 -m 192.0.2.1,192.0.2.2,192.0.2.5

    This does not apply when you're replacing a worker instance. In that case, specify the IP addresses of the three existing masters.

Plugins

Plugins are modular pieces of code that allow your system to perform specific activities.

Plugins are organized in groups called plugin bundles. When adding or removing plugins from your system, you work with plugin bundles, not individual plugins.

Viewing installed plugins

Use the Admin App, CLI commands, or REST API methods to view all plugin bundles and individual plugins that have been installed. You can view all individual plugins at the same time or filter the list based on plugin type.

Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Plugins.

    The Plugin Bundles tab shows all installed plugin bundles.

  3. To view all individual plugins, click the All Plugins tab.

Related CLI commands

listPlugins

Related REST API methods

GET /plugins

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Upgrading plugin bundles

To upgrade plugins, upload a new version of the bundle that contains those plugins.

You can select which version of the plugin bundle is the active one (that is, the one that connectors or stages will use). If you select the new version, all connectors and stages immediately begin using the new versions of the plugins in the bundle.

You can change the active plugin bundle version at any time.

Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Plugins.

  3. Click Upload Bundle.

  4. In the Upload Plugins window, drag and drop the new version of the plugin bundle.

  5. In the list of plugin bundles, click the row for the plugin bundle version that you want.

    If the bundle you uploaded isn't listed, click Reload Plugins.
  6. Click Set Active.

Related CLI commands

uploadPlugin

setPluginBundleActive

Related REST API methods

POST /plugins/upload

POST /plugins/bundles/{name}/{bundleVersion}/active

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Setting the active plugin bundle version

If you've uploaded multiple versions of a plugin bundle, only one version can be active at a time. The active plugin bundle version is the one that the system uses.

When you change the active version of a plugin bundle, any workflow tasks that contain connectors and stages that use the bundle immediately begin using the new active version.

Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Plugins.

  3. Click the row for the plugin bundle version that you want.

  4. Click Set Active.

Related CLI commands

setPluginBundleActive

Related REST API methods

POST /plugins/bundles/{name}/{bundleVersion}/active

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Deleting plugin bundles

To delete plugins from your system, you delete plugin bundles from the system. You cannot delete individual plugins.

Note: You cannot delete a plugin bundle, or any of its versions, if any of that bundle's plugins are currently in use by the system.
Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Plugins.

  3. Click the delete icon (The delete icon resembles a trash can) for the plugin bundle you want to remove.

Related CLI commands

deletePluginBundle

Related REST API methods

DELETE /plugins/bundles/{name}/{bundleVersion}

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Packages

You can back up your system configuration by exporting packages. You can store these package files and use them to restore your configuration in the event of a system failure.

Exporting packages

You can export the configurations for system components as package files. You can back up these package files and use them to restore your configurations in the event of a system failure.

After exporting a package, you can store it in one of your data sources. When you want to import the package, your system can retrieve it directly from the data source.

Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Packages.

  3. Click Export.

  4. Under Customize Package Description, give your package a name and, optionally, a description.

  5. Under Configuration, select any configuration items to export.

  6. Under Plugins, select any plugin bundles to export.

  7. Under Components, select any available components to export.

    If you select one component but not the components it depends on, the Admin App prompts you to add those missing components to the package.

  8. Under Validate, make sure your package is valid and then click Download Package.

  9. When your package downloads, click Download Package to download it again, or click Finish to exit.

Related CLI commands

buildPackage

downloadPackage

Related REST API methods

POST /package/build

POST /package/download

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Importing packages

To import a package, you can upload it from your computer or have your system retrieve it from one of your data sources.

After you import the package, your system runs a system task to synchronize the package components across all instances in your system.

The system can have only one imported package at a time.

Note
  • Importing a component that already exists on your system might cause conflicts and should be avoided.
  • You need to manually resolve conflicts with Components, while conflicts with Configuration are handled automatically by the system.
Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Packages.

  3. Click Import.

  4. Do one of these:

    • If the package you want to import is stored on your computer, click and drag the package file into the Upload Package window.
    • If the package you want to import is stored in one of your data sources, click the Click to Upload window and then browse for the package file.
  5. Under Package Description, review the description and then click Continue.

  6. Under Configuration, select any configuration items to import.

  7. Under Plugins, select any plugin bundles to import.

  8. Under Components, select any available components to import.

  9. Under Validate, make sure your package is valid and then click Install Package.

    Your system starts a system task to install the package components on all instances in the system.

    You can monitor the task from the current page or from the Processes page.

  10. When the task has completed and all package components have been installed, clicking Complete Install deletes the package from the system.

Related CLI commands

uploadPackage

loadPackage: loads a package from a data connection

installPackage

getPackageStatus

deletePackage

Related REST API methods

POST /package (Uploads a package)

POST /package/load (Loads a package from a data connection)

POST /package/install

GET /package (Gets the status of the imported package)

DELETE /package

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Setting a login welcome message

You can use the Admin App, CLI commands, or REST API methods to set a welcome message for the Admin App. The message appears on the app's login page.

Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Security.

  3. On the Settings tab, type a message in the Single Sign-on Welcome Message field.

  4. Click Update.

Related CLI commands

editSecuritySettings

Related REST API methods

PUT /security/settings

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Updating the system

You can update system software by uploading new update packages.

Important: Hitachi Vantara does not provide updates or security fixes for the host operating systems running on system instances.
Before updating

In order for a system to be updated:

  • All instances and services must be healthy.
  • Each service must be running on its best-practice number of instances.
  • Each instance must have enough disk space for the update.
  • All required network ports must be available on each instance.
  • There can be no in-progress package uploads or installations.
During an update
  • System availability considerations:
    • Instances shut down and restart one at a time during the upgrade. Other instances remain online and able to service requests.
    • The Admin App remains available but is in a read-only state. You can monitor the progress of the update, but you cannot make any other changes to the system.
    Note: Systems with two instances are more susceptible to availability outages during an update than systems with three or more instances.
Verifying update status

As an update runs, you can view its progress on the Configuration > Update page. Also on this page, you can view all system events related to system updates.

Results of an update

After an update, the system runs a new version of the software. Additionally:

  • If any of the built-in plugins were updated, your system automatically uses the latest versions of those plugins.
  • If an existing service is replaced with a new service, the system automatically runs that new, replacement service.
  • If any new services were added, you might need to manually configure those services to run on the system instances.
Update errors

If errors occur during an update, the Update page displays information about each error and also displays a Retry button for starting the update over again. Some errors might not be resolved by restarting the update.

If you encounter errors during an update, contact your authorized service provider.

New services and components added during an update

A system update might add new services or plugins. You need to manually configure your system to start using these new components; your system does not start using them automatically.

Applying a system update

Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Update.

  3. Click the Install tab.

  4. Click and drag the file into the Upload window.

    The update file is uploaded and the system verifies that the file is valid. This might take several minutes.
  5. On the Update page, click View in the Update Status window.

    The Verify & Apply Update page displays information about the contents of the update.
  6. To start the update, click Apply Update.

Results

The system verifies that it is ready to be updated. If it isn't, the update stops. In this case, you need to correct the problems before the update can continue.

Related CLI commands

getUpdateStatus

installUpdate

deleteUpdate

loadUpdate

uploadUpdate

Related REST API methods

GET /update

POST /update/install

DELETE /update/package

POST /update/package

POST /update/package/load (Retrieves an update package from a data connection)

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Viewing update history

You can view a list of all updates that have previously been applied to your system.

For each update, you can view the corresponding version number and the date on which it was installed.

Admin App instructions

Procedure

  1. Select Dashboard > Configuration.

  2. Click Update.

Results

The History tab lists previously installed versions and when each was installed.

For example:

[Figure: History tab on the Configuration > Update page, showing that an update replaced a previous version and the date and time of the update]

Related CLI commands

getUpdateHistory

Related REST API methods

GET /update/history

You can get help on specific REST API methods for the Admin App at REST API - Admin.

Removing the system

To completely remove your system, do the following on all instances:

Procedure

  1. Stop the run script from running. You do this using whatever method you're currently using to run the script.

  2. Run this command to stop all system Docker containers on the instance:

    sudo <installation-folder>/bin/stop
  3. Delete the system Docker containers:

    1. List all Docker containers:

      sudo docker ps
    2. Note the container IDs for all containers that use a com.hds.ensemble or com.hitachi.aspen image.

    3. Delete each of those containers:

      sudo docker rm <container-id>
  4. Delete the system Docker images:

    1. List all Docker images:

      sudo docker images
    2. Note the image IDs for all images that use a com.hds.ensemble or com.hitachi.aspen repository.

    3. Delete each of those images:

      sudo docker rmi <image-id>
  5. Delete the system installation folder:

    rm -rf /<installation-folder>

 
