Installing HCI
This chapter describes how to install a system by deploying a number of software instances.
After you've set up all the instances that you want, you log into the Admin App to deploy the system.
Items you need
To install a system, you need the HCI-<version-number>.tgz file.
This archive file includes the software installation files needed to install your HCI instance.
Considerations for Solr backup and restore
To utilize the Solr backup and restore functionality of HCI, the following prerequisites need to be met on your system:
- An external, dedicated NFS mount point for each HCI cluster.
- A directory named solrBackups created on each node in your HCI cluster, located at the following path: install_path/solrBackups
- The file system from step 1 mounted at the directory from step 2 on each node.
Note: The mechanism used to mount the file system must be able to persist through node reboots.
- Sufficient disk space available on the mounted file system to successfully back up your HCI index.
- Sufficient disk space reserved on your HCI nodes to successfully restore your HCI index.
It is recommended that you set up all of these mount points before installing HCI.
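For example, assuming an NFS server at nfs.example.com exporting /exports/hci-solr (both hypothetical names), an /etc/fstab entry like the following mounts the file system at the required path and persists through reboots:
nfs.example.com:/exports/hci-solr  install_path/solrBackups  nfs  defaults,_netdev  0 0
After adding the entry, run sudo mount -a on each node to mount the file system immediately.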
For more information about Solr backup and restore after installing HCI, refer to the Workflow Designer Help.
HCI installation process
HCI installation consists of the following steps:
- Decide how many instances to deploy
- Configure your networking environment
- Optional: Select master instances
- Install Docker on each server or virtual machine
- Configure Docker on each server or virtual machine
- Optional: Install Docker volume drivers
- Optional: Enable or disable SELinux on each server or virtual machine
- Configure maximum map count setting
- Configure the firewall rules on each server or virtual machine
- Run Docker on each server or virtual machine
- Unpack the installation package
- (Optional) Reconfigure network.config on each server or virtual machine
- (Optional) Reconfigure volume.config on each server or virtual machine
- Run the setup script on each server or virtual machine
- Start the application on each server or virtual machine
- Optional: Configure NTP
- Access deployment wizard
- Deploy the system
- Verify the created volumes
- Distribute services among system instances
- Configure the system for your users
Decide how many instances to deploy
The minimum for a production system is four instances.
Procedure
Decide how many instances you need.
Select the servers or virtual machines in your environment that you intend to use as HCI instances.
Configure your networking environment
Procedure
Determine what ports each HCI service should use. You can use the default ports for each service or specify different ones.
In either case, these restrictions apply:
- Every port must be accessible from all instances in the system.
- Some ports must be accessible from outside the system.
- All port values must be unique; no two services, whether System services or HCI services, can share the same port.
Determine what types of networks, either internal or external, to use for each service.
If you're using both internal and external networks, each instance in the system must have IP addresses on both your internal and external networks.
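To spot-check that a port is reachable from another instance, a generic netcat probe works if netcat is installed (port 8000 is shown because the Admin App listens there by default; substitute the addresses and ports you plan to use):
nc -zv 192.0.2.4 8000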
Optional: Select master instances
If you are installing a multi-instance system, the system must have either one or three master instances, regardless of the total number of instances it includes.
You need to select which of the instances in your system will be master instances.
- For a production system, use three master instances.
- You cannot add master instances to a system after it's installed. You can, however, add any number of worker instances.
If you are deploying a single-instance system, that instance will automatically be configured as a master instance and run all services for the system.
Procedure
Select which of the instances in your system are intended as master instances.
Make note of the master instance IP addresses.
Note: To ensure system availability, run master instances on separate physical hardware from each other, if possible.
Install Docker on each server or virtual machine
On each server or virtual machine that is to be an HCI instance:
Procedure
In a terminal window, verify whether Docker 1.13.1 or later is installed:
docker --version
If Docker is not installed or if you have a version before 1.13.1, install the current Docker version suggested by your operating system.
The installation method you use depends on your operating system. See the Docker website for instructions.
Configure Docker on each server or virtual machine
Procedure
Ensure that the Docker installation folder on each instance has at least 20 GB available for storing the product Docker images. The product needs 20 GB to install successfully and an additional 20 GB to successfully update to a later version.
Ensure that the Docker storage driver is configured correctly on each instance. After installation, changing the Docker storage driver requires reinstalling the product. To view the current Docker storage driver on an instance, run: docker info
To enable SELinux on the system instances, use a Docker storage driver that SELinux supports. The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
If you are using the Docker devicemapper storage driver, ensure that there's at least 40 GB of Docker metadata storage space available on each instance. To view Docker metadata storage usage on an instance, run: docker info
Note: By default, on certain Linux distributions, Docker runs devicemapper in loop-lvm mode. This can cause slow performance or, on certain Linux distributions, the product might not have enough space to run.
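To check which storage driver Docker is using without reading the full docker info output, the Docker CLI's --format flag works (standard Docker, not HCI-specific):
docker info --format '{{.Driver}}'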
Optional: Install Docker volume drivers
Volume drivers are provided by Docker and other third-party developers, not by the HCI system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.
Procedure
If any services on your system will use Docker volume drivers (instead of the default bind-mount setting) for storing data, install those volume drivers on all instances in the system. If you don't, services might fail to run.
Optional: Enable or disable SELinux on each server or virtual machine
You should decide whether you want to run SELinux on system instances before installation.
Procedure
Enable or disable SELinux on each instance.
Restart the instance.
Configure maximum map count setting
HCI requires that the vm.max_map_count setting be configured in the file sysctl.conf.
Procedure
On each server or virtual machine that is to be a system instance, open the file /etc/sysctl.conf.
Append this line:
vm.max_map_count = 262144
If the line already exists, ensure that the value is greater than or equal to 262144.
Save and close the file.
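To apply the new value without restarting the server (a standard sysctl invocation, not HCI-specific), run:
sudo sysctl -p
Alternatively, sudo sysctl -w vm.max_map_count=262144 sets the value for the current boot only.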
Run Docker on each server or virtual machine
On each server or virtual machine that is to be a system instance, you need to start Docker and keep it running. You can use whatever tools you typically use for keeping services running in your environment.
For example, to run Docker using systemd:
Procedure
Verify that Docker is running:
systemctl status docker
If Docker is not running, start the docker service:
sudo systemctl start docker
(Optional) Configure the Docker service to start automatically when you restart the server or virtual machine:
sudo systemctl enable docker
Unpack the installation package
Procedure
Download the product installation package and MD5 checksum file and store both in a folder on the server or virtual machine.
Verify the integrity of the installation package:
md5sum -c HCI-version_number.tgz.md5
If the package integrity is verified, the command displays OK.
In the largest disk partition on the server or virtual machine, create a product installation folder:
mkdir install_path/hci
Move the installation package from the folder where you stored it to the product installation folder:
mv HCI-version_number.tgz install_path/hci/HCI-version_number.tgz
Navigate to the installation folder:
cd install_path/hci
Unpack the installation package:
tar -zxf HCI-version_number.tgz
A number of directories are created within the installation folder.
Run the install script:
sudo ./install
Notes:
- Don't change directories after running the installation script. The following tasks are performed in your current folder.
- The installation script can be run only one time on each instance. You cannot rerun this script to try to repair or upgrade a system instance.
Configure the firewall rules on each server or virtual machine
Before you begin
Determine the ports your system uses. To do this, on any instance, view the file install_path/hci/config/network.config.
Procedure
Edit the firewall rules to allow communication over all network ports that you want your system to use. You do this using a firewall management tool such as firewalld.
Restart the server or virtual machine.
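For example, with firewalld, commands like the following open a port and apply the change (2181 is just the default ZooKeeper port from network.config; repeat for each port your configuration uses):
sudo firewall-cmd --permanent --add-port=2181/tcp
sudo firewall-cmd --reload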
(Optional) Reconfigure network.config on each server or virtual machine
Before you begin
You cannot change networking for System services after running the run script or after starting HCI.service using systemd.
You can change these networking settings for each service in your product:
- The ports that the service uses.
- The network to listen on for incoming traffic, either internal or external.
Procedure
On each server or virtual machine that is to be an HCI instance, use a text editor to open the file install_path/hci/config/network.config.
The file contains two types of lines for each service:
- Network type assignments:
com.hds.ensemble.plugins.service.service_name_interface=[internal|external]
For example:
com.hds.ensemble.plugins.service.zookeeper_interface=internal
- Port number assignments:
com.hds.ensemble.plugins.service.service_name.port.port_name=port_number
For example:
com.hds.ensemble.plugins.service.zookeeper.port.PRIMARY_PORT=2181
Type new port values for the services you want to configure.
Note: If you reconfigure service ports, make sure that each port value you assign is unique across all services, both System services and HCI services.
On the lines containing _interface, specify the network that each service should use. Valid values are internal and external.
Note: By default, all System services are set to internal. If you're only using a single network, you can leave these settings as they are. This is because all system instances are assigned both internal and external IP addresses in HCI; if you're only using a single network type, the internal and external IP addresses for each instance are identical.
Save your changes and exit the text editor.
Next steps
Ensure that the file network.config is identical on all HCI instances.
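One quick way to confirm that the file matches everywhere (a generic check, not an HCI tool) is to compare checksums:
md5sum install_path/hci/config/network.config
Run this on each instance; the output must be identical on all of them.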
(Optional) Reconfigure volume.config on each server or virtual machine
Before you begin
You cannot change volumes for System services after using the run script or after starting HCI.service with systemd.
By default, each of the System services is configured not to use volumes for storage (each service uses the bind-mount option). If you want to change this configuration, do so now, before running the product startup scripts.
You configure volumes for HCI services later when using the deployment wizard.
To configure volumes for the System services:
Procedure
On each server or virtual machine that is to be an HCI instance, use a text editor to open the file install_path/hci/config/volume.config.
This file contains information about the volumes used by the System services. For each volume, the file contains lines that specify the following:
- The name of the volume:
com.hds.ensemble.plugins.service.service_name.volume_name=volume_name
Note: Do not edit the volume names. The default volume name values contain variables (SERVICE_PLUGIN_NAME and INSTANCE_UUID) that ensure that each volume gets a unique name.
- The volume driver that the volume uses:
com.hds.ensemble.plugins.service.service_name.volume_driver=[volume_driver_name | bind-mount]
- The configuration options used by the volume driver. Each option is listed on its own line:
com.hds.ensemble.plugins.service.service_name.volume_driver_opt_option_number=volume_driver_option_and_value
For example, these lines describe the volume that the Admin-App service uses for storing its logs:
com.hds.ensemble.plugins.service.adminApp.log_volume_name=SERVICE_PLUGIN_NAME.INSTANCE_UUID.log
com.hds.ensemble.plugins.service.adminApp.log_volume_driver=bind-mount
com.hds.ensemble.plugins.service.adminApp.log_volume_driver_opt_1=hostpath=/home/hci/log/com.hds.ensemble.plugins.service.adminApp/
For each volume that you want to configure, you can edit the following:
- The volume driver for the volume to use. To do this, replace bind-mount with the name of the volume driver you want.
Volume drivers are provided by Docker and other third-party developers, not by the HCI system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.
- On the lines that contain _opt, the options for the volume driver.
For information about the options you can configure, see the documentation for the volume driver that you're using.
Caution: Option/value pairs can specify where data is written in each volume. These considerations apply:
- Each volume that you can configure here must write data to a unique location.
- The SERVICE_PLUGIN and INSTANCE_UUID variables cannot be used in option/value pairs.
- Make sure the options and values you specify are valid. Incorrect options or values can cause system deployment to fail or volumes to be set up incorrectly. For information on configuration, see the volume driver's documentation.
Tip: Create test volumes using the docker volume create command with your option/value pairs. Then, to test the volumes you've created, run the command docker run hello-world with the --volume option.
For example, these lines configure the Marathon service's data volume to use the local-persist volume driver:
com.hds.ensemble.plugins.service.marathon.data_volume_name=SERVICE_PLUGIN_NAME.INSTANCE_UUID.data
com.hds.ensemble.plugins.service.marathon.data_volume_driver=local-persist
com.hds.ensemble.plugins.service.marathon.data_volume_driver_opt_1=mountpoint=/home/hci/data/com.hds.ensemble.plugins.service.marathon/
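As a concrete illustration of the tip above, the following commands create and exercise a test volume using Docker's built-in local driver with NFS options (the server address, export path, and volume name are placeholders):
docker volume create --driver local --opt type=nfs --opt o=addr=192.0.2.10,rw --opt device=:/exports/test testvol
docker run --rm --volume testvol:/data hello-world
docker volume rm testvol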
Run the setup script on each server or virtual machine
Before you begin
- When installing a multi-instance system, make sure you specify the same list of master instance IP addresses on every instance that you are installing.
- When entering IP address lists, do not separate IP addresses with spaces. For example, the following is correct:
sudo install_path/hci/bin/setup -i 192.0.2.4 -m 192.0.2.0,192.0.2.1,192.0.2.3
Procedure
Run the setup script with the applicable options.
Use the following table to determine which options to use:
Option | Description
-i IPADDRESS | The external network IP address for the instance on which you're running the script. If not specified, this value is discovered automatically.
-I IPADDRESS | The internal network IP address for the instance on which you're running the script. If not specified, this value is the same as the external IP address.
-m | Comma-separated list of external network IP addresses of each master instance.
-M | Comma-separated list of internal network IP addresses of each master instance.
-d | Attempts to automatically discover the real master list from the provided masters.
--hci_uid UID | Sets the desired user ID (UID) for the HCI USER at install time only.
--hci_gid GID | Sets the desired group ID (GID) for the HCI GROUP at install time only.
--mesos_uid UID | Sets the desired UID for the MESOS USER at install time only.
--mesos_gid GID | Sets the desired GID for the MESOS GROUP at install time only.
--haproxy_uid UID | Sets the desired UID for the HAPROXY USER at install time only.
--haproxy_gid GID | Sets the desired GID for the HAPROXY GROUP at install time only.
--zk_uid UID | Sets the desired UID for the ZOOKEEPER USER at install time only.
--zk_gid GID | Sets the desired GID for the ZOOKEEPER GROUP at install time only.
Important: Each UID and GID value must be greater than 1000, less than or equal to 65533, and the same on all nodes in a cluster.
Use this table to determine which network options to use:
Number of instances in the system | Network type usage | Options to use
Multiple | Single network type for all services | Either -i and -m, or -I and -M
Multiple | Internal for some services, external for others | All of these: -i, -I, -m, -M
Single | Single network type for all services | Either -i or -I
Single | Internal for some services, external for others | Both -i and -I
Before you run the setup script, ensure that Docker is running.
For example, to set up a single-instance system:
sudo install_path/hci/bin/setup -i 192.0.2.4
To set up a multi-instance system that uses both internal and external networks, type the command in this format:
sudo install_path/hci/bin/setup -i external_instance_ip -I internal_instance_ip -m external_master_ips_list -M internal_master_ips_list
For example:
sudo install_path/hci/bin/setup -i 192.0.2.4 -I 10.236.1.0 -m 192.0.2.0,192.0.2.1,192.0.2.3 -M 10.236.1.1,10.236.1.2,10.236.1.3
The following table shows sample commands to create a four-instance system. Each command is entered on a different server or virtual machine that is to be a system instance. The resulting system contains three master instances and one worker instance and uses both internal and external networks.
Instance internal IP | Instance external IP | Master or worker | Command
192.0.2.1 | 10.236.1.1 | Master | sudo install_path/hci/bin/setup -I 192.0.2.1 -i 10.236.1.1 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.2 | 10.236.1.2 | Master | sudo install_path/hci/bin/setup -I 192.0.2.2 -i 10.236.1.2 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.3 | 10.236.1.3 | Master | sudo install_path/hci/bin/setup -I 192.0.2.3 -i 10.236.1.3 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.4 | 10.236.1.4 | Worker | sudo install_path/hci/bin/setup -I 192.0.2.4 -i 10.236.1.4 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
Start the application on each server or virtual machine
Start the run script using whatever methods you usually use to run scripts.
Important: Ensure that the method you use can keep the run script running and can automatically restart it in the event of a server restart or other availability event.
- You can run the script in the foreground:
sudo install_path/hci/bin/run
When executed this way, the run script does not automatically complete, but instead remains running in the foreground.
- You can run the script as a service using systemd:
- Open the HCI.service file, located in install_path/hci/bin, in a text editor.
- Verify that the following two lines have the correct install_path:
ExecStart=install_path/hci/bin/run
ExecStopPost=install_path/hci/bin/stop
- Save the file.
- Copy the HCI.service file to the appropriate location for your OS:
cp install_path/hci/bin/HCI.service /etc/systemd/system
- Enable and start HCI.service:
sudo systemctl daemon-reload
sudo systemctl enable HCI.service
sudo systemctl start HCI.service
Note: When you enable HCI.service, systemctl might display this message:
The unit files have no [Install] section. They are not meant to be enabled using systemctl. Possible reasons for having this kind of units are:
1) A unit may be statically enabled by being symlinked from another unit's .wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for some other unit which has a requirement dependency on it.
3) A unit may be started when needed via activation (socket, path, timer, D-Bus, udev, scripted systemctl call, ...).
Depending on your OS, HCI.service may or may not have been successfully enabled. To avoid this issue, make sure that you copy HCI.service to the appropriate location, typically /etc/systemd/system.
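To confirm that the application started, a standard systemd status check works:
sudo systemctl status HCI.service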
Optional: Configure NTP
Procedure
Configure NTP (network time protocol) so that each instance uses the same time source.
For information on NTP, see http://support.ntp.org/.
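For example, on distributions that use chrony (your environment might use ntpd or another client instead), you could point every instance at the same time source in /etc/chrony.conf and then restart the service; the server name below is a placeholder for your site's time source:
server time.example.com iburst
sudo systemctl restart chronyd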
Access deployment wizard
After creating all of your instances, you need to go to the service deployment wizard in the Admin App.
However, if you configured the System services networking incorrectly, the Admin App might fail to appear. This can happen, for example, if the network.config file is not identical on all instances. For error information, view the file install_path/hci/config/cluster.config or the output information logged by the run script.
To fix this issue, do the following:
- Stop the run script. You can do this using whatever method you're currently using to run the script.
- Run this command to stop all HCI Docker containers on the instance:
sudo install_path/hci/bin/stop
- Delete the contents of the folder install_path/hci from all instances.
- Delete any Docker volumes created during the installation:
docker volume rm volume-name
- Begin the installation again from Unpack the installation package.
To access the service deployment wizard:
Procedure
Open a web browser and go to:
https://instance_ip_address:8000
On the Welcome page, set a password for the admin user account. Then click Set Admin Password.
Important: Do not lose or forget this password.
On the Licensing page, take one of these actions:
- If you have your purchased license file, drag and drop it into the Upload License section.
- If you've purchased a license but have not yet received it, make note of the value in the System ID section on the Licensing page and contact your sales representative.
- To use the system for a limited amount of time with the pre-installed trial license, click Continue.
- If for some reason the trial license failed to install, a copy is included in the HCI-<version-number>.tgz installation package that you can upload on the Licensing page. The trial license is located in the installation package at:
install_path/product/<version>/trial-<version>.plk
On the Set Cluster Hostname/IP page, specify the hostname for your system. Omitting this can cause links in the Admin App to function incorrectly.
On the Choose Deployment page, select the HCI deployment type that you purchased, either Hitachi Content Search or Hitachi Content Monitor (HCM). Then click Continue.
The Confirm Cluster Topology page shows all detected instances. If your system includes multiple instances, make sure that all instances that you expect to see are listed.
(Optional) Configure service networking
Procedure
Click the Click here link in the Advanced Network Configuration section.
On the Services tab, select a service to configure.
On the Networks tab:
- Optionally, configure the ports that the service should use.
Note: If you reconfigure service ports, make sure that each port value you assign is unique across all services, both System services and HCI services.
- Optionally, specify which network the service should bind to, either Internal or External.
By default, the Search-App, Monitor-App, and Admin-App services have the External option selected and all other services are set to Internal. If you're only using a single network, you can leave these settings as they are. This is because all system instances are assigned both internal and external IP addresses in HCI; if you're only using a single network type, the internal and external IP addresses for each instance are identical.
(Optional) Configure volumes for services and jobs
Procedure
Click the Click here link in the Advanced Configuration section.
Click the Services or Jobs tab and select a service or job type to configure.
Click the Volumes tab. This tab displays the system-managed volumes that the service supports. By default, each built-in service has both Data and Log volumes.
For each volume, provide Docker volume creation information:
In the Volume Driver field, specify the name of the volume driver that the volume should use. To not use any volume driver, specify bind-mount, which is the default setting.
Note- Volume drivers are provided by Docker and other third-party developers, not by the Content Intelligence system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.
- The Workflow-Agent job type supports only the default bind-mount setting. You cannot specify a volume driver for this job type.
In the Option and Value fields, specify any optional parameters and their corresponding values for the volume driver:
- If you're using the bind-mount setting, you can edit the value for the hostpath option to change the path where the volume's data is stored on each system instance. However, this must be a path within the Content Intelligence installation folder.
- If you're using a volume driver:
- Click the delete icon to remove the default hostpath option. This option applies only when you are using the bind-mount setting.
- Type the name of a volume driver option in the Option field. Then type the corresponding parameter for that option in the Value field.
- Click the plus-sign icon to add the option/value pair.
- Repeat this procedure for each option/value pair you want to add.
- For considerations regarding adding option/value pairs, see Considerations for option/value pairs.
Repeat this procedure for each service or job type that you want to configure.
Considerations for option/value pairs
- Each service instance must write its data to a unique location. A unique location can be a file system or a unique path on a shared external storage server.
Configurations in which multiple service instances write to the same volume, or in which multiple volumes are backed by the same storage location, are unacceptable.
- For persistent (non-floating) services, favor using the ${container_inst_uuid} variable in your option/value pairs. For persistent services, this variable resolves to a value that's unique to each service instance. See Available variables.
This is especially useful if the volume driver you're using is backed by a shared server. By providing a variable that resolves to a unique value, the volume driver can use the resolved variable to create unique directories on the shared server.
However, some volume drivers, such as Docker's local volume driver, do not support automatic directory creation. If you're using such a volume driver, you need to create volume directories yourself. For an example of how to handle this, see Example: Docker local volume driver for Database service log volume.
- Floating services do not support volumes that are backed by shared servers. This is because floating services do not have access to variables that resolve to unique values per service instance. See Available variables.
- For services with multiple types, consider specifying the type name as a part of the path to where service instances of that type write their data:
/example/typeA/${node_ip}
/example/typeB/${node_ip}
For information about the ${node_ip} variable, see Available variables.
- Make sure the options and values you specify are valid. Incorrect options or values can cause system deployment to fail or volumes to be set up incorrectly. For information on volumes, see the volume driver's documentation.
Available variables
You can include these variables when configuring volume options:
- ${install_dir} is the product installation directory.
- ${data_dir} is equal to ${install_dir}/data.
- ${log_dir} is equal to ${install_dir}/log.
- ${volume_def_name} is the name of the volume you are configuring.
- ${plugin_name} is the name of the underlying service plugin.
- ${container_inst_uuid} is the UUID for the Docker container in which the service instance or job runs. For floating services, this is the same value for all instances of the service.
- ${node_ip} is the IP address for the system instance on which the service or job is running. This cannot be used for floating services.
- ${instance_uuid} is the UUID for the system instance. This cannot be used for floating services. For services with multiple types, this variable resolves to the same value for all instances of the service, regardless of their types.
Example: Docker local volume driver for Database service log volume
The built-in Database service has a volume called log, which stores the service's logs. The log volume has this default configuration:
- Volume driver: bind-mount
- Option: hostpath, Value: ${log_dir}/${plugin_name}/${container_inst_uuid}
With this configuration, after the system is deployed, logs for the Database service are stored at a unique path on each system instance that runs the Database service:
/<install-dir>/log/com.hds.ensemble.plugins.service.cassandra/service-instance-uuid
For example:
/home/hci/log/com.hds.ensemble.plugins.service.cassandra/12345678-1234-1234-1234-123456789012
Alternatively, you can configure the Database service to use Docker's built-in local volume driver to store logs on an NFS server. To do this:
- Log in to your NFS server.
- Create a directory.
- Within that directory, create one directory for each of the instances in your system. Name each one using the instance IP address.
Note: In this example, you need to create these directories yourself because the local storage driver will not create them automatically.
- Back in the system deployment wizard, in the Volume Driver field, specify local.
- Specify these options and values:
Option | Value
type | nfs
o | addr=nfs-server-ip,rw
device | :/path-to-directory-created-above/${node_ip}
With this configuration, each instance of the Database service stores its logs in a different directory on your NFS server.
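For reference, the wizard fields above correspond to a docker volume create invocation like this one (the NFS server address, export path, and volume name are placeholders):
docker volume create --driver local --opt type=nfs --opt o=addr=192.0.2.10,rw --opt device=:/exports/hci-db-logs/10.236.1.1 db-log-volume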
Deploy the system
Procedure
Click Deploy Single Instance or Deploy Cluster (multi-instance), as appropriate.
The system deployment starts.
Click the View Deployment Details link to view the progress of the deployment.
Verify the created volumes
If you configured the service volumes to use volume drivers in (Optional) Configure volumes for services and jobs, use these commands to list and view the Docker volumes created on all instances in the system:
- docker volume ls
- docker volume inspect volume_name
If volumes were created incorrectly, you need to redo the system installation:
Procedure
Stop the run script. You can do this using whatever method you're currently using to run the script.
Run this command to stop all HCI Docker containers on the instance:
sudo install_path/hci/bin/stop
Delete the contents of the folder install_path/hci from all instances.
Delete any Docker volumes created during the installation:
docker volume rm volume_name
Begin the installation again from Unpack the installation package.
Begin the installation again from Unpack the installation package.
Distribute services among system instances
By default, when you install and deploy a multi-instance system, the system automatically runs each service (except Dashboard) on its required number of instances. For example, the Index service runs on three instances.
However, if you've installed more than four instances, some instances might not be running any services at all. As a result, these instances are underused. You should manually distribute services to run across all instances in your system.
Moving and scaling floating services
For floating services, instead of specifying the specific instances on which the service runs, you can specify a pool of eligible instances, any of which can run the service.
Moving and scaling services with multiple types
When moving or scaling a service that has multiple types, you can simultaneously configure separate rebalancing for each type.
Best Practices
- Moving or scaling services can cause document failures during a workflow task. Before moving or scaling a service, you should either pause all running workflow tasks or wait for them to complete.
- Avoid running multiple services with high service unit costs together on the same instance. Ideally, each of these services should run by itself on an instance:
  - Database
  - Index
- On master instances, avoid running any services besides those classified as System services.
- To use your instances evenly, try to deploy a comparable number of service units on each instance.
Considerations
- Instance requirements vary from service to service. Each service defines the minimum and maximum number of instances on which it can run.
- You cannot remove a service from an instance if doing so causes or risks causing data loss.
- Service relocation might take a long time to complete and can impact system performance.
Relocating services
Procedure
Select Services.
The Services page opens, displaying the services and system services.
Click the service that you want to scale or move.
Configuration information for the service is displayed.
Click Scale. If the service has more than one type, select the instance type that you want to scale.
The next step depends on whether the service is floating or persistent (non-floating).
If the service is a floating service, you are presented with options for configuring an instance pool:
In the field Service Instances, specify the number of instances on which the service should be running at any time.
Configure the instance pool:
- For the service to run on any instance in the system, select All Available Instances.
With this option, the service can be restarted on any instance in the instance pool, including instances that were added to the system after the service was configured.
- For the service to run on a specific set of instances, deselect All Available Instances. Then:
- To remove an instance from the pool, select it from the list Instance Pool, on the left, and then click Remove Instances.
- To add an instance to the pool, select it from the list Available Instances, on the right, and then click Add Instances.
- For the service to run on any instance in the system, select All Available Instances.
If the service is a persistent (non-floating) service, you are presented with options for selecting the specific instances that the service should run on. Do one or both of these, then click Next:
- To remove the service from the instances it's currently on, select one or more instances from the list Selected Instances, on the left, and then click Remove Instances.
- To add the service to other instances, select one or more instances from the list Available Instances, on the right, and then click Add Instances.
Click Update.
The Processes page opens, and the Service Operations tab displays the progress of the service update as "Running." When the update finishes, the service shows "Complete."
Next steps
Configure the system for your users
Once your system is up and running, you need to begin configuring it for your users. For information, see the applicable topic in the help that's available from the Admin App:
- Administering Hitachi Content Search
- Administering Hitachi Content Monitor