
Installing HCP for cloud scale

These procedures describe how to install the HCP for cloud scale software.

Once you've installed the software, you log in and deploy the system.

Items and information you need

To install an HCP for cloud scale system, you need the appropriate installation package containing the product installation tarball (archive) file hcpcs-version_number.tgz.

This document shows the directory path to the HCP for cloud scale directory as install_path. The recommended directory path is /opt.

Decide how many instances to deploy

Before installing a system, you need to decide how many instances the system will have.

The minimum for a production system is four instances.

Procedure

  1. Decide how many instances you need.

  2. Select the servers or virtual machines in your environment that you will use as HCP for cloud scale instances.

Configure your networking environment

Before installing the system, you need to determine the networks and ports each HCP for cloud scale service will use.

Procedure

  1. Determine what ports each HCP for cloud scale service should use. You can use the default ports for each service or specify different ones.

    In either case, these restrictions apply:
    • Every port must be accessible from all instances in the system
    • Some ports must be accessible from outside the system
    • All port values must be unique; no two services, whether System services or HCP for cloud scale services, can share the same port.
  2. Determine what types of networks, either internal or external, to use for each service.

    If you're using both internal and external networks, each instance in the system must have IP addresses on both your internal and external networks.
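For example, after you open the required ports you can spot-check reachability from another instance. This is an illustrative sketch, not part of the product tooling; the IP address and port shown are placeholder values:

    # Using netcat, if it is available on your distribution:
    nc -zv 192.0.2.4 8000
    # Or with bash alone:
    timeout 3 bash -c 'echo > /dev/tcp/192.0.2.4/8000' && echo "port reachable"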

(Optional) Select master instances

You need to select which of the instances in your system will be master instances.

If you are installing a multi-instance system, the system must have either one or three master instances, regardless of the total number of instances it includes.

Important
  • For a production system, use three master instances.
  • You cannot add master instances to a system after it's installed. You can, however, add any number of worker instances.

If you are deploying a single-instance system, that instance will automatically be configured as a master instance and run all services for the system.

Procedure

  1. Select which of the instances in your system will be master instances.

  2. Make note of the master instance IP addresses.

    Tip: To ensure system availability, run master instances on separate physical hardware from each other, if possible.

Install Docker on each server or virtual machine

On each server or virtual machine that is to be an HCP for cloud scale instance:

Procedure

  1. In a terminal window, check whether Docker 1.13.1 or later is installed:

    docker --version
  2. If Docker is not installed or if you have a version prior to 1.13.1, install the current Docker version suggested by your operating system.

    The installation method you use depends on your operating system. See the Docker website for instructions.
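If you want to script this check, a minimal sketch follows; the second command assumes the Docker daemon is already running:

    docker --version                                   # for example: Docker version 20.10.21
    docker version --format '{{.Server.Version}}'      # prints the server version by itself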

Configure Docker on each server or virtual machine

Before installing the product, configure Docker with settings suitable for your environment. For guidance on configuring and running Docker, see the applicable Docker documentation.

Procedure

  1. Ensure that the Docker installation directory on each instance has at least 20 GB available for storing the product Docker images.

  2. Ensure that the Docker storage driver is configured correctly on each instance.

    After installation, changing the Docker storage driver requires reinstallation of the product. To view the current Docker storage driver on an instance, run: docker info
  3. If you want to enable SELinux on the system instances, use a Docker storage driver that supports it.

    The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
  4. If you are using the Docker devicemapper storage driver, ensure that there's at least 40 GB of Docker metadata storage space available on each instance.

    The product requires 20 GB to install successfully and an additional 20 GB to successfully update to a later version. To view Docker metadata storage usage on an instance, run: docker info
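To pull just the relevant values out of docker info, a sketch like this can help; the data root path varies by configuration:

    # Show the active storage driver and the Docker data root:
    docker info --format 'driver={{.Driver}} root={{.DockerRootDir}}'
    # Confirm that at least 20 GB is free in the data root (default /var/lib/docker):
    df -h /var/lib/docker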

Next steps

On a production system, do not run devicemapper in loop-lvm mode. Doing so can cause slow performance, and on certain Linux distributions the product may not have enough space to run.

(Optional) Install Docker volume drivers

Volume drivers are provided by Docker and other third-party developers, not by the HCP for cloud scale system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.

Procedure

  1. If any services on your system are using Docker volume drivers (that is, not the bind-mount setting) for storing data, install those volume drivers on each instance that you are installing.

    If you don't, services may fail to run on those instances.
  2. If you want any services on your system to use Docker volume drivers for storing data (that is, instead of using the default bind-mount setting), install those volume drivers on all instances in the system.

(Optional) Enable or disable SELinux on each server or virtual machine

You should decide whether you want to run SELinux on system instances before installation.

Procedure

  1. Enable or disable SELinux on each instance.

  2. Restart the instance.
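For example, on RHEL-family distributions you can check and persistently set the SELinux mode as follows. This is a sketch; choose the mode (enforcing, permissive, or disabled) that matches your policy:

    # Check the current SELinux mode:
    getenforce
    # Set the mode persistently, then restart the instance:
    sudo sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
    sudo reboot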

Configure maximum map count setting

You need to configure the value vm.max_map_count in the file /etc/sysctl.conf.

Procedure

  1. On each server or virtual machine that is to be a system instance, open the file /etc/sysctl.conf.

  2. Append this line: vm.max_map_count = 262144

    If the line already exists, ensure that the value is greater than or equal to 262144.
  3. Save and close the file.
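If you prefer to script this change, here is a minimal sketch that is safe to rerun and applies the setting without waiting for a restart:

    # Append the setting only if it is not already present:
    grep -q '^vm.max_map_count' /etc/sysctl.conf || \
      echo 'vm.max_map_count = 262144' | sudo tee -a /etc/sysctl.conf
    # Apply the value immediately:
    sudo sysctl -w vm.max_map_count=262144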

Configure the firewall rules on each server or virtual machine

Before you begin

Determine the port values currently used by your system. To do this, on any instance, view the file install_path/config/network.config.
On each server or virtual machine that is to be a system instance:

Procedure

  1. Edit the firewall rules to allow communication over all network ports that you want your system to use. You do this using a firewall management tool such as firewalld.

  2. Restart the server or virtual machine.
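For example, with firewalld you might open a port like this. This is a sketch; 8000 stands in for a port value taken from network.config:

    sudo firewall-cmd --permanent --add-port=8000/tcp
    sudo firewall-cmd --reload
    sudo firewall-cmd --list-ports    # verify that the port is open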

Run Docker on each server or virtual machine

On each server or virtual machine that is to be a system instance, you need to start Docker and keep it running. You can use whatever tools you typically use for keeping services running in your environment.

For example, to run Docker using systemd:

Procedure

  1. Verify that Docker is running:

    systemctl status docker
  2. If Docker is not running, start the docker service:

    sudo systemctl start docker
  3. (Optional) Configure the Docker service to start automatically when you restart the server or virtual machine:

    sudo systemctl enable docker

Unpack the installation package

On each server or virtual machine that is to be a system instance:

Procedure

  1. Download the product installation package and MD5 checksum file and store them in a directory on the server or virtual machine.

  2. Verify the integrity of the installation package:

    md5sum -c hcpcs-version_number.tgz.md5

    If the package integrity is verified, the command displays OK.
  3. In the largest disk partition on the server or virtual machine, create a product installation directory.

    mkdir install_path/hcpcs
  4. Move the installation package from the directory where you stored it to the product installation directory.

    mv hcpcs-version_number.tgz install_path/hcpcs/hcpcs-version_number.tgz
  5. Navigate to the installation directory.

    cd install_path/hcpcs
  6. Unpack the installation package:

    tar -zxf hcpcs-version_number.tgz

    A number of directories are created within the installation directory.
    Note

    If you encounter problems unpacking the installation file (for example, the error message "tar: This does not look like a tar archive"), the file may have been packed more than once during download. Use the following commands to fully extract the file:

    gunzip hcpcs-version_number.tgz

    mv hcpcs-version_number.tar hcpcs-version_number.tgz

    tar -zxf hcpcs-version_number.tgz

  7. Run the installation script install, located within a directory matching the version number of system software used by the product software.

    sudo ./cluster/sys_ver_num/bin/install

    This version number is different from the product version number. It is the only subdirectory in the directory cluster. For example:

    sudo ./cluster/1.4.0.260/bin/install
    Note
    • Don't change directories after running the installation script. The following tasks are performed in your current directory.
    • The installation script can be run only once on each instance. You cannot rerun this script to try to repair or upgrade a system instance.

(Optional) Reconfigure network.config on each server or virtual machine

Before you begin

Important If you want to reconfigure networking for the System services, you must complete this step before you run the setup script on each server or virtual machine.

You cannot change networking for System services after running the script run or after starting the service hcpcs.service using systemd.

If you want to change the networking settings of System services, do so in this step, before running the product startup scripts. You configure networking for HCP for cloud scale services later when using the deployment wizard.

You can change these networking settings for each service in your product:

  • The ports that the service uses
  • The network to listen on for incoming traffic, either internal or external.
To configure networking for the System services:

Procedure

  1. On each server or virtual machine that is to be an HCP for cloud scale instance, use a text editor to open the file install_path/hcpcs/config/network.config.

    The file contains two types of lines for each service:
    • Network type assignments:

      com.hds.ensemble.plugins.service.service_name_interface=[internal|external]

      For example:

      com.hds.ensemble.plugins.service.zookeeper_interface=internal

    • Port number assignments:

      com.hds.ensemble.plugins.service.service_name.port.port_name=port_number

      For example:

      com.hds.ensemble.plugins.service.zookeeper.port.PRIMARY_PORT=2181

  2. Enter new port values for the services you want to configure.

    Note: If you reconfigure service ports, make sure that each port value you assign is unique across all services, both System services and HCP for cloud scale services.
    Note: By default, all System services are set to internal.

    If you're only using a single network, you can leave these settings as they are. This is because all system instances are assigned both internal and external IP addresses in HCP for cloud scale; if you're only using a single network type, the internal and external IP addresses for each instance are identical.

  3. On the lines containing _interface, specify the network that the service should use. Valid values are internal and external.

  4. Save your changes and exit the text editor.

Next steps

Important: Ensure that the file network.config is identical on all HCP for cloud scale instances.
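One quick way to confirm this is to compare checksums across instances. This is a sketch, not a required step:

    # Run on every instance; the checksum must be identical everywhere:
    md5sum install_path/hcpcs/config/network.config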

(Optional) Reconfigure volume.config on each server or virtual machine

Before you begin

Important: If you want to reconfigure volumes for the System services, you must complete this step before you run the setup script on each server or virtual machine.

You cannot change volumes for System services after running the script run or after starting the service hcpcs.service using systemd.

By default, each of the System services is configured not to use volumes for storage (that is, each service uses the bind-mount option). If you want to change this configuration, you can do that now in this step, before running the product startup scripts.

Tip: System services typically do not store a lot of data, so you should favor keeping the default bind-mount setting for them.

You configure volumes for HCP for cloud scale services later when using the deployment wizard.

To configure volumes for the System services:

Procedure

  1. On each server or virtual machine that is to be an HCP for cloud scale instance, use a text editor to open the file install_path/hcpcs/config/volume.config.

    This file contains information about the volumes used by the System services. For each volume, the file contains lines that specify the following:
    • The name of the volume:

      com.hds.ensemble.plugins.service.service_name.volume_name=volume_name

      Note: Do not edit the volume names. The default volume name values contain variables (SERVICE_PLUGIN_NAME and INSTANCE_UUID) that ensure that each volume gets a unique name.
    • The volume driver that the volume uses:

      com.hds.ensemble.plugins.service.service_name.volume_driver=[volume_driver_name | bind-mount]

    • The configuration options used by the volume driver. Each option is listed on its own line:

      com.hds.ensemble.plugins.service.service_name.volume_driver_opt_option_number=volume_driver_option_and_value

      For example, these lines describe the volume that the Admin-App service uses for storing its logs:

      com.hds.ensemble.plugins.service.adminApp.log_volume_name=SERVICE_PLUGIN_NAME.INSTANCE_UUID.log
      com.hds.ensemble.plugins.service.adminApp.log_volume_driver=bind-mount
      com.hds.ensemble.plugins.service.adminApp.log_volume_driver_opt_1=hostpath=/home/hcpcs/log/com.hds.ensemble.plugins.service.adminApp/
  2. For each volume that you want to configure, you can edit the following:

    • The volume driver for the volume to use. To do this, replace bind-mount with the name of the volume driver you want.

      Volume drivers are provided by Docker and other third-party developers, not by the HCP for cloud scale system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.

    • On the line that contains _opt, the options for the volume driver.

      For information about the options you can configure, see the documentation for the volume driver that you're using.

      Important: Option/value pairs can specify where data is written in each volume. These considerations apply:
      • Each volume that you can configure here must write data to a unique location.
      • The SERVICE_PLUGIN and INSTANCE_UUID variables cannot be used in option/value pairs.
      • Make sure the options and values you specify are valid. Invalid options or values could cause system deployment to fail or volumes to be set up incorrectly. For information on configuration, see the volume driver's documentation.
      Tip: Create test volumes using the command docker volume create with your option/value pairs. Then, to test the volumes you've created, run the command docker run hello-world with the option --volume.
These lines show a service that has been configured to use the local-persist volume driver to store data:
com.hds.ensemble.plugins.service.marathon.data_volume_name=SERVICE_PLUGIN_NAME.INSTANCE_UUID.data
com.hds.ensemble.plugins.service.marathon.data_volume_driver=local-persist
com.hds.ensemble.plugins.service.marathon.data_volume_driver_opt_1=mountpoint=/home/hcpcs/data/com.hds.ensemble.plugins.service.marathon/
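Expanding on the Tip above, a test might look like this. This sketch assumes the local-persist volume driver is installed; the volume name and mountpoint are placeholders:

    # Create a test volume with your option/value pairs:
    docker volume create --driver local-persist --opt mountpoint=/tmp/test-volume test_vol
    # Mount it into a throwaway container to confirm that it works:
    docker run --rm --volume test_vol:/data hello-world
    # Clean up:
    docker volume rm test_vol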

Run the setup script on each server or virtual machine

Before you begin

Note
  • When installing a multi-instance system, make sure you specify the same list of master instance IP addresses on every instance that you are installing.
  • When entering IP address lists, do not separate IP addresses with spaces. For example, the following is correct:

    sudo install_path/hcpcs/bin/setup -i 192.0.2.4 -m 192.0.2.0,192.0.2.1,192.0.2.3

On each server or virtual machine that is to be a system instance:

Procedure

  1. Run the script setup with the applicable options:

    -i    The external network IP address for the instance on which you're running the script
    -I    The internal network IP address for the instance on which you're running the script
    -m    Comma-separated list of external network IP addresses of each master instance
    -M    Comma-separated list of internal network IP addresses of each master instance

    Use this table to determine which options you need to use:

    Number of instances    Network type usage                                Options to use
    Multiple               Single network type for all services              Either -i and -m, or -I and -M
    Multiple               Internal for some services, external for others   All of -i, -I, -m, and -M
    Single                 Single network type for all services              Either -i or -I
    Single                 Internal for some services, external for others   Both -i and -I

Results

Note: If the terminal displays Docker errors when you run the setup script, ensure that Docker is running.

For information, see Run Docker on each server or virtual machine.

This example sets up a single-instance system that uses only one network type for all services:

sudo install_path/hcpcs/bin/setup -i 192.0.2.4

To set up a multi-instance system that uses both internal and external networks, enter the command in this format:

sudo install_path/hcpcs/bin/setup -i external_instance_ip -I internal_instance_ip -m external_master_ips_list -M internal_master_ips_list

For example:

sudo install_path/hcpcs/bin/setup -i 192.0.2.4 -I 10.236.1.0 -m 192.0.2.0,192.0.2.1,192.0.2.3 -M 10.236.1.1,10.236.1.2,10.236.1.3

This table shows sample commands to create a four-instance system. Each command is entered on a different server or virtual machine that is to be a system instance. The resulting system contains three master instances and one worker instance, and uses both internal and external networks.

  • Internal IP 192.0.2.1, external IP 10.236.1.1 (master):

    sudo install_path/hcpcs/bin/setup -I 192.0.2.1 -i 10.236.1.1 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3

  • Internal IP 192.0.2.2, external IP 10.236.1.2 (master):

    sudo install_path/hcpcs/bin/setup -I 192.0.2.2 -i 10.236.1.2 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3

  • Internal IP 192.0.2.3, external IP 10.236.1.3 (master):

    sudo install_path/hcpcs/bin/setup -I 192.0.2.3 -i 10.236.1.3 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3

  • Internal IP 192.0.2.4, external IP 10.236.1.4 (worker):

    sudo install_path/hcpcs/bin/setup -I 192.0.2.4 -i 10.236.1.4 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3

Start the application on each server or virtual machine

On each server or virtual machine that is to be a system instance:

Procedure

  1. Start the application script run using whatever methods you usually use to run scripts.

    Important: Ensure that the method you use can keep the run script running and can automatically restart it in case of a server reboot or other availability event.

Results

Once the service starts, the server or virtual machine automatically joins the system as a new instance.

Here are some examples of how you can start the script:

  • You can run the script in the foreground:

    sudo install_path/hcpcs/bin/run

    When you run the run script this way, the script does not automatically complete, but instead remains running in the foreground.

  • You can run the script as a service using systemd:
    1. Copy the file hcpcs.service to the appropriate location for your OS. For example:

      cp install_path/hcpcs/bin/hcpcs.service /etc/systemd/system

    2. Enable and start the hcpcs.service service:
      sudo systemctl enable hcpcs.service
      sudo systemctl start hcpcs.service
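You can then confirm that the service is running and will start automatically after a reboot (assuming the systemd setup above):

      systemctl status hcpcs.service
      systemctl is-enabled hcpcs.service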

(Optional) Configure NTP

If you are installing a multi-instance system:

Procedure

  1. Configure NTP (network time protocol) to have each instance use the same time source.

    For information on NTP, see http://support.ntp.org/.
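For example, on distributions that use chrony, pointing every instance at the same time source might look like this. This is a sketch; time.example.com is a placeholder for your actual time source, and package and service names vary by distribution:

    sudo yum install -y chrony
    echo 'server time.example.com iburst' | sudo tee -a /etc/chrony.conf
    sudo systemctl enable --now chronyd
    chronyc tracking    # verify that the clock is synchronizing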

Use the service deployment wizard

After creating all of your instances and starting HCP for cloud scale, use the service deployment wizard. This wizard runs the first time you log in to the HCP for cloud scale system.

To run the service deployment wizard:

Procedure

  1. Open a web browser and go to https://instance_ip_address:8000.

    The Deployment Wizard starts.
  2. Set and confirm the password for the main admin account.

    Important: Do not lose or forget this password.
    When you have defined the password, click Continue.
  3. On the next page of the deployment wizard, enter the cluster host name in the Cluster Hostname/IP Address field and click Continue.

    If you omit the cluster host name, links in the System Management application might not function correctly.
  4. On the next page of the deployment wizard, confirm the cluster topology. Verify that all instances that you expect to see are listed.

    If some instances are not displayed, in the Instance Discovery panel, click Refresh instances until they appear. When you have confirmed the cluster topology, click Continue.
  5. On the next page of the deployment wizard, confirm the advanced configuration settings.

    Important: If you want to reconfigure networking or volume usage for the HCP for cloud scale services, you must do so now, before deploying the system.

    For information on configuration, see Networking.

    When you have confirmed the configuration settings, click Continue.
  6. On the last page of the deployment wizard, to deploy the cluster, click Deploy Cluster.

    After a brief delay, the message "Deployment in progress" is displayed, and instances of services are started.
  7. When the wizard is finished, the message "Setup Complete" is displayed. Click Finish.

    The Applications page opens.

Results

Service instances are deployed and the HCP for cloud scale system is ready to use.
Note: If you configured the System services networking incorrectly, the System Management application may not appear as an option on the Applications page. This can happen, for example, if the file network.config is not identical on all instances. For error information, view the file install_path/hcpcs/config/cluster.config or the output information logged by the script run.

To fix this issue, do the following:

  1. Stop the script run. You can do this using whatever method you're currently using to run the script.
  2. Run this command to stop all HCP for cloud scale Docker containers on the instance:

    sudo install_path/hcpcs/bin/stop

  3. Delete the contents of the directory install_path/hcpcs from all instances.
  4. Delete any Docker volumes created during the installation:

    docker volume rm volume-name

  5. Begin the installation again from the step where you unpack the installation package.
Note: The following messages indicate that the deployment process failed to initialize a Metadata Gateway service instance:
  • If the deployment process repeatedly tries and fails to reach a node, it displays this message: "Failed to initialize all MetadataGateway instances. Please re-deploy the system."
  • If the deployment process detects an existing Metadata Gateway partition on a node, it displays this message: "Found existing metadata partitions on nodes, please re-deploy the system."
If you see either message, you can't resolve the issue by clicking Retry. Instead, you must reinstall the HCP for cloud scale software.

(Optional) Configure networks for services

To change networking settings for the HCP for cloud scale services:

Procedure

  1. On the Advanced Configuration page, select the service to configure.

  2. On the Network tab:

    1. Configure the ports that the service should use.

      Note: If you reconfigure service ports, make sure that each port value you assign is unique across all services, both System services and HCP for cloud scale services.
    2. For each service, specify the network, either Internal or External, to which the service should bind.

      Note: By default, the HCP for cloud scale services have the External network selected, and the System services have the Internal network selected.

      If you're only using a single network, you can leave these settings as they are. This is because all system instances are assigned both internal and external IP addresses in HCP for cloud scale; if you're only using a single network type, the internal and external IP addresses for each instance are identical.

(Optional) Configure volumes for services

To change volume usage:

Procedure

  1. On the Advanced Configuration page, select a service to configure.

  2. Click the Volumes tab. This tab displays the system-managed volumes that the service supports. By default, each built-in service has both Data and Log volumes.

  3. For each volume, provide Docker volume creation information:

    1. In the Volume Driver field, specify the name of the volume driver that the volume should use. To have the volume not use any volume driver, specify bind-mount, which is the default setting.

      Note: Volume drivers are provided by Docker and other third-party developers, not by the HCP for cloud scale system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.
    2. In the Volume Driver Options section, in the Option and Value fields, specify any optional parameters and their corresponding values for the volume driver:

      • If you're using the bind-mount setting, you can edit the value for the hostpath option to change the path where the volume's data is stored on each system instance. However, this must be a path within the HCP for cloud scale installation directory.
      • If you're using a volume driver:
        1. Click on the trashcan icon to remove the default hostpath option. This option applies only when you are using the bind-mount setting.
        2. Type the name of a volume driver option in the Option field. Then type the corresponding parameter for that option in the Value field.
        3. Click on the plus-sign icon to add the option/value pair.
        4. Repeat this procedure for each option/value pair you want to add.

      Option/value pairs can specify where data is written in each volume. These considerations apply:

      • Each service instance must write its data to a unique location. A unique location could be a filesystem or a unique path on a shared external storage server.

        In this illustration, green arrows show acceptable configurations and red arrows show unacceptable configurations where multiple service instances are writing to the same volume, or multiple volumes are backed by the same storage location:

        [Figure: acceptable and unacceptable volume configurations]

      • For persistent (that is, non-floating) services, favor using the ${container_inst_uuid} variable in your option/value pairs. For persistent services, this variable resolves to a value that's unique to each service instance.

        This is especially useful if the volume driver you're using is backed by a shared server. By providing a variable that resolves to a unique value, the volume driver can use the resolved variable to create unique directories on the shared server.

        However, some volume drivers, such as Docker's local volume driver, do not support automatic directory creation. If you're using such a volume driver, you need to create volume directories yourself. For an example of how to handle this, see the Docker local volume driver example below.

      • Floating services do not support volumes that are backed by shared servers. This is because floating services do not have access to variables that resolve to unique values per service instance.
      • Make sure the options and values you specify are valid. Invalid options or values could cause system deployment to fail or volumes to be set up incorrectly. For information on volumes, see the volume driver's documentation.
      Tip: Create test volumes by using the command docker volume create with your option/value pairs. Then, to test the volumes you created, run the command docker run hello-world with the option --volume.

      Available variables

      You can include these variables when configuring volume options:

      • ${install_dir} is the product installation directory.
      • ${data_dir} is equal to ${install_dir}/data
      • ${log_dir} is equal to ${install_dir}/log
      • ${volume_def_name} is the name of the volume you are configuring.
      • ${plugin_name} is the name of the underlying service plugin.
      • ${container_inst_uuid} is the UUID for the Docker container in which the service instance runs. For floating services, this is the same value for all instances of the service.
      • ${node_ip} is the IP address for the system instance on which the service is running. This cannot be used for floating services.
      • ${instance_uuid} is the UUID for the system instance. This cannot be used for floating services. For services with multiple types, this variable resolves to the same value for all instances of the service, regardless of their types.

      Example: bind-mount configuration for Database service log volume

      The built-in Database service has a volume called log, which stores the service's logs. The log volume has this default configuration:

      • Volume driver: bind-mount
      • Option: hostpath, Value: ${log_dir}/${plugin_name}/${container_inst_uuid}

      With this configuration, once the system is deployed, logs for the Database service are stored at a unique path on each system instance that runs the Database service:

      install_path/hcpcs/log/com.hds.ensemble.plugins.service.cassandra/service-instance-uuid

      Example: Docker local volume driver for Database service log volume

      Alternatively, you could configure the Database service to use Docker's built-in local volume driver to store logs on an NFS server. To do this:

      1. Log in to your NFS server.
      2. Create a directory.
      3. Within that directory, create one directory for each of the instances in your system. Name each one using the instance IP address.
        Note: In this example, you need to create these directories yourself because the local storage driver will not create them automatically.
      4. Back in the system deployment wizard, in the Volume Driver field, specify local.
      5. Specify these options and values:

        Option    Value
        type      nfs
        o         addr=nfs-server-ip,rw
        device    :/path-to-directory-from-step-2/${node_ip}

        With this configuration, each instance of the Database service stores its logs in a different directory on your NFS server.
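
        For reference, the equivalent docker volume create call outside the wizard would look something like the following sketch; the NFS server IP, export path, and volume name are placeholders:

          docker volume create --driver local \
            --opt type=nfs \
            --opt o=addr=192.0.2.10,rw \
            --opt device=:/exports/hcpcs-logs/192.0.2.4 \
            database-log-test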

  4. Repeat this procedure for each service that you want to configure.

(Optional) Check the created volumes

Before you begin

If you configured the service volumes to use volume drivers, use these commands to list and view the Docker volumes created on all instances in the system:

docker volume ls

docker volume inspect volume_name

If volumes were created incorrectly, you need to redo the system installation:

Procedure

  1. Stop the run script from running. You do this using whatever method you're currently using to run the script.

  2. Stop all HCP for cloud scale Docker containers on the instance:

    sudo install_path/hcpcs/bin/stop
  3. Delete the contents of the directory install_path/hcpcs from all instances.

  4. Delete any Docker volumes created during the installation:

    docker volume rm volume_name
  5. Begin the installation again from the point where you unpack the installation package.

(Optional) Distribute services among system instances

By default, when you install and deploy a multi-instance system, the system automatically runs each service (except Dashboard) on its recommended number of instances.

However, if you've installed more than four instances, some instances may not be running any services at all. As a result, these instances are under-utilized. You should manually distribute services to run across all instances in your system.

Moving and scaling floating services

For floating services, instead of specifying the specific instances on which the service runs, you can specify a pool of eligible instances, any of which can run the service.

Moving and scaling services with multiple types

When moving or scaling a service that has multiple types, you can simultaneously configure separate rebalancing operations for each type.

Recommendations

Here are some guidelines for distributing services across instances:

  • Avoid running multiple services with high service unit costs together on the same instance.
  • On master instances, avoid running any services besides those classified as System services.

Considerations

  • You cannot remove a service from an instance if doing so would cause or risk causing data loss.
  • Service relocation operations may take a long time to complete and may impact system performance while they are running.
  • Instance requirements vary from service to service. Each service defines the minimum and maximum number of instances on which it can run.

Configuring the service relocation operations manually

To manually reconfigure a service relocation operation, in the Admin App:

Procedure

  1. Select Services.

  2. Locate a service that you want to scale or move and click Configure.

  3. On the Scale tab, if the service has more than one type, select the instance type that you want to scale.

  4. If the service is a floating service, you are presented with options for configuring an instance pool:

    1. In the Service Instances field, specify the number of instances on which the service should be running at any time.

    2. Configure the instance pool:

      • To have the service run on any instance in the system, select the All Available Instances option.

        With this option, the service can be restarted on any instance, including instances that were added to the system after the service was configured.

      • To have the service run on a specific set of instances, deselect the All Available Instances option. Then:
        • To remove an instance from the pool, select it from the Instance Pool list on the left and click Remove Instances.
        • To add an instance to the pool, select it from the Available Instances list on the right and click Add Instances.
  5. If the service is a non-floating service, you are presented with options for selecting the specific instances that the service should run on. Do one or both of these, then click Next:

    • To remove the service from the instances it's currently on, select one or more instances from the list on the left and click Remove Instances.
    • To add the service to other instances, select one or more instances from the Available Instances list on the right and click Add Instances.
  6. Click Update Service.

Configure the system for your users

Once your system is up and running, you can begin configuring it for your users.

For information about these procedures, see the applicable topic in the help that's available from the HCP for cloud scale application.

Procedure

  1. Set up an identity provider (IdP) and create user accounts.

  2. Define storage objects.

  3. Obtain S3 authorization credentials.