Installing HCP for cloud scale

The following procedures describe how to install the HCP for cloud scale software.

This module describes how to prepare for and install the HCP for cloud scale software.

After you install the software, log in and deploy the system.

Installing HCP for cloud scale using the cluster deployment tool

The cluster deployment tool automates the installation of HCP for cloud scale on one or more physical nodes in a cluster. Installation includes the operating system, Docker, and the HCP for cloud scale software. Before modifying the cluster configuration, you must change the boot configuration settings as described in the section Configuring boot settings.

This section describes how to deploy HCP for cloud scale on bare metal servers using the cluster deployment tool.

Items and information you need

To use the cluster deployment tool, the cluster must be configured as follows:

  • Preserve the default settings on the Hitachi Advanced Server DS120 before deployment.
  • Ensure that the network interface names begin with ens, eth, or enp.
  • The Hitachi Advanced Server DS120 servers should have a RAID controller that treats all the hard drives as a single virtual drive with at least 500 GB of free space.
  • Ensure that the RAID1 or RAID10 configuration on each server includes only two of the four hard drives. If you require more hard drives, delete the RAID1 configuration first and then create a new RAID10 with two spans.
  • Ensure that the jump server has internet access throughout the deployment.
  • Ensure that the switch side uplinks of all the remaining nodes are configured as regular access ports (not as port channels, LACP or LAG) until you complete the installation.
  • Keep the USB drive connected to the jump server throughout the deployment.
  • After you complete the installation, disable PXE boot on all nodes in the cluster.
  • Use Baseboard Management Server (BMC) firmware version 4.23.06 or later and BIOS version 2.4 or later.
  • Configure the following settings only if you are using the 4-port method. For more information about the 4-port method, see Preparing to install using cluster deployment tool.
    • Configure separate networks for front-end and back-end communication.
    • All the nodes must be on the same front-end subnetwork and the same back-end subnetwork.
    • Do not use a DHCP server on the back-end subnetwork.
    • Use a DHCP server on the front-end subnetwork.

Configuring boot settings

You must perform the following boot configuration changes on all the nodes on which you want to install HCP for cloud scale.

ImportantFor the procedure specific to your device, see the appropriate product documentation.

Change the BIOS and boot configuration settings as follows:

  • Under Advanced, select Network Configuration:
    • Disable all the IPv6 and IPv4 HTTP devices, and then save the configuration.
    • Configure the BIOS settings to boot in UEFI (Unified Extensible Firmware Interface) mode.
    • Configure the jump node to boot first from the USB drive and then from the hard drive.
    • Set the rest of the nodes to boot in UEFI mode from the back-end network.
  • Under Boot, select UEFI Network Drive BBS Priorities:
    • Disable all boot options that are not in use, except the internal network port that is used for Preboot Execution Environment (PXE) boot.
    • Change the boot order (option 1) to hard drive.

Preparing a bootable USB drive and installing OS

Before you begin

If you are installing a multi-instance system, the system must have either one or three master instances, regardless of the total number of instances it includes. You must first determine the master node on which to mount the installation volume and from which to run the scripts.
ImportantKeep the USB drive connected to the jump server throughout the deployment.

The following procedure describes how to prepare a bootable USB drive with the required images to deploy the HCP for cloud scale software using the cluster deployment tool.

Procedure

  1. Plug in the USB drive and run this command to retrieve the device name:

    lsscsi -g

  2. Run this command to copy the image file (ISO) to the USB drive:

    sudo dd if=./<ISO Name>.iso of=<SD Name> bs=1048576 status=progress

  3. After copying the image file (ISO) to the USB drive, run this command to unmount the device:

    sync; sync; sudo eject <SD Name>
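
    For example, assuming the ISO file is named hcpcs.iso and the USB drive appears as /dev/sdb (both hypothetical values; confirm the device name in the lsscsi output before writing to it):

    sudo dd if=./hcpcs.iso of=/dev/sdb bs=1048576 status=progress
    sync; sync; sudo eject /dev/sdb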

  4. Mount the installation media on the master node that you selected as the installation node and boot the node from the installation media.

  5. On the Installation Summary screen, navigate to User Settings and then click Root Password.

    Creating a user password is a mandatory step before you can start the installation process. For optimal security, a password should be a minimum of 12 characters long and contain a combination of uppercase and lowercase letters, numbers, and special characters.

    CautionIt is your responsibility to remember your custom passwords. HCP for cloud scale does not have the ability to retrieve lost passwords. If you lose your password, it will be permanently unrecoverable.
    1. In the Password field, type your password.

    2. In the Confirm field, re-enter your password.

    3. Click Done.

  6. Click Begin Installation.

    It takes approximately 10-15 minutes for the installation to complete. The installer restarts the device after the installation is complete. However, you must manually choose to boot from the hard drive when the system restarts.
    ImportantBefore you log in, ensure that you are using the Standard (X11 display server). The XWayland display server may not display the user interface configuration settings correctly. To change the display settings to Standard (X11 display server), select the configuration icon and then select Standard (X11 display server) on Xorg.
  7. Log in with your credentials:

    1. In the Username field, type your username (root).

    2. In the Password field, type your password.

Preparing to install using cluster deployment tool

Before you begin

Determine the master node on which to mount the installation volume and from which to run all the scripts. Ensure that you wipe the hard drives clean on all the servers using the command dd if=/dev/zero of=/dev/sdX bs=1G count=10, replacing /dev/sdX with the appropriate device name.

Note /dev/sdX represents the destination drive of your computer. If you are booting from a USB drive, then your destination drive (/dev/sdX) is that USB drive.
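
For example, if the drive to be wiped appears as /dev/sdb (a hypothetical device name), the wipe command would be:

sudo dd if=/dev/zero of=/dev/sdb bs=1G count=10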

You must also identify three additional master nodes and the following information:

  • Front-end and back-end IP addresses of all nodes. These can be provided as an address range or as a comma-separated address list.
  • Hostnames for all nodes.
  • Classless Inter-Domain Routing (CIDR) block size (for example, 24) for the customer subnetwork.
  • IP address of site gateway.
  • Front-end network interface names.
  • Domain name server (DNS) name and IP address.
  • Network time protocol (NTP) server name or IP address.
  • Front-end and back-end IP addresses of HCP for cloud scale master nodes.
  • Ensure that the network interface is properly connected and configured.
Networking Options

HCP for cloud scale offers the following three networking options:

  • 4-port method: Designed for systems with two NICs and a total of four ports. This method bonds one set of two ports to create one external port and the other set of two ports to create one internal port.
  • 2-port method: Designed for systems equipped with a single NIC containing two ports. In this method, the two ports are combined to form a unified bonded external port. To ensure seamless communication between the system and the network, you must assign the same IP address as both the front-end and back-end IP address.
  • Custom Networking: Designed for systems with custom networks, such as tagged VLANs, VLANs, and Virtual Machine networking.

The following procedures describe how to automatically deploy the HCP for cloud scale software using the cluster deployment tool.

Procedure

  1. Copy the installation files and store them in a folder on the server.

  2. Create a folder named install_path/hcpcs on the server, using the following command:

    mkdir /opt/hcpcs
  3. Copy the installation package from the folder where you stored it to install_path/hcpcs using the following command:

    cp -r /run/media/root/RHEL-8-4-0-BaseOS-x86_64/hcpcsInstaller /opt/hcpcs/.
  4. Navigate to the installation folder. For example:

    cd /opt/hcpcs
  5. Select the appropriate course of action based on the networking option you have chosen.

    1. If you have selected the 4-port method option, run the server network configuration script jump_server_network_config.sh. The script syntax is:

      /opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_network_config.sh -i <frontend_ip> -I <backend_ip> -p <frontend_cidr_prefix> -P <backend_cidr_prefix> -d <dns_ip> -g <gateway_ip> -n <frontend_interface_name> -N <backend_interface_name> -b <additional_frontend_interface_name> -B <additional_backend_interface_name>

      For example:

      /opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_network_config.sh -i 172.168.10.1 -I 172.168.20.1 -p 24 -P 24 -d 172.168.100.50 -g 172.168.10.200 -n ens17f0 -N ens17f1 -b ens49f0 -B ens49f1
      Use the following table to determine which options to use:
      -i <frontend_ip>: The front-end IP address of the installation master node.
      -I <backend_ip>: The back-end IP address of the installation master node.
      -p <frontend_cidr_prefix>: The front-end CIDR block size for the customer subnetwork.
      -P <backend_cidr_prefix>: The back-end CIDR block size of the customer subnetwork.
      -d <dns_ip>: The front-end IP address of the DNS node.
      -g <gateway_ip>: The IP address of the gateway through which all requests are routed. This supports customer sites, such as labs, with a gateway server.
      -n <frontend_interface_name>: Name of the front-end network interface.
      -N <backend_interface_name>: Name of the back-end network interface.
      -b <additional_frontend_interface_name>: Name of the additional front-end interface.
      -B <additional_backend_interface_name>: Name of the additional back-end interface.
      Note

      The script configures the network on the installation master node, allows SSH connections, and downloads all the packages required to run the installer.

    2. If you have selected the 2-port method option, check the config.ini file to verify whether the Single NIC status tag is set to true or false, and then run the server network configuration script jump_server_network_config.sh.

      • Verify the Single NIC status. The Single NIC status tag requires one of the following arguments:

        • true = The system has a single network interface controller (NIC) with two ports.
        • false = The system has more than one NIC.
        Note The default value is false.
      • Run the server network configuration script jump_server_network_config.sh.

      The script syntax is:

      /opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_network_config.sh -i <frontend_ip> -I <backend_ip> -p <frontend_cidr_prefix> -P <backend_cidr_prefix> -d <dns_ip> -g <gateway_ip> -n <frontend_interface_name> -N <backend_interface_name> -b <additional_frontend_interface_name> -B <additional_backend_interface_name> -s <true/false>

      For example:

      /opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_network_config.sh -i 172.168.10.1 -I 172.168.20.1 -p 24 -P 24 -d 172.168.100.50 -g 172.168.10.200 -n ens17f0 -N ens17f1 -b ens49f0 -B ens49f1 -s true
      Use the following table to determine which options to use:
      -s <true/false>: Whether the Single NIC status tag is set to true or false.
      -i <frontend_ip>: The front-end IP address of the installation master node.
      -I <backend_ip>: The back-end IP address of the installation master node. If the value of the Single NIC status tag is set to 'true', enter the front-end IP address of the installation master node.
      -p <frontend_cidr_prefix>: The front-end CIDR block size for the customer subnetwork.
      -P <backend_cidr_prefix>: The back-end CIDR block size of the customer subnetwork. If the value of the Single NIC status tag is set to 'true', enter the front-end CIDR block size for the customer subnetwork.
      -d <dns_ip>: The front-end IP address of the DNS node.
      -g <gateway_ip>: The IP address of the gateway through which all requests are routed. This supports customer sites, such as labs, with a gateway server.
      -n <frontend_interface_name>: Name of the front-end network interface.
      -N <backend_interface_name>: Name of the back-end network interface. If the value of the Single NIC status tag is set to 'true', enter the second interface name.
      -b <additional_frontend_interface_name>: Name of the additional front-end interface.
      -B <additional_backend_interface_name>: Name of the additional back-end interface.
      Note

      The script configures the network on the installation master node, allows SSH connections, and downloads all the packages required to run the installer.

    3. If you have selected the custom networking option, you must manually configure your custom network settings on the jump node before proceeding.

      1. You must verify the CustomNetwork setting in the config.ini file. The CustomNetwork tag requires one of the following arguments:

        • true = The custom network is enabled.
        • false = The custom network is disabled.
          Note The default value is true.
      2. Navigate to the scripts folder (/etc/sysconfig/network-scripts/) and modify the jump server network configuration file according to your preferred settings.

        CautionThe settings and the configuration provided here are just an example. The actual configuration may vary based on your specific network setup. Consider the unique characteristics of your network when implementing any changes. Consult your network administrator for customized guidance and adjustments to align with your networking requirements.

        The following is an example of the configuration file (ifcfg-ens192):

        TYPE=Ethernet
        PROXY_METHOD=none
        BROWSER_ONLY=no
        BOOTPROTO=static
        DEFROUTE=yes
        IPADDR=173.18.248.190
        PREFIX=24
        NAME=ens192
        GATEWAY=173.18.252.252
        UUID=9a861c44-aa23-285e-8008-f2833541cd69
        DEVICE=ens192
        ONBOOT=yes
      3. Ensure that you can successfully SSH into the jump node.
  6. Run this script to launch the installer tool:

    /opt/hcpcs/hcpcsInstaller/csinstaller/csinstaller.sh

    You see the HCP for cloud scale installation wizard prompting you to set up a password.

  7. Enter a unique password:

    This password is for the RHEL operating system and PXE nodes.
    Note

    Your password must meet all the following conditions:

    • It should be at least 12 characters long.
    • It must contain at least one uppercase letter, one lowercase letter, and one digit.
    • We recommend using the same password that you used for the jump server in an earlier step.
  8. Re-enter the password:

    When confirming your password, ensure that you enter it exactly as you did the first time.

Next steps

During the installation, you can provide the required configuration information either manually through the installer GUI or automatically by using a configuration file.

Installing using GUI

Before you begin

Before running the installer script, you must:

  • Complete all the steps mentioned in the procedure Preparing to install using cluster deployment tool.
  • We recommend that you turn off all the nodes except the jump server before running the csinstaller.sh script. You can turn on the nodes after clicking OK at the GUI installer prompt. The nodes then automatically start installing RHEL.
  • Collect the following information:
    • Front-end and back-end IP addresses of all nodes. These can be provided as an address range or as a comma-separated address list.
    • Hostnames for all nodes.
    • Classless Inter-Domain Routing (CIDR) block size (for example, 24) for the customer subnetwork.
    • IP address of site gateway.
    • Front-end network interface names.
    • Domain name server (DNS) name and IP address.
    • Network time protocol (NTP) server name or IP address.
    • Front-end and back-end IP addresses of HCP for cloud scale master nodes.

The following procedures describe how to manually deploy the HCP for cloud scale software using the cluster deployment tool.

Procedure

  1. Run this script to launch the installer tool:

    /opt/hcpcs/hcpcsInstaller/csinstaller/csinstaller.sh

    You see the HCP for cloud scale installation wizard.

  2. Enter the information for all the fields:

    In the Frontend IP Addresses of all nodes section:

    1. In the Address Range Start field, type the starting IP address.

    2. In the Address Range End field, type the ending IP address.

    In the Backend IP Addresses of all nodes section:

    1. In the Address Range Start field, type the starting IP address.

    2. In the Address Range End field, type the ending IP address.

    In the Number of network Interfaces section, select the number of interfaces, and then click Next.

  3. The Settings area contains the fields and entries that make up the configuration. The following describes the fields on this page:

    Frontend IPv4: Enter the front-end IPv4 address.
    Backend IPv4: Enter the back-end IPv4 address.
    Frontend IPv4 Gateway: Enter the front-end IPv4 gateway IP address.
    Enter System DNS Name: Enter the system DNS server IP address.
    Enter NTP Server IP: Enter the NTP server IP addresses.
    NoteIf you do not have the IP address of the NTP server, enter 0.0.0.0
    Enter DNS Server IP list: Enter the DNS server IP addresses.
    Select OS Version: Choose the operating system version.
    Select Frontend Master Nodes: Enter the front-end IP address list.
    Select Backend Master Nodes: Enter the back-end IP address list.
  4. Click Next.

  5. You see the Please enter Hostnames for all nodes page. Enter the host name for each MAC address listed on the page.

    NoteYou must provide the MAC address list to the application either through the user interface or through a file. The interface supports only up to 10 MAC addresses. If your installation is large and has more than 10 nodes, use the mac_list.txt file. The script points to this file during the installation process and populates the section.
  6. In the Summary of the Input page, click OK.

    It takes about 10 minutes to power up all the member nodes in the system. All the nodes PXE (Preboot Execution Environment) boot from the jump node, receiving IP assignments from the range you specified.
    Important
    • Ensure that the PXE boot is completed on all the nodes.
    • Ensure that the Boot Option #1 is set to [Network:UEFI] in the BIOS of all the member nodes.

Next steps

Depending on the Networking Options method you chose at the time of installation, follow the instructions in the Deploying HCP for cloud scale software section.

Installing using the configuration file

Before modifying the config.ini file and running the installer script, you must:

  • Complete all the steps mentioned in the procedure Preparing to install using cluster deployment tool.
  • Collect the following information:
    • Front-end and back-end IP addresses of all nodes. These can be provided as an address range or as a comma-separated address list.
    • Hostnames for all nodes.
    • Classless Inter-Domain Routing (CIDR) block size (for example, 24) for the customer subnetwork.
    • IP address of site gateway.
    • Front-end network interface names.
    • Domain name server (DNS) name and IP address.
    • Network time protocol (NTP) server name or IP address.
    • Front-end and back-end IP addresses of HCP for cloud scale master nodes.

The following procedures describe how to install the HCP for cloud scale software using a configuration file (config.ini).

A configuration file (config.ini) is a text file that you can create before starting csinstaller.sh. This file provides the installer with all the necessary information required to configure the cluster, such as IP addresses, CIDR, DNS name, DNS server IP, NTP server IP, time zone, MAC addresses, and master node IPs. It is best to use a configuration file if you intend to perform repeated or large-scale deployments.

The configuration file is available in the directory: /opt/hcpcs/hcpcsInstaller/csappliance/csinstaller.

Procedure

  1. Run this command to edit the config.ini file and then press "i" on your keyboard to enable editing.

    vi /opt/hcpcs/hcpcsInstaller/csappliance/csinstaller/config.ini

    Use the following table to determine which sections in the configuration file to update:

    IPINFO: Add IP addresses in this section only if you have a specific range of IP addresses. Do not add non-sequential IP addresses.
    NONSEQINFO: Add non-sequential IP addresses separated by commas.
    MACADDRESSINFO: Enter the MAC addresses of all the nodes. You need not provide the MAC address of the first node (master node). If your installation is large and has more than 10 nodes, use the mac_list.txt file. The script points to this file during the installation process and populates the section.
    TIMEZONE: Enter time zone information.
    NoteYou must make sure that the time zone information is correct. The script stops executing with an error otherwise.

    The following is an example of the configuration file:

    #This flag remains true in case of custom networking and changes to 'false' in case of standard networking
    [NETWORKFLAG]
    CustomNetwork=true
    
    #Set this to 'true' in case of Single NIC to get ports forwarded externally. Else it remains as 'false'
    [NICTYPE]
    SingleNIC=false
    
    #In case of sequential IP Addresses, add the IP Addresses below and leave "NONSEQINFO" blank
    [IPINFO] #eg: 172.18.1.1
    FrontIPRangeStart = 172.19.244.190
    FrontIPRangeEnd = 172.19.244.194
    BackIPRangeStart = 172.248.190.190
    BackIPRangeEnd = 172.248.190.194
    
    #In case of non sequential IP Addresses, add the IP Addresses below and leave "IPINFO" blank
    [NONSEQIPINFO] #Add Comma separated values eg: 172.18.1.7,172.18.1.2,172.18.1.5
    FrontIPAddressList = 172.18.2.9,172.18.2.2,172.18.2.1
    BackIPAddressList =
    
    #Add Network CIDR details
    [NETWORKINFO] #eg: 172.18.1.0/24
    FrontIPNetwork = 172.19.244.0/24
    BackIPNetwork = 172.248.190.0/24
    
    #Add Network Gateway info
    [GATEWAYINFO] #eg: 172.18.1.254
    Gateway = 172.18.251.254
    
    #Add the DNS Name and DNS IP Address
    [DNSINFO] #eg: test and 172.18.4.45
    DNSName = lab.archivas.com
    DNSIP = 172.18.4.45
    
    #Enter the frontend and backend IP Addresses of the 3 master nodes
    [MASTERNODEINFO]
    Front_masters_IP1 = 172.19.244.190
    Front_masters_IP2 = 172.19.244.191
    Front_masters_IP3 = 172.19.244.192
    Back_masters_IP1 = 172.248.190.190
    Back_masters_IP2 = 172.248.190.191
    Back_masters_IP3 = 172.248.190.192
    
    #For Total nodes < 10 = Enter the Mac Addresses of all the nodes which will be PXE booted (Exclude the jump node mac id)
    #For Total nodes > 10 = In the "mac_list.txt" file, Enter the Mac Addresses of all the nodes which will be PXE booted (Exclude the jump node mac id)
    [MACADDRESSINFO]
    MacAddress1 = 42:g0:9c:30:02:01
    MacAddress2 = 42:g0:9c:30:03:01
    MacAddress3 = 42:g0:9c:30:04:01
    MacAddress4 =
    MacAddress5 =
    MacAddress6 =
    MacAddress7 =
    MacAddress8 =
    MacAddress9 =
    
    #Add the NTP Server IP address, NTP Peer Server IP address and the Timezone
    [NTPINFO]
    NTPServerIP = 172.22.255.190
    NTPPeerServerIP = 0.0.0.0
    TimeZone = America/St_Thomas 
  2. Press Esc to exit insert mode, and then type :wq to save the changes and quit vi.

  3. Navigate to the installation folder. For example:

    cd /opt/hcpcs/hcpcsInstaller/csinstaller/
  4. Run the installer script csinstaller.sh.

    For example:
    /opt/hcpcs/hcpcsInstaller/csinstaller/csinstaller.sh
    The script imports a set of Python packages as well as Trivial File Transfer Protocol (TFTP), Network File System (NFS), and DHCP software. After installing these packages, the script starts the wizard, and the HCP for Cloud Scale Installer window opens.
  5. In the Summary of the Input page, verify all the information and then click OK.

    It takes about 10 minutes to power up all the member nodes in the system. All the nodes PXE (Preboot Execution Environment) boot from the jump node, receiving IP assignments from the range you specified.
    Important
    • Ensure that the PXE boot is completed on all the nodes.
    • Ensure that the Boot Option #1 is set to [Network:UEFI] in the BIOS of all the member nodes.

Next steps

Depending on the Networking Options method you chose at the time of installation, follow the instructions in the Deploying HCP for cloud scale software section.

Deploying HCP for cloud scale software

Before you begin

  • Ensure that PXE boot has completed on all nodes and that Boot Option #1 is set to [Network:UEFI] in the BIOS of each member node.
  • Follow the instructions based on the method you selected at the time of installation: the 4-port method, the 2-port method, or custom networking.
    • If you are using the 4-port or 2-port method, skip to step 3 in the following procedure.
    • The first two steps are only applicable for custom networking configurations.

The following procedures describe how to deploy the HCP for cloud scale software.

Procedure

  1. Establish a connection with the PXE nodes and manually configure networking on each node. Ensure that you can successfully SSH into all the nodes.

    ImportantThis step is applicable only for custom networking configuration. If you are using the 4-port or 2-port method, skip this step and go to step 3.
  2. To initiate HCP for cloud scale on the PXE nodes, execute this script on the jump node. The script is located at /opt/hcpcs/hcpcsInstaller/csinstaller/start_hcpcs.sh.

    This process can take anywhere from 30 seconds to 5 minutes.

    After the script has successfully executed on all nodes, use SSH to log in to each individual node and perform the following verifications (a command sketch follows this list):

    1. Run docker ps and ensure that the watchdog service is running.
    2. Run systemctl status hcpcs.service.
    3. If systemctl status hcpcs.service shows the service as inactive or returns nothing, and the watchdog service is not running, check the networking settings on that node.
    4. After you have resolved any networking issues, manually execute the custom_post_deploy.sh script. You can find this script in the root folder.
    ImportantThis step is applicable only for custom networking configuration. If you are using the 4-port or 2-port method, skip this step and go to step 3.
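
    The following is a minimal sketch of these checks on one node, assuming you can reach it as root over SSH (the node address is hypothetical):

    ssh root@172.168.20.2            # back-end address of one PXE node (hypothetical)
    docker ps | grep watchdog        # the watchdog container should be listed
    systemctl status hcpcs.service   # should report the service as active (running)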
  3. Run this script to deploy HCP for cloud scale software:

    /opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_post_deploy.sh [-h] -i <frontend_ip> -I <backend_ip> -m <master_frontend_ips> -M <master_backend_ips>

    For example:

    /opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_post_deploy.sh -i 172.168.10.1 -I 172.168.20.1 -m 172.168.10.1,172.168.10.2,172.168.10.3 -M 172.168.20.1,172.168.20.2,172.168.20.3 -c 172.168.10.0/24 -C 172.168.20.0/24
    NoteThe script installs Docker-CE, applies security updates, configures the firewall, and deploys HCP for cloud scale software. For the rest of the nodes, these steps are automated as part of the boot process.
    -h: (Optional) Displays command syntax.
    -i <frontend_ip>: The front-end IP address of the installation master node.
    -I <backend_ip>: The back-end IP address of the installation master node.
    -m <master_frontend_ips>: The front-end IP addresses of the HCP for cloud scale master nodes as a comma-separated list.
    -M <master_backend_ips>: The back-end IP addresses of the HCP for cloud scale master nodes as a comma-separated list.
    -c <frontend_cidr>: The front-end network CIDR of the cluster.
    -C <backend_cidr>: The back-end network CIDR of the cluster.
    After successful completion, you see the following message and the microservices admin application is launched:
    Successfully ran security hardening script 
    Successfully completed all post deploy actions
  4. Connect to the external IP address of the server.

    For example:
    https://10.0.0.1:8000

    You see the HCP for cloud scale application's welcome page. Accept the certificate and enter a new password for the administrator.

    NoteAdministrator is the only local user. You must use either Active Directory or LDAP services to create additional users.
  5. Click Continue.

  6. In the Cluster hostname/IP Address field, enter the host name and then click Continue.

    You see the Instance Discovery page.
  7. Accept the default service port and network assignments and then click Continue.

    Service deployment takes several minutes to complete. You see a progress bar showing the progress. When the deployment is complete, you see a Set Up completed message.
  8. Click Finish.

  9. Run this script to remove residual files: /opt/hcpcs/hcpcsInstaller/csinstaller/cleanup_artifacts.sh.

    This script removes all installer packages, files, and directories, and disables and removes the TFTP service. It also disables and stops the NFS and DHCP servers but does not remove them.

    If the installation fails, you can restart it. To restart a failed installation, see Restarting a failed installation.

Adding a new node

With the cluster deployment tool, you can add a new node to a cluster, replace an existing node, or replace both master and worker nodes. The cluster deployment tool allows you to install the RHEL OS, Docker, HCP for cloud scale, and other required third-party software on the new node that you want to add to the cluster.
WARNINGIf you’re replacing a node, make sure to take these steps:
  • Have system administrator privileges to perform this operation or inform your system administrator.
  • Decommission the node(s) before adding the new node to the cluster.

Before you begin

Before you perform this operation, review the following installation procedures. Completing one or all of the required steps is a prerequisite:
  1. Preparing to install using cluster deployment tool
  2. Installing using GUI
  3. Installing using the configuration file
  4. Deploying HCP for cloud scale software

To add a node, in the Admin App:

Procedure

  1. Select System Management.

    The System Management page opens, displaying the system management services.
  2. Select Instances.

    A list of instances is displayed. Check whether the new instance is listed here.
  3. Select Services.

    The Services page opens, displaying the services and system services.
  4. Select the service on the master node or worker node that you want to scale.

    • Master node: Services balance on the new master node based on the cluster load. You can scale up any service you need.
    • Worker node: By default, only the watchdog service starts. You can scale up other services as needed.
    Based on your selection, configuration information for the service is displayed.
  5. Click Scale, and if the service has more than one type, select the instance type that you want to scale.

  6. Run this script to remove residual files: /opt/hcpcs/hcpcsInstaller/csinstaller/cleanup_artifacts.sh.

    This script removes all installer packages, files, and directories, and disables and removes the TFTP service. It also disables and stops the NFS and DHCP servers but does not remove them.

    If the installation fails, you can restart it. To restart a failed installation, see Restarting a failed installation.

Restarting a failed installation

Use this step only to restart an installation if your earlier installation has failed. If one or more nodes fail to install, we recommend that you wipe the hard drives (the RAID of all nodes, including the jump server) before restarting the installation process.

Procedure

  1. Run this script to restart the installation process: /opt/hcpcs/hcpcsInstaller/csinstaller/clean_run.sh.

    This script invokes cleanup_artifacts.sh, preserves the old configuration (config.ini) file, and restarts the DHCP, TFTP, and other services. It also deletes the network bonds and runs the start.sh script.

Installing HCP for cloud scale manually

The following procedures describe how to install the HCP for cloud scale software.

This module describes how to prepare for and install the HCP for cloud scale software.

After you install the software, log in and deploy the system.

Items and information you need

To install an HCP for cloud scale system, you need the appropriate installation package containing the product installation tarball (archive) file hcpcs-version_number.tgz.

This document shows the path to the HCP for cloud scale folder as install_path. The best folder path is /opt.

You need to determine the IP addresses of instances (nodes). It's best to use static IP addresses because if an IP address changes you must reinstall the system.

It's best to create an owner for the new files created during installation.

Decide how many instances to deploy

Before installing a system, you need to decide how many instances the system will have.

The minimum for a production system is four instances.

Procedure

  1. Decide how many instances you need.

  2. Select the servers or virtual machines in your environment that you intend to use as HCP for cloud scale instances.

Configure your networking environment

Before installing the system, you need to determine the networks and ports each HCP for cloud scale service will use.

Procedure

  1. Determine what ports each HCP for cloud scale service should use. You can use the default ports for each service or specify different ones.

    In either case, these restrictions apply:
    • Every port must be accessible from all instances in the system.
    • Some ports must be accessible from outside the system.
    • All port values must be unique; no two services, whether System services or HCP for cloud scale services, can share the same port.
  2. Determine what types of networks, either internal or external, to use for each service.

    If you're using both internal and external networks, each instance in the system must have IP addresses on both your internal and external networks.

Optional: Select master instances

If you are installing a multi-instance system, the system must have either one or three master instances, regardless of the total number of instances it includes.

You need to select which of the instances in your system will be master instances.

Important
  • For a production system, use three master instances.
  • You cannot add master instances to a system after it's installed. You can, however, add any number of worker instances.

If you are deploying a single-instance system, that instance will automatically be configured as a master instance and run all services for the system.

Procedure

  1. Select which of the instances in your system are intended as master instances.

  2. Make note of the master instance IP addresses.

    NoteTo ensure system availability, run master instances on separate physical hardware from each other, if possible.

Install Docker on each server or virtual machine

On each server or virtual machine that is to be an HCP for cloud scale instance:

Procedure

  1. In a terminal window, verify whether Docker 1.13.1 or later is installed:

    docker --version
  2. If Docker is not installed or if you have a version before 1.13.1, install the current Docker version suggested by your operating system.

    The installation method you use depends on your operating system. See the Docker website for instructions.
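
    For example, on a RHEL-based instance you might install Docker CE from the upstream Docker repository roughly as follows; the repository URL and package names are typical values rather than taken from this document, so verify them against the Docker documentation for your distribution and version:

    sudo dnf install -y dnf-plugins-core
    sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
    sudo dnf install -y docker-ce docker-ce-cli containerd.io
    docker --version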

Configure Docker on each server or virtual machine

Before installing the product, configure Docker with settings suitable for your environment. For guidance on configuring and running Docker, see the applicable Docker documentation.

Procedure

  1. Ensure that the Docker installation folder on each instance has at least 20 GB available for storing the product Docker images.

  2. Ensure that the Docker storage driver is configured correctly on each instance. Changing the Docker storage driver after installation requires reinstalling the product.

    To view the current Docker storage driver on an instance, run docker info.
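
    For example, to print just the storage driver rather than scanning the full docker info output (this assumes a Docker client recent enough to support Go-template formatting):

    docker info --format '{{.Driver}}'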
  3. To enable SELinux on the system instances, use a Docker storage driver that SELinux supports.

    The storage drivers that SELinux supports differ depending on the Linux distribution you're using. For more information, see the Docker documentation.
  4. If you are using the Docker devicemapper storage driver, ensure that there's at least 40 GB of Docker metadata storage space available on each instance.

    The product needs 20 GB to install successfully and an additional 20 GB to update successfully to a later version. To view Docker metadata storage usage on an instance, run docker info.

Next steps

On a production system, do not run devicemapper in loop-lvm mode. Doing so can cause slow performance or, on certain Linux distributions, leave the product without enough space to run.

Optional: Install Docker volume drivers

Volume drivers are provided by Docker and other third-party developers, not by the HCP for cloud scale system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.

Procedure

  1. If any services on your system use Docker volume drivers (rather than the default bind-mount setting) for storing data, install those volume drivers on all instances in the system.

    If you don't, services might fail to run on those instances.

Optional: Enable or disable SELinux on each server or virtual machine

You should decide whether you want to run SELinux on system instances before installation.

Procedure

  1. Enable or disable SELinux on each instance.

  2. Restart the instance.

Configure maximum map count setting

You need to configure a value in the file sysctl.conf.

Procedure

  1. On each server or virtual machine that is to be a system instance, open the file /etc/sysctl.conf.

  2. Append this line: vm.max_map_count = 262144

    If the line already exists, ensure that the value is greater than or equal to 262144.
  3. Save and close the file.
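
As an alternative, the same change can be made from the command line (a minimal sketch, assuming sudo access; tee appends the line and sysctl -p applies it without a restart):

echo "vm.max_map_count = 262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p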

Configure the firewall rules on each server or virtual machine

Before you begin

Determine the port values currently used by your system. To do this, on any instance, view the file install_path/hcpcs/config/network.config.
On each server or virtual machine that is to be a system instance:

Procedure

  1. Edit the firewall rules to allow communication over all network ports that you want your system to use. You do this using a firewall management tool such as firewalld.
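
    For example, with firewalld, opening a single port might look like the following; port 8000 is the administrative interface port used elsewhere in this document, and you would repeat the --add-port option for every port listed in network.config:

    sudo firewall-cmd --permanent --add-port=8000/tcp
    sudo firewall-cmd --reload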

  2. Restart the server or virtual machine.

Run Docker on each server or virtual machine

On each server or virtual machine that is to be a system instance, you need to start Docker and keep it running. You can use whatever tools you typically use for keeping services running in your environment.

For example, to run Docker using systemd:

Procedure

  1. Verify that Docker is running:

    systemctl status docker
  2. If Docker is not running, start the docker service:

    sudo systemctl start docker
  3. (Optional) Configure the Docker service to start automatically when you restart the server or virtual machine:

    sudo systemctl enable docker

Unpack the installation package

On each server or virtual machine that is to be a system instance:

Procedure

  1. Download the installation package hcpcs-version_number.tgz and the MD5 checksum file hcpcs-version_number.tgz.md5 and store them in a folder on the server or virtual machine.

  2. Verify the integrity of the installation package. For example:

    md5sum -c hcpcs-version_number.tgz.md5

    If the package integrity is verified, the command displays OK.
  3. In the largest disk partition on the server or virtual machine, create a folder named install_path/hcpcs. For example:

    mkdir /opt/hcpcs
  4. Move the installation package from the folder where you stored it to install_path/hcpcs. For example:

    mv hcpcs-version_number.tgz /opt/hcpcs/hcpcs-version_number.tgz
  5. Navigate to the installation folder. For example:

    cd /opt/hcpcs
  6. Unpack the installation package. For example:

    tar -zxf hcpcs-version_number.tgz

    A number of directories are created within the installation folder.
    Note

    If you encounter problems unpacking the installation file (for example, the error message "tar: This does not look like a tar archive"), the file might have been packed multiple times during download. Use the following commands to fully extract the file:

    $ gunzip hcpcs-version_number.tgz

    $ mv hcpcs-version_number.tar hcpcs-version_number.tgz

    $ tar -zxf hcpcs-version_number.tgz

  7. Run the installation script install:

    ./install
    Note
    • Don't change directories after running the installation script. The following tasks are performed in your current folder.
    • The installation script can be run only one time on each instance. You cannot rerun this script to try to repair or upgrade a system instance.

(Optional) Reconfigure network.config on each server or virtual machine

Before you begin

ImportantTo reconfigure networking for the System services, you must complete this step before you run the setup script on each server or virtual machine.

You cannot change networking for System services after running the run script or after starting the product service using systemd.

You can change these networking settings for each service in your product:

  • The ports that the service uses.
  • The network to listen on for incoming traffic, either internal or external.
To configure networking for the System services:

Procedure

  1. On each server or virtual machine that is to be an HCP for cloud scale instance, use a text editor to open the file install_path/hcpcs/config/network.config.

    The file contains two types of lines for each service:
    • Network type assignments:

      com.hds.ensemble.plugins.service.service_name_interface=[internal|external]

      For example:

      com.hds.ensemble.plugins.service.zookeeper_interface=internal

    • Port number assignments:

      com.hds.ensemble.plugins.service.service_name.port.port_name=port_number

      For example:

      com.hds.ensemble.plugins.service.zookeeper.port.PRIMARY_PORT=2181
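
      For example, to move the ZooKeeper primary port shown above to another value, you would change that line as follows (the new port number is hypothetical and must be unique across all services):

      com.hds.ensemble.plugins.service.zookeeper.port.PRIMARY_PORT=12181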

  2. Type new port values for the services you want to configure.

    NoteIf you reconfigure service ports, make sure that each port value you assign is unique across all services, both System services and HCP for cloud scale services.
    NoteBy default, all System services are set to internal.

    If you're only using a single network, you can leave these settings as they are. This is because all system instances are assigned both internal and external IP addresses in HCP for cloud scale; if you're only using a single network type, the internal and external IP addresses for each instance are identical.

  3. On the lines containing _interface, specify the network that the service should use. Valid values are internal and external.

  4. Save your changes and exit the text editor.

Next steps

ImportantEnsure that the file network.config is identical on all HCP for cloud scale instances.

(Optional) Reconfigure volume.config on each server or virtual machine

Before you begin

ImportantTo reconfigure volumes for the System services, you must complete this step before you run the setup script on each server or virtual machine.

You cannot change volumes for System services after using the run script or after starting the product service with systemd.

By default, each of the System services is configured not to use volumes for storage (each service uses the bind-mount option). If you want to change this configuration, do so now, before running the product startup scripts.

TipSystem services typically do not store a lot of data, so you should favor keeping the default bind-mount setting for them.

You configure volumes for HCP for cloud scale services later when using the deployment wizard.

To configure volumes for the System services:

Procedure

  1. On each server or virtual machine that is to be an HCP for cloud scale instance, use a text editor to open the file install_path/hcpcs/config/volume.config.

    This file contains information about the volumes used by the System services. For each volume, the file contains lines that specify the following:
    • The name of the volume:

      com.hds.ensemble.plugins.service.service_name.volume_name=volume_name

      NoteDo not edit the volume names. The default volume name values contain variables (SERVICE_PLUGIN_NAME and INSTANCE_UUID) that ensure that each volume gets a unique name.
    • The volume driver that the volume uses:

      com.hds.ensemble.plugins.service.service_name.volume_driver=[volume_driver_name | bind-mount]

    • The configuration options used by the volume driver. Each option is listed on its own line:

      com.hds.ensemble.plugins.service.service_name.volume_driver_opt_option_number=volume_driver_option_and_value

      For example, these lines describe the volume that the Admin-App service uses for storing its logs:
      com.hds.ensemble.plugins.service.adminApp.log_volume_name=SERVICE_PLUGIN_NAME.INSTANCE_UUID.log
      com.hds.ensemble.plugins.service.adminApp.log_volume_driver=bind-mount
      com.hds.ensemble.plugins.service.adminApp.log_volume_driver_opt_1=hostpath=/home/hcpcs/log/com.hds.ensemble.plugins.service.adminApp/
  2. For each volume that you want to configure, you can edit the following:

    • The volume driver for the volume to use. To do this, replace bind-mount with the name of the volume driver you want.

      Volume drivers are provided by Docker and other third-party developers, not by the HCP for cloud scale system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.

    • On the line that contains _opt, the options for the volume driver.

      For information about the options you can configure, see the documentation for the volume driver that you're using.

      CautionOption/value pairs can specify where data is written in each volume. These considerations apply:
      • Each volume that you can configure here must write data to a unique location.
      • The SERVICE_PLUGIN and INSTANCE_UUID variables cannot be used in option/value pairs.
      • Make sure the options and values you specify are valid. Incorrect options or values can cause system deployment to fail or volumes to be set up incorrectly. For information on configuration, see the volume driver's documentation.
      TipCreate test volumes using the command docker volume create with your option/value pairs. Then, to test the volumes you've created, run the command docker run hello-world with the option --volume.
These lines show a service that has been configured to use the local-persist volume driver to store data:
com.hds.ensemble.plugins.service.marathon.data_volume_name=SERVICE_PLUGIN_NAME.INSTANCE_UUID.data
com.hds.ensemble.plugins.service.marathon.data_volume_driver=local-persist
com.hds.ensemble.plugins.service.marathon.data_volume_driver_opt_1=mountpoint=/home/hcpcs/data/com.hds.ensemble.plugins.service.marathon/
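
As a concrete sketch of the Tip above, assuming the third-party local-persist volume driver is installed and using a hypothetical volume name and mountpoint:

docker volume create --driver local-persist --opt mountpoint=/home/hcpcs/data/test-volume test-volume
docker run --rm --volume test-volume:/test-data hello-world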

Run the setup script on each server or virtual machine

Before you begin

Note
  • When installing a multi-instance system, make sure you specify the same list of master instance IP addresses on every instance that you are installing.
  • When entering IP address lists, do not separate IP addresses with spaces. For example, the following is correct:

    sudo install_path/hcpcs/bin/setup -i 192.0.2.4 -m 192.0.2.0,192.0.2.1,192.0.2.3

On each server or virtual machine that is to be a system instance:

Procedure

  1. Run the script setup with the applicable options:

    -i <ip_address>: The external network IP address for the instance on which you're running the script. If not specified, this value is discovered automatically.
    -I <ip_address>: The internal network IP address for the instance on which you're running the script. If not specified, this value is the same as the external IP address.
    -m <ip_address_list>: Comma-separated list of external network IP addresses of each master instance.
    -M <ip_address_list>: Comma-separated list of internal network IP addresses of each master instance.
    -d: Attempts to automatically discover the real master list from the provided masters.
    --hci_uid UID: Sets the desired user ID (UID) for the HCI USER at install time only.
    --hci_gid GID: Sets the desired group ID (GID) for the HCI GROUP at install time only.
    --mesos_uid UID: Sets the desired UID for the MESOS USER at install time only.
    --mesos_gid GID: Sets the desired GID for the MESOS GROUP at install time only.
    --haproxy_uid UID: Sets the desired UID for the HAPROXY USER at install time only.
    --haproxy_gid GID: Sets the desired GID for the HAPROXY GROUP at install time only.
    --zk_uid UID: Sets the desired UID for the ZOOKEEPER USER at install time only.
    --zk_gid GID: Sets the desired GID for the ZOOKEEPER GROUP at install time only.
    ImportantEach of the UID and GID values must be greater than 1000, less than or equal to 65533, and the same on all nodes in a cluster.
    Use the following guidelines to determine which options to use:

    • Multiple instances, single network type for all services: either -i and -m, or -I and -M.
    • Multiple instances, internal network for some services and external for others: all of -i, -I, -m, and -M.
    • Single instance, single network type for all services: either -i or -I.
    • Single instance, internal network for some services and external for others: both -i and -I.

Results

NoteIf the terminal displays Docker errors when you run the setup script, ensure that Docker is running.
The following example sets up a single-instance system that uses only one network type for all services:

sudo install_path/hcpcs/bin/setup -i 192.0.2.4

To set up a multi-instance system that uses both internal and external networks, type the command in this format:

sudo install_path/hcpcs/bin/setup -i external_instance_ip -I internal_instance_ip -m external_master_ips_list -M internal_master_ips_list

For example:

sudo install_path/hcpcs/bin/setup -i 192.0.2.4 -I 10.236.1.0 -m 192.0.2.0,192.0.2.1,192.0.2.3 -M 10.236.1.1,10.236.1.2,10.236.1.3

The following examples show the setup commands for a four-instance system. Each command is entered on a different server or virtual machine that is to be a system instance. The resulting system contains three master instances and one worker instance and uses both internal and external networks.

  • Instance with internal IP 192.0.2.1 and external IP 10.236.1.1 (master):
    sudo install_path/hcpcs/bin/setup -I 192.0.2.1 -i 10.236.1.1 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
  • Instance with internal IP 192.0.2.2 and external IP 10.236.1.2 (master):
    sudo install_path/hcpcs/bin/setup -I 192.0.2.2 -i 10.236.1.2 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
  • Instance with internal IP 192.0.2.3 and external IP 10.236.1.3 (master):
    sudo install_path/hcpcs/bin/setup -I 192.0.2.3 -i 10.236.1.3 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
  • Instance with internal IP 192.0.2.4 and external IP 10.236.1.4 (worker):
    sudo install_path/hcpcs/bin/setup -I 192.0.2.4 -i 10.236.1.4 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3

Start the application on each server or virtual machine

On each server or virtual machine that is to be a system instance:

Procedure

  1. Start the application by running the script run, using whatever method you usually use to run scripts.

    ImportantEnsure that the method you use can keep the run script running and can automatically restart it in the event of a server restart or other availability event.

Results

After the service starts, the server or virtual machine automatically joins the system as a new instance.

Here are some examples of how you can start the script:

  • You can run the script in the foreground:

    sudo install_path/product/bin/run

    When you run the run script this way, the script does not automatically complete, but instead remains running in the foreground.

  • You can run the script as a service using systemd:
    1. Copy the product .service file to the appropriate location for your OS. For example:

      cp install_path/product/bin/product.service /etc/systemd/system

    2. Enable and start the product.service service:
      sudo systemctl enable product.service
      sudo systemctl start product.service

Optional: Configure NTP

If you are installing a multi-instance system:

Procedure

  1. Configure NTP (network time protocol) so that each instance uses the same time source.

    For information on NTP, see http://support.ntp.org/.
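
    For example, on RHEL-based instances that use chronyd, pointing every instance at the same time source might look like this (the server name is hypothetical):

    # In /etc/chrony.conf on every instance:
    server ntp.example.com iburst

    # Then restart the time service:
    sudo systemctl restart chronyd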

Use the service deployment wizard

After creating all of your instances and starting HCP for cloud scale, use the service deployment wizard. This wizard runs the first time you log in to the system.

To run the service deployment wizard:

Procedure

  1. Open a web browser and go to https://instance_ip_address:8000.

    The Deployment Wizard starts.
  2. Set and confirm the password for the main admin account.

    ImportantDo not lose or forget this password.
    When you have defined the password, click Continue.
  3. On the next page of the deployment wizard, type the cluster host name (as a fully qualified domain name in lowercase ASCII letters) in the Cluster Hostname/IP Address field, then click Continue.

    Omitting this can cause links in the System Management application to function incorrectly.
  4. On the next page of the deployment wizard, confirm the cluster topology. Verify that all the instances that you expect to see are listed and that their type (Master or Worker) is as you expect.

    If some instances are not displayed, in the Instance Discovery section, click Refresh Instances until they appear. When you have confirmed the cluster topology, click Continue.
  5. On the next page of the deployment wizard, confirm the advanced configuration settings of services.

    ImportantIf you decide to reconfigure networking or volume usage for services, you must do so now, before deploying the system.
    When you have confirmed the configuration settings, click Continue.
  6. On the last page of the deployment wizard, to deploy the cluster, click Deploy Cluster.

    If your network configuration results in a port collision, deployment stops and the deployment wizard notifies you which port is at issue. If this happens, edit the port numbers and try again. After a brief delay, the deployment wizard displays the message "Starting deployment" and instances of services are started.
  7. When the deployment wizard is finished, it displays the message "Setup Complete." Click Finish.

    The HCP for cloud scale Applications page opens.

    (Screen capture: HCP for cloud scale Applications page, showing the applications you can choose from)

Results

Service instances are deployed and you can now configure storage components.
NoteIf you configured the System services networking incorrectly, the System Management application might not appear as an option on the Applications page. This can happen, for example, if the network.config file is not identical on all instances. For error information, view the file install_path/hcpcs/config/cluster.config or the output logged by the run script.

To fix this issue, do the following:

  1. Stop the run script, using whatever method you're currently using to run it.
  2. Run this command to stop all HCP for cloud scale Docker containers on the instance:

    sudo install_path/hcpcs/bin/stop

  3. Delete the contents of the folder install_path/hcpcs from all instances.
  4. Delete any Docker volumes created during the installation:

    docker volume rm volume-name

  5. Begin the installation again from the step where you unpack the installation package.
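
Put together, the cleanup on one instance might look like the following. This is a sketch only; it assumes install_path is /opt/hcpcs and that volume-name is a volume reported by docker volume ls:

  sudo /opt/hcpcs/bin/stop
  sudo rm -rf /opt/hcpcs/*
  docker volume ls
  docker volume rm volume-name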
NoteThe following messages indicate that the deployment process failed to initialize a Metadata Gateway service instance:
  • If the deployment process repeatedly tries and fails to reach a node, it displays this message: "Failed to initialize all MetadataGateway instances. Please re-deploy the system."
  • If the deployment process detects an existing Metadata Gateway partition on a node, it displays this message: "Found existing metadata partitions on nodes, please re-deploy the system."
If you see either message, you can't resolve the issue by clicking Retry. Instead, you must reinstall the HCP for cloud scale software.

Optional: Configure networks for services

To change networking settings for the HCP for cloud scale services:

Procedure

  1. On the Advanced Configuration page, select the service to configure.

  2. On the Network tab:

    1. Configure the ports that the service should use.

      NoteIf you reconfigure service ports, make sure that each port value you assign is unique across all services, both System services and HCP for cloud scale services.
    2. For each service, specify the network, either Internal or External, to which the service should bind.

      NoteBy default, the HCP for cloud scale services have the External network selected and the System services have the Internal network selected.

      If you're only using a single network, you can leave these settings as they are. This is because all system instances are assigned both internal and external IP addresses in HCP for cloud scale; if you're only using a single network type, the internal and external IP addresses for each instance are identical.

Optional: Configure volumes for services

To change volume usage:

Procedure

  1. On the Advanced Configuration page, select a service to configure.

  2. Click the Volumes tab. This tab displays the system-managed volumes that the service supports. By default, each built-in service has both Data and Log volumes.

  3. For each volume, provide Docker volume creation information:

    1. In the Volume Driver field, specify the name of the volume driver that the volume should use. To configure the volume not to use any volume driver, specify bind-mount, which is the default setting.

      NoteVolume drivers are provided by Docker and other third-party developers, not by the HCP for cloud scale system itself. For information on volume drivers, their capabilities, and their valid configuration settings, see the applicable Docker or third-party developer's documentation.
    2. In the Volume Driver Options section, in the Option and Value fields, specify any optional parameters and their corresponding values for the volume driver:

      • If you're using the bind-mount setting, you can edit the value for the hostpath option to change the path where the volume's data is stored on each system instance. However, this must be a path within the HCP for cloud scale installation folder.
      • If you're using a volume driver:
        1. Click the trashcan icon to remove the default hostpath option. This option applies only when you are using the bind-mount setting.
        2. Type the name of a volume driver option in the Option field. Then type the corresponding parameter for that option in the Value field.
        3. Click the plus-sign icon to add the option/value pair.
        4. Repeat this procedure for each option/value pair you want to add.

      Option/value pairs can specify where data is written to in each volume. These considerations apply:

      • Each service instance must write its data to a unique location. A unique location can be a file system or a unique path on a shared external storage server.

        In this illustration, green arrows show acceptable configurations and red arrows show unacceptable configurations where multiple service instances are writing to the same volume, or multiple volumes are backed by the same storage location:

        (Figure: acceptable and unacceptable volume configurations)

      • For persistent (that is, non-floating) services, favor using the ${container_inst_uuid} variable in your option/value pairs. For persistent services, this variable resolves to a value that's unique to each service instance.

        This is especially useful if the volume driver you're using is backed by a shared server. By providing a variable that resolves to a unique value, the volume driver can use the resolved variable to create unique directories on the shared server.

        However, some volume drivers, such as Docker's local volume driver, do not support automatic folder creation. If you're using such a volume driver, you need to create the volume folders yourself. For an example of how to handle this, see the following Docker local volume driver example.

      • Floating services do not support volumes that are backed by shared servers, because floating services do not have access to variables that resolve to unique values per service instance.
      • Make sure the options and values you specify are valid. Options or values that are not valid can cause system deployment to fail or volumes to be set up incorrectly. For information on volumes, see the volume driver's documentation.
      TipCreate test volumes by using the command docker volume create with your option/value pairs. Then, to test a volume you created, mount it in a test container with a command such as docker run --volume volume-name:/mnt hello-world. See the example after this procedure.

      You can include these variables when configuring volume options:

      • ${install_dir} is the product installation folder.
      • ${data_dir} is equal to ${install_dir}/data.
      • ${log_dir} is equal to ${install_dir}/log.
      • ${volume_def_name} is the name of the volume you are configuring.
      • ${plugin_name} is the name of the underlying service plugin.
      • ${container_inst_uuid} is the UUID for the Docker container in which the service instance runs. For floating services, this is the same value for all instances of the service.
      • ${node_ip} is the IP address for the system instance on which the service is running. This cannot be used for floating services.
      • ${instance_uuid} is the UUID for the system instance. This cannot be used for floating services. For services with multiple types, this variable resolves to the same value for all instances of the service, regardless of their types.
  4. Repeat this procedure for each service that you want to configure.
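
As mentioned in the tip earlier in this procedure, you can test option/value pairs before deployment by creating a throwaway volume. The following sketch assumes Docker's local volume driver backed by NFS; the server address, export path, and volume name are examples only:

  docker volume create --driver local --opt type=nfs --opt o=addr=192.0.2.50,rw --opt device=:/exports/hcpcs-test test-volume
  docker run --rm --volume test-volume:/mnt hello-world
  docker volume inspect test-volume
  docker volume rm test-volume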

bind-mount configuration for Database service log volume

The built-in Database service has a volume called log, which stores the service's logs. The log volume has this default configuration:

  • Volume driver: bind-mount
  • Option: hostpath, Value: ${log_dir}/${plugin_name}/${container_inst_uuid}

With this configuration, after the system is deployed, logs for the Database service are stored at a unique path on each system instance that runs the Database service:

install_path/hcpcs/log/com.hds.ensemble.plugins.service.cassandra/service-instance-uuid

Docker local volume driver for Database service log volume

Alternatively, you can configure the Database service to use Docker's built-in local volume driver to store logs on an NFS server. To do this:

  1. Log in to your NFS server.
  2. Create a folder.
  3. Within that folder, create one folder for each of the instances in your system. Name each one using the instance IP address.
    NoteIn this example, you need to create these folders yourself because the local storage driver will not create them automatically.
  4. Back in the system deployment wizard, in the Volume Driver field, specify local.
  5. Specify these options and values:

    Option     Value
    type       nfs
    o          addr=nfs-server-ip,rw
    device     :/path-to-folder-from-step-2/${node_ip}

    With this configuration, each instance of the Database service stores its logs in a different folder on your NFS server.
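
    For example, if the folder you created in step 2 is /exports/hcpcs-logs and your instances use the back-end addresses from the earlier setup examples (assumptions; substitute your own path and the addresses that ${node_ip} resolves to on your system), the per-instance folders could be created on the NFS server like this:

      sudo mkdir -p /exports/hcpcs-logs/10.236.1.1 /exports/hcpcs-logs/10.236.1.2 /exports/hcpcs-logs/10.236.1.3 /exports/hcpcs-logs/10.236.1.4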

Deploying the system using CLI commands

As an alternative to using the service deployment wizard, you can use CLI commands to deploy service instances onto all instances of the system.

Before you begin

These procedures require local access or the ability to establish an SSH session to the system.

To deploy the HCP for cloud scale system:

Procedure

  1. Log in to an HCP for cloud scale instance.

  2. Go to the install_path/cli/admin folder.

    cd /opt/hcpcs/cli/admin
  3. Use the command setupAdminUser to set the password for the main admin account:

    ./admincli -k false -c setupAdminUser --pm-password password
    ImportantDo not lose or forget this password.
  4. Use the command editSecuritySettings to set the cluster host name.

    ./admincli -c editSecuritySettings --ssm-cluster-hostname=cluster_name -u admin -p password

    Type a lowercase ASCII FQDN. Omitting this step can cause links in the System Management application to function incorrectly.
  5. Use the command queryServices to display the default configuration values, and save the output to a file:

    ./admincli -c queryServices --sqrm-is-recommend true --sqrm-requested-details serviceInstances, config --sqrm-service-types product -u admin -p password > /file_path/config_filename.txt

    An example of a configuration file location and name is /tmp/default_config.txt.
  6. Optional: If needed, use a text editor to modify the configuration file config_filename.txt.

  7. Use the command updateServiceConfig to start deployment using the values in the configuration file:

    ./admincli -c updateServiceConfig --service-update-model /file_path/config_filename.txt -u admin -p password
    NoteIf a port is already in use this step fails and an error message is displayed listing the ports in use. Edit the configuration file to change the port and repeat this step.
  8. Use the command listScaleTasks to monitor the progress of deployment until all services are deployed (the "status" of each task is "Complete"):

    ./admincli -c listScaleTasks -u admin -p password
    TipYou can focus on the status messages with a command such as this:

    ./admincli -c listScaleTasks -u admin -p password | grep status

    NoteIf this step fails, log in to the HCP for cloud scale system using a browser; the service deployment wizard is displayed. Click Retry.
  9. Use the command setupComplete to finalize deployment:

    ./admincli -c setupComplete -u admin -p password
    NoteIf this step fails with the message Must be in state "setup" to complete setup, wait for a few seconds and repeat this step.

Create an owner for new files

After installation, create a user as owner of the newly installed files.

The files installed for HCP for cloud scale are created with an owner user ID (UID) of 10001. It's best for all files to have a valid owner, so you should create a user account (such as hcpcs) with a UID of 10001 to own the files.

CautionDo not try to change the file owner to the UID of an existing user.

To create a file owner:

Procedure

  1. Create the user account by typing the command sudo useradd -u 10001 account

    where account is the name of the user account (for example, hcpcs).
  2. Verify the user account by typing the command id -u account

    The system displays the user account UID.
  3. Add a password to the user account by typing the command sudo passwd account

    It's best to use a strong password.
    1. When prompted, type the user account password.

    2. When prompted, confirm the user account password.
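
For example, using the account name hcpcs:

  sudo useradd -u 10001 hcpcs
  id -u hcpcs
  sudo passwd hcpcs

The id command should display 10001.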

Results

You have created a user account that owns the HCP for cloud scale files.

Optional: Verify the created volumes

Before you begin

If you configured the service volumes to use volume drivers, use these commands to list and view the Docker volumes created on all instances in the system:

docker volume ls

docker volume inspect volume_name
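
To review every volume on an instance at once, you can combine the two commands (a minimal example; it assumes at least one Docker volume exists on the instance):

docker volume inspect $(docker volume ls -q)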

If volumes were created incorrectly, you need to redo the system installation:

Procedure

  1. Stop the run script, using whatever method you're currently using to run it.

  2. Stop all HCP for cloud scale Docker containers on the instance:

    sudo install_path/hcpcs/bin/stop
  3. Delete the contents of the folder install_path/hcpcs from all instances.

  4. Delete any Docker volumes created during the installation:

    docker volume rm volume_name
  5. Begin the installation again from the point where you unpack the installation package.

Optional: Distribute services among system instances

By default, when you install and deploy a multi-instance system, the system automatically runs each service (except Dashboard) on its normal number of instances.

However, if you've installed more than four instances, some instances may not be running any services at all. As a result, these instances are under-used. You should manually distribute services to run across all instances in your system.

Moving and scaling floating services

For floating services, instead of specifying the specific instances on which the service runs, you can specify a pool of eligible instances, any of which can run the service.

Moving and scaling services with multiple types

When moving or scaling a service that has multiple types, you can simultaneously configure separate rebalancing for each type.

Best practices

Here are some guidelines for distributing services across instances:
  • Avoid running multiple services with high service unit costs together on the same instance.
  • On master instances, avoid running any services besides those classified as System services.

Considerations

  • Instance requirements vary from service to service. Each service defines the minimum and maximum number of instances on which it can run.
  • You cannot remove a service from an instance if doing so causes or risks causing data loss.
  • Service relocation might take a long time to complete and can impact system performance.

Troubleshooting

You might encounter these issues during installation.

Service doesn't start

Rarely, a system deployment, service management action, or system update fails because a service fails to start. When this happens, the System Management application is inaccessible from the instance where the failure occurred.

The logs in the watchdog-service log folder contain this error:

Error response from daemon: Conflict. The name "service-name" is already in use by container Docker-container-id. You have to remove (or rename) that container to be able to reuse that name.

To resolve this issue, restart the Docker service on the instance where the service failed to start. For example, if you are using systemd to run Docker, run:

systemctl restart docker
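
You can then confirm that Docker and its containers are running again, for example:

systemctl status docker
docker ps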

After restarting Docker, try the system deployment, service management action, or system update again.

Relocating services

To manually relocate a service, in the Admin App:

Procedure

  1. Select Services.

    The Services page opens, displaying the services and system services.
  2. Select the service that you want to scale or move.

    Configuration information for the service is displayed.
  3. Click Scale, and if the service has more than one type, select the instance type that you want to scale.

  4. The next step depends on whether the service is floating or persistent (non-floating).
  5. If the service is a floating service, you are presented with options for configuring an instance pool. For example:

    (Screen capture: Scale tab for a floating service, showing the instance pool)

    1. In the box Service Instances, specify the number of instances on which the service should be running at any time.

    2. Configure the instance pool:

      • For the service to run on any instance in the system, select All Available Instances.

        With this option, the service can be restarted on any instance in the instance pool, including instances that were added to the system after the service was configured.

      • For the service to run on a specific set of instances, clear All Available Instances. Then:
        • To remove an instance from the pool, select it from the list Instance Pool, on the left, and then click Remove Instances.
        • To add an instance to the pool, select it from the list Available Instances, on the right, and then click Add Instances.
  6. If the service is a persistent (non-floating) service, you are presented with options for selecting the specific instances that the service should run on. Do one or both of these, then click Next:

    (Screen capture: Scale tab for a persistent service, showing the Selected Instances and Available Instances lists)

    • To remove the service from the instances it's currently on, select one or more instances from the list Selected Instances, on the left, and then click Remove Instances.
    • To add the service to other instances, select one or more instances from the list Available Instances, on the right, and then click Add Instances.
  7. Click Update.

    The Processes page opens, and the Service Operations tab displays the progress of the service update as "Running." When the update finishes, the service shows "Complete."

Next steps

After reconfiguration, the service might take a few minutes to appear on the Services page.

Configure the system for your users

After your system is up and running, you can begin configuring it for your users.

For information about these procedures, see the Administration Guide or the online help that's available from the HCP for cloud scale application.

The overview of tasks is:

Procedure

  1. Configure the connection to an IdP and create user accounts.

  2. Define storage components.

  3. Assign a name for your HCP for cloud scale cluster.

    The host name is required for access to the System Management application and the S3 API.
  4. Configure DNS servers to resolve both the fully qualified domain name for your cluster and the wildcard *.hcpcs_cluster_name (see the example after this list).

  5. Update Secure Socket Layer (SSL) certificates for the system, storage components, or synchronized buckets.

  6. If your system uses encryption, enable it.

  7. Obtain S3 authorization credentials.
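
For step 4, the DNS records might look like the following BIND-style sketch. The host name and address are examples only, and the exact syntax depends on your DNS software:

  hcpcs.example.com.    IN  A  192.0.2.10
  *.hcpcs.example.com.  IN  A  192.0.2.10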

