Services
Services perform functions essential to the health or functionality of the system. For example, the Metrics service stores and manages system events, while the Watchdog service ensures that other services remain running.
Services are grouped into these categories depending on what actions they perform:
•Product services — Enable product functionality. For example, the Index service performs functions that let the system search stored data. You can scale, move, and reconfigure these services.
•System services — Maintain the health and availability of the system. You cannot scale, move, or reconfigure these services.
Some System services run only on master instances. See About master and worker instances.
Some services are classified as applications. These are the services with which users interact. Services that are not applications typically interact only with other services.
Services run on instances in the system. Most services can run simultaneously on multiple instances; that is, you can have multiple service instances of the same service running on multiple system instances. Some services run on only one instance.
Each service has a recommended and required number of instances on which it should run. For information, see Service list.
You can configure where Product services run, but not System services.
Some services can have multiple service instance types. That is, a service can run on two system instances, but those two service instances can perform different functions from one another.
For example, the Clustered-File-System service, which provides other services with locations for storing data, has three instance types: Name Node, Journal Node, and Data Node.
If a service supports floating, you have flexibility in configuring where new instances of that service are started when service instances fail.
Non-floating (or persistent) services run on the specific instances that you specify. If one of those service instances fails, the system does not automatically bring up a new instance of that service on another system instance.
With a service that supports floating, you specify a pool of eligible system instances and the number of service instances that should be running at any time. If a service instance fails, the system brings up another one on one of the system instances in the pool that doesn't already have an instance of that service running.
For services with multiple types, the ability to float can be supported on a per-type basis.
For information on changing the instance pool for a floating service, see Moving and scaling services.
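The failover rule above can be sketched as a simple selection step: when a floating service instance fails, a replacement starts on a pool member that isn't already running that service. This is an illustrative model, not the system's actual scheduler; the instance names are hypothetical.

```python
def pick_replacement(pool, running):
    """Return a pool instance not already running the service, or None."""
    for instance in pool:
        if instance not in running:
            return instance
    return None  # every pool member already runs the service


pool = ["inst-a", "inst-b", "inst-c"]   # eligible system instances
running = {"inst-a"}                    # instances still running the service
print(pick_replacement(pool, running))  # -> inst-b
```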
Each service binds to a number of ports and to one type of network, either internal or external. Networking for each service is configured during system installation and cannot be changed once a system is running. For more information, see Networking.
Each service costs a certain number of service units to run. This cost indicates how computationally expensive one service is compared to another. Your system license limits the number of service units that you can run.
Service unit costs apply for each instance of a service that's running. For example, say that a service has a cost of one service unit and is running on three instances in the system. In this case, the service counts for three service units against your licensed limit.
Recommended service unit limits

The system makes recommendations on the maximum number of service units that you should run on each instance. An instance running more than the recommended number of service units is likely to experience decreased performance.
The recommended service unit limits are based on whether an instance meets the recommended hardware requirements:
•If an instance meets the recommended hardware requirements, you can run up to 180 service units on that instance.
•If an instance does not meet the recommended hardware requirements, you can run up to 100 service units on that instance.
For information on recommended hardware requirements, see Hardware resources.
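The service unit arithmetic above can be sketched as follows: each service's per-instance cost times its running instance count adds up against the licensed limit, and the 180/100 per-instance recommendations come from this section. The service names and costs in the example are hypothetical.

```python
RECOMMENDED_LIMIT = 180  # instance meets recommended hardware requirements
REDUCED_LIMIT = 100      # instance does not meet them


def licensed_usage(services):
    """Total service units: per-instance cost times instance count."""
    return sum(cost * instances for cost, instances in services.values())


# Hypothetical layout: cost 1 on 3 instances counts as 3 units (as in
# the example above); cost 10 on 1 instance counts as 10 units.
services = {"ServiceA": (1, 3), "ServiceB": (10, 1)}
print(licensed_usage(services))  # -> 13
```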
Trademarks, Legal disclaimer, Third-party software in this documentation
© 2017 Hitachi Vantara Corporation. All rights reserved.
Service list
The table below describes the services that your system runs. For each service, the table lists:
•Configuration settings — The settings you can configure for the service. For more information, see Configuring service settings.
•RAM needed per instance — The amount of RAM that, by default, the service needs on each instance on which it's deployed. For all services except for System services and Workflow Agent, this value is also the default Docker Container Memory value for the service.
• Number of instances — Shows both:
oThe required number of instances on which a service must run for the system to function properly.
oThe recommended number of instances that you should run a service on. These are recommended minimums; if your system includes more instances, you should take advantage of them by running services on them.
•Service unit cost per instance — The number of service units that it costs to run the service on one instance. This cost indicates how computationally expensive one service is compared to another.
For more information, see Service units.
•Whether the service is persistent or supports floating.
For information, see Floating services.
•Whether the service has a single type or multiple.
For information, see Services with multiple types.
Note: For services with both the Container Memory and Max Heap Size settings, the Container Memory setting should be larger than the Max Heap Size setting.

Each service runs within its own Docker container. For every service below, you can configure these settings for the service's container:
•Container Memory — The hard memory limit for the service's Docker container, in megabytes (MB).
•CPU — The relative CPU usage weight for the service's Docker container. Generally, a higher value means that the container receives more CPU resources than other processes (including other service Docker containers) running on the instance.

Clustered-File-System (https://hortonworks.com/apache/hdfs)
Hadoop Distributed File System. A distributed file system used for storing data.
Important: The Clustered-File-System service is offered as a technology preview. Do not use it on a production system.
Additional settings:
•Heap size — The amount of memory to allocate for the Java heap for each instance of the service. Valid values are integers representing a number of bytes, optionally with a suffix of k (for kilobytes), m (for megabytes), or g (for gigabytes). The defaults differ for each service instance type:
oName Node — 1024.0
oData Node — 751.0
oJournal Node — 256.0

Dashboard (https://www.elastic.co/products/kibana)
Visualizes information stored in Elasticsearch indexes.
Additional settings:
•Node options — A list of Node.js configuration options to send to the Dashboard service.

Database
Decentralized database that can be scaled across large numbers of hardware nodes.
Additional settings:
•Max Heap Size — The maximum amount of memory to allocate to the Java heap for each instance of the service. Valid values are integers representing a number of bytes, optionally with a suffix of k (for kilobytes), m (for megabytes), or g (for gigabytes). The default is 1800m.
•Heap New Size — The size of the young generation within the Java heap for the service. Valid values are integers representing a number of bytes, optionally with a suffix of k, m, or g. The default is 512m.

Index (http://lucene.apache.org/solr/)
Data indexing and search platform.
Additional settings:
•Heap size — The amount of memory to allocate for the Java heap for each instance of the service. Valid values are integers representing a number of bytes, optionally with a suffix of k, m, or g. The default is 1800m.
•Settings for each index:
oRebalance shards — If enabled, the index shards are automatically rebalanced across the system instances that run the Index service.
oIndex protection level — The total number of copies to create for the index. A value of 1 means the index has no replicas, and therefore no redundancy. The maximum value you can specify is equal to the number of instances that run the Index service. For more information, see Index protection level.

Logging (https://www.elastic.co/products/logstash)
Collection engine for event data. Can perform transformations on the data it collects and then send that data to a number of outputs.
Additional settings:
•Heap settings — The amount of memory to allocate for the Java heap for each instance of the service. Valid values are integers representing a number of bytes, optionally with a suffix of k, m, or g. The default is 512m.

Stream processing platform for handling real-time data streams.
Additional settings:
•Heap settings — The amount of memory to allocate for the Java heap for each instance of the service. Valid values are any valid Java heap settings. The default is -Xmx1800m -Xms512m.

Metrics
Data indexing and search platform.
Additional settings:
•Heap size — The amount of memory to allocate for the Java heap for each instance of the service. Valid values are integers representing a number of bytes, optionally with a suffix of k, m, or g. The default is 1024m.
•For each internal index that this service manages, you can configure:
oIndex protection level — The number of additional replicas to maintain for each index shard. The total number of replicas for each shard is equal to this setting plus 1. For example, a value of 3 means that the index has 4 total copies of each shard. The maximum value you can specify is equal to the number of instances in the system minus 1.
Note: For the Metrics service, the Container Memory setting should be at least 500 MB larger than the JVM Heap size.

Scheduling (https://mesos.github.io/chronos/)
Job scheduler for Apache Mesos.
Additional settings:
•Heap settings — The amount of memory to allocate for the Java heap for each instance of the service. Valid values are integers representing a number of bytes, optionally with a suffix of k, m, or g. The default is 512m.

The following System services have no configurable settings:
•Admin-App — Runs the Administration App.
•Cluster-Coordination (Mesos master, mesos.apache.org) — Hardware resource management solution for distributed systems.
•Cluster-Worker (Mesos slave, mesos.apache.org) — Hardware resource management solution for distributed systems.
•Network-Proxy (HAProxy, haproxy.org) — Load balancer for TCP and HTTP-based applications.
•Sentinel — Runs internal system processes and monitors the health of the other services.
•Service-Deployment (Marathon, https://mesosphere.github.io/marathon/) — Orchestration platform for Mesos applications.
•Synchronization (Apache ZooKeeper, https://zookeeper.apache.org/) — Coordinates configuration settings and other information between a number of distributed services.
•Watchdog — Monitors the other System services and restarts them if necessary. Also responsible for initial system startup.
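The heap settings above accept an integer byte count with an optional k, m, or g suffix (for example, 1800m), and the note above requires Container Memory (in MB) to exceed Max Heap Size. A minimal sketch of parsing that format and checking the rule; this is an illustration, not the system's own validation code.

```python
def parse_heap(value):
    """Return the number of bytes for a heap setting like '512m' or '1g'."""
    multipliers = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip().lower()
    if value and value[-1] in multipliers:
        return int(value[:-1]) * multipliers[value[-1]]
    return int(value)  # plain integer: a number of bytes


def container_memory_ok(container_memory_mb, max_heap):
    """Container Memory (MB) should be larger than Max Heap Size."""
    return container_memory_mb * 1024 ** 2 > parse_heap(max_heap)


print(parse_heap("1800m"))                 # -> 1887436800
print(container_memory_ok(2048, "1800m"))  # -> True
```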
Clustered-File-System service considerations
The Clustered-File-System service provides storage space to other services.
Important: The Clustered-File-System service is offered as a technology preview. Do not use it on a production system.
Multiple service instance types
Each instance of the Clustered-File-System service can be one of these types:
•Data Node — Stores data.
•Name Node — Tracks the data stored in Data Nodes.
•Journal Node — Tracks changes made by the Name Nodes.
You can run multiple different Clustered-File-System service instance types on a system instance. For example, a single system instance can run both a Journal Node instance and a Data Node instance of the Clustered-File-System service.
You cannot run multiple Clustered-File-System service instances of the same type on a system instance. For example, a single system instance cannot run two Data Node instances.
Deployment modes
The Clustered-File-System service can be deployed in either of these modes:
•High Availability (HA) Mode — In HA mode, the Clustered-File-System service can retain its stored data in the event that a Name Node instance fails or becomes corrupt.
•Non-HA Mode — In Non-HA mode, the Clustered-File-System service has a single Name Node instance. If this instance fails or becomes corrupt, all data stored by the Clustered-File-System service is lost.
The service's deployment mode is determined based on the number of service type instances you have when you initially deploy the service. Once the deployment mode is set, you cannot change it without losing all data stored by the Clustered-File-System service.
Initial deployment requirements: HA mode
To deploy the Clustered-File-System service in HA mode, you need at least 3 system instances.
Upon initial deployment of the service, you need to configure the service to run:
•Exactly 2 Name Node instances
•Exactly 3 Journal Node instances
You can run any number of Data Node instances. You need at least 1 to be able to store data. For HA mode, the recommended minimum is 3.
Initial deployment requirements: Non-HA mode
To deploy the Clustered-File-System service in Non-HA mode, you need only 1 system instance.
Upon initial deployment of the service, you need to configure the service to run:
•Exactly 1 Name Node instance
•Zero Journal Node instances
You can run any number of Data Node instances. You need at least 1 to be able to store data.
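The initial deployment requirements above determine the mode. A sketch that encodes those rules (exactly 2 Name Nodes and 3 Journal Nodes for HA mode; exactly 1 Name Node and zero Journal Nodes for Non-HA mode; at least 1 Data Node either way to store data); illustrative only, not the system's own deployment logic.

```python
def deployment_mode(name_nodes, journal_nodes, data_nodes):
    """Classify a Clustered-File-System layout as HA or Non-HA."""
    if data_nodes < 1:
        raise ValueError("at least 1 Data Node is needed to store data")
    if name_nodes == 2 and journal_nodes == 3:
        return "HA"
    if name_nodes == 1 and journal_nodes == 0:
        return "Non-HA"
    raise ValueError("unsupported Name Node / Journal Node combination")


print(deployment_mode(2, 3, 3))  # -> HA (3 Data Nodes is the HA minimum)
print(deployment_mode(1, 0, 1))  # -> Non-HA
```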
Scaling and moving the service
Caution: Scaling Clustered-File-System such that you have zero Name Node instances causes the Clustered-File-System service to lose track of all data stored on its Data Node instances.
•When scaling or moving Name Node or Journal Node service instances, you need to perform only one operation at a time.
For example, to move a Name Node from one system instance to another, you need to:
a.Run a service operation to remove an instance of the Name Node service type.
b.Wait for the operation to finish.
c.Run a service operation to start a new instance of the Name Node service type.
d.Wait for the operation to finish.
•When running the Clustered-File-System service in HA mode, you need to have at least 2 Journal Node instances running at all times. Scaling the service such that you have zero or 1 Journal Node instances causes the service to become unresponsive.
•Avoid scaling or moving Clustered-File-System service instances if you have a currently running workflow that has a Preprocessing pipeline.
•There are no restrictions on the number of Data Node instances that you can scale or move at one time.
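The one-operation-at-a-time move procedure above can be sketched as strictly sequential steps. The run_service_operation and wait_for_completion callables here are hypothetical stand-ins for whichever interface (Administration App, CLI, or REST API) you actually use.

```python
def move_name_node(run_service_operation, wait_for_completion):
    """Move a Name Node by removing one instance, then adding one."""
    # a. Remove an instance of the Name Node service type.
    op = run_service_operation("remove", service_type="Name Node")
    # b. Wait for the operation to finish before doing anything else.
    wait_for_completion(op)
    # c. Only then start a new instance of the Name Node service type.
    op = run_service_operation("add", service_type="Name Node")
    # d. Wait again; the move is complete only after this finishes.
    wait_for_completion(op)


# Record the sequence with stub callables to show the ordering.
log = []
move_name_node(lambda action, service_type: log.append(action) or action,
               lambda op: log.append("wait"))
print(log)  # -> ['remove', 'wait', 'add', 'wait']
```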
Changing deployment modes
When you initially deploy the Clustered-File-System service, it is deployed in either HA or Non-HA mode, depending on the number of service instance types you initially configure the service to run.
You can change the Clustered-File-System deployment mode after the service is initially deployed; however, this requires a complete redeployment of the service.
Caution: Changing deployment modes causes the Clustered-File-System service to lose track of all data stored on its Data Node instances. Because of this, you should remove all existing Data Node instances and create new ones when changing the Clustered-File-System deployment mode.
To switch from one deployment mode to another:
1.Scale Clustered-File-System to run on zero instances in the system.
For information on scaling services, see Moving and scaling services.
2.Configure and run a service scale task to scale Clustered-File-System to the applicable number of service type instances.
Important: Name Node and Journal Node types must be scaled together in a single operation.
Viewing services
You can use the Administration App, the CLI, or the REST API to view the status of all services for the system.
Related topics:
Administration App instructions
To view the status of all services, in the Administration App, click on Monitoring > Dashboard > Services.
For each service, the page shows:
•The service name
•The service state. One of these:
oHealthy — The service is running normally.
oUnconfigured — The service has yet to be configured and deployed.
oDeploying — The system is currently starting or restarting the service. This can happen when:
–You move the service to run on a completely different set of instances.
–You repair a service.
For information on viewing the status of service operations, see Monitoring service operations.
oBalancing — The service is running normally, but performing some background maintenance operations.
oUnder-protected — In a multi-instance system:
–One or more of the instances on which a service is configured to run are offline.
–The service is configured to run on fewer than the recommended number of instances. For information, see Service list.
oFailed — The service is not running or the system cannot communicate with the service.
•Service Units — The total number of service units currently being spent to run this service. This value is equal to the service's service unit cost times the number of instances on which the service is running. For more information, see Service units.
•Avg CPU Usage — The current percentage CPU usage for the service across all instances on which it's running.
•Memory — The current RAM usage for the service across all instances on which it's running.
•Disk Used — The current total amount of disk space that the service is using across all instances on which it's running.
To view the detailed status for an individual service, click on the service on the Monitoring > Dashboard > Services page.
In addition to the information above, the page shows:
•Service unit cost — The number of service units required to run the service on one instance. For more information, see Service units.
•Service Instance Types — For services that have multiple types, the types that are currently running.
•Instances — A list of all instances on which the service is running.
•Instance Pool — For floating services, the instances that this service is eligible to run on. For more information, see Floating services.
•Network: [Internal|External] — Which network type this service uses to receive communications.
This section also displays a list of the ports that the service uses.
For more information, see Networking.
•Events — A list of all system events for the service.
Related CLI command(s)
getService
listServices
For information on running CLI commands, see CLI reference.
Related REST API method(s)
GET /services
GET /services/c8ca9d05-a3e5-43fe-b1de-bc0e3f8e38f3
For information on specific REST API methods, in the Administration App, click on the help icon. Then:
•To view the administrative REST API methods, click on Admin API.
•To view the API methods used for performing searches, click on Search API.
For general information about the administrative REST API, see REST API reference.
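As a sketch, the GET /services method listed above can be called with Python's standard library. The host, port, URL prefix, and bearer-token authentication shown here are assumptions; substitute the address and auth scheme your system actually uses.

```python
import json
import urllib.request


def build_services_request(base_url, token):
    # GET /services; the bearer-token header is an assumed auth scheme.
    return urllib.request.Request(
        base_url + "/services",
        headers={"Authorization": "Bearer " + token,
                 "Accept": "application/json"})


def list_services(base_url, token):
    """Fetch and decode the service list (performs a network call)."""
    with urllib.request.urlopen(build_services_request(base_url, token)) as resp:
        return json.load(resp)


# Build (but do not send) a request; hostname and port are hypothetical.
req = build_services_request("https://admin.example.com:9099/api/admin", "TOKEN")
print(req.full_url)  # -> https://admin.example.com:9099/api/admin/services
```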
Managing services
This section describes how you can reconfigure, restart, and otherwise manage the services running on your system.
Moving and scaling services
You can change a service to run on:
•Additional instances (for example, if you need improved service performance and availability)
•Fewer instances (for example, if you want to free up resources on an instance for running other services)
•A different set of instances (for example, if you are retiring the piece of hardware on which an instance is installed)
Important: Before attempting to scale the Clustered-File-System service, see Clustered-File-System service considerations.
For floating services, instead of specifying the specific instances on which the service runs, you can specify a pool of eligible instances, any of which can run the service. For more information, see Floating services.
When moving or scaling a service that has multiple types, you can simultaneously configure separate rebalancing operations for each type.
•Moving or scaling services can cause document failures during a workflow task. Before moving or scaling a service, you should either pause all running workflow tasks or wait for them to complete.
•Avoid running multiple services with high service unit costs together on the same instance.
Ideally, each of these services should run by itself on an instance:
oDatabase
oIndex
•On master instances, avoid running any services besides those classified as System services.
•To utilize your instances evenly, try to deploy a comparable number of service units on each instance. For more information, see Service units.
•You cannot remove a service from an instance if doing so would cause or risk causing data loss.
For example, you cannot remove an instance of the Index service if that service instance contains the only copy of one of your search indexes.
•Service relocation operations may take a long time to complete and may impact system performance while they are running.
•Instance requirements vary from service to service. Each service defines the minimum and maximum number of instances on which it can run.
For information on:
•Individual services and the number of instances they should run on, see Service list.
•Monitoring service relocation operations, see Monitoring service operations.
Tip: Use the All Available Instances option to have a floating service be eligible to run on any instance in the system, including any new instances added in the future.
Administration App instructions
To manually configure a service relocation operation:
1.Click on System Configuration.
2.Click on the Services panel.
3.Click on the Manage Services button.
4.Select one of the services that you want to scale or move. Then click on the Next button.
5.Click on the Configure option. Then click on the Next button.
6.On the Scale tab, if the service has more than one type, select the instance type that you want to scale.
7.If the service is a floating service, you are presented with options for configuring an instance pool:
a.In the Service Instances field, specify the number of instances on which the service should be running at any time.
b.Configure the instance pool:
–To have the service run on any instance in the system, select the All Available Instances option. With this option, the service can be restarted on any instance, including instances that were added to the system after the service was configured.
–To have the service run on a specific set of instances, deselect the All Available Instances option. Then:
•To remove an instance from the pool, select it from the Instance Pool list on the left. Then click on the Remove Instances button.
•To add an instance to the pool, select it from the Available Instances list on the right. Then click on the Add Instances button.
8.If the service is a non-floating service, you are presented with options for selecting the specific instances that the service should run on. Do one or both of these:
oTo remove the service from the instances it's currently on, select one or more instances in the lefthand list. Then click on the Remove Instances button.
oTo add the service to other instances, select one or more instances from the Available Instances list on the right. Then click on the Add Instances button.
Then click on the Next button.
9.Click on the Update Service button.
The system can check whether your services are currently deployed on the recommended minimum number of instances. If they aren't, the system can configure service scale operations for you.
To have the system do this for you:
1.Click on System Configuration.
2.Click on the Services panel.
3.Click on the Manage Services button.
4.Click on the Auto Scale option. Then click on the Next button.
The system examines your service layout to ensure that each service is running on the minimum recommended number of instances. If one or more services are not running on enough instances, the system automatically creates scale operations for them.
For example, if you have the Index service running on only 1 instance, the system configures an operation to scale the service to run on 3 instances.
5.Click on the Update Service button.
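The Auto Scale check above amounts to comparing each service's running instance count with its recommended minimum and planning a scale operation for any shortfall. A sketch with hypothetical counts; the real system performs this examination itself.

```python
def plan_auto_scale(layout, recommended_min):
    """Return {service: target count} for under-deployed services."""
    return {svc: recommended_min[svc]
            for svc, running in layout.items()
            if running < recommended_min[svc]}


layout = {"Index": 1, "Database": 3}       # current instance counts
recommended = {"Index": 3, "Database": 3}  # recommended minimums
print(plan_auto_scale(layout, recommended))  # -> {'Index': 3}
```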
Related CLI command(s)
updateServiceConfig
For information on running CLI commands, see CLI reference.
Related REST API method(s)
POST /services/configure
For information on specific REST API methods, in the Administration App, click on the help icon. Then:
•To view the administrative REST API methods, click on Admin API.
•To view the API methods used for performing searches, click on Search API.
For general information about the administrative REST API, see REST API reference.
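A sketch of building a POST /services/configure request with Python's standard library. The method path comes from this section, but the request-body fields shown are illustrative assumptions, not the documented schema; check the Admin API help for the actual payload format.

```python
import json
import urllib.request


def build_configure_request(base_url, token, payload):
    # POST /services/configure; auth header and body shape are assumptions.
    return urllib.request.Request(
        base_url + "/services/configure",
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"})


# Hypothetical payload: scale the Index service onto three instances.
payload = {"name": "Index", "instances": ["inst-a", "inst-b", "inst-c"]}
req = build_configure_request("https://admin.example.com:9099/api/admin",
                              "TOKEN", payload)
print(req.get_method())  # -> POST
```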
Configuring service settings
You can configure settings for some of the services that the system runs. For information on the settings you can configure for each service, see Service list.
Note: If you make an unwanted change to a service configuration, wait for the operation to finish before creating a new operation to correct the service configuration.
Administration App instructions
1.Click on System Configuration.
2.Click on the Services panel.
3.Click on the Manage Services button.
4.Click on the service you want to configure. Then click Next.
5.Click on the Configure panel.
6.On the Settings tab, configure the service.
For information on the settings available for each service, see Service list.
7.Click on the Update Service button.
Related CLI command(s)
updateServiceConfig
For information on running CLI commands, see CLI reference.
Related REST API method(s)
POST /services/configure
For information on specific REST API methods, in the Administration App, click on the help icon. Then:
•To view the administrative REST API methods, click on Admin API.
•To view the API methods used for performing searches, click on Search API.
For general information about the administrative REST API, see REST API reference.
Repairing services
If a service becomes slow, unresponsive, or shows a status of Failed, you can run a service operation to repair it. Repairing a service stops and restarts the service on each instance on which it's running.
For information on viewing service health and activity, see Monitoring services.
Important: Depending on which service you're repairing, parts of the system will be unavailable until the repair operation finishes.
To repair a service:
1.Click on System Configuration.
2.Click on the Services panel.
3.Click on the Manage Services button.
4.Select the service you want to repair and click on the Next button.
5.Select the Repair option and click on the Next button.
6.Click on the Update Service button.
Advanced services
The System Configuration > Services > Advanced Services page in the Administration App includes links to some of the underlying technologies that your system uses.
Caution: Do not make changes on any of these pages unless you know what you're doing. Improper changes can stop your system from functioning.