Service list
The list below describes the services that your system runs. For each service, the entry shows:
- Configuration settings: The settings you can configure for the service. For more information, see Configuring service settings.
- RAM needed per instance: The amount of RAM that, by default, the service needs on each instance on which it's deployed. For all services except for System services and Workflow-Agent, this value is also the default Docker Container Memory value for the service.
- Number of instances: Shows both:
  - The required number of instances on which a service must run for the system to function properly.
  - The recommended number of instances that you should run a service on. These are recommended minimums; if your system includes more instances, you should take advantage of them by running services on them.
- Service unit cost per instance: The number of service units that it costs to run the service on one instance. This cost indicates how computationally expensive one service is compared to another. For more information, see Service units.
- Whether the service is persistent or supports floating. For more information, see Floating services.
- Whether the service has a single type or multiple types. For more information, see Services with multiple types.
Each entry lists the service name and description, followed by the service's configuration settings and its properties.

Product services
These services perform functions related to the system's supported use cases. You can move, scale, and reconfigure these services.
Dashboard
https://www.elastic.co/products/kibana
Visualizes information stored in Elasticsearch indexes.
How it's used: Powers the advanced Dashboard Management service.
Note: This service is in the Unconfigured state by default.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
- Service-Specific Options
Properties:
- RAM needed per instance: 300 MB
- Number of instances:
- Service unit cost per instance: 5
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Database
Decentralized database that can be scaled across large numbers of hardware nodes.
How it's used: Stores system configuration data. Also stores document discovery and failure data for workflow tasks.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
- Service-Specific Options
- Advanced Options (WARNING: Changes to these settings affect the Database service. Please use with caution.)
Properties:
- RAM needed per instance: 2.4 GB
- Number of instances:
- Service unit cost per instance: 10
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Index
http://lucene.apache.org/solr/
Data indexing and search platform.
How it's used: The search engine that manages all internal search indexes.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
- Service-Specific Options:
  - Heap size: The amount of memory to allocate for the Java heap for each instance of the service. Valid values for this setting are integers representing a number of bytes. You can optionally specify suffixes of k (for kilobytes), m (for megabytes), or g (for gigabytes). The default is 1800m. (A short sketch of this value format follows this entry.)
  - Solr Health Monitoring Options: The configuration options used to monitor the Solr service.
- Settings for each index:
Properties:
- RAM needed per instance: 2 GB
- Number of instances:
  Notes:
- Service unit cost per instance: 25
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

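To make the heap-size value format above concrete (an integer byte count with an optional k, m, or g suffix, such as the 1800m default), here is a minimal Python sketch. The function is illustrative only, not part of the product, and assumes binary multiples (k = 1024 bytes), as is typical for JVM heap settings:

```python
# Illustrative only: parse a heap-size string such as "1800m" or "2g" into a
# byte count. Assumes binary multiples (k = 1024 bytes); the suffixes follow
# the description above.
def parse_heap_size(value):
    value = value.strip().lower()
    multipliers = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    if value and value[-1] in multipliers:
        return int(value[:-1]) * multipliers[value[-1]]
    return int(value)  # no suffix: the value is already a number of bytes

assert parse_heap_size("1800m") == 1800 * 1024 ** 2  # the Index service default
assert parse_heap_size("2g") == 2 * 1024 ** 3
```
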
Logging
https://www.elastic.co/products/logstash
Collection engine for event data. Can perform transformations on the data it collects and then send that data to a number of outputs.
How it's used: Transports system logs and metrics data to the Metrics service.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
- Service-Specific Options
Properties:
- RAM needed per instance: 700 MB
- Number of instances:
- Service unit cost per instance: 10
- Persistent or floating: Floating
- Supports volume configuration: Yes
- Single or multiple types: Single

Message Queue
Stream processing platform for handling real-time data streams.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
Properties:
- RAM needed per instance: 2 GB
- Number of instances:
- Service unit cost per instance: 5
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Metrics
Data indexing and search platform.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
- Service-Specific Options
Note: For the Metrics service, the Container Memory setting should be at least 500 MB larger than the JVM Heap size. (This rule is illustrated in the sketch after this entry.)
Properties:
- RAM needed per instance: 2000 MB
- Number of instances:
- Service unit cost per instance: 25
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

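The note above amounts to a simple sizing rule. A minimal sketch of that check, assuming both values are expressed in megabytes; the parameter names are illustrative, not product setting identifiers:

```python
# Illustrative check of the Metrics sizing rule described above:
# Container Memory should be at least 500 MB larger than the JVM Heap size.
def container_memory_ok(container_memory_mb, jvm_heap_mb):
    return container_memory_mb >= jvm_heap_mb + 500

# Example: a 1500 MB heap needs a container of at least 2000 MB.
assert container_memory_ok(2000, 1500)
assert not container_memory_ok(1800, 1500)
```
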
Monitor-App
Powers the Monitor App.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
- Service-Specific Options:
  - Max Heap Size: The amount of memory to allocate for the Java heap for each instance of the service. Valid values for this setting are integers representing a number of bytes. You can optionally specify suffixes of k (for kilobytes), m (for megabytes), or g (for gigabytes). The default is 256m.
- Syslog Queue Settings: The Monitor-App service uses an Apache Kafka message queue to collect syslog messages sent by the sources that it monitors. Syslog messages in this queue are processed, indexed, and displayed in visualizations in HCM. Messages are automatically deleted from the queue if they grow too old or the queue grows too large.
  Important: Messages deleted from the queue may never have been displayed in HCM.
  You can configure these settings for the queue:
  - Retention days: The maximum number of days that a message can remain in the queue. The service automatically removes messages older than this limit. The default is 30 days.
  - Max queue size: The maximum size that the queue can grow to, in megabytes. When the queue reaches this size, old messages are automatically removed. The default is -1 (no size limit).
  Note: You need to specify a positive value for at least one of the two previous settings. If you specify a positive value for both, the value for the Max queue size setting is used and the Retention days setting is ignored. (This rule is sketched after this entry.)
- Metric Settings:
  - monitorAccessLogIndex retention days: The amount of time to retain the indexed metrics in monitorAccessLogIndex, used for generating visualizations. That is, the Monitor App cannot display information about the systems it monitors if that information is older than this limit. The default is 30 days.
  - monitorMetricsIndex retention days: The amount of time to retain the indexed metrics in monitorMetricsIndex, used for generating visualizations. That is, the Monitor App cannot display information about the systems it monitors if that information is older than this limit. The default is 365 days.
  - monitorEventsIndex retention days: The amount of time to retain the indexed metrics in monitorEventsIndex, used for generating visualizations. That is, the Monitor App cannot display information about the systems it monitors if that information is older than this limit. The default is 365 days.
  - prometheus index Retention Days: The amount of time to retain the indexed metrics in the Prometheus index, used for generating visualizations of an HCP for Cloud Scale system. Metrics older than this limit are deleted in the Monitor App. The default is 30 days.
- Anomaly Detection:
  - Anomaly Detection Run Interval: How often your system runs the Anomaly Detection algorithm, measured in hours. The default is 2 hours.
  - Anomaly Detection Model Archive Interval: How often your system archives Anomaly Detection models, measured in hours. The default is 5 hours.
  - Anomaly Detection Model Retention: How long your system keeps old Anomaly Detection models before deleting them, measured in days. The default is 1 day.
  - Anomaly Detection Training Days: How long your system retains its data for training, measured in days. The default is 30 days.
- Forecasting:
  - Forecasting Job Interval: How often your system runs the Forecasting job, measured in hours. The default is 24 hours.
  - Forecasting Model Archive Interval: How often your system archives Forecasting models, measured in hours. The default is 5 hours.
  - Forecasting Model Retention: How long your system keeps old Forecasting models before deleting them, measured in days. The default is 1 day.
  - Forecasting Model Max Training: The maximum number of months of historical data to use when training Forecasting models. The default is 36 months.
  - Forecasting Horizon: How far into the future the Forecasting model predicts, measured in days. The default is 365 days.
  - Forecasting Confidence Cutoff: How wide the cone of certainty can be, measured in points of standard deviation. The default is 6 points.
  - Forecasting Historical Value Type: Choose to have the system store the mean of all values for each day, the actual value, or none of the values. The default is Mean.
Properties:
- RAM needed per instance: 556 MB
- Number of instances:
  Note: Scaling the Monitor-App service does not affect any of the workflows that collect data from the systems you are monitoring. For example, if you scale the service to run on 0 instances, users will be unable to access the Monitor App, but HCI will continue to collect data.
- Service unit cost per instance: 10
- Persistent or floating: Floating
- Supports volume configuration: Yes
- Single or multiple types: Single

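To make the queue-trimming precedence described above concrete (a positive Max queue size is used and Retention days is ignored), here is a hypothetical sketch; it is not the service's actual implementation:

```python
# Hypothetical sketch of the syslog-queue trimming rule described above:
# if Max queue size is positive it wins and Retention days is ignored;
# otherwise messages older than Retention days are removed.
def trim_policy(retention_days, max_queue_size_mb):
    if max_queue_size_mb > 0:
        return f"trim queue to {max_queue_size_mb} MB; ignore Retention days"
    if retention_days > 0:
        return f"remove messages older than {retention_days} days"
    raise ValueError("at least one of the two settings must be positive")

print(trim_policy(retention_days=30, max_queue_size_mb=-1))    # defaults: age-based trimming
print(trim_policy(retention_days=30, max_queue_size_mb=1024))  # size limit takes precedence
```
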
Scheduling
https://mesos.github.io/chronos/
Job scheduler for Apache Mesos.
Configuration settings:
- Docker Container Options: Each service runs within its own Docker container. You can configure these settings for this service's container.
- Service-Specific Options
Properties:
- RAM needed per instance: 712 MB
- Number of instances:
- Service unit cost per instance: 1
- Persistent or floating: Floating
- Supports volume configuration: Yes
- Single or multiple types: Single

System services
The services below manage system resources and ensure that the system remains available and accessible. These services cannot be moved or reconfigured.
Admin-App
Runs the Admin App.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 10
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Cluster-Coordination
Mesos (master) - mesos.apache.org
Hardware resource management solution for distributed systems.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 1
- Persistent or floating: Persistent
- Supports volume configuration: No
- Single or multiple types: Single

Cluster-Worker
Mesos (slave) - mesos.apache.org
Hardware resource management solution for distributed systems.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 5
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Network-Proxy
HAProxy - haproxy.org
Load balancer for TCP and HTTP-based applications.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 1
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Sentinel
Runs internal system processes and monitors the health of the other services.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 5
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Service-Deployment
Marathon - https://mesosphere.github.io/marathon/
Orchestration platform for Mesos applications.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 1
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Synchronization
Apache ZooKeeper - https://zookeeper.apache.org/
Coordinates configuration settings and other information between a number of distributed services.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 5
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Watchdog
Monitors the other System services and restarts them if necessary. Also responsible for initial system startup.
Configuration settings: N/A
Properties:
- RAM needed per instance: N/A
- Number of instances: N/A
- Service unit cost per instance: 5
- Persistent or floating: Persistent
- Supports volume configuration: Yes
- Single or multiple types: Single

Clustered-File-System service considerations
The Clustered-File-System service gives storage space to other services in an HCI system.
Each instance of the Clustered-File-System service can be one of these types:
- Data Node: Stores data
- Name Node: Tracks the data stored in Data Nodes
- Journal Node: Tracks changes made by the Name Nodes
You can run multiple different Clustered-File-System service instance types on an HCI instance. For example, a single HCI instance can run both a Journal Node instance and a Data Node instance of the Clustered-File-System service.
You cannot run multiple Clustered-File-System service instances of the same type on an HCI instance. For example, a single HCI instance cannot run two Data Node instances.
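A small sketch of this placement rule, for illustration only: an HCI instance may host several different Clustered-File-System types, but never two instances of the same type.

```python
# Illustrative placement check: an HCI instance can host multiple
# Clustered-File-System service types, but not two instances of the same type.
def placement_allowed(types_on_instance):
    return len(types_on_instance) == len(set(types_on_instance))

assert placement_allowed(["Journal Node", "Data Node"])   # allowed
assert not placement_allowed(["Data Node", "Data Node"])  # not allowed
```
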
The Clustered-File-System service can be deployed in either of these modes:
- High Availability (HA) Mode: In HA mode, the Clustered-File-System service can retain its stored data in the event that a Name Node instance fails or becomes corrupt.
- Non-HA Mode: In Non-HA mode, the Clustered-File-System service has a single Name Node instance. If this instance fails or becomes corrupt, all data stored by the Clustered-File-System service is lost.
The service's deployment mode is determined by the number of instances of each service type that you configure when you initially deploy the service. After the deployment mode is set, you cannot change it without losing all data stored by the Clustered-File-System service.
To deploy the Clustered-File-System service in HA mode, you need at least three HCI instances.
Upon initial deployment of the service, you need to configure the service to run:
- Exactly two Name Node instances
- Exactly three Journal Node instances
You can run any number of Data Node instances. You need at least one to be able to store data. For HA mode, the recommended minimum is three.
To deploy the Clustered-File-System service in Non-HA mode, you need only one HCI instance.
Upon initial deployment of the service, you need to configure the service to run:
- Exactly one Name Node instance
- Zero Journal Node instances
You can run any number of Data Node instances. You need at least one to be able to store data.
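The counts above can be summarized in a small validation sketch. The function is illustrative only; it simply restates the topology rules for the two deployment modes.

```python
# Illustrative check of an initial Clustered-File-System deployment plan,
# following the counts described above.
def deployment_mode(name_nodes, journal_nodes, data_nodes):
    if data_nodes < 1:
        raise ValueError("at least one Data Node is needed to store data")
    if name_nodes == 2 and journal_nodes == 3:
        return "HA mode (recommended minimum of three Data Nodes)"
    if name_nodes == 1 and journal_nodes == 0:
        return "Non-HA mode"
    raise ValueError("instance counts do not match either deployment mode")

print(deployment_mode(name_nodes=2, journal_nodes=3, data_nodes=3))  # HA mode
print(deployment_mode(name_nodes=1, journal_nodes=0, data_nodes=1))  # Non-HA mode
```
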
When scaling or moving Name Node or Journal Node service instances, you need to perform only one operation at a time.
For example, to move a Name Node from one HCI instance to another, you need to:
- Run a service operation to remove an instance of the Name Node service type.
- Wait for the operation to finish.
- Run a service operation to start a new instance of the Name Node service type.
- Wait for the operation to finish.
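The steps above must be strictly serialized. The following sketch is hypothetical pseudo-orchestration; the helper functions are placeholder names, not product APIs.

```python
# Hypothetical sketch of moving a Name Node one operation at a time,
# following the steps above. The helpers are placeholders, not product APIs.
def run_service_operation(description):
    print("running:", description)

def wait_for_completion():
    print("waiting for the operation to finish")

def move_name_node(source_hci_instance, target_hci_instance):
    # Only one Name Node or Journal Node operation may run at a time.
    run_service_operation("remove Name Node instance from " + source_hci_instance)
    wait_for_completion()
    run_service_operation("start Name Node instance on " + target_hci_instance)
    wait_for_completion()

move_name_node("hci-instance-1", "hci-instance-2")
```
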
When running the Clustered-File-System service in HA mode, you need at least two Journal Node instances running. Scaling the service such that you have zero or one Journal Node instances causes the service to become unresponsive.
You can scale or move as many Data Node instances as necessary.
When you initially deploy the Clustered-File-System service, it is deployed in either HA or Non-HA mode, depending on the number of instances of each service type that you initially configure the service to run.
You can change the Clustered-File-System deployment mode after the service is initially deployed; however, this requires a complete redeployment of the service.
To switch from one deployment mode to another:
- Scale Clustered-File-System to run on zero instances in the system.
- Configure and run a service scale task to scale Clustered-File-System to the applicable number of service type instances.
Important: Name Node and Journal Node types must be scaled together in a single operation.