What are hardware profiles?


Hardware profiles optimize components of the Elastic Stack, such as Elasticsearch nodes and Kibana instances, for a number of general uses. Compared to the stand-alone process of deploying the Elastic Stack yourself, profiles are much simpler to set up and they ensure that your deployment already has the resources it needs. Profiles are also flexible: not only can you select a profile that fits your purpose, but you can customize each component of the Elastic Stack with just a few additional clicks.

The components of the Elastic Stack that we support as part of a deployment are called instances and include:

  • Elasticsearch data, ingest, and master nodes
  • Kibana instances
  • Machine learning (ML) nodes
  • Application Performance Monitoring (APM) Server instances

To address each use case, hardware profiles combine these components of the Elastic Stack in different ways according to tried-and-true best practices. For example: in a hot-warm architecture, which typically serves a log aggregation use case, you need at least one Elasticsearch hot node with recent data and one warm node with read-only indices for older, less frequently queried data. A real hot-warm architecture in a production environment also needs to be fault tolerant, so that it is highly available. To support these requirements, our hot-warm architecture profile includes hot and warm nodes spread across two availability zones at a minimum, comes with Kibana enabled and ready to use, and even pre-wires machine learning in case you want to enable it for anomaly detection later on.

When you create your deployment on Elasticsearch Service, the virtualized hardware that hosts your deployment is optimized for your specific use case. This means that the running instances of the Elastic Stack get assigned resources tailored to a workload or cluster architecture, including:

  • CPU (compute)
  • Memory
  • Storage
  • I/O

The size of an instance in a deployment is measured in GB of memory or storage, as indicated in the UI. The instance size has another important effect: changing the memory or storage size also changes the other resources in lockstep, relative to the size of the instance. For example, if you double the memory of an instance in the high I/O profile from 16 GB to 32 GB, you also double its CPU resources and storage space. Put simply, to get more performance, increase the size of an instance.
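As a rough illustration of this lockstep scaling, resources other than memory can be modeled as simple multiples of the memory size you pick. The ratios below are invented for the example and are not actual Elasticsearch Service figures:

```python
# Hypothetical illustration of proportional instance scaling: CPU and storage
# are assumed to scale in lockstep with memory. The per-GB ratios are made up
# for this example only.

def scaled_resources(memory_gb, cpu_per_gb=0.25, storage_per_gb=30):
    """Return the resources that scale with a given memory size."""
    return {
        "memory_gb": memory_gb,
        "cpu_vcpus": memory_gb * cpu_per_gb,
        "storage_gb": memory_gb * storage_per_gb,
    }

small = scaled_resources(16)
large = scaled_resources(32)
# Doubling memory doubles CPU and storage as well.
assert large["cpu_vcpus"] == 2 * small["cpu_vcpus"]
assert large["storage_gb"] == 2 * small["storage_gb"]
```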

Under the covers, our infrastructure consists of virtualized hardware resources from a cloud provider, such as Amazon EC2 or Google Cloud Platform. You don’t interact with the cloud platform infrastructure layer directly on Elasticsearch Service, but we do document what we use. To learn more, see Elasticsearch Service Hardware.

I/O optimized profile

New to Elasticsearch or not sure yet what you need? This profile is suitable for many search and general all-purpose workloads that don’t require more specialized resources. Your Elasticsearch data nodes are optimized for high I/O throughput, and the profile is geared towards providing a balance of compute (CPU), memory, and storage resources.

Included with this profile:

Amazon Web Services (AWS)
  • Elasticsearch:

    • Data nodes: Start at 1 GB memory x 1 availability zone. The default is 8 GB memory x 2 availability zones. Hosted on AWS i3 instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on AWS r4 instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone (free). Hosted on AWS r4 instances.
  • Machine learning (ML): Disabled by default. The functionality is available in the profile, but you must explicitly enable it in the UI. 1 GB of ML is free. Hosted on AWS m5 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile, and 0.5 GB of RAM is free. Hosted on AWS r4 instances.
Google Cloud Platform (GCP)
  • Elasticsearch:

    • Data nodes: Start at 1 GB memory x 1 availability zone. The default is 8 GB memory x 2 availability zones. Hosted on custom I/O-optimized GCP instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on custom memory-optimized GCP instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on custom memory-optimized GCP instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on custom CPU-optimized GCP instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on custom memory-optimized GCP instances.
Microsoft Azure
  • Elasticsearch:

    • Data nodes: Start at 1 GB memory x 1 availability zone. Hosted on Azure L32sv2 instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on Azure E32sv3 instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on Azure E32sv3 instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on Azure D64sv3 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on Azure E32sv3 instances.
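The master-node rules above are the same on every provider, so the decision can be sketched as a small function. The names, return strings, and the 5-node threshold below come only from the description in this section and are illustrative, not an actual Elasticsearch Service API:

```python
def master_topology(availability_zones, data_nodes_per_zone):
    """Sketch of the master-eligibility rules described above (illustrative only)."""
    if data_nodes_per_zone > 5:
        # Large clusters get a dedicated set of master-eligible nodes,
        # always spread across 3 availability zones.
        return "dedicated masters in 3 zones"
    if availability_zones == 2:
        # A tiebreaker master-eligible node creates a quorum of 3.
        return "data nodes + 1 extra master-eligible node"
    # With 1 or 3 zones, the data nodes themselves are master-eligible.
    return "data nodes act as master-eligible nodes"

assert master_topology(2, 3) == "data nodes + 1 extra master-eligible node"
assert master_topology(3, 2) == "data nodes act as master-eligible nodes"
assert master_topology(3, 6) == "dedicated masters in 3 zones"
```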

Compute optimized profile

A profile to run CPU-intensive workloads faster. Alternatively, you can use this profile to run smaller workloads cost-effectively when you need less memory and storage, as CPU resources are assigned proportional to cluster size. A smaller, compute-optimized cluster can run a workload just as quickly as a larger cluster optimized for, say, storage.

Included with this profile:

Amazon Web Services (AWS)
  • Elasticsearch:

    • Data nodes: Start at 8 GB memory x 2 availability zones. Hosted on AWS m5 instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on AWS r4 instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on AWS r4 instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on AWS m5 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on AWS r4 instances.
Google Cloud Platform (GCP)
  • Elasticsearch:

    • Data nodes: Start at 8 GB memory x 2 availability zones. Hosted on custom CPU-optimized GCP instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on custom memory-optimized GCP instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on custom memory-optimized GCP instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on custom CPU-optimized GCP instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on custom memory-optimized GCP instances.
Microsoft Azure
  • Elasticsearch:

    • Data nodes: Start at 8 GB memory x 2 availability zones. Hosted on Azure D64sv3 instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on Azure E32sv3 instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on Azure E32sv3 instances.
  • Machine learning (ML): Disabled by default. The functionality is available in the profile, but you must explicitly enable it in the UI. Hosted on Azure D64sv3 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is available in the profile, with a 0.5 GB free tier. Hosted on Azure E32sv3 instances.

Memory optimized profile

A profile to perform memory-intensive operations efficiently, including workloads with frequent aggregations.

Included with this profile:

Amazon Web Services (AWS)
  • Elasticsearch:

    • Data nodes: Start at 8 GB memory x 2 availability zones. Hosted on AWS r4 memory-optimized instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on AWS r4 memory-optimized instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on AWS r4 memory-optimized instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on AWS m5 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on AWS r4 memory-optimized instances.
Google Cloud Platform (GCP)
  • Elasticsearch:

    • Data nodes: Start at 8 GB memory x 2 availability zones. Hosted on custom memory-optimized GCP instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on custom memory-optimized GCP instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on custom memory-optimized GCP instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on custom CPU-optimized GCP instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on custom memory-optimized GCP instances.
Microsoft Azure
  • Elasticsearch:

    • Data nodes: Start at 8 GB memory x 2 availability zones. Hosted on Azure E32sv3 memory-optimized instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on Azure E32sv3 memory-optimized instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on Azure E32sv3 memory-optimized instances.
  • Machine learning (ML): Disabled by default. The functionality is available in the profile, but you must explicitly enable it in the UI. Hosted on Azure D64sv3 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is available in the profile, with a 0.5 GB free tier. Hosted on Azure E32sv3 memory-optimized instances.

Cross-cluster search profile

This profile manages remote connections for running Elasticsearch queries across multiple deployments and indices. These federated searches make it possible to break up large deployments into smaller, more resilient Elasticsearch clusters. For example, you can organize deployments by department or project but still aggregate query results and get visibility into your Elasticsearch Service infrastructure. You can add remote connections either when you create your deployment or when you customize it. To learn more about cross-cluster search, see Enable cross-cluster search.
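A federated query addresses remote indices with Elasticsearch's `<cluster>:<index>` syntax. The sketch below composes such a search target; the cluster alias `logs-east` and the index patterns are hypothetical examples, not names the service creates for you:

```python
# Illustrative only: builds the _search path for a cross-cluster query that
# spans local indices and indices on configured remote connections.
def ccs_search_path(local_indices, remote_indices):
    """Combine local index patterns and (cluster, index) pairs into one
    Elasticsearch search target using the <cluster>:<index> syntax."""
    targets = list(local_indices) + [
        f"{cluster}:{index}" for cluster, index in remote_indices
    ]
    return "/" + ",".join(targets) + "/_search"

# One query that covers the local deployment plus a remote "logs-east" deployment.
path = ccs_search_path(["filebeat-*"], [("logs-east", "filebeat-*")])
assert path == "/filebeat-*,logs-east:filebeat-*/_search"
```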

Included with this profile:

Amazon Web Services (AWS)
  • Elasticsearch cross-cluster search nodes: Start at 1 GB memory x 1 availability zone. Hosted on AWS r4 instances.
  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on AWS r4 instances.
Google Cloud Platform (GCP)
Microsoft Azure

Hot-warm architecture profile

A profile that you typically use for time-series analytics and log aggregation workloads that benefit from tiered storage and automatic index curation. Includes features to manage resources efficiently when you need greater capacity, such as:

  • A tiered architecture with two different types of data nodes, hot and warm.
  • Time-based indices, with automatic index curation to move indices from hot to warm nodes over time by changing their shard allocation.

The two types of data nodes in a hot-warm architecture each have their own characteristics:

Hot data node
Handles all indexing of new data in the cluster and holds the most recent daily indices, which tend to be queried most frequently. Indexing is an I/O-intensive activity, so the hardware these nodes run on must be more powerful and use SSD storage.
Warm data node
Holds a large number of read-only indices that are not queried frequently. Because the indices are read-only, warm nodes can use very large spindle drives instead of SSD storage, reducing the overall cost of retaining data over time while keeping it accessible for queries.
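Under the hood, moving an index from hot to warm is a change to its shard-allocation filtering. The sketch below assumes the common hot-warm convention of a node attribute named `data` with values `hot` and `warm`; the attribute name Elasticsearch Service actually uses may differ:

```python
# A sketch of the mechanism behind hot-to-warm curation: relocating an index
# is an update to its shard-allocation filter. The attribute name "data" and
# the values "hot"/"warm" are a common convention, assumed here for
# illustration.
def curation_settings(target_tier):
    """Settings body that pins an index's shards to nodes of one tier."""
    return {"settings": {"index.routing.allocation.require.data": target_tier}}

# Applied via: PUT /<index>/_settings with this body.
assert curation_settings("warm") == {
    "settings": {"index.routing.allocation.require.data": "warm"}
}
```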

Index curation

One of the key features of a hot-warm architecture, time-based index curation automates the task of moving data from hot to warm nodes as it ages. When you deploy a hot-warm architecture, Elasticsearch Service performs regular index curation according to these rules:

  • Index curation moves indices from one Elasticsearch node to another by changing their shard allocation, always from hot to warm.
  • Index curation is always time-based and takes place when an index reaches the age specified, in days, weeks, or months.
  • Index curation always targets indices according to one or more matching patterns. If an index matches a pattern, Elasticsearch Service moves it from a hot to a warm node.

When you create your deployment, you can define which indices get curated and when. To learn more about index curation, see Configure index management.
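The curation rules above boil down to a simple decision: an index moves from hot to warm once its name matches a configured pattern and it reaches the configured age. The function and parameter names below are illustrative simplifications, not an actual API:

```python
# Illustrative decision logic for time-based index curation.
from datetime import date
from fnmatch import fnmatch

def should_curate(index_name, created, patterns, max_age_days, today=None):
    """Return True when an index is old enough and matches a curation pattern."""
    today = today or date.today()
    age_days = (today - created).days
    return age_days >= max_age_days and any(
        fnmatch(index_name, pattern) for pattern in patterns
    )

# A 9-day-old logs index matching "logs-*" is curated after 7 days...
assert should_curate("logs-2023.05.01", date(2023, 5, 1), ["logs-*"], 7,
                     today=date(2023, 5, 10))
# ...but an index that matches no pattern is left on the hot nodes.
assert not should_curate("metrics-2023.05.01", date(2023, 5, 1), ["logs-*"], 7,
                         today=date(2023, 5, 10))
```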

To learn more about how hot-warm architectures work with Elasticsearch, see “Hot-Warm” Architecture in Elasticsearch 5.x.

In this profile

The following features are included with this profile:

Amazon Web Services (AWS)
  • Elasticsearch:

    • Data nodes - hot: Starts at 4 GB memory x 2 availability zones. Hosted on AWS i3 instances.
    • Data nodes - warm: Starts at 4 GB memory x 2 availability zones. Data nodes must be at least 4 GB in size. Hosted on AWS d2 instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on AWS r4 instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on AWS r4 instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on AWS m5 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on AWS r4 instances.
Google Cloud Platform (GCP)
  • Elasticsearch:

    • Data nodes - hot: Starts at 4 GB memory x 2 availability zones. Hosted on custom I/O-optimized GCP instances.
    • Data nodes - warm: Starts at 4 GB memory x 2 availability zones. Data nodes must be at least 4 GB in size. Hosted on custom storage-optimized GCP instances.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on custom memory-optimized GCP instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on custom memory-optimized GCP instances.
  • Machine learning (ML): Disabled by default. The functionality is pre-wired into the profile, but you must explicitly enable it in the UI. Hosted on custom CPU-optimized GCP instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is pre-wired into the profile. Hosted on custom memory-optimized GCP instances.
Microsoft Azure
  • Elasticsearch:

    • Data nodes - hot: Starts at 4 GB memory x 2 availability zones. Hosted on Azure L32sv2 instances.
    • Data nodes - warm: Starts at 4 GB memory x 2 availability zones. Data nodes must be at least 4 GB in size. Hosted on Azure E16sv3 instances with extra persistent storage.
    • Master nodes:

      An additional master-eligible node is added when you choose 2 availability zones (to create a quorum of 3).

      When 1 or 3 availability zones are selected, the data nodes act as master-eligible nodes and no additional master-eligible node is required.

      Configurations beyond 5 nodes per availability zone can also spin up a dedicated set of master-eligible nodes (always in 3 availability zones) to offload the data nodes. Hosted on Azure E32sv3 instances.

  • Kibana: Starts at 1 GB memory x 1 availability zone. Hosted on Azure E32sv3 instances.
  • Machine learning (ML): Disabled by default. The functionality is available in the profile, but you must explicitly enable it in the UI. Hosted on Azure D64sv3 instances.
  • Application Performance Monitoring (APM): Enabled by default. The functionality is available in the profile, with a 0.5 GB free tier. Hosted on Azure E32sv3 instances.