What are deployment templates?

Deployment templates pre-configure the components of the Elastic Stack, such as Elasticsearch nodes and Kibana instances, for different use cases. Compared to a one-size-fits-all approach to deploying the Elastic Stack, templates provide much greater flexibility and ensure that your deployment has the resources it needs to support your use case. Templates are also adaptable: not only can you select a template that fits your use case, but you can customize each component of the Elastic Stack with just a few additional clicks.

The components of the Elastic Stack that we support as part of a deployment are called instances and include:

  • Elasticsearch data, ingest, and master nodes
  • Kibana instances
  • Machine learning (ML) nodes
  • Application Performance Monitoring (APM) Server instances

To address each use case, deployment templates combine these components of the Elastic Stack in different ways, according to tried-and-true best practices that you can trust. For example, in a hot-warm architecture, which typically serves a log aggregation use case, you need at least one Elasticsearch hot node holding recent data and at least one warm node holding read-only indices for older, less frequently queried data. A real hot-warm architecture in a production environment also needs to be fault tolerant, so that it is highly available. To support these requirements, our hot-warm architecture template spreads hot and warm nodes across a minimum of two availability zones, comes with Kibana enabled and ready to use, and even pre-wires machine learning in case you want to enable it for anomaly detection later on.
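Behind a hot-warm architecture, Elasticsearch typically steers shards with node attributes (for example, `node.attr.data: hot` or `node.attr.data: warm` in each node's configuration) plus index-level shard allocation filtering. As a minimal sketch, assuming that classic attribute-based pattern, the helper below builds the index settings body you would apply to pin an index to one tier; the attribute name `data` and the tier values are conventions, not requirements:

```python
# Sketch of the shard allocation settings behind a hot-warm setup.
# Assumes nodes are tagged with a custom attribute "data" set to
# "hot" or "warm" (node.attr.data in elasticsearch.yml); the helper
# builds the matching index-level allocation filter.

def allocation_settings(tier: str) -> dict:
    """Build index settings that require shards to live on `tier` nodes."""
    if tier not in ("hot", "warm"):
        raise ValueError(f"unknown tier: {tier}")
    return {"index.routing.allocation.require.data": tier}

# New indices with recent data are created with the hot filter...
hot_settings = allocation_settings("hot")

# ...and as data ages, indices are updated to the warm filter so
# their shards migrate to warm nodes.
warm_settings = allocation_settings("warm")

print(hot_settings)   # {'index.routing.allocation.require.data': 'hot'}
print(warm_settings)  # {'index.routing.allocation.require.data': 'warm'}
```

In practice you would send these settings bodies to the cluster (for example, via the index settings API) rather than build them by hand; the template wires up the node tiers so that this allocation pattern works out of the box.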

When you create your deployment on Elasticsearch Service, the virtualized hardware that hosts your deployment is optimized for your specific use case. This means that the running instances of the Elastic Stack get assigned resources tailored to a workload or cluster architecture, including:

  • CPU (compute)
  • Memory
  • Storage
  • I/O

The size of an instance in a deployment is measured in GB of memory or storage, as indicated in the UI. The instance size has another important effect: changing the memory or storage size also changes the other resources in lockstep, relative to the size of the instance. This means that if you double the memory of an instance in the high I/O template from 16 GB to 32 GB, for example, you also double its CPU resources and storage space. Put simply: to get more performance, increase the size of an instance.
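The lockstep scaling above can be sketched as a pair of fixed ratios per template. The ratios in this example (32 GB of storage and 0.25 vCPU per GB of memory) are illustrative assumptions, not published guarantees; each template defines its own memory-to-storage and memory-to-CPU ratios:

```python
# Sketch: CPU and storage scale in lockstep with instance memory.
# The default ratios below are illustrative assumptions for one
# hypothetical template, not actual Elasticsearch Service values.

def instance_resources(memory_gb: float,
                       storage_per_gb_memory: float = 32.0,
                       vcpu_per_gb_memory: float = 0.25) -> dict:
    """Derive storage and CPU from memory at fixed template ratios."""
    return {
        "memory_gb": memory_gb,
        "storage_gb": memory_gb * storage_per_gb_memory,
        "vcpus": memory_gb * vcpu_per_gb_memory,
    }

small = instance_resources(16)
large = instance_resources(32)

# Doubling the memory doubles the storage and CPU as well.
assert large["storage_gb"] == 2 * small["storage_gb"]
assert large["vcpus"] == 2 * small["vcpus"]
```

The takeaway is that memory is the single dial you turn: the other resources follow at whatever ratio the template prescribes.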

Under the covers, our infrastructure consists of virtualized hardware resources from a cloud provider, such as Amazon EC2 or Google Cloud Platform. You don't interact with the cloud platform infrastructure layer directly on Elasticsearch Service, but we do document what we use. To learn more, see Elasticsearch Service Hardware.