The following deployment settings are available:
Selects a cloud platform and a region where your Elasticsearch clusters and Kibana instances will be hosted. Elasticsearch Add-On for Heroku currently supports Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
Regions represent data centers in a geographic location where your deployment is hosted. When choosing a region, the general rule is to pick one as close to your application servers as possible to minimize network delays.
You can select your cloud platform and region only when you create a new deployment, so pick ones that work for you. They cannot be changed later. Different deployments can use different platforms and regions.
Depending on how much data you have and what queries you plan to run, you need to select a cluster size that fits your needs. There is no silver bullet for deciding how much memory you need other than simply testing it. The cluster performance metrics in the Elasticsearch Add-On for Heroku console can tell you if your cluster is sized appropriately. Fortunately, you can change the capacity of the cluster later, without any downtime.
For trials, larger sizes are not available until you add a credit card.
Currently, half the memory is assigned to the JVM heap. For example, on a 32 GB cluster, 16 GB are allotted to heap. The disk-to-RAM ratio currently is 1:24, meaning that you get 24 GB of storage space for each 1 GB of RAM. All clusters are backed by SSD drives.
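The ratios above can be sketched as simple arithmetic. This is an illustrative helper only, hard-coding the stated ratios (half of RAM to heap, 1:24 RAM-to-disk); the function name is hypothetical and the actual ratios may change over time.

```python
def cluster_resources(ram_gb):
    # Illustrative sizing math from the ratios stated above:
    # half of memory goes to the JVM heap,
    # and you get 24 GB of SSD storage per 1 GB of RAM.
    heap_gb = ram_gb / 2
    storage_gb = ram_gb * 24
    return heap_gb, storage_gb

print(cluster_resources(32))  # (16.0, 768) -> 16 GB heap, 768 GB storage
```

For example, the recommended 4 GB production minimum works out to a 2 GB heap and 96 GB of storage.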
For production systems, we recommend not using less than 4 GB of RAM for your cluster, which assigns 2 GB to the JVM heap.
The CPU resources assigned to a cluster are relative to the size of your cluster, meaning that a 32 GB cluster gets twice the CPU resources of a 16 GB cluster. All clusters are guaranteed their share of CPU resources, as we do not overcommit resources. Smaller clusters up to and including 8 GB of RAM benefit from temporary CPU boosting to improve performance when needed most.
To learn more about how much memory might be needed, see Elasticsearch in Production.
High availability is achieved by running a cluster with replicas in multiple data centers (availability zones), to protect against downtime when infrastructure problems occur. We offer the options of running in one, two, or three data centers.
Running in two data centers or availability zones is our default high availability configuration. It provides reasonably high protection against infrastructure failures and intermittent network problems. You might want three data centers if you need even higher fault tolerance. Just one zone might be sufficient, if the cluster is mainly used for testing or development.
Some regions might have only two availability zones.
Like many other changes, you can change the level of fault tolerance while the cluster is running. For example, when you prepare a new cluster for production use, you can first run it in a single data center and then add another data center right before deploying to production.
While multiple data centers or availability zones increase a cluster’s fault tolerance, they do not protect against problematic searches that cause nodes to run out of memory, for example. For a cluster to be highly reliable and available, it is also important to have enough memory.
The node capacity you choose is per data center. The reason for this is that there is no point in having two data centers if the failure of one results in a cascading error because the remaining data center cannot handle the total load. Through allocation awareness in Elasticsearch, we configure the nodes so that your Elasticsearch cluster automatically allocates replicas across availability zones.
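The platform configures allocation awareness for you; your part is to give each index at least one replica so there is a copy to place in the other zone. A minimal sketch, assuming a hypothetical index name `my-index` and the standard index settings:

```python
# Illustrative only: with allocation awareness configured by the platform,
# one replica per shard means that, in a two-zone deployment, each
# availability zone can hold a full copy of the index.
index_body = {
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 1,  # at least 1, so losing one zone loses no data
    }
}
# Sent as the body of: PUT /my-index
```

With zero replicas, a second availability zone adds no data redundancy, because there is no copy for Elasticsearch to place there.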
Our article on Elasticsearch in Production covers availability zones and resilience against infrastructure failures in more detail.
Elasticsearch versions are denoted as X.Y.Z, where:
- X is the major version,
- Y is the minor version, and
- Z is the patch level or maintenance release.
The default version is the latest, stable version. At any given time, the two latest minor versions are guaranteed to be available for deployments of new clusters, such as Elasticsearch 5.5.1 and 5.4.3. Only the latest patch level within a minor version is usually available for new deployments, such as 2.4.5.
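The X.Y.Z notation above can be sketched as a small parsing helper (illustrative only; the function name is hypothetical, and real Elasticsearch version strings for pre-release builds can carry suffixes this sketch does not handle):

```python
def parse_version(version):
    # Split an "X.Y.Z" version string into (major, minor, patch) integers.
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

print(parse_version("5.5.1"))  # (5, 5, 1)
```

For example, 5.5.1 and 5.4.3 share major version 5 but differ in minor version, which is what makes them two distinct minor versions for availability purposes.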
You might sometimes see additional versions listed in the user interface beyond what we guarantee to be available, such as release candidate builds. If versions are listed, they can be deployed.
In order to be able to deliver new features and keep complexity manageable, we also need to be able to discontinue old versions. When a version nears its end-of-life point, you are typically given six months' notice to upgrade to one of the available minor versions.
To learn more about how we support Elasticsearch versions in Elasticsearch Add-On for Heroku, see Version Policy.
You can always upgrade Elasticsearch versions without downtime, but you cannot downgrade. To learn more about upgrading versions of Elasticsearch and best practices for major version upgrades, see Version Upgrades.
The defaults for the different supported script types are generally safe to accept as is, unless you have a specific requirement. The script_fields in filters and facets are one of the features that make Elasticsearch so flexible, but they can allow arbitrary code execution, such as Runtime.exec("cat /etc/passwd") and other malicious operations.
We provide three levels of scripting control for each of the supported script types: you can disable scripts completely, allow only scripts that run in a sandbox, or enable all scripts. Especially for inline scripts, we strongly recommend that you do not enable all scripts, because of the security risk they can pose.
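To make the risk concrete, here is a hypothetical search body that uses script_fields. The field name `price` and the computed field `price_with_tax` are invented for illustration; the sketch assumes the Painless scripting language, which runs sandboxed.

```python
# Hypothetical search body using script_fields. The script executes for
# every matching document, which is why allowing arbitrary, non-sandboxed
# scripts is a security risk.
search_body = {
    "query": {"match_all": {}},
    "script_fields": {
        "price_with_tax": {            # invented field name, for illustration
            "script": {
                "lang": "painless",    # a sandboxed scripting language
                "source": "doc['price'].value * 1.2",
            }
        }
    },
}
# Sent as the body of: POST /my-index/_search
```

A sandboxed language restricts the script to a safe subset of operations; the Runtime.exec example above is exactly the kind of call a sandbox exists to prevent.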
You can review your Elasticsearch shard activity from Elasticsearch Add-On for Heroku. At the bottom of the Elasticsearch page, you can hover over each part of the shard visualization for specific numbers.
For versions before 5.0: Select the number of shards. The default is 1. We recommend that you read Sizing Elasticsearch before you change the number of shards.
Determines whether an index is created automatically if you attempt to index a document into an index that does not exist.
Determines whether destructive actions like deleting an index require explicit index names or whether wildcards are allowed.
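The two settings above correspond to the Elasticsearch cluster settings `action.auto_create_index` and `action.destructive_requires_name`. A hedged sketch of a cluster settings update body, assuming you want the stricter behavior for both:

```python
# Illustrative cluster settings body for the two options described above.
settings_body = {
    "persistent": {
        # Indexing into a nonexistent index fails instead of creating it.
        "action.auto_create_index": False,
        # Destructive actions must name indices explicitly; wildcard
        # deletes such as DELETE /* are rejected.
        "action.destructive_requires_name": True,
    }
}
# Sent as the body of: PUT /_cluster/settings
```

Stricter values guard against typos silently creating indices or a wildcard delete removing more data than intended.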
If you have uploaded any plugins or user bundles with dictionaries or scripts, then this is where you choose to enable them for the cluster.
Only Gold and Platinum subscriptions have access to uploading custom plugins. All subscription levels, including Standard, can upload scripts and dictionaries.
Lists the official plugins available for your selected Elasticsearch version.
When selecting a plugin from this list, you get a version that has been tested with the chosen Elasticsearch version. The main difference between selecting a plugin from this list and uploading the same plugin as a custom plugin is in who decides the version used. See also Add plugins.
The reason we do not list the chosen version on this page is that we reserve the option to change it when necessary. That said, we will not force a cluster restart for a simple plugin upgrade unless there are severe issues with the current version. In most cases, plugin upgrades are applied lazily, in other words when something else forces a restart, such as a plan change or Elasticsearch running out of memory.
For new deployments that use Elasticsearch version 5.0 and later, we automatically create a Kibana instance for you. If you use a version before 5.0 or if your cluster didn't include a Kibana instance initially, there might not be a Kibana endpoint URL shown yet. To enable Kibana, simply click Enable.
Enabling Kibana provides you with an endpoint URL, where you can access Kibana. It can take a short while to provision Kibana right after you click Enable, so if you get an error message when you first click the endpoint URL, try again.
For version 5.0 and later: Log in to Kibana with the elastic superuser to try it out. The password was provided when you created your cluster or can be reset.
For versions before 5.0: If Shield is enabled, you can log in to Kibana with the admin user to try it out. The password was provided when you enabled Shield or can be reset.
In production systems, you might need to control what Elasticsearch data users can access through Kibana, so you need to create credentials that can be used to access the necessary Elasticsearch resources. This means granting read access to the necessary indexes, as well as access to update the
For deployments that are version 6.3 and later, you have the option to add an Application Performance Monitoring (APM) Server to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the Elasticsearch cluster. The APM data is automatically available in Kibana for searching and visualizing. For more information regarding Elastic APM, see www.elastic.co/solutions/apm.
As part of provisioning, the APM Server is already configured to work with Elasticsearch and Kibana. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the APM Agents. The APM Agents get deployed within your services and applications.
You can reset the secret token; however, doing so disrupts your APM service: the server restarts, and all of your agents must be updated with the new token.
When you are using Kibana and have configured an agent, you can use the pre-built, dedicated dashboards and the APM tab to visualize the data that is sent back.
Use your own user settings to change how Elasticsearch and other Elastic products run. User settings are appended to the appropriate YAML configuration file, but not all settings are supported. See also Editing Your User Settings.
In maintenance mode, requests to your cluster are blocked during configuration changes. You use maintenance mode to perform corrective actions that might otherwise be difficult to complete. Maintenance mode lasts for the duration of a configuration change and is turned off after the change completes.
We strongly recommend that you use maintenance mode when your cluster is overwhelmed by requests and you need to increase capacity. If your cluster is being overwhelmed because it is undersized for its workload, nodes might not respond to efforts to resize. Putting the cluster into maintenance mode as part of the configuration change can stop the cluster from becoming completely unresponsive during the configuration change, so that you can resolve the capacity issue. Without this option, configuration changes for clusters that are overwhelmed can take longer and are more likely to fail.
There are two actions you can perform in deployment management:
- Perform a cluster restart - Needed only rarely, but full cluster restarts can help with a suspected operational issue before reaching out to Elastic for help.
- Delete your cluster - For clusters that you no longer need and don’t want to be charged for any longer. Deleting a cluster removes the cluster and all your data permanently.
Use the actions in deployment management with care. Clusters are not available while they restart, and deleting a cluster really does remove the cluster and all your data permanently.
This setting allows you to assign a more human-friendly name to your cluster, which is used for future reference in the Elasticsearch Add-On for Heroku console. Common choices are dev, prod, test, or something more domain specific.