When you create a deployment or edit an existing one, you can fine-tune its capacity, add extensions, and select additional features.
Autoscaling reduces some of the manual effort required to manage a deployment by adjusting the capacity as demands on the deployment change. Currently, autoscaling supports scaling Elasticsearch data tiers up, and scaling machine learning nodes both up and down. Check Deployment autoscaling to learn more.
Depending on how much data you have and what queries you plan to run, you need to select a cluster size that fits your needs. There is no silver bullet for deciding how much memory you need other than testing it. The cluster performance metrics in the Elasticsearch Add-On for Heroku console can tell you if your cluster is sized appropriately, and you can enable deployment monitoring for more detailed performance metrics. Fortunately, you can change the amount of memory allocated to the cluster later, without any downtime for HA deployments.
To change a cluster’s topology, from deployment management, select Edit deployment from the Actions dropdown. Next, select a storage and RAM setting from the Size per zone drop-down list, and save your changes. When downsizing the cluster, make sure to have enough resources to handle the current load, otherwise your cluster will be under stress.
Currently, half the memory is assigned to the JVM heap (a bit less when monitoring is activated). For example, a 32 GB cluster gets 16 GB of heap. The disk-to-RAM ratio is currently 24:1, meaning that you get 24 GB of storage space for each 1 GB of RAM. All clusters are backed by SSD drives.
For production systems, we recommend not using less than 4 GB of RAM for your cluster, which assigns 2 GB to the JVM heap.
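As a quick sanity check, the sizing arithmetic above can be sketched in a few lines. This is a rough estimate using the documented defaults (half of RAM to heap, 24x RAM for storage), not an exact account of how resources are provisioned; actual heap can be a bit lower, for example when monitoring is enabled.

```python
def sizing(ram_gb: float, disk_to_ram: int = 24) -> dict:
    """Estimate JVM heap and storage for a given cluster RAM size.

    Assumes the documented defaults: roughly half of RAM goes to the
    JVM heap, and storage is 24x RAM.
    """
    return {
        "heap_gb": ram_gb / 2,
        "storage_gb": ram_gb * disk_to_ram,
    }

print(sizing(32))  # a 32 GB cluster: 16 GB heap, 768 GB storage
print(sizing(4))   # recommended production minimum: 2 GB heap
```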
The CPU resources assigned to a cluster are proportional to its size, meaning that a 32 GB cluster gets twice the CPU resources of a 16 GB cluster. All clusters are guaranteed their share of CPU resources, as we do not overcommit resources. Smaller clusters, up to and including 8 GB of RAM, benefit from temporary CPU boosting to improve performance when it is needed most.
High availability is achieved by running a cluster with replicas in multiple data centers (availability zones), to protect against downtime when infrastructure problems occur or when you resize or upgrade deployments. We offer the options of running in one, two, or three data centers.
Running in two data centers or availability zones is our default high availability configuration. It provides reasonably high protection against infrastructure failures and intermittent network problems. You might want three data centers if you need even higher fault tolerance. A single zone might be sufficient if the cluster is used mainly for testing or development.
Some regions might have only two availability zones.
As with many other configuration changes, you can change the level of fault tolerance while the cluster is running. For example, when you prepare a new cluster for production use, you can first run it in a single data center and then add another data center right before you deploy to production.
While multiple data centers or availability zones increase a cluster’s fault tolerance, they do not protect against problematic searches that cause nodes to run out of memory. For a cluster to be highly reliable and available, it is also important to have enough memory.
The node capacity you choose is per data center: there is no point in having two data centers if the failure of one results in a cascading error because the remaining data center cannot handle the total load. Through shard allocation awareness in Elasticsearch, we configure the nodes so that your cluster automatically allocates replicas across availability zones.
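On Elastic Cloud this allocation awareness is configured for you. For context, the equivalent settings on a self-managed cluster might look like the following (the attribute name `zone` and its value are illustrative):

```yaml
# elasticsearch.yml on each node (self-managed equivalent; the
# Elasticsearch Add-On for Heroku applies this configuration for you).
node.attr.zone: zone-1
cluster.routing.allocation.awareness.attributes: zone
```

With these settings, Elasticsearch avoids placing a primary shard and its replicas in the same zone.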
You can get an at-a-glance status of all the shards in the deployment on the Elasticsearch page.
We recommend that you read Size your shards before you change the number of shards.
Here, you can configure user settings, extensions, and system settings (older versions only).
Set specific configuration parameters to change how Elasticsearch and other Elastic products run. User settings are appended to the appropriate YAML configuration file, but not all settings are supported in Elasticsearch Add-On for Heroku.
For more information, refer to Edit your user settings.
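For example, a user settings block is written as plain YAML key-value pairs. The settings below are illustrative; only allow-listed settings are accepted, so check which ones your version and plan support:

```yaml
# Appended to elasticsearch.yml; unsupported settings are rejected.
http.cors.enabled: true
http.cors.allow-origin: "https://example.com"
```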
Lists the official plugins available for your selected Elasticsearch version, as well as any custom plugins and user bundles with dictionaries or scripts.
We do not list the chosen plugin version on this page because we reserve the option to change it when necessary. That said, we will not force a cluster restart for a simple plugin upgrade unless there are severe issues with the current version. In most cases, plugin upgrades are applied lazily, that is, when something else forces a restart, such as a plan change or Elasticsearch running out of memory.
Only Gold and Platinum subscriptions can upload custom plugins. All subscription levels, including Standard, can upload scripts and dictionaries.
For versions 5.x and older, you can configure several script settings. The defaults for the different supported script types are generally safe to accept as is, unless you have a specific requirement.
For each supported script type, you have three levels of scripting control:
- Disable the scripts completely
- Enable scripts to run in a sandbox
- Enable all scripts
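As a rough sketch, in a 2.x-era elasticsearch.yml these three levels map to per-script-type settings like the following. Exact setting names and accepted values varied across these old versions, so treat this as illustrative:

```yaml
# Illustrative 2.x-era settings; verify names and values for your version.
script.inline: sandbox    # false = disabled, sandbox = sandboxed only, true = all
script.indexed: sandbox
```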
A Kibana instance is created automatically as part of every deployment.
If you use a version before 5.0, or if your deployment didn’t initially include a Kibana instance, there might not be a Kibana endpoint URL shown yet. To enable Kibana, select Enable. Enabling Kibana provides you with an endpoint URL where you can access Kibana. It can take a short while to provision Kibana after you select Enable, so if you get an error message when you first access the endpoint URL, try again.
Selecting Open logs you in to Kibana using single sign-on (SSO). For versions older than 7.9.2, you need to log in to Kibana with the elastic superuser. The password was provided when you created your deployment, or it can be reset.
In production systems, you might need to control what Elasticsearch data users can access through Kibana. Refer to Securing your deployment to learn more.
Integrations Server connects observability and security data from Elastic Agents and APM to Elasticsearch. An Integrations Server instance is created automatically as part of every deployment.
Enterprise Search enables you to add modern search to your application or connect and unify content across your workplace. An Enterprise Search instance is created automatically as part of every deployment.
Here, you can configure features that keep your deployment secure: reset the password for the elastic user, set up traffic filters, and add settings to the Elasticsearch keystore. You can also set up remote connections to other deployments.
In maintenance mode, requests to your cluster are blocked during configuration changes. You use maintenance mode to perform corrective actions that might otherwise be difficult to complete. Maintenance mode lasts for the duration of a configuration change and is turned off after the change completes.
We strongly recommend that you use maintenance mode when your cluster is overwhelmed by requests and you need to increase capacity. If your cluster is being overwhelmed because it is undersized for its workload, nodes might not respond to efforts to resize. Putting the cluster into maintenance mode as part of the configuration change can stop the cluster from becoming completely unresponsive during the configuration change, so that you can resolve the capacity issue. Without this option, configuration changes for clusters that are overwhelmed can take longer and are more likely to fail.
There are a few actions you can perform from the Actions dropdown:
- Restart Elasticsearch - Needed only rarely, but full cluster restarts can help with a suspected operational issue before reaching out to Elastic for help.
- Delete your deployment - For deployments that you no longer need and don’t want to be charged for. Deleting a deployment removes the Elasticsearch cluster and all your data permanently.
Use these actions with care. Deployments are not available while they restart, and deleting a deployment really does remove the Elasticsearch cluster and all your data permanently.