Once you have signed in to the Elastic Cloud Console, three main categories of settings are available:
- Cluster settings
- Plugin / Bundle settings (Gold and Platinum subscriptions only)
- Account settings
To change cluster settings:
- Sign in to the Elastic Cloud Console.
- Click Configuration for an existing cluster in the sidebar or click Create New Cluster.
Let the user interface guide you through the configuration of your cluster.
If you are changing an existing cluster, you can make multiple changes with a single configuration update, such as changing the capacity and upgrading to a new Elasticsearch version in one step.
- Save your changes. The new configuration takes a few moments to create.
When choosing a region, the general rule is to pick one as close to your application servers as possible to minimize network delay.
You can select your region only when you create a new cluster and it cannot be changed later, so pick one that works for you.
Depending on how much data you have and what queries you plan to run, you need to select a cluster size that fits your needs. Unfortunately, there is no silver bullet for deciding how much memory you need other than testing it. Fortunately, you can change the capacity of the cluster later, without any downtime. For more details on how much memory you might need, see the blog article Elasticsearch in Production. You should also refer to the memory pressure indicator in the Elastic Cloud Console to help you decide when it is time to scale up your cluster.
Currently, half the memory is assigned to the JVM heap. For example, on a 32 GB cluster, 16 GB are allotted to heap. The disk-to-RAM ratio currently is 1:24, meaning that you get 24 GB of storage space for each 1 GB of RAM. All clusters are backed by SSD drives.
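As a quick sanity check, the heap and storage for a given cluster size follow directly from these ratios. The helper below is a sketch using the figures stated above (half of RAM for heap, 1:24 RAM-to-disk); the function name is ours, and the ratios may change over time:

```python
def cluster_resources(ram_gb):
    """Derive JVM heap and SSD storage from cluster RAM.

    Uses the ratios described above: half the memory is assigned
    to the JVM heap, and you get 24 GB of storage per 1 GB of RAM.
    """
    heap_gb = ram_gb // 2   # half of RAM goes to the JVM heap
    disk_gb = ram_gb * 24   # 1:24 RAM-to-disk ratio, backed by SSD
    return heap_gb, disk_gb

# A 32 GB cluster gets a 16 GB heap and 768 GB of storage.
print(cluster_resources(32))  # → (16, 768)
```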
The CPU resources assigned to a cluster are relative to its size, meaning that a 32 GB cluster gets twice the CPU resources of a 16 GB cluster. All clusters are guaranteed their share of CPU resources, as we do not overcommit resources. Our smaller clusters also benefit from temporary CPU boosting to improve performance when it is needed most.
The chosen capacity, as described above, is per data center. The reason for this is that there is no point in having two data centers if the failure of one results in a cascading failure because the remaining center cannot handle the total load. You may choose between one, two, or three data centers that share no single point of failure beyond residing within the same city. Through allocation awareness in Elasticsearch, we configure the nodes so that your cluster automatically allocates replicas across data centers.
After selecting a region, capacity, and the number of data centers, you see the price per hour and the accumulated price per month. The monthly price is shown only for convenience, as billing is done by the hour. This means you can test a bigger or smaller capacity and pay only for the hours used.
Elasticsearch versions are denoted as X.Y.Z, where:
- X is the major version,
- Y is the minor version, and
- Z is the patch level or maintenance release.

At any given time, the two latest minor versions are guaranteed to be available for deployments of new clusters, such as Elasticsearch 2.4 and 2.3.5. Only the latest patch level within a minor version is usually available for new deployments, such as 2.4.1.
You might sometimes see additional versions listed in the user interface beyond what we guarantee to be available, such as release candidate builds. If versions are listed, they can be deployed.
In order to deliver new features and keep complexity manageable, we also need to be able to discontinue old versions. When a version nears its end of life, you are typically given six months' advance notice to upgrade to one of the available minor versions.
To learn more about how we support Elasticsearch versions in Elastic Cloud, see Version Policy.
You can upgrade Elasticsearch versions for an existing cluster on the Configuration page by selecting a newer version. To learn more about upgrading versions of Elasticsearch and best practices for major version upgrades, see Version Upgrades.
Script fields (`script_fields`) in filters and facets are among the features that make Elasticsearch so flexible, but they can allow arbitrary code execution, such as `Runtime.exec("cat /etc/passwd")` and other malicious calls. We provide three levels of scripting control for each of the supported script types: you can disable scripts completely, allow only scripts that run in a sandbox, or enable all scripts. For inline scripts in particular, we recommend that you do not enable all scripts, because of the security risk they pose.
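For reference, the three levels correspond to the per-type script settings in `elasticsearch.yml`. The fragment below uses the Elasticsearch 2.x syntax as an illustration; in Elastic Cloud you set these through the console controls rather than by editing the file directly:

```yaml
# Illustrative Elasticsearch 2.x script settings (values shown as an example):
script.inline: sandbox   # inline scripts allowed only in sandboxed languages
script.indexed: sandbox  # same restriction for scripts stored in the cluster
script.file: true        # file-based scripts on disk are fully enabled
```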
If you have uploaded any plugins or user bundles with dictionaries or scripts, this is where you choose to enable them for your cluster.
Only Gold and Platinum subscriptions can upload custom plugins. All subscription levels, including Standard, can upload scripts and dictionaries.
This section lists the official plugins available for the selected Elasticsearch version. When you select a plugin from this list, you get a version that has been tested with the chosen Elasticsearch version. The main difference between selecting a plugin from this list and uploading the same plugin as a custom plugin is who decides the version used. We do not list the version chosen on this page because we reserve the option to change it when necessary. That said, we will not force a cluster restart for a simple plugin upgrade unless there are severe issues with the current version. In most cases, plugin upgrades are applied lazily, in other words, when something else forces a restart, such as a plan change or Elasticsearch running out of memory.
By default, Kibana is disabled. To enable Kibana, simply click Enable.
Enabling Kibana provides you with an endpoint URL, where you can access Kibana. It can take a short while to provision Kibana right after you click Enable, so if you get an error message when you first click the endpoint URL, try again.
For version 5.0 and later, you can log into Kibana with the `elastic` superuser to try it out. The password was provided when you created your cluster, or it can be reset. For versions before 5.0, if Shield is enabled, you can log into Kibana with the `admin` user to try it out. The password was provided when you enabled Shield, or it can be reset. In production systems, you might need to control which Elasticsearch data users can access through Kibana, so you need to create credentials that can be used to access the necessary Elasticsearch resources. This means granting read access to the necessary indexes, as well as access to update the `.kibana` index, where Kibana stores its configuration.
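As a sketch of what such credentials might look like, the role definition below uses the X-Pack security 5.x JSON role format; the `events-*` index pattern and the exact privilege names are illustrative assumptions, so check them against the security documentation for your version:

```json
{
  "indices": [
    {
      "names": [ "events-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ ".kibana*" ],
      "privileges": [ "manage", "read", "index" ]
    }
  ]
}
```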
User settings are appended to the `elasticsearch.yml` configuration file for your cluster and provide custom configuration options. Currently, we support the following configuration options:
- `cluster.indices.close.enable` - enables closing indices in Elasticsearch version 2.2 and later. We strongly recommend leaving this set to `false` (the default). Closed indices are a data loss risk: if you close an index, it is not included in snapshots and you will not be able to restore the data. Similarly, closed indices are not included when you scale to a different cluster size or during failover operations. You might enable this setting temporarily in order to change the analyzer configuration for an existing index. In Elasticsearch versions before 2.2, you can always close an index, but we recommend against it, as you risk the same data loss. Because closed indices are a data loss risk, enable this setting only temporarily.
- Custom alerting (formerly Watcher) configuration for sending messages to Slack, HipChat, and PagerDuty. Note that the Slack and HipChat configuration syntax changed in version 5.x.
- `xpack.notification.email.html.sanitization.*` - monitoring email sanitization settings.
- `repositories.url.allowed_urls` - allows whitelisting of read-only URL repositories.
- `script.painless.regex.enabled` - enables regular expression support in the Painless scripting language.
- `reindex.remote.whitelist` - the hosts that are allowed to be reindexed from. Expects a comma-delimited list of `host:port` combinations. See reindex-from-remote. Defaults to `["*.io:*", "*.com:*"]`.
All other options are rejected. We plan to add more configuration options over time.
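Put together, a user settings block that uses several of the options above might look like the following sketch. The values are purely illustrative, and `remote-cluster.example.com` and the snapshot URL are hypothetical:

```yaml
# Illustrative user settings appended to elasticsearch.yml:
script.painless.regex.enabled: true   # allow regex in Painless scripts
repositories.url.allowed_urls: "https://snapshots.example.com/*"
reindex.remote.whitelist: "remote-cluster.example.com:9243"
```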
This setting lets you assign a more human-friendly name to your cluster, which is used for future reference in the Elastic Cloud Console. Common choices are dev, prod, test, or something more domain-specific.