In ECE, every host is a runner. Depending on the size of your platform, runners can take on one or more roles: coordinator, director, proxy, and allocator. When planning the capacity of your ECE installation, you must size each role appropriately. However, the allocator role deserves particular attention, as it hosts the Elasticsearch, Kibana, APM, and Enterprise Search nodes, along with their supporting services.

This section focuses on the allocator role, and explains how to plan its capacity in terms of memory, CPU, processors setting, and storage.

### Memory

You should plan your deployment size based on the amount of data you ingest. Memory is the main scaling unit for a deployment. Other units, like CPU and disks, are proportional to the memory size. The memory available for an allocator is called capacity.

During installation, the allocator capacity defaults to 85% of the host physical memory, as the rest is reserved for ECE system services.
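As a rough illustration, the default rule can be sketched as follows (the function name is ours, and the plain 85% arithmetic is an assumption; ECE's own rounding may differ slightly):

```python
def default_allocator_capacity_mb(host_memory_mb: int) -> int:
    """Return the default allocator capacity: 85% of host physical memory.

    The remaining 15% is reserved for ECE system services.
    """
    return int(host_memory_mb * 0.85)

# A 128GB (131072MB) host yields roughly 111411MB of allocator capacity.
print(default_allocator_capacity_mb(131072))
```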

To adjust the allocator capacity, reinstall ECE on the host with a new value assigned to the `--capacity` parameter. If you cannot reinstall on the host, for example after more physical memory has been added to the server, use the ECE API:

```sh
curl -X PUT \
  -H "Authorization: ApiKey $ECE_API_KEY" \
  -H 'Content-Type: application/json' \
  "https://<coordinator_host>:12443/api/v1/platform/infrastructure/allocators/<allocator_id>/settings" \
  -d '{"capacity": <capacity_value_in_MB>}'
```

For more information on how to use API keys for authentication, see the section Access the API from the Command Line.

Note that even if you update the capacity through this API, the CPU quota calculation still uses the memory value specified at installation time.

#### Examples

The following examples show how much memory to reserve so that both Elastic deployments and ECE system services run smoothly on your host:

• If the runner has more than one role (allocator plus coordinator, director, or proxy), reserve 28GB of host memory. For example, on a host with 256GB of RAM, 228GB remains available for deployment use.
• If the runner has only the allocator role, reserve 12GB of host memory. For example, on a host with 256GB of RAM, 244GB remains available for deployment use.
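The reservation rule in the examples above can be expressed as a small helper (the function name and role-set representation are ours, not part of ECE):

```python
def deployment_usable_gb(host_ram_gb: int, roles: set[str]) -> int:
    """Host RAM left for deployments after the ECE system service reservation.

    Assumption, per the examples above: 12GB is reserved on allocator-only
    hosts, 28GB when the host holds additional roles.
    """
    reserved_gb = 12 if roles == {"allocator"} else 28
    return host_ram_gb - reserved_gb

print(deployment_usable_gb(256, {"allocator", "coordinator"}))  # 228
print(deployment_usable_gb(256, {"allocator"}))                 # 244
```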

### CPU quotas

ECE uses CPU quotas to assign shares of the allocator host to the instances that are running on it. To calculate the CPU quota, use the following formula:

```
CPU quota = DeploymentRAM / HostCapacity * factor
```

By default, the overcommit factor is 1.2. If the allocator is at 100% CPU usage, giving more CPU resources to one deployment can result in other deployments getting a smaller share than expected. This trade-off might be acceptable, for example, when you run a single production deployment and all other deployments on the host are for testing only.

Smaller instances and dedicated masters are given an additional boost as if they were larger, but this is outside the scope of this document.

To change the CPU overcommit factor, log into the Cloud UI and proceed as follows:

1. Scroll to the bottom of the page and click Advanced Edit.
2. In the Elasticsearch cluster data block, override the default value:

```json
"resources": {
  "cpu": {
    "factor": 1.4
  }
}
```

#### Examples

Consider a 32GB deployment hosted on a 128GB allocator.

If you use the default system service reservation (85% of host memory), the CPU quota is 35.3%:

```
CPU quota = 32 / (128 * 0.85) * 1.2 = 35.3%
```

If you use a fixed 12GB allocator system service reservation, the CPU quota is 33.1%:

```
CPU quota = 32 / (128 - 12) * 1.2 = 33.1%
```

These percentages represent the upper limit on the share of the host's total CPU resources that the deployment can consume in any given 100ms period.
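The two calculations above can be reproduced with a short helper (the function name is ours):

```python
def cpu_quota_percent(deployment_ram_gb: float, host_capacity_gb: float,
                      factor: float = 1.2) -> float:
    """CPU quota = DeploymentRAM / HostCapacity * factor, as a percentage."""
    return round(deployment_ram_gb / host_capacity_gb * factor * 100, 1)

# Default reservation: capacity is 85% of a 128GB host.
print(cpu_quota_percent(32, 128 * 0.85))  # 35.3
# Fixed 12GB allocator system service reservation.
print(cpu_quota_percent(32, 128 - 12))    # 33.1
```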

### Processors setting

In addition to CPU quotas, the processors setting also plays an important role.

The allocated processors setting originates from Elasticsearch and is responsible for sizing your thread pools. While the CPU quota defines the percentage of an allocator's total CPU resources assigned to an instance, the allocated processors define how the thread pools are calculated in Elasticsearch, and therefore how many concurrent search and indexing requests an instance can process. In other words, the CPU quota defines how fast a single task can be completed, while the processors setting defines how many different tasks can be completed at the same time.

Starting from Elasticsearch version 7.9.2, running on ECE 2.7.0 or newer, we rely on Elasticsearch and the -XX:ActiveProcessorCount JVM setting to automatically detect the allocated processors.

In earlier versions of ECE and Elasticsearch, the Elasticsearch `processors` setting was used to configure the allocated processors according to the following formula:

```
Math.min(16, Math.max(2, (16 * instanceCapacity * 1.0 / 1024 / 64).toInt))
```

The following table gives an overview of the allocated processors that are used to calculate the Elasticsearch thread pools based on the formula above:

| Instance size (MB) | vCPU |
|---|---|
| 1024 | 2 |
| 2048 | 2 |
| 4096 | 2 |
| 8192 | 2 |
| 16384 | 4 |
| 32768 | 8 |
| 65536 | 16 |

This table also provides a rough indication of what the auto-detected value could be on newer versions of ECE and Elasticsearch.
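The legacy formula can be cross-checked against the table with a direct Python port (the function name is ours):

```python
def allocated_processors(instance_capacity_mb: int) -> int:
    """Python port of the legacy formula:
    Math.min(16, Math.max(2, (16 * instanceCapacity * 1.0 / 1024 / 64).toInt))
    """
    return min(16, max(2, int(16 * instance_capacity_mb / 1024 / 64)))

for size_mb in (1024, 2048, 4096, 8192, 16384, 32768, 65536):
    print(size_mb, allocated_processors(size_mb))
```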

### Storage

ECE has specific hardware prerequisites for storage. Disk space is consumed by system logs, container overhead, and deployment data.

The main factor for sizing the disk quota is the deployment data, that is, data from your Elasticsearch, Kibana, and APM nodes. The biggest portion of that data is consumed by the Elasticsearch nodes.

ECE uses XFS to enforce specific disk space quotas to control the disk consumption for the deployment nodes running on your allocator.

To calculate the disk quota, use the following formula:

```
Disk quota = ICmultiplier * Deployment RAM
```

ICmultiplier is the disk multiplier of the instance configuration that you defined in your ECE environment.

The default multiplier for `data.default` is 32, which is used for hot nodes. The default multiplier for `data.highstorage` is 64, which is used for warm and cold nodes. The default multiplier for `data.frozen` is 80, which is used for frozen nodes.
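A quick sketch of the disk quota formula with the default multipliers listed above (the dictionary and function names are ours, not ECE identifiers):

```python
# Default disk multipliers per instance configuration, as described above.
INSTANCE_CONFIG_MULTIPLIERS = {
    "data.default": 32,      # hot nodes
    "data.highstorage": 64,  # warm and cold nodes
    "data.frozen": 80,       # frozen nodes
}

def disk_quota_gb(deployment_ram_gb: int,
                  instance_config: str = "data.default") -> int:
    """Disk quota = ICmultiplier * Deployment RAM."""
    return INSTANCE_CONFIG_MULTIPLIERS[instance_config] * deployment_ram_gb

print(disk_quota_gb(8))                 # 256GB for an 8GB hot node
print(disk_quota_gb(8, "data.frozen"))  # 640GB for an 8GB frozen node
```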

You can change the value of the disk multiplier at different levels:

• At the ECE level, see Edit instance configurations.
• At the instance level, log into the Cloud UI and proceed as follows:
  1. From your deployment overview page, find the instance you want and open the instance menu.
  2. Select Override disk quota.