This documentation refers to the standalone (legacy) method of running APM Server. This method of running APM Server will be deprecated and removed in a future release. Please consider upgrading to the Elastic APM integration.
Tune APM Server output parameters for your Elasticsearch cluster
If your Elasticsearch cluster is not ingesting the amount of data you expect, you can tweak a few APM Server settings:
- Adjust output.elasticsearch.worker. See Tune for indexing speed for an overview of Elasticsearch tuning.
- Make sure output.elasticsearch.bulk_max_size is set to a high value, for example 5120. The default of 50 is very conservative.
- Make sure queue.mem.events is set to a reasonable value compared to your other settings. A good rule of thumb is that queue.mem.events should equal output.elasticsearch.worker multiplied by output.elasticsearch.bulk_max_size.
See the output configuration section for more details.
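As a sketch, these settings live in apm-server.yml; the values below are illustrative starting points, not recommendations for every cluster:

```yaml
# apm-server.yml -- illustrative values only; tune for your cluster
output.elasticsearch:
  hosts: ["localhost:9200"]
  worker: 4             # more workers can raise indexing throughput
  bulk_max_size: 5120   # the default of 50 is very conservative

queue.mem:
  events: 20480         # e.g. worker * bulk_max_size
```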
Adjust internal queue size
APM Server uses an internal queue to buffer incoming events.
A larger queue can retain more data if Elasticsearch is unavailable for longer periods,
and it alleviates problems that might result from sudden spikes of traffic.
You can adjust the queue size by overriding queue.mem.events. Be aware that a large value for queue.mem.events can significantly affect APM Server memory usage.
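For example, a larger in-memory queue could be configured as follows (the flush settings shown are standard queue options; the default for queue.mem.events varies by release, so check your version's reference):

```yaml
# apm-server.yml -- a larger queue buffers more events while
# Elasticsearch is unavailable, at the cost of higher memory usage
queue.mem:
  events: 8192           # illustrative; defaults differ by version
  flush.min_events: 2048 # minimum batch size forwarded to the output
  flush.timeout: 1s      # forward smaller batches after this interval
```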
Add APM Server instances
If the APM Server cannot process data quickly enough, you will see request timeouts.
One way to solve this problem is to increase processing power. This can be done by either migrating your APM Server to a more powerful machine or adding more APM Server instances. Having several instances will also increase availability.
Reduce the payload size
Large payloads may result in request timeouts. You can reduce the payload size by decreasing the flush interval in the agents. This will cause agents to send smaller and more frequent requests.
Optionally, you can also reduce the sample rate or the number of stack traces collected.
Read more in the agents documentation.
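For agents that support the common environment-variable configuration, the knobs above might look like the following; the exact option names and supported values vary by agent, so treat this as a hedged example and check your agent's documentation:

```shell
# Example agent settings (names and defaults vary by agent)
export ELASTIC_APM_API_REQUEST_TIME=5s          # flush open requests sooner
export ELASTIC_APM_API_REQUEST_SIZE=512kb       # smaller payload per request
export ELASTIC_APM_TRANSACTION_SAMPLE_RATE=0.2  # sample 20% of transactions
```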
Adjust anonymous auth rate limit
Agents use long-running requests and flush as many events as possible over a single request. Thus, the rate limiter for anonymous authentication is bound to the number of events sent per second, per IP.
If the event rate limit is hit while events on an established request are sent, the request is not immediately terminated. The intake of events is only throttled to rate_limit.event_limit, which means that events are queued and processed more slowly. Only when the allowed buffer queue is also full does the request get terminated with a 429 - rate limit exceeded HTTP response. If an agent tries to establish a new request while the rate limit is already hit, a 429 is sent immediately.
Increasing the rate_limit.event_limit default value will help avoid rate limit exceeded errors.
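As an illustration, the anonymous auth rate limit could be raised in apm-server.yml like this; the values shown are examples, not recommendations:

```yaml
# apm-server.yml -- rate limiting for anonymous (e.g. RUM) agents;
# illustrative values only
apm-server.auth.anonymous:
  enabled: true
  rate_limit:
    ip_limit: 1000   # number of unique client IPs tracked
    event_limit: 600 # events per second, per IP; raise to avoid 429s
```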