JVM memory pressure indicator

In addition to the more detailed cluster performance metrics, the Elasticsearch Add-On for Heroku console also includes a JVM memory pressure indicator for each node in your cluster. This indicator can help you determine when you need to upgrade to a larger cluster.

The percentage shown by the JVM memory pressure indicator is the fill rate of the old generation pool. For a detailed explanation of why this metric is used, see Understanding Memory Pressure.
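
If you want to track this number outside the console, the nodes stats API exposes the underlying figures. The following is a minimal sketch, not an official tool: it assumes a cluster reachable at a placeholder ES_URL with placeholder credentials and uses the Python requests library to compute the old generation fill rate per node.

```python
import requests

ES_URL = "https://localhost:9200"   # assumption: replace with your cluster endpoint
AUTH = ("elastic", "changeme")      # assumption: replace with your own credentials

# Fetch per-node JVM statistics, including the memory pool breakdown.
resp = requests.get(f"{ES_URL}/_nodes/stats/jvm", auth=AUTH)
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    old_pool = node["jvm"]["mem"]["pools"]["old"]
    if old_pool["max_in_bytes"] > 0:
        # Fill rate of the old generation pool, expressed as a percentage.
        pressure = 100 * old_pool["used_in_bytes"] / old_pool["max_in_bytes"]
        print(f"{node['name']}: {pressure:.1f}% old generation fill rate")
```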

Memory pressure indicator

JVM memory pressure levels

Below 75%, the JVM memory pressure indicator is grey and the garbage collector is idling. There is likely some garbage among the allocated objects, but there is no way to be certain until the pressure level reaches 75% and garbage collection starts.

When the JVM memory pressure reaches 75%, the indicator turns red. At this level, garbage collection starts and gradually becomes more frequent as the memory usage increases, potentially impacting the performance of your cluster. As long as the cluster performance suits your needs, JVM memory pressure above 75% is not a problem in itself, but there is not much spare memory capacity. Review the common causes of high JVM memory usage to determine your best course of action.

When the JVM memory pressure indicator rises above 85%, the node is not only close to running out of memory, but the likelihood of long garbage collection pauses also increases. This situation can reduce the stability of your cluster and the integrity of your data. Unless you expect the load to drop soon, we recommend that you upgrade to a larger cluster. Even if you're planning to optimize your memory usage, it is best to upgrade the cluster first. Upgrading the cluster gives you more time to apply other changes and also provides the cluster with more resources when those changes are applied.
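
These levels lend themselves to simple monitoring. The helper below is a hypothetical sketch that maps a pressure reading, such as the fill rate computed earlier, onto the levels described above; the 75% and 85% thresholds come from this page, not from any Elasticsearch API.

```python
def classify_pressure(pressure_pct: float) -> str:
    """Return an advisory label for a JVM memory pressure percentage.

    Thresholds (75% and 85%) follow the levels described on this page.
    """
    if pressure_pct >= 85:
        return "critical: risk of long GC pauses, consider upgrading the cluster"
    if pressure_pct >= 75:
        return "warning: GC is active, little spare memory capacity"
    return "ok: GC mostly idle"


if __name__ == "__main__":
    # Example readings only, to show how the thresholds are applied.
    for reading in (42.0, 78.5, 91.2):
        print(f"{reading:5.1f}% -> {classify_pressure(reading)}")
```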

Common causes of high JVM memory usage

The two most common reasons for a high JVM memory pressure reading are:

1. Having too many shards per node

If JVM memory pressure above 75% is a frequent occurrence, the cause is often having too many shards per node relative to the amount of available memory. We recommend having fewer than 20 shards per GB of heap memory. You can lower the JVM memory pressure by reducing the number of shards or upgrading to a larger cluster. For guidelines, see the article about avoiding oversharding.
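
One way to check where you stand is to compare shard counts against heap size on each node. The sketch below is an approximation, assuming the same placeholder ES_URL and credentials as above; it uses the cat allocation and cat nodes APIs to estimate shards per GB of heap, with the 20-shards-per-GB guideline taken from this page.

```python
import requests

ES_URL = "https://localhost:9200"   # assumption: replace with your cluster endpoint
AUTH = ("elastic", "changeme")      # assumption: replace with your own credentials

# Shards per node from the cat allocation API.
alloc = requests.get(
    f"{ES_URL}/_cat/allocation",
    params={"format": "json", "h": "node,shards"},
    auth=AUTH,
).json()
shards_by_node = {
    row["node"]: int(row["shards"]) for row in alloc if row["node"] != "UNASSIGNED"
}

# Maximum heap per node, in bytes, from the cat nodes API.
nodes = requests.get(
    f"{ES_URL}/_cat/nodes",
    params={"format": "json", "h": "name,heap.max", "bytes": "b"},
    auth=AUTH,
).json()

for row in nodes:
    heap_gb = int(row["heap.max"]) / 1024 ** 3
    shards = shards_by_node.get(row["name"], 0)
    ratio = shards / heap_gb if heap_gb else float("inf")
    flag = "over the guideline" if ratio >= 20 else "ok"
    print(f"{row['name']}: {shards} shards / {heap_gb:.1f} GB heap "
          f"= {ratio:.1f} shards per GB ({flag})")
```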

2. Running expensive queries

If JVM memory pressure above 75% happens only occasionally, the cause is often expensive queries. Queries with a very large request size, aggregations that produce a large number of buckets, or sorting on a non-optimized field can all cause temporary spikes in JVM memory usage. To resolve this problem, consider optimizing your queries or upgrading to a larger cluster.
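
If you suspect query-driven spikes, the circuit breaker statistics in the nodes stats API can help confirm whether memory-hungry requests are being rejected to protect the heap. The sketch below, again assuming the same placeholder endpoint and credentials, lists any breakers that have tripped on each node.

```python
import requests

ES_URL = "https://localhost:9200"   # assumption: replace with your cluster endpoint
AUTH = ("elastic", "changeme")      # assumption: replace with your own credentials

# Fetch per-node circuit breaker statistics.
resp = requests.get(f"{ES_URL}/_nodes/stats/breaker", auth=AUTH)
resp.raise_for_status()

for node in resp.json()["nodes"].values():
    for name, breaker in node["breakers"].items():
        if breaker["tripped"]:
            # A non-zero trip count means requests were rejected because they
            # would have pushed memory usage past the breaker's limit.
            print(f"{node['name']}: '{name}' breaker tripped "
                  f"{breaker['tripped']} time(s)")
```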

To learn more about monitoring your cluster, see Keeping Your Cluster Healthy.