Improving the Elastic APM UI performance with continuous rollups and service metrics


In today's fast-paced digital landscape, the ability to monitor and optimize application performance is crucial for organizations striving to deliver exceptional user experiences. At Elastic, we recognize the significance of providing our user base with a reliable observability platform that scales with you as you’re onboarding thousands of services that produce terabytes of data each day. We have been diligently working behind the scenes to enhance our solution to meet the demands of even the largest deployments.

In this blog post, we are excited to share the significant strides we have made in improving the UI performance of Elastic APM. Maintaining a snappy user interface can be a challenge when interactively summarizing the massive amounts of data needed to provide an overview of the performance for an entire enterprise-scale service inventory. We want to assure our customers that we have listened, taken action, and made notable architectural changes to elevate the scalability and maturity of our solution.

Architectural enhancements

Our journey began back in the 7.x series, where we noticed that doing ad-hoc aggregations on raw transaction data put Elasticsearch® under a lot of pressure in large-scale environments. Since then, we’ve been pre-aggregating transactions into transaction metrics during ingestion, which has helped keep the performance of the UI relatively stable: regardless of how busy the monitored application is and how many transaction events it creates, we’re just querying pre-aggregated metrics that are stored at a constant rate. The metrics-powered UI has been enabled by default since 7.15.

However, when showing an inventory of a large number of services over large time ranges, the number of metric data points that need to be aggregated can still be large enough to cause performance issues. We also create a time series for each distinct set of dimensions. The dimensions include metadata such as the transaction name and the host name; our documentation includes a full list of all available dimensions. A very high number of unique transaction names, which can be a result of improper instrumentation (see docs for more details), creates a lot of individual time series that need to be aggregated when requesting a summary of the service’s overall performance. Global labels that are added to the APM agent configuration are also added as dimensions to these metrics, so they can impact the number of time series as well. Refer to the FAQs section below for more details.

Within the 8.7 and 8.8 releases, we’ve addressed these challenges with the following architectural enhancements that aim to reduce the number of documents Elasticsearch needs to search and aggregate on-the-fly, resulting in faster response times:

  • Pre-aggregation of transaction metrics into service metrics. Instead of aggregating all distinct time series that are created for each individual transaction name on-the-fly for every user request, we’re already pre-aggregating a summary time series for each service during data ingestion. Depending on how many unique transaction names the services have, this reduces the number of documents Elasticsearch needs to look up and aggregate by a factor of typically 10–100. This is particularly useful for the service inventory and the service overview pages.
  • Pre-aggregation of all metrics into different levels of granularity. The APM UI chooses the most appropriate level of granularity, depending on the selected time range. In addition to the metrics that are stored at a 1-minute granularity, we’re also summarizing and storing metrics at a 10-minute and 60-minute granularity level. For example, when looking at a 7-day period, the 60-minute data stream is queried instead of the 1-minute one, resulting in 60x fewer documents for Elasticsearch to examine. This makes sure that all graphs are rendered quickly, even when looking at larger time ranges (see the sketch after this list).
  • Safeguards on the number of unique transactions per service for which we are aggregating metrics. Our agents are designed to keep the cardinality of the transaction name low. But in the wild, we’ve seen some services with a huge number of unique transaction names. This used to cause performance problems in the UI because APM Server would create many time series that the UI needed to aggregate at query time. To protect APM Server from running out of memory when aggregating a large number of time series for each unique transaction name, metrics used to be published without aggregation once the limits on the number of time series were reached. This resulted in a lot of individual metric documents that needed to be aggregated at query time. To address the problem, we've introduced a system where we aggregate metrics in a dedicated overflow bucket for each service when limits are reached. Refer to our documentation for more details.
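
To give a feel for the granularity selection described above, here is a minimal sketch of the kind of logic a client could use to pick the coarsest rollup that still yields enough data points for a given time range. The thresholds and function name are illustrative assumptions, not the APM UI’s actual implementation.

    from datetime import timedelta

    # Illustrative only: pick the coarsest pre-aggregated granularity that still
    # gives enough data points for the selected time range. The thresholds are
    # assumptions, not the APM UI's actual rules.
    def choose_granularity(time_range: timedelta) -> str:
        if time_range <= timedelta(hours=6):
            return "1m"   # short ranges: fine-grained metrics
        if time_range <= timedelta(days=3):
            return "10m"  # medium ranges: 10x fewer documents to aggregate
        return "60m"      # long ranges (e.g., 7 days): 60x fewer documents than 1m

    print(choose_granularity(timedelta(days=7)))  # -> 60m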

The exact factor of the document count reduction depends on various conditions. But to get a feel for a typical scenario: if your services, on average, have 10 instances, no instance-specific global labels, and 100 unique transaction names each, and you’re looking at time ranges that can leverage the 60m granularity, you’d see a reduction of documents that Elasticsearch needs to aggregate by a factor of 180,000 (10 instances x 100 transaction names x 60m x 3, because we’re also collapsing the event.outcome dimension). While the response time of Elasticsearch aggregations doesn’t scale exactly linearly with the number of documents, there is a strong correlation.
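
As a back-of-the-envelope check, here is the same calculation spelled out (the numbers are the illustrative scenario from above, not measurements):

    # The illustrative scenario from the paragraph above, not measured values.
    instances = 10           # service instances collapsed by the service metrics
    transaction_names = 100  # unique transaction names per service
    rollup = 60              # 60m granularity instead of 1m
    outcomes = 3             # event.outcome values collapsed into success counts

    reduction_factor = instances * transaction_names * rollup * outcomes
    print(reduction_factor)  # -> 180000 fewer documents to aggregate per query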

FAQs

When upgrading to the latest version, will my old data also load faster?

Updating to 8.8 doesn’t immediately make the UI faster. Because the improvements are powered by pre-aggregations that APM Server performs during ingestion, only newly ingested data will benefit from them. For that reason, make sure to update APM Server as well. The UI can still display data that was ingested using an older version of the stack.

If the UI is based on metrics, can I still slice and dice using custom labels?

High cardinality analysis is a big strength of Elastic Observability, and this focus on pre-aggregated metrics does not compromise that in any way.

The UI implements a sophisticated fallback mechanism that uses service metrics, transaction metrics, or raw transaction events, depending on which filters are applied. We’re not creating metrics for each user.id, for example. But you can still filter the data by user.id, and the UI will then use raw transaction events. Chances are that you’re looking at a narrow slice of data when filtering by a dimension that is not available on the pre-aggregated metrics, so aggregations on the raw data are typically very fast.
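
As an illustration of that kind of metrics-first fallback (and not the APM UI’s actual implementation), a query layer could check whether all requested filter fields are dimensions of the pre-aggregated metrics and only fall back to raw transaction events when they aren’t. The dimension sets below are examples, not the complete lists used by the product.

    # Illustrative sketch of a metrics-first fallback. The dimension sets are
    # examples, not the complete lists used by the APM UI.
    SERVICE_METRIC_DIMENSIONS = {"service.name", "service.environment", "transaction.type"}
    TRANSACTION_METRIC_DIMENSIONS = SERVICE_METRIC_DIMENSIONS | {"transaction.name", "host.name"}

    def choose_source(filter_fields):
        if filter_fields <= SERVICE_METRIC_DIMENSIONS:
            return "service metrics"      # cheapest: one summary time series per service
        if filter_fields <= TRANSACTION_METRIC_DIMENSIONS:
            return "transaction metrics"  # still pre-aggregated
        return "raw transaction events"   # e.g., filtering by user.id

    print(choose_source({"service.name", "user.id"}))  # -> raw transaction events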

Note that all global labels that are added to the APM agent configuration are part of the dimensions of the pre-aggregated metrics, with the exception of RUM (see more details in this issue).

Can I use the pre-aggregated metrics in custom dashboards?

Yes! If you use Lens and select the "APM" data view, you can filter on either metricset.name:service_transaction or metricset.name:transaction, depending on the level of detail you need. Transaction latency is captured in transaction.duration.histogram, and successful and failed outcomes are stored in event.success_count. If you don't need a distribution of values, you can also select the transaction.duration.summary field for your metric aggregations, which should be faster. If you want to calculate the failure rate, here's a Lens formula: 1 - (sum(event.success_count) / count(event.success_count)). Note that the only granularity supported here is 1m.
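
If you prefer to query these fields directly instead of using Lens, here is a hedged sketch using the Python Elasticsearch client. The index pattern, endpoint, credentials, service name, and time range are placeholders to adapt to your deployment; the metric field names are the ones mentioned above.

    from elasticsearch import Elasticsearch

    # Placeholders: adjust the endpoint, credentials, index pattern, service
    # name, and time range to your deployment.
    es = Elasticsearch("https://localhost:9200", api_key="...")

    resp = es.search(
        index="metrics-apm*",  # assumption: the default APM metrics data streams
        size=0,
        query={
            "bool": {
                "filter": [
                    {"term": {"metricset.name": "service_transaction"}},
                    {"term": {"service.name": "my-service"}},
                    {"range": {"@timestamp": {"gte": "now-24h"}}},
                ]
            }
        },
        aggs={
            "successes": {"sum": {"field": "event.success_count"}},
            "total": {"value_count": {"field": "event.success_count"}},
        },
    )

    # Failure rate = 1 - successes / total, mirroring the Lens formula above.
    successes = resp["aggregations"]["successes"]["value"]
    total = resp["aggregations"]["total"]["value"]
    print(1 - successes / total if total else None)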

Do the additional metrics have an impact on the storage?

While we’re storing more metrics than before, and we’re storing all metrics in different levels of granularity, we were able to offset that by enabling synthetic source for all metric data streams. We’ve even increased the default retention for the metrics in the coarse-grained granularity levels, so that the 60m rollup data streams are now stored for 390 days. Please consult our documentation for more information about the different metric data streams.

Are there limits on the number of time series that APM Server can aggregate?

APM Server performs pre-aggregations in memory, which is fast but consumes a considerable amount of memory. There are limits in place to protect APM Server from running out of memory, and starting with 8.7, most of them scale with the available memory by default, meaning that allocating more memory to APM Server allows it to handle more unique pre-aggregation groups, such as services and transactions. These limits are described in the APM Server Data Model docs.

On the APM Server roadmap, we have plans to move to an LSM-based approach where pre-aggregations are performed with the help of disks in order to reduce memory usage. This will enable APM Server to scale better with the input size and cardinality.

A common pitfall when working with pre-aggregations is to add instance-specific global labels to APM agents. This may exhaust the aggregation limits and cause metrics to be aggregated under the overflow bucket instead of the corresponding service. Therefore, make sure to follow the best practice of only adding a limited set of global labels to a particular service.
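
As a concrete example, most Elastic APM agents accept global labels through the ELASTIC_APM_GLOBAL_LABELS setting. The sketch below (setting the environment variable from Python, with made-up values) contrasts a low-cardinality set of labels with an instance-specific one that would multiply the number of time series per service.

    import os

    # Good: a small, fixed set of deployment-wide labels (low cardinality).
    # Set this before the APM agent is initialized.
    os.environ["ELASTIC_APM_GLOBAL_LABELS"] = "team=payments,region=eu-west-1"

    # Risky: instance-specific values (host names, pod IDs, build numbers) add
    # one time series per instance and can exhaust the aggregation limits,
    # pushing metrics into the overflow bucket.
    # os.environ["ELASTIC_APM_GLOBAL_LABELS"] = "pod_id=payments-7d4b9c-x2kfl"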

Validation

To validate the effectiveness of the new architecture, and to ensure that the accuracy of the data is not negatively affected, we prepared a test environment where we generated 35K+ transactions per minute over a timespan of 14 days, resulting in approximately 850 million documents.

We’ve tested the queries that power our service inventory, the service overview, and the transaction details using different time ranges (1d, 7d, 14d). Across the board, we’ve seen improvements of orders of magnitude. In particular, queries across larger time ranges, which benefit from the coarse-grained metrics in addition to the pre-aggregated service metrics, saw dramatic reductions in response time.

We’ve also validated that there’s no loss in accuracy when using the more coarse-grained metrics for larger time ranges.

Every environment will behave a bit differently, but we’re confident that the impressive improvements in response time will translate well to setups of even larger scale.

Planned improvements

As mentioned in the FAQs section, the number of time series for transaction metrics can grow quickly, as it is the product of multiple dimensions. For example, given a service that runs on 100 hosts and has 100 transaction names that each have 4 transaction results, APM Server needs to track 40,000 (100 x 100 x 4) different time series for that service. This would even exceed the maximum per-service limit of 32,000 for APM Servers with 64GB of main memory.
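
Spelled out, with the per-service limit mentioned above (the numbers are the example from this paragraph, not measurements):

    # The example from the paragraph above: time series tracked for one service.
    hosts = 100
    transaction_names = 100
    transaction_results = 4

    time_series = hosts * transaction_names * transaction_results
    print(time_series)          # -> 40000
    print(time_series > 32000)  # -> True: exceeds the per-service limit at 64GB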

As a result, the UI will show an entry for “Remaining Transactions” on the Service overview page, which tracks the transaction metrics for a service once it hits the limit. This means you may not see all transaction names of your service. It may also be that all distinct transaction names are listed, but that the transaction metrics for some of the instances of that service are combined in the “Remaining Transactions” category.

We’re currently considering restructuring the metric dimensions so that the combination of the transaction name and service instance-specific dimensions (such as the host name) doesn’t lead to an explosion of time series. Stay tuned for more details.

Conclusion

The architectural improvements we’ve delivered in the past releases provide a step-function improvement in the scalability and responsiveness of our UI. Instead of having to aggregate massive amounts of data on-the-fly as users navigate through the user interface, we pre-aggregate the results for the most common queries as data comes in. This ensures we have the answers ready before users have even asked their most frequently asked questions, while still being able to answer ad-hoc questions.

We are excited to continue supporting our community members as they push boundaries on their growth journey, providing them with a powerful and mature platform that can effortlessly handle the demands of the largest workloads. Elastic is committed to its mission to enable everyone to find the answers that matter. From all data. In real time. At scale.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.