This Week in Elasticsearch and Apache Lucene - 2017-02-27

Welcome to This Week in Elasticsearch and Apache Lucene! With this weekly series, we're bringing you an update on all things Elasticsearch and Apache Lucene at Elastic, including the latest on commits, releases and other learning resources.

Batched Search Reduce Phases

Over the last two weeks we refactored several parts of our search layer, both to improve testability and to add new features. With the addition of cross-cluster search a couple of weeks ago, the ability to search across very many shards became a much higher priority. Batched search reduce phases are the first step towards removing the artificial soft limit of 1,000 shards per search request: shard results can now be reduced in batches (512 shards at a time by default), freeing up resources as early as possible and avoiding the high memory consumption that the soft limit was introduced to guard against. At this point only aggregations are reduced in batches, but further work is already under way and is expected to land in version 5.4.
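As a rough sketch of how a request might opt into a smaller batch size once this lands: the exact parameter name is still subject to change before 5.4, and batched_reduce_size, the logs-* index and the host.keyword field below are assumptions made for the sake of the example.

    # Ask Elasticsearch to reduce aggregation results from at most 256 shards
    # at a time instead of holding on to every shard result until the end.
    curl -XPOST 'localhost:9200/logs-*/_search?batched_reduce_size=256' -H 'Content-Type: application/json' -d '
    {
      "size": 0,
      "aggs": {
        "hosts": {
          "terms": { "field": "host.keyword" }
        }
      }
    }'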

Circuit breaker accounting leak

This week we saw a two-day, round-the-clock debugging spree around a circuit breaker accounting bug. We had been chasing reports of clusters not accepting requests due to the request circuit breaker for a while, but couldn't pin them down. Last Monday, Jason Bryan signalled that his private cloud cluster displayed the same symptoms. Restarting the cluster made it accessible again, but plotting the request circuit breaker value over time clearly showed a leak: something was incrementing it by a few MB every 2.5 minutes, and once those MBs consumed 70% of a node's memory, no request to that node could be served.

A few more hours, late into the night, and we had correlated the leak with the snapshotting logic in Cloud. That was puzzling, as Cloud only snapshots every 30 minutes. More digging revealed that the Cloud call to list the snapshots times out, breaks the connection and immediately retries, which explained the faster cycle, but we still had no clue what was happening on the Elasticsearch side. Since it was our own cluster, Jason Tedor built a custom jar with some debugging logic, and the results were surprising: it wasn't snapshot related at all. If the client closed a connection to the REST layer before Elasticsearch could respond, we would fail to mark the request resources as freed. The snapshot listing call's only fault was being slow... the cluster had 426 snapshots in it, each with many indices, and S3 was just slow to read (sometimes more than 5 minutes).

This issue was fixed and will be part of the imminent 5.2.2 release. We deployed the build candidate to Jason's cluster and confirmed that the leak is gone. Thanks again to Jason Bryan and the Cloud team for working with us and jumping through hoops to get this resolved.
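If you want to watch for the same symptom on your own cluster, the request circuit breaker's current estimate is exposed through the nodes stats API; a minimal check looks like this:

    # Show each node's circuit breakers, including the "request" breaker's
    # estimated_size_in_bytes; a value that only ever grows between restarts
    # suggests an accounting leak like the one described above.
    curl -XGET 'localhost:9200/_nodes/stats/breaker?pretty'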

New Rally nested track

Rally got a new track called "nested" which indexes a subset of a Stack Overflow dump using nested documents. It runs nested queries, nested aggregations and nested sorts, as well as simple queries that do not leverage the nested structure but still have to mask nested documents. This should keep us better informed about performance improvements and regressions related to nested documents in the future.
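For a flavour of what the track exercises, here is a minimal nested query of the kind it runs; the questions index and the answers fields are illustrative stand-ins rather than the track's actual mapping.

    # Match questions that have at least one nested answer mentioning
    # "garbage collection".
    curl -XPOST 'localhost:9200/questions/_search' -H 'Content-Type: application/json' -d '
    {
      "query": {
        "nested": {
          "path": "answers",
          "query": {
            "match": { "answers.body": "garbage collection" }
          }
        }
      }
    }'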

Completion suggester learns to deduplicate suggestions

Lucene's near-real-time, document-based suggester, exposed as the new completion suggester and context suggester in Elasticsearch 5.x, is a powerful auto-suggest implementation, differentiated because it respects deleted documents and can apply filters. It also supports an analyzer to normalize the different ways users type what is in fact the same suggestion, which can be very useful. However, it was missing duplicate removal, a big limitation for use cases such as suggesting author names from your index, where prolific authors may have written many documents. Under the hood, the suggester builds a per-segment finite-state transducer (FST), where each path is first the analyzed suggestion string, followed by a vInt encoding of the document ID. This means that the FST has already done the hard part for deduplication: all duplicates share a single path up until the document ID, at which point it branches out to all the many documents with that suggestion. We now take advantage of that to efficiently prune partial paths that can only lead to duplicate suggestions, so that in Lucene 6.5.0 and Elasticsearch 5.4.0 we will have the option to remove duplicates.
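As a sketch of how this might look from the Elasticsearch side once the option is exposed: the skip_duplicates flag, the authors index and the name_suggest field below are assumptions for illustration, not the final API.

    # Suggest author names for the prefix "jo", collapsing duplicate suggestions
    # that come from different documents.
    curl -XPOST 'localhost:9200/authors/_search' -H 'Content-Type: application/json' -d '
    {
      "suggest": {
        "author-suggest": {
          "prefix": "jo",
          "completion": {
            "field": "name_suggest",
            "skip_duplicates": true
          }
        }
      }
    }'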

Changes in 5.3:

Changes in 5.x:

Changes in master:

Coming up:

Apache Lucene

Watch This Space

Stay tuned to this blog, where we'll share more news on the whole Elastic ecosystem, including learning resources and cool use cases!