This Week in Elasticsearch and Apache Lucene - 2019-02-22 | Elastic Blog

Elasticsearch Highlights

Zen2 at scale

We have been investigating Zen2's behaviour with large cluster states and/or large clusters, pushing past the boundaries of what we might consider to be a reasonable deployment and into the territory of the OOM killer. Pleasingly, we found that master election behaves reasonably well even with tens of master-eligible nodes. We fixed one memory-consuming issue (#39179) and continue to look for other ways to bound the memory needed to publish 100+MB of cluster state to 50+ nodes all at once.

Performance

We have released Rally 1.0.4. The release mainly ensures that users can still benchmark in a world of typeless APIs, as these will be the default in Elasticsearch 7.0.0.

After switching the default store type from mmapfs to hybridfs, we saw reduced performance in our nightly benchmarks (up to 20% lower indexing throughput and significantly increased latency for some queries). The goal of that change was to avoid page-cache thrashing for very large indices, but for practical purposes (all benchmarks need to finish within a day) our nightly benchmarks use only rather "small" indices of up to 75GB. It turns out that Lucene uses a compound segment file format (.cfs) for smaller segments to save file handles; with hybridfs we accessed these files via NIO instead of memory-mapping them, which led to significantly worse performance for the "small" indices in our nightly benchmarks. After discussions with the Lucene team and more benchmarking, we have now added .cfs files to the list of files to memory-map. This restored the original performance for small indices while retaining the performance benefit for large indices.
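For reference, the store type is configurable per index via the index.store.type setting; a minimal sketch of a settings body selecting the store type discussed above (field layout as in the index modules documentation):

```python
import json

# Index settings selecting the hybridfs store type (the new default):
# it memory-maps some files (including, after this change, .cfs) and
# reads the rest via NIO.
settings = {
    "settings": {
        "index.store.type": "hybridfs"
    }
}

body = json.dumps(settings)
print(body)
```

Such a body would be sent when creating an index; most deployments should simply keep the default rather than override it.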

ILM (and CCR)

ILM is now fully integrated with CCR (#34648). The final commit to ensure that ILM and CCR work correctly together has been merged (#38529).

In order to replicate across clusters efficiently and correctly, CCR needs to keep a history of the operations performed on a leader shard - indexing, deletes, etc. When those operations are replicated to the follower shard, they'll be applied in a way that keeps everything consistent. If CCR doesn't have that history of operations, it has to fall back to file-based recovery, which can be much less efficient. During this time, there would not be any available shard copy on the follower side, making this a situation that we want to avoid.

Shard history retention leases allow the follower to mark its place in the stream of changes, so that the leader retains all shard history from the operation specified in the lease onwards and the follower does not have to be rebuilt from scratch via a file-based recovery.

ILM now pays attention to these leases and defers operations that would necessarily destroy shard history, specifically the Shrink and Delete actions. If followers have put leases in place, ILM waits until those leases are released by the followers or time out. This makes sure we keep all the shard history around so that it can be replicated to followers; if we didn't do this, we would risk losing the history of shard operations while a follower is still trying to replicate operations from the leader.
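The core idea can be sketched in a few lines. This is a hypothetical illustration, not Elasticsearch's actual classes: each follower leases a sequence number, and the leader may only discard history below the minimum leased position.

```python
# Hypothetical sketch of retention-lease semantics (names invented).
class LeaderShard:
    def __init__(self):
        self.history = []   # (seq_no, operation) pairs
        self.leases = {}    # follower id -> retained-from seq_no

    def index(self, seq_no, op):
        self.history.append((seq_no, op))

    def acquire_lease(self, follower, from_seq_no):
        self.leases[follower] = from_seq_no

    def release_lease(self, follower):
        self.leases.pop(follower, None)

    def trim_history(self, keep_from):
        # History needed by any outstanding lease must survive the trim.
        if self.leases:
            keep_from = min(keep_from, min(self.leases.values()))
        self.history = [(s, op) for s, op in self.history if s >= keep_from]

leader = LeaderShard()
for seq_no in range(5):
    leader.index(seq_no, f"doc-{seq_no}")
leader.acquire_lease("follower-1", from_seq_no=2)
leader.trim_history(keep_from=5)  # lease holds history at seq_no >= 2
print(len(leader.history))        # operations 2, 3 and 4 are retained
```

Once the follower releases its lease (or the lease times out), a subsequent trim is free to discard the remaining history.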

Search - Intervals

We exposed some additional interval filters in the Elasticsearch query DSL. The new overlapping, before, and after operators add new ways to match intervals within documents.
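As a sketch, a query using the new before filter might look like the following; the field name my_text and the query terms are made up for illustration. Here "cold" only matches when its interval appears before an interval matching "porridge":

```python
import json

# Intervals query with a "before" filter rule; "after" and "overlapping"
# plug into the same "filter" slot.
query = {
    "query": {
        "intervals": {
            "my_text": {
                "match": {
                    "query": "cold",
                    "filter": {
                        "before": {"match": {"query": "porridge"}}
                    }
                }
            }
        }
    }
}

print(json.dumps(query, indent=2))
```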

DFS query

We worked on an improvement to the way DFSPhase builds distributed term statistics. This is part of a longer-term plan to possibly remove Weight.extractTerms() from Lucene.
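Conceptually, the DFS phase asks each shard for its local term statistics and combines them so that scoring uses index-wide frequencies rather than per-shard ones. A toy sketch of that merge step (function and variable names are invented):

```python
from collections import Counter

def merge_term_statistics(per_shard_stats):
    """Sum per-term document frequencies reported by each shard."""
    merged = Counter()
    for shard_stats in per_shard_stats:
        merged.update(shard_stats)
    return merged

# Two shards reporting local document frequencies for their terms.
shard_a = {"elastic": 10, "search": 7}
shard_b = {"elastic": 4, "lucene": 3}
global_stats = merge_term_statistics([shard_a, shard_b])
print(global_stats["elastic"])  # 14
```

The open question driving the refactoring is how to discover which terms a query uses without relying on Weight.extractTerms().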

ES Management UI

We also worked on refactoring our server logic into a common library to avoid loads of code duplication across our apps. This work inspired us to refactor the license-checker logic for our application as well. This is the first step towards improving our UX to guide users when they don't have the correct license to access a plugin.

ODBC

We implemented the functionality that enables the new DATE data type conversions. With it, an application that queries a column of type DATE will have the data returned as a native ODBC DATE structure, with all components broken down, enabling faster operations on it. This PR also updates the list of currently advertised standard scalar functions.
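"Broken down" here means the application receives the year, month, and day as separate fields rather than a string to parse. A sketch of the DATE_STRUCT layout, mirrored in ctypes from the field types the ODBC specification defines in sqltypes.h:

```python
import ctypes

class DATE_STRUCT(ctypes.Structure):
    """Mirrors ODBC's tagDATE_STRUCT from sqltypes.h."""
    _fields_ = [
        ("year", ctypes.c_short),    # SQLSMALLINT
        ("month", ctypes.c_ushort),  # SQLUSMALLINT
        ("day", ctypes.c_ushort),    # SQLUSMALLINT
    ]

# The driver fills in a structure like this for each DATE value fetched.
d = DATE_STRUCT(year=2019, month=2, day=22)
print(d.year, d.month, d.day)  # 2019 2 22
```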

Apache Lucene

Query Visiting

Our team members are known for reviving decade-old issues once we have learned a lot more about what needs to be done. This time we are looking into query visitors, which add a generic and flexible way to traverse Query trees. Amusingly, these issues tend to have been opened by, or assigned to, the same person.
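The shape of the idea can be sketched as follows. This is a hypothetical illustration of the visitor pattern over a query tree, not Lucene's actual API: each query accepts a visitor and forwards it to its children, so callers can inspect a tree without knowing its structure.

```python
# Hypothetical query classes; only the visit() protocol matters here.
class TermQuery:
    def __init__(self, field, term):
        self.field, self.term = field, term

    def visit(self, visitor):
        visitor(self)

class BooleanQuery:
    def __init__(self, *clauses):
        self.clauses = clauses

    def visit(self, visitor):
        visitor(self)
        for clause in self.clauses:
            clause.visit(visitor)

# Collect all terms from a query tree without hand-rolled traversal code.
query = BooleanQuery(TermQuery("body", "fast"), TermQuery("body", "lookups"))
terms = []
query.visit(lambda q: terms.append(q.term) if isinstance(q, TermQuery) else None)
print(terms)  # ['fast', 'lookups']
```

With a generic visitor, consumers such as highlighters or term-extraction code no longer need to special-case every query type.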

Last Minute

We got a last-minute API change into 8.0 that makes delegation of TermsEnum much less trappy. This trappiness caused many issues in the past, including enormous memory usage and slow retrieval.

Also coming in at the last minute, we added on-disk term dictionaries. FSTs are Lucene's wonder weapon when it comes to fast term lookups. Until 8.0, this data structure was loaded entirely into heap memory; now FSTs can be read directly from disk if the file is memory-mapped. Lucene detects whether an mmap directory is in use and reads the terms off-heap if the term statistics imply that the field is not an ID field. Performance is on par with the in-memory approach for non-ID fields, while saving significant amounts of memory.

Geo Land

We pushed another performance optimization to the BKD tree by making its heap objects more efficient. This change has a side effect on other indexing strategies (e.g. a BKD tree on one-dimensional points), as the tree currently always creates one of those heap objects regardless of the data it is going to work on. We opened another issue to create such objects only when they are needed. We also refactored the tests for LatLonShape as a step towards facilitating the implementation of CONTAINS.

Changes in Elasticsearch

Changes in 8.0:

  • Fix the OS sensing code in ClusterFormationTasks 38457
  • Remove setting index.optimize_auto_generated_id (#27583) 27600

Changes in 7.1:

  • Distance measures for dense and sparse vectors 37947
  • Don't swallow IOExceptions in InternalTestCluster. 39068
  • BREAKING: Enforce Completion Context Limit 38675
  • Add overlapping, before, after filters to intervals query 38999
  • Tie break search shard iterator comparisons on cluster alias 38853

Changes in 7.0:

  • Remove nGram and edgeNGram token filter names (#38911) 39070
  • Extend nextDoc to delegate to the wrapped doc-value iterator for date_nanos 39176
  • Do not create the missing index when invoking getRole 39039
  • Don't close caches while there might still be in-flight requests. 38958
  • Blob store compression fix 39073
  • Fix libs:ssl-config project setup 39074
  • Fix #38623 remove xpack namespace REST API 38625
  • Also mmap cfs files for hybridfs 38940
  • Recover peers from translog, ignoring soft deletes 38904
  • Fix NPE on Stale Index in IndicesService 38891

Changes in 6.7:

  • ReadOnlyEngine should update translog recovery state information 39238
  • Align generated release notes with doc standards 39234
  • Rebuild remote connections on profile changes 37678
  • minor updates for user-agent ecs for 6.7 39213
  • Only create MatrixStatsResults on final reduction 38130
  • Link to 7.0 documentation in deprecation checks 39194
  • Ensure global test seed is used for all random testing tasks 38991
  • Bump jackson-databind version for AWS SDK 39183
  • Reduce refresh when lookup term in FollowingEngine 39184
  • Deprecate fallback to java on PATH 37990
  • Deprecate Hipchat Watcher actions 39160
  • Bump jackson-databind version for ingest-geoip 39182
  • Remove retention leases when unfollowing 39088
  • Resolve concurrency with watcher trigger service 39092
  • Allow retention lease operations under blocks 39089
  • Fix DateFormatters.parseMillis when no timezone is given 39100
  • Fix shard follow task startup error handling 39053
  • Specify include_type_name in HTTP monitoring. 38927
  • Introduce retention lease state file 39004
  • Generate mvn pom for ssl-config library 39019
  • Integrate retention leases to recovery from remote 38829
  • ShardBulkAction ignore primary response on primary 38901

Changes in 6.6:

  • SQL: add "validate.properties" property to JDBC's allowed list of settings 39050
  • SQL: enforce JDBC driver - ES server version parity 38972
  • Fix simple query string serialization conditional 38960
  • Advance max_seq_no before add operation to Lucene 38879

Changes in 6.5:

  • Build: Fix issue with test status logging 38799

Changes in Elasticsearch Management UI

Changes in 7.1:

  • [Rollup] Add unit tests for Job table 31561
  • [CCR] Add data-test-subj to forms and buttons 30325
  • [CCR] i18n feedback 30028

Changes in Elasticsearch SQL ODBC Driver

Changes in 7.1:

  • Enable DATE conversions 115

Changes in 6.7:

  • Test: driver uninstallation done now with wmic only 116

Changes in Rally

Changes in 1.0.4:

  • Make types optional 647