More skipping with block-max MAXSCORE

Learn about MAXSCORE, block-max MAXSCORE, and WAND, and how we improved the MAXSCORE algorithm to evaluate disjunctive queries more like conjunctive queries.

Introduction to WAND and MAXSCORE

How do you quickly identify the top-k matches of a disjunctive query using an inverted index? This is the problem that the WAND [1] and MAXSCORE [2] algorithms try to solve. Both algorithms are based on the same idea: take advantage of the maximum impact score of each term (the maximum contribution that this term can make to the overall score) in order to skip hits whose score cannot possibly exceed the score of the current k-th best hit (referred to as the "minimum competitive score" below). For instance, if you are searching for hits that contain "the" or "fox", "the" has a maximum contribution of 0.2 to the score, and the minimum competitive score is 0.5, then there is no point in evaluating hits that only contain "the" anymore: they have no chance of making it into the top hits.
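The skipping criterion can be sketched in a few lines of Python. This is an illustrative sketch, not Lucene's implementation: the function name and scores are made up, and it assumes that per-term scores simply sum into the hit's overall score.

```python
def max_possible_score(matched_terms, max_scores):
    """Upper bound on a hit's score: the sum of the maximum impact
    scores of the terms it matches (assuming term scores combine by
    summation, as for a disjunction of scoring clauses)."""
    return sum(max_scores[t] for t in matched_terms)

# Hypothetical maximum impact scores from the example above.
max_scores = {"the": 0.2, "fox": 1.0}
min_competitive = 0.5

# A hit that only matches "the" is bounded by 0.2 < 0.5, so it can
# never enter the top-k and is safe to skip without being scored.
skippable = max_possible_score(["the"], max_scores) < min_competitive
```

A hit matching both terms is bounded by 1.2, so it still has to be evaluated.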

However, WAND and MAXSCORE come with different performance characteristics: WAND typically evaluates fewer hits than MAXSCORE, but with a higher per-hit overhead. As a result, MAXSCORE generally performs better with high k values or many query terms, when skipping hits is hard, while WAND performs better otherwise.

Block-max MAXSCORE & MAXSCORE in Lucene

While Lucene first implemented a variant of WAND called block-max WAND, it was later drawn to the lower overhead of MAXSCORE and started using block-max MAXSCORE for top-level disjunctions in July 2022 (annotation EN in Lucene's nightly benchmarks). The MAXSCORE algorithm is rather simple: it sorts terms by increasing maximum impact score and partitions them into two groups, essential terms and non-essential terms. Non-essential terms are the terms with the lowest maximum impact scores, whose maximum scores sum to less than the minimum competitive score; essential terms are all other terms. Essential terms are used to find candidate matches, while non-essential terms are only used to compute the scores of these candidates.
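The partition can be sketched as follows. This is a Python sketch under the same summing-scores assumption as above; the function name and data layout are illustrative, not Lucene's API.

```python
def partition_terms(max_scores, min_competitive):
    """Split terms into (non_essential, essential) per MAXSCORE:
    sort by increasing maximum impact score, then greedily move terms
    into the non-essential group while their summed max scores stay
    below the minimum competitive score."""
    terms = sorted(max_scores, key=max_scores.get)
    non_essential, running_sum = [], 0.0
    for term in terms:
        if running_sum + max_scores[term] < min_competitive:
            non_essential.append(term)
            running_sum += max_scores[term]
        else:
            break  # this and all higher-scoring terms are essential
    return non_essential, terms[len(non_essential):]
```

With the scores from the example below ("the": 0.2, "quick": 0.5, "fox": 1) and a minimum competitive score of 0.3, this returns (["the"], ["quick", "fox"]).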

MAXSCORE usage example

Let's take an example: you are searching for "the quick fox", and the maximum impact scores of the terms are respectively 0.2 for "the", 0.5 for "quick", and 1 for "fox". As you start collecting hits, the minimum competitive score is 0, so all terms are essential and no terms are non-essential. Then at some point the minimum competitive score reaches, say, 0.3, meaning that a hit that only contains "the" has no chance of making it into the top-k hits. "the" moves from the set of essential terms to the set of non-essential terms, and the query effectively runs as (the) +(quick fox). The + sign expresses that a query clause is required, as in Lucene's classic query parser. Said another way, from that point on, the query will only match hits that contain "quick" or "fox", and will only use "the" to compute the final scores of these hits. The table below summarizes the cases that MAXSCORE considers:

| Minimum competitive score interval | Query runs as      |
|------------------------------------|--------------------|
| [0, 0.2]                           | +(the quick fox)   |
| (0.2, 0.7]                         | (the) +(quick fox) |
| (0.7, 1.7]                         | (the quick) +(fox) |
| (1.7, +Infty)                      | No more matches    |

The last case happens when the minimum competitive score is greater than the sum of all maximum impact scores across all terms. It typically never happens with regular MAXSCORE, but may happen on some blocks with block-max MAXSCORE.
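The interval boundaries in the table are just prefix sums of the sorted maximum impact scores, so they can be enumerated mechanically. A Python sketch (names are illustrative, not Lucene's API):

```python
def maxscore_intervals(max_scores):
    """For each k, build the interval (prefix[k], prefix[k+1]] of
    minimum competitive scores under which the k lowest-scoring terms
    are non-essential, where prefix[k] is the sum of the k smallest
    maximum impact scores."""
    terms = sorted(max_scores, key=max_scores.get)
    intervals, prefix = [], 0.0
    for k, term in enumerate(terms):
        upper = prefix + max_scores[term]
        intervals.append((prefix, upper, terms[:k], terms[k:]))
        prefix = upper
    # Above intervals[-1][1] (the sum of all max scores), no more matches.
    return intervals

intervals = maxscore_intervals({"the": 0.2, "quick": 0.5, "fox": 1.0})
# intervals[1] covers (0.2, 0.7]: "the" is non-essential while
# "quick" and "fox" are essential, i.e. (the) +(quick fox).
```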

Improving MAXSCORE to intersect terms

Something that WAND does better than MAXSCORE is to progressively evaluate queries less as a disjunction and more as a conjunction as the minimum competitive score increases, which yields more skipping. This raised the question: can MAXSCORE be improved to intersect terms too? The answer is yes. For instance, if the minimum competitive score is 1.3, then a hit cannot be competitive unless it matches both "quick" and "fox". So we modified our block-max MAXSCORE implementation to consider the following cases instead:

| Minimum competitive score interval | Query runs as      |
|------------------------------------|--------------------|
| [0, 0.2]                           | +(the quick fox)   |
| (0.2, 0.7]                         | (the) +(quick fox) |
| (0.7, 1.2]                         | (the quick) +(fox) |
| (1.2, 1.5]                         | (the) +quick +fox  |
| (1.5, 1.7]                         | +the +quick +fox   |
| (1.7, +Infty)                      | No more matches    |
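The extra conjunctive cases all follow one rule: a term becomes required as soon as the sum of all the other terms' maximum impact scores falls below the minimum competitive score, because without that term a hit can no longer be competitive. A hedged Python sketch of this rule (names are made up; Lucene's actual implementation applies it per block):

```python
def required_and_optional(max_scores, min_competitive):
    """A term is *required* when the other terms' maximum impact
    scores cannot reach the minimum competitive score on their own.
    Returns None when no hit can be competitive at all."""
    total = sum(max_scores.values())
    if min_competitive > total:
        return None  # the "no more matches" case
    by_score = sorted(max_scores, key=max_scores.get)
    required = [t for t in by_score
                if total - max_scores[t] < min_competitive]
    optional = [t for t in by_score if t not in required]
    return required, optional
```

With the example scores and a minimum competitive score of 1.3, this returns (["quick", "fox"], ["the"]), matching the (the) +quick +fox row of the table.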

Now the interesting question is whether these new cases are likely to occur in practice. The answer depends on how good your score upper bounds are, your actual k value, whether terms have matches in common, and so on. But the optimization seems to kick in especially often on queries that either have two terms, or that combine two high-scoring terms with zero or more low-scoring terms (e.g. stop words), such as the query in the above example. This should cover a sizable share of the queries in many query logs.

Implementing this optimization yielded a noticeable improvement in Lucene's nightly benchmarks (annotation FU): see OrHighHigh (11% speedup) and OrHighMed (6% speedup). It was released in Lucene 9.9 and should be included in Elasticsearch 8.12. We hope you'll enjoy the speedups!

Footnotes

  1. Broder, A. Z., Carmel, D., Herscovici, M., Soffer, A., & Zien, J. (2003, November). Efficient query evaluation using a two-level retrieval process. In Proceedings of the twelfth international conference on Information and knowledge management (pp. 426-434).

  2. Turtle, H., & Flood, J. (1995). Query evaluation: strategies and optimizations. Information Processing & Management, 31(6), 831-850.
