analyzer

The values of analyzed string fields are passed through an analyzer to convert the string into a stream of tokens or terms. For instance, the string "The quick Brown Foxes." may, depending on which analyzer is used, be analyzed to the tokens: quick, brown, fox. These are the actual terms that are indexed for the field, which makes it possible to search efficiently for individual words within big blobs of text.
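For instance, you can see which tokens a particular analyzer produces with the _analyze API. The following request is a small illustration, using the built-in english analyzer on the sentence above:

GET _analyze
{
  "analyzer": "english",
  "text": "The quick Brown Foxes."
}

This returns the tokens [ quick, brown, fox ]: the stop word the is removed, the remaining words are lowercased, and Foxes is stemmed to fox.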

This analysis process needs to happen not just at index time, but also at query time: the query string needs to be passed through the same (or a similar) analyzer so that the terms that it tries to find are in the same format as those that exist in the index.

Elasticsearch ships with a number of pre-defined analyzers, which can be used without further configuration. It also ships with many character filters, tokenizers, and token filters, which can be combined to configure custom analyzers per index.
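As a rough sketch of what such a combination might look like (the index and analyzer names here are made up for illustration), a custom analyzer could chain the built-in html_strip character filter, the standard tokenizer, and the lowercase and asciifolding token filters:

PUT custom_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type":        "custom",
          "char_filter": [ "html_strip" ],
          "tokenizer":   "standard",
          "filter":      [ "lowercase", "asciifolding" ]
        }
      }
    }
  }
}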

Analyzers can be specified per-query, per-field or per-index. At index time, Elasticsearch will look for an analyzer in this order:

  • The analyzer defined in the field mapping.
  • An analyzer named default in the index settings.
  • The standard analyzer.
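The second option corresponds to an index-wide default. A minimal sketch (the index name is arbitrary, and simple is just one choice of built-in analyzer) looks like this:

PUT other_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "type": "simple"
        }
      }
    }
  }
}

Any analyzed string field in other_index that does not specify its own analyzer will then be analyzed with the simple analyzer at index time.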

At query time, there are a few more layers:

  • The analyzer defined in a full-text query.
  • The search_analyzer defined in the field mapping.
  • The analyzer defined in the field mapping.
  • An analyzer named default_search in the index settings.
  • An analyzer named default in the index settings.
  • The standard analyzer.
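The first of these can be seen in a full-text query such as match, which accepts an analyzer parameter. As a sketch (assuming the my_index mapping shown below, and an arbitrary query string), the following request analyzes the query text with the english analyzer regardless of what the field mapping specifies:

GET my_index/_search
{
  "query": {
    "match": {
      "text": {
        "query":    "Quick Foxes",
        "analyzer": "english"
      }
    }
  }
}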

The easiest way to specify an analyzer for a particular field is to define it in the field mapping, as follows:

PUT my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "text": { 
          "type": "string",
          "fields": {
            "english": { 
              "type":     "string",
              "analyzer": "english"
            }
          }
        }
      }
    }
  }
}

GET my_index/_analyze?field=text 
{
  "text": "The quick Brown Foxes."
}

GET my_index/_analyze?field=text.english 
{
  "text": "The quick Brown Foxes."
}

The text field uses the default standard analyzer.

The text.english multi-field uses the english analyzer, which removes stop words and applies stemming.

The first request (against the text field) returns the tokens: [ the, quick, brown, foxes ].

The second request (against the text.english field) returns the tokens: [ quick, brown, fox ].

search_quote_analyzer

The search_quote_analyzer setting allows you to specify an analyzer for phrases. This is particularly useful when you want to disable stop word removal for phrase queries.

To disable stop word removal for phrases, a field requires three analyzer settings:

  1. An analyzer setting for indexing all terms including stop words
  2. A search_analyzer setting for non-phrase queries that will remove stop words
  3. A search_quote_analyzer setting for phrase queries that will not remove stop words
PUT /my_index
{
   "settings":{
      "analysis":{
         "analyzer":{
            "my_analyzer":{ 
               "type":"custom",
               "tokenizer":"standard",
               "filter":[
                  "lowercase"
               ]
            },
            "my_stop_analyzer":{ 
               "type":"custom",
               "tokenizer":"standard",
               "filter":[
                  "lowercase",
                  "english_stop"
               ]
            }
         },
         "filter":{
            "english_stop":{
               "type":"stop",
               "stopwords":"_english_"
            }
         }
      }
   },
   "mappings":{
      "my_type":{
         "properties":{
            "title": {
               "type":"string",
               "analyzer":"my_analyzer", 
               "search_analyzer":"my_stop_analyzer", 
               "search_quote_analyzer":"my_analyzer" 
            }
         }
      }
   }
}
PUT my_index/my_type/1
{
   "title":"The Quick Brown Fox"
}

PUT my_index/my_type/2
{
   "title":"A Quick Brown Fox"
}

GET my_index/my_type/_search
{
   "query":{
      "query_string":{
         "query":"\"the quick brown fox\"" 
      }
   }
}

The my_analyzer analyzer, which tokenizes all terms, including stop words.

The my_stop_analyzer analyzer, which removes stop words.

The analyzer setting that points to the my_analyzer analyzer, which will be used at index time.

The search_analyzer setting that points to the my_stop_analyzer analyzer and removes stop words for non-phrase queries.

The search_quote_analyzer setting that points to the my_analyzer analyzer and ensures that stop words are not removed from phrase queries.

Since the query is wrapped in quotes, it is detected as a phrase query, so the search_quote_analyzer kicks in and ensures that stop words are not removed from the query. The my_analyzer analyzer then returns the tokens [the, quick, brown, fox], which will match one of the documents. Meanwhile, non-phrase queries are analyzed with the my_stop_analyzer analyzer, which filters out stop words. So a search for either The quick brown fox or A quick brown fox will return both documents, since both documents contain the tokens [quick, brown, fox]. Without the search_quote_analyzer, it would not be possible to do exact matches for phrase queries, as stop words would be removed from the phrase query, resulting in both documents matching.
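For contrast, the same search without the quotes is a sketch of the non-phrase case described above: following the example's framing, the query string is analyzed with my_stop_analyzer, the stop word is dropped, and both documents match on [quick, brown, fox]:

GET my_index/my_type/_search
{
   "query":{
      "query_string":{
         "query":"the quick brown fox"
      }
   }
}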