WARNING: Version 0.90 of Elasticsearch has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Analyzers are composed of a single Tokenizer
and zero or more TokenFilters. The tokenizer may
be preceded by one or more CharFilters.
The analysis module allows you to register Analyzers under logical
names which can then be referenced either in mapping definitions or in
certain APIs.
Elasticsearch comes with a number of prebuilt analyzers which are ready to use. Alternatively, you can combine the built-in character filters, tokenizers and token filters to create custom analyzers.
When no analyzer is defined, defaults are used. There is also an option to define which analyzers will be used by default when none can be derived.
The default logical name allows one to configure an analyzer that will
be used both for indexing and for searching. The default_index
logical name can be used to configure a default analyzer that will be
used just when indexing, and the default_search logical name can be used to
configure a default analyzer that will be used just when searching.
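As a hypothetical sketch, registering analyzers under these reserved logical names in the node settings could look as follows (the standard and simple analyzer types are built-in; the choice of types here is only illustrative):

```yaml
index :
    analysis :
        analyzer :
            # analyzer applied when indexing, unless overridden in the mapping
            default_index :
                type : standard
            # analyzer applied at search time when none is otherwise derived
            default_search :
                type : simple
```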
Analyzers can be aliased to have several registered lookup names
associated with them. For example, the following will allow the
standard analyzer to also be referenced with the
alias1 and alias2 values.

index :
    analysis :
        analyzer :
            standard :
                alias: [alias1, alias2]
                type : standard
                stopwords : [test1, test2, test3]
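In the same way, the built-in character filters, tokenizers and token filters mentioned above can be combined into a custom analyzer. A minimal sketch, assuming the illustrative name my_analyzer (html_strip, standard, lowercase and stop are all built-in components):

```yaml
index :
    analysis :
        analyzer :
            my_analyzer :
                type : custom
                # char filters run before the tokenizer
                char_filter : [html_strip]
                tokenizer : standard
                # token filters run after the tokenizer, in order
                filter : [lowercase, stop]
```

Once registered, my_analyzer can be referenced from mapping definitions by that logical name like any other analyzer.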
Below is a list of the built-in analyzers.