WARNING: Version 2.1 of Elasticsearch has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
A tokenizer of type lowercase that performs the function of the Letter Tokenizer and the Lower Case Token Filter together. It divides text at non-letters and converts the tokens to lower case. While it is functionally equivalent to the Letter Tokenizer combined with the Lower Case Token Filter, there is a performance advantage to doing the two tasks at once, hence this (redundant) implementation.
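The tokenizer's behavior can be sketched in plain Python: split the input at non-letter characters and lower-case each resulting token in a single pass. This is an illustrative approximation only (the ASCII-only regex here is an assumption; the actual Lucene tokenizer uses Unicode letter classes), not the real implementation.

```python
import re

def lowercase_tokenize(text):
    # Divide text at runs of non-letters, then lower-case each token,
    # mirroring what the lowercase tokenizer does in one pass.
    # Note: [^a-zA-Z] is an ASCII simplification; Lucene tests
    # Unicode letter-ness per character.
    return [token.lower() for token in re.split(r"[^a-zA-Z]+", text) if token]

print(lowercase_tokenize("Hello, World-42 Foo"))  # ['hello', 'world', 'foo']
```

Digits and punctuation act as token boundaries and are dropped, which is why "42" does not appear in the output above.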