Compound Word Token Filter

The hyphenation_decompounder and dictionary_decompounder token filters can decompose compound words found in many Germanic languages into word parts.

Both token filters require a dictionary of word parts, which can be provided as:


- An array of words, specified inline in the token filter configuration, or
- The path (either absolute or relative to the config directory) to a UTF-8 encoded file containing one word per line.
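For example, a word-parts file supplied via the second option might look like this (the German entries shown are purely illustrative):

Donau
dampf
schiff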

Hyphenation decompounder

The hyphenation_decompounder uses hyphenation grammars to find potential subwords, which are then checked against the word dictionary. The quality of the output tokens is directly tied to the quality of the grammar file you use; for languages like German, the available grammars are quite good.

XML based hyphenation grammar files can be found in the Objects For Formatting Objects (OFFO) Sourceforge project. You can download the distribution directly and look in the offo-hyphenation/hyph/ directory. Credits for the hyphenation code go to the Apache FOP project.
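Here is a minimal sketch of a hyphenation decompounder configured this way, assuming a German grammar file from OFFO has been copied into the config directory (the filter name and both file names are placeholders):

index :
    analysis :
        filter :
            german_decompounder :
                type : hyphenation_decompounder
                hyphenation_patterns_path : analysis/de.xml
                word_list_path : analysis/german_words.txt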

Dictionary decompounder

The dictionary_decompounder uses a brute-force approach, in conjunction with only the word dictionary, to find subwords in a compound word. It is much slower than the hyphenation decompounder, but can be used as a starting point to check the quality of your dictionary.
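As a minimal sketch (the filter name and word list are illustrative), the following filter would, given a token such as Donaudampfschiff, keep the original token and add the matching subwords Donau, dampf and schiff as extra tokens:

index :
    analysis :
        filter :
            my_decompounder :
                type : dictionary_decompounder
                word_list : [Donau, dampf, schiff]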

Compound token filter parameters

The following parameters can be used to configure a compound word token filter:


type
    Either dictionary_decompounder or hyphenation_decompounder.

word_list
    An array containing a list of words to use for the word dictionary.

word_list_path
    The path (either absolute or relative to the config directory) to the word dictionary.

hyphenation_patterns_path
    The path (either absolute or relative to the config directory) to a FOP XML hyphenation pattern file. Required for the hyphenation decompounder.

min_word_size
    Minimum word size. Defaults to 5.

min_subword_size
    Minimum subword size. Defaults to 2.

max_subword_size
    Maximum subword size. Defaults to 15.

only_longest_match
    Whether to include only the longest matching subword or not. Defaults to false.

Here is an example:

index :
    analysis :
        analyzer :
            myAnalyzer2 :
                type : custom
                tokenizer : standard
                filter : [myTokenFilter1, myTokenFilter2]
        filter :
            myTokenFilter1 :
                type : dictionary_decompounder
                word_list: [one, two, three]
            myTokenFilter2 :
                type : hyphenation_decompounder
                word_list_path: path/to/words.txt
                hyphenation_patterns_path: path/to/fop.xml
                max_subword_size : 22
