Package | Description
---|---
org.apache.lucene.analysis | API and code to convert text into indexable/searchable tokens.
org.apache.lucene.analysis.ar | Analyzer for Arabic.
org.apache.lucene.analysis.cjk | Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
org.apache.lucene.analysis.cn | Analyzer for Chinese, which indexes unigrams (individual Chinese characters).
org.apache.lucene.analysis.cn.smart | Analyzer for Simplified Chinese, which indexes words.
org.apache.lucene.analysis.core | Basic, general-purpose analysis components.
org.apache.lucene.analysis.in | Analysis components for Indian languages.
org.apache.lucene.analysis.ngram | Character n-gram tokenizers and filters.
org.apache.lucene.analysis.path | Analysis components for path-like strings such as filenames.
org.apache.lucene.analysis.pattern | Set of components for pattern-based (regex) analysis.
org.apache.lucene.analysis.ru | Analyzer for Russian.
org.apache.lucene.analysis.standard | Fast, general-purpose grammar-based tokenizers.
org.apache.lucene.analysis.th | Analyzer for Thai.
org.apache.lucene.analysis.util | Utility functions for text analysis.
org.apache.lucene.analysis.wikipedia | Tokenizer that is aware of Wikipedia syntax.
Modifier and Type | Field and Description
---|---
protected Tokenizer | Analyzer.TokenStreamComponents.source: Original source of the tokens.
Modifier and Type | Method and Description
---|---
Tokenizer | Analyzer.TokenStreamComponents.getTokenizer(): Returns the component's Tokenizer.
Constructor and Description
---
Analyzer.TokenStreamComponents(Tokenizer source): Creates a new Analyzer.TokenStreamComponents instance.
Analyzer.TokenStreamComponents(Tokenizer source, TokenStream result): Creates a new Analyzer.TokenStreamComponents instance.
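The constructors above are typically called from a custom Analyzer's createComponents method, which pairs the source Tokenizer with the filter chain built on top of it. A minimal sketch, assuming the Lucene 4.x API in which createComponents receives the field name and a Reader; the Version constant is illustrative and depends on the release in use:

```java
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.util.Version;

public class LowercaseWhitespaceAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    // The Tokenizer is the original source of the tokens...
    Tokenizer source = new WhitespaceTokenizer(Version.LUCENE_4_10_4, reader);
    // ...and the TokenStream is the end of the filter chain built on it.
    TokenStream result = new LowerCaseFilter(Version.LUCENE_4_10_4, source);
    return new TokenStreamComponents(source, result);
  }
}
```

Keeping a reference to both the source and the result is what lets the Analyzer reuse the whole chain across documents instead of rebuilding it per field.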
Modifier and Type | Class and Description
---|---
class | ArabicLetterTokenizer: Deprecated. (3.1) Use StandardTokenizer instead.
Modifier and Type | Class and Description
---|---
class | CJKTokenizer: Deprecated. Use StandardTokenizer, CJKWidthFilter, CJKBigramFilter, and LowerCaseFilter instead.
Modifier and Type | Class and Description
---|---
class | ChineseTokenizer: Deprecated. (3.1) Use StandardTokenizer instead, which has the same functionality. This tokenizer will be removed in Lucene 5.0.
Modifier and Type | Class and Description
---|---
class | HMMChineseTokenizer: Tokenizer for Chinese or mixed Chinese-English text.
class | SentenceTokenizer: Deprecated. Use HMMChineseTokenizer instead.
Modifier and Type | Method and Description
---|---
Tokenizer | HMMChineseTokenizerFactory.create(AttributeSource.AttributeFactory factory, Reader reader)
Modifier and Type | Class and Description
---|---
class | KeywordTokenizer: Emits the entire input as a single token.
class | LetterTokenizer: A tokenizer that divides text at non-letters.
class | LowerCaseTokenizer: Performs the function of LetterTokenizer and LowerCaseFilter together.
class | WhitespaceTokenizer: A tokenizer that divides text at whitespace.
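All of these tokenizers are consumed through the standard TokenStream contract: reset(), then incrementToken() until it returns false, then end() and close(). A minimal sketch printing the tokens of a WhitespaceTokenizer (the Version constant is illustrative):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class TokenizeDemo {
  public static void main(String[] args) throws Exception {
    Tokenizer tok = new WhitespaceTokenizer(Version.LUCENE_4_10_4,
        new StringReader("Hello tokenizer world"));
    // The attribute instance is reused across calls to incrementToken().
    CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
    tok.reset();
    while (tok.incrementToken()) {
      System.out.println(term.toString()); // Hello / tokenizer / world
    }
    tok.end();
    tok.close();
  }
}
```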
Modifier and Type | Class and Description
---|---
class | IndicTokenizer: Deprecated. (3.6) Use StandardTokenizer instead.
Modifier and Type | Class and Description
---|---
class | EdgeNGramTokenizer: Tokenizes the input from an edge into n-grams of given size(s).
class | Lucene43EdgeNGramTokenizer: Deprecated.
class | Lucene43NGramTokenizer: Deprecated.
class | NGramTokenizer: Tokenizes the input into n-grams of the given size(s).
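NGramTokenizer slides a window of minGram to maxGram characters over the input, which is a common basis for substring and fuzzy matching. A sketch, assuming the Lucene 4.x constructor NGramTokenizer(Version, Reader, int minGram, int maxGram):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.ngram.NGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

// Emits the 2- and 3-character grams of "body": bo, bod, od, ody, dy
Tokenizer ngrams = new NGramTokenizer(Version.LUCENE_4_10_4,
    new StringReader("body"), 2, 3);
CharTermAttribute term = ngrams.addAttribute(CharTermAttribute.class);
ngrams.reset();
while (ngrams.incrementToken()) {
  System.out.println(term.toString());
}
ngrams.end();
ngrams.close();
```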
Modifier and Type | Method and Description
---|---
Tokenizer | EdgeNGramTokenizerFactory.create(AttributeSource.AttributeFactory factory, Reader input)
Tokenizer | NGramTokenizerFactory.create(AttributeSource.AttributeFactory factory, Reader input)
Modifier and Type | Class and Description
---|---
class | PathHierarchyTokenizer: Tokenizer for path-like hierarchies.
class | ReversePathHierarchyTokenizer: Tokenizer for domain-like hierarchies.
Modifier and Type | Method and Description
---|---
Tokenizer | PathHierarchyTokenizerFactory.create(AttributeSource.AttributeFactory factory, Reader input)
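PathHierarchyTokenizer emits one token per level of the hierarchy, so a file indexes under every ancestor path and prefix queries on directories match it. A sketch, assuming the single-Reader constructor with the default '/' delimiter:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.path.PathHierarchyTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Emits /usr, /usr/local, /usr/local/bin
Tokenizer paths = new PathHierarchyTokenizer(new StringReader("/usr/local/bin"));
CharTermAttribute term = paths.addAttribute(CharTermAttribute.class);
paths.reset();
while (paths.incrementToken()) {
  System.out.println(term.toString());
}
paths.end();
paths.close();
```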
Modifier and Type | Class and Description
---|---
class | PatternTokenizer: Uses regex pattern matching to construct distinct tokens for the input stream.
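PatternTokenizer can either treat the pattern as a delimiter (group -1, analogous to String.split) or emit the matched group itself (group >= 0). A sketch splitting comma-separated input, assuming the Lucene 4.x constructor PatternTokenizer(Reader, Pattern, int group):

```java
import java.io.StringReader;
import java.util.regex.Pattern;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.pattern.PatternTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// group == -1: the pattern is a delimiter, so this emits "a", "b", "c"
Tokenizer csv = new PatternTokenizer(
    new StringReader("a, b, c"), Pattern.compile("\\s*,\\s*"), -1);
CharTermAttribute term = csv.addAttribute(CharTermAttribute.class);
csv.reset();
while (csv.incrementToken()) {
  System.out.println(term.toString());
}
csv.end();
csv.close();
```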
Modifier and Type | Class and Description
---|---
class | RussianLetterTokenizer: Deprecated. (3.1) Use StandardTokenizer instead, which has the same functionality. This tokenizer will be removed in Lucene 5.0.
Modifier and Type | Class and Description
---|---
class | ClassicTokenizer: A grammar-based tokenizer constructed with JFlex.
class | StandardTokenizer: A grammar-based tokenizer constructed with JFlex.
class | UAX29URLEmailTokenizer: Implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. URLs and email addresses are also tokenized according to the relevant RFCs.
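StandardTokenizer is the usual default for general text; UAX29URLEmailTokenizer additionally keeps URLs and email addresses together as single tokens where StandardTokenizer would split them at punctuation. A sketch (the Version constant is illustrative):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

// "dev@example.com" and the URL each survive as one token.
Tokenizer tok = new UAX29URLEmailTokenizer(Version.LUCENE_4_10_4,
    new StringReader("Mail dev@example.com about http://lucene.apache.org"));
CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
tok.reset();
while (tok.incrementToken()) {
  System.out.println(term.toString());
}
tok.end();
tok.close();
```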
Modifier and Type | Class and Description
---|---
class | ThaiTokenizer: Tokenizer that uses a BreakIterator to tokenize Thai text.
Modifier and Type | Method and Description
---|---
Tokenizer | ThaiTokenizerFactory.create(AttributeSource.AttributeFactory factory, Reader reader)
Modifier and Type | Class and Description
---|---
class | CharTokenizer: An abstract base class for simple, character-oriented tokenizers.
class | SegmentingTokenizerBase: Breaks text into sentences with a BreakIterator and allows subclasses to decompose these sentences into words.
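Subclassing CharTokenizer only requires deciding which characters belong inside a token. A sketch of a hypothetical comma tokenizer, assuming the Lucene 4.x constructor CharTokenizer(Version, Reader); the class name is illustrative:

```java
import java.io.Reader;

import org.apache.lucene.analysis.util.CharTokenizer;
import org.apache.lucene.util.Version;

// Hypothetical example: every run of non-comma characters becomes a token.
public class CommaTokenizer extends CharTokenizer {
  public CommaTokenizer(Reader input) {
    super(Version.LUCENE_4_10_4, input);
  }

  @Override
  protected boolean isTokenChar(int c) {
    return c != ','; // token characters are everything except the delimiter
  }
}
```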
Modifier and Type | Method and Description
---|---
abstract Tokenizer | TokenizerFactory.create(AttributeSource.AttributeFactory factory, Reader input): Creates a TokenStream of the specified input using the given AttributeFactory.
Tokenizer | TokenizerFactory.create(Reader input): Creates a TokenStream of the specified input using the default attribute factory.
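TokenizerFactory instances are usually obtained by SPI name rather than constructed directly, which is how configuration-driven setups such as Solr schemas resolve them. A sketch, assuming the Lucene 4.x lookup TokenizerFactory.forName(String, Map); most factories require a luceneMatchVersion argument, and the value shown is illustrative:

```java
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;

// Look up the factory by its SPI name instead of referencing the class directly.
Map<String, String> args = new HashMap<String, String>();
args.put("luceneMatchVersion", "4.10.4"); // illustrative; match your release
TokenizerFactory factory = TokenizerFactory.forName("standard", args);
Tokenizer tok = factory.create(new StringReader("some text to tokenize"));
```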
Modifier and Type | Class and Description
---|---
class | WikipediaTokenizer: Extension of StandardTokenizer that is aware of Wikipedia syntax.

Copyright © 2000-2015 The Apache Software Foundation. All Rights Reserved.