Uses of Class
org.apache.lucene.util.AttributeSource

Packages that use AttributeSource
Text analysis.
Analyzer for Arabic.
Analyzer for Bulgarian.
Analyzer for the Bengali language.
Provides various convenience classes for creating boosts on Tokens.
Analyzer for Brazilian Portuguese.
Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
Analyzer for Sorani Kurdish.
Fast, general-purpose grammar-based tokenizers.
Analyzer for Simplified Chinese, which indexes words.
Construct n-grams for frequently occurring terms and phrases.
A filter that decomposes compound words found in many Germanic languages into their constituent parts.
Basic, general-purpose analysis components.
Analyzer for Czech.
Analyzer for German.
Analyzer for Greek.
Fast, general-purpose tokenizers for URLs and email addresses.
Analyzer for English.
Analyzer for Spanish.
Analyzer for Persian.
Analyzer for Finnish.
Analyzer for French.
Analyzer for Irish.
Analyzer for Galician.
Analyzer for Hindi.
Analyzer for Hungarian.
A Java implementation of Hunspell stemming and spell-checking algorithms (Hunspell), and a stemming TokenFilter (HunspellStemFilter) based on it.
Analysis components based on ICU.
Tokenizer that breaks text into words with the Unicode Text Segmentation algorithm.
Analyzer for Indonesian.
Analyzer for Indian languages.
Analyzer for Italian.
Analyzer for Japanese.
Analyzer for Korean.
Analyzer for Latvian.
MinHash filtering (for LSH).
Miscellaneous TokenStreams.
Character n-gram tokenizers and filters.
Analyzer for Norwegian.
Analysis components for path-like strings such as filenames.
Set of components for pattern-based (regex) analysis.
Provides various convenience classes for creating payloads on Tokens.
Analysis components for phonetic search.
Analyzer for Portuguese.
Filter to reverse token text.
Analyzer for Russian.
Word n-gram filters.
TokenFilter and Analyzer implementations that use a modified version of Snowball stemmers.
Analyzer for Serbian.
Fast, general-purpose grammar-based tokenizer: StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
Stempel: Algorithmic Stemmer
Analyzer for Swedish.
Analysis components for synonyms.
Analysis components for synonyms using a Word2Vec model.
Analyzer for the Telugu language.
Analyzer for Thai.
Analyzer for Turkish.
Utility functions for text analysis.
Tokenizer that is aware of Wikipedia syntax.
Codecs API: customization of the encoding and structure of the index.
Pluggable term index / block terms dictionary implementations.
The logical representation of a Document for indexing and searching.
Taxonomy index implementation built on top of a Directory.
Code to maintain and access indices.
Monitoring framework.
Code to search indices.
Highlighting search terms.
Support for index-time and query-time joins.
Analyzer-based autosuggest.
Support for document suggestion.
The UnifiedHighlighter: a flexible highlighter that can get offsets from postings, term vectors, or analysis.
Some utility classes.
Utility classes for working with token streams as graphs.
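Most of the packages above use AttributeSource through its best-known subclass, TokenStream: a consumer registers attribute views once and then reads them as the stream advances. As a minimal sketch (assuming lucene-core and lucene-analysis-common are on the classpath; the field name "body" and sample text are arbitrary):

```java
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class AttributeSourceDemo {
    public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new WhitespaceAnalyzer();
             TokenStream ts = analyzer.tokenStream("body", "fast general purpose")) {
            // TokenStream extends AttributeSource: addAttribute registers an
            // attribute (or returns the already-registered instance), so all
            // consumers of this stream share the same attribute objects.
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            OffsetAttribute offsets = ts.addAttribute(OffsetAttribute.class);

            ts.reset();
            while (ts.incrementToken()) {
                // The attribute instances are updated in place on each call
                // to incrementToken(); read them, do not cache their values.
                System.out.println(term.toString()
                        + " [" + offsets.startOffset() + "," + offsets.endOffset() + "]");
            }
            ts.end();
        }
    }
}
```

The same pattern applies to every tokenizer and filter package listed above: each component decorates the stream but shares one AttributeSource, so attributes added by an upstream tokenizer remain visible downstream.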