
Namespace Lucene.Net.Search

Classes

AssertingBulkOutOfOrderScorer

A crazy BulkScorer that wraps another BulkScorer but shuffles the order of the collected documents.

AssertingBulkScorer

Wraps a BulkScorer with additional checks.

AssertingCollector

Wraps another Collector and checks that acceptsDocsOutOfOrder is respected.

AssertingIndexSearcher

Helper class that adds some extra checks to ensure correct usage of IndexSearcher and Weight.

AssertingQuery

Assertion-enabled query.

AssertingScorer

Wraps a Scorer with additional checks

AutomatonQuery

A Query that will match terms against a finite-state machine.

This query will match documents that contain terms accepted by a given finite-state machine. The automaton can be constructed with the Lucene.Net.Util.Automaton API. Alternatively, it can be created from a regular expression with RegexpQuery or from the standard Lucene wildcard syntax with WildcardQuery.

When the query is executed, it will create an equivalent DFA of the finite-state machine, and will enumerate the term dictionary in an intelligent way to reduce the number of comparisons. For example: the regular expression of [dl]og? will make approximately four comparisons: do, dog, lo, and log.

@lucene.experimental
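
For illustration, a minimal sketch that builds an AutomatonQuery directly from the Lucene.Net.Util.Automaton API (the field name "body" is an assumption):

var automaton = new RegExp("[dl]og?").ToAutomaton();   // compile the regular expression
Query query = new AutomatonQuery(new Term("body", "[dl]og?"), automaton);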

BitsFilteredDocIdSet

This implementation supplies a filtered DocIdSet that excludes all docids which are not in an IBits instance. This is especially useful in Filter to apply the acceptDocs passed to GetDocIdSet(AtomicReaderContext, IBits) before returning the final DocIdSet.

BooleanClause

A clause in a BooleanQuery.

BooleanQuery

A Query that matches documents matching boolean combinations of other queries, e.g. TermQuerys, PhraseQuerys or other BooleanQuerys.

Collection initializer note: To create and populate a BooleanQuery in a single statement, you can use the following example as a guide:

var booleanQuery = new BooleanQuery() {
    { new WildcardQuery(new Term("field2", "foobar")), Occur.SHOULD },
    { new MultiPhraseQuery() {
        new Term("field", "microsoft"), 
        new Term("field", "office")
    }, Occur.SHOULD }
};

// or

var booleanQuery = new BooleanQuery() {
    new BooleanClause(new WildcardQuery(new Term("field2", "foobar")), Occur.SHOULD),
    new BooleanClause(new MultiPhraseQuery() {
        new Term("field", "microsoft"), 
        new Term("field", "office")
    }, Occur.SHOULD)
};

BooleanQuery.BooleanWeight

Expert: the Weight for BooleanQuery, used to normalize, score and explain these queries.

@lucene.experimental

BooleanQuery.TooManyClausesException

Thrown when an attempt is made to add more than MaxClauseCount clauses. This typically happens if a PrefixQuery, FuzzyQuery, WildcardQuery, or TermRangeQuery is expanded to many terms during search.
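
If a query legitimately needs more clauses, the limit can be raised through the static MaxClauseCount property; a hedged sketch (the new limit is illustrative):

// Raise the global clause limit (the default is 1024) before running a
// query that expands to many terms; this trades RAM for fewer exceptions.
BooleanQuery.MaxClauseCount = 4096;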

BoostAttribute

Implementation class for IBoostAttribute.

@lucene.internal

BulkScorer

This class is used to score a range of documents at once, and is returned by GetBulkScorer(AtomicReaderContext, Boolean, IBits). Only queries that have a more optimized means of scoring across a range of documents need to override this. Otherwise, a default implementation is wrapped around the Scorer returned by GetScorer(AtomicReaderContext, IBits).

CachingCollector

Caches all docs, and optionally also scores, coming from a search, and is then able to replay them to another collector. You specify the max RAM this class may use. Once the collection is done, call IsCached. If this returns true, you can use Replay(ICollector) against a new collector. If it returns false, this means too much RAM was required and you must instead re-run the original search.

NOTE: this class consumes 4 bytes (or 8 bytes, if scoring is cached) per collected document. If the result set is large this can easily be a very substantial amount of RAM!

NOTE: this class caches at least 128 documents before checking RAM limits.

See the Lucene modules/grouping module for more details including a full code example.

@lucene.experimental
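
A hedged sketch of the replay pattern described above ("searcher", "query" and "primaryCollector" are assumed to exist):

// Cache hits (and scores) up to 64MB while delegating to the primary collector.
CachingCollector cache = CachingCollector.Create(primaryCollector, true, 64.0);
searcher.Search(query, cache);
if (cache.IsCached)
{
    // Cheap second pass: replay the cached hits into another collector.
    cache.Replay(new TotalHitCountCollector());
}
// Otherwise too much RAM was required; re-run the original search instead.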

CachingWrapperFilter

Wraps another Filter's result and caches it. The purpose is to allow filters to simply filter, and then wrap with this class to add caching.

CheckHits

Utility class for asserting expected hits in tests.

CheckHits.ExplanationAsserter

CheckHits.ExplanationAssertingSearcher

CheckHits.SetCollector

Just collects document ids into a set.

CollectionStatistics

Contains statistics for a collection (field)

@lucene.experimental

CollectionTerminatedException

Throw this exception in Collect(Int32) to prematurely terminate collection of the current leaf.

Note: IndexSearcher swallows this exception and never re-throws it. As a consequence, you should not catch it when calling any overload of Search(Weight, FieldDoc, Int32, Sort, Boolean, Boolean, Boolean) as it is unnecessary and might hide misuse of this exception.
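
A hedged sketch of a collector that uses this exception to stop collecting each leaf after a fixed number of hits (the class name and threshold are illustrative):

class FirstHitsCollector : ICollector
{
    private int count;

    public void SetScorer(Scorer scorer) { } // scores are not needed

    public void Collect(int doc)
    {
        if (++count > 100)
        {
            // Terminates collection of the current leaf; IndexSearcher swallows this.
            throw new CollectionTerminatedException();
        }
    }

    public void SetNextReader(AtomicReaderContext context) { }

    public bool AcceptsDocsOutOfOrder { get { return true; } }
}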

Collector

LUCENENET specific class used to hold the NewAnonymous(Action<Scorer>, Action<Int32>, Action<AtomicReaderContext>, Func<Boolean>) static method.

ComplexExplanation

Expert: Describes the score computation for document and query, and can distinguish a match independent of a positive value.

ConstantScoreAutoRewrite

A rewrite method that tries to pick the best constant-score rewrite method based on term and document counts from the query. If both the number of terms and documents is small enough, then CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE is used. Otherwise, CONSTANT_SCORE_FILTER_REWRITE is used.

ConstantScoreQuery

A query that wraps another query or a filter and simply returns a constant score equal to the query boost for every document that matches the filter or query. For queries it therefore simply strips off all scores and returns a constant one.

ConstantScoreQuery.ConstantBulkScorer

We return this as our BulkScorer so that if the CSQ wraps a query with its own optimized top-level scorer (e.g. Lucene.Net.Search.BooleanScorer) we can use that top-level scorer.

ConstantScoreQuery.ConstantScorer

ConstantScoreQuery.ConstantWeight

ControlledRealTimeReopenThread<T>

Utility class that runs a thread to manage periodic reopens of a ReferenceManager<G>, with methods to wait for specific index changes to become visible. To use this class you must first wrap your IndexWriter with a TrackingIndexWriter and always use it to make changes to the index, saving the returned generation. Then, when a given search request needs to see a specific index change, call WaitForGeneration(Int64) to wait for that change to be visible. Note that this will only scale well if most searches do not need to wait for a specific index generation.

@lucene.experimental
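
A hedged sketch of the wiring described above ("indexWriter" and "doc" are assumed to exist; the staleness targets are illustrative):

var trackingWriter = new TrackingIndexWriter(indexWriter);
var searcherManager = new SearcherManager(indexWriter, true, new SearcherFactory());
var reopenThread = new ControlledRealTimeReopenThread<IndexSearcher>(
    trackingWriter, searcherManager, 5.0, 0.1); // target max/min stale seconds
reopenThread.Start();

// Make all changes through the tracking writer, saving the generation...
long generation = trackingWriter.AddDocument(doc);

// ...then, when a search must see that change, wait for it to become visible.
reopenThread.WaitForGeneration(generation);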

DisjunctionMaxQuery

A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries. This is useful when searching for a word in multiple fields with different boost factors (so that the fields cannot be combined equivalently into a single search field). We want the primary score to be the one associated with the highest boost, not the sum of the field scores (as BooleanQuery would give).

If the query is "albino elephant" this ensures that "albino" matching one field and "elephant" matching another gets a higher score than "albino" matching both fields.

To get this result, use both BooleanQuery and DisjunctionMaxQuery: for each term a DisjunctionMaxQuery searches for it in each field, while the set of these DisjunctionMaxQuery's is combined into a BooleanQuery. The tie breaker capability allows results that include the same term in multiple fields to be judged better than results that include this term in only the best of those multiple fields, without confusing this with the better case of two different terms in the multiple fields.

Collection initializer note: To create and populate a DisjunctionMaxQuery in a single statement, you can use the following example as a guide:

var disjunctionMaxQuery = new DisjunctionMaxQuery(0.1f) {
    new TermQuery(new Term("field1", "albino")), 
    new TermQuery(new Term("field2", "elephant"))
};

DisjunctionMaxQuery.DisjunctionMaxWeight

Expert: the Weight for DisjunctionMaxQuery, used to normalize, score and explain these queries.

NOTE: this API and implementation are subject to change suddenly in the next release.

DocIdSet

A DocIdSet contains a set of doc ids. Implementing classes must only implement GetIterator() to provide access to the set.

DocIdSetIterator

This abstract class defines methods to iterate over a set of non-decreasing doc ids. Note that this class assumes it iterates on doc Ids, and therefore NO_MORE_DOCS is set to Int32.MaxValue in order to be used as a sentinel object. Implementations of this class are expected to consider Int32.MaxValue as an invalid value.

DocTermOrdsRangeFilter

A range filter built on top of a cached multi-valued term field (in IFieldCache).

Like FieldCacheRangeFilter, this is just a specialized range query versus using a TermRangeQuery with DocTermOrdsRewriteMethod: it will only do two ordinal to term lookups.

DocTermOrdsRewriteMethod

Rewrites MultiTermQuerys into a filter, using DocTermOrds for term enumeration.

This can be used to perform these queries against an unindexed docvalues field.

@lucene.experimental

DummyComparer

Explanation

Expert: Describes the score computation for document and query.

ExplanationAsserter

Asserts that the score explanation for every document matching a query corresponds with the true score.

NOTE: this HitCollector should only be used with the Query and IndexSearcher specified when it is constructed.

ExplanationAssertingSearcher

An IndexSearcher that implicitly checks the explanation of every match whenever it executes a search.

FCInvisibleMultiReader

This is a MultiReader that can be used for randomly wrapping other readers without creating FieldCache insanity. The trick is to use an opaque/fake cache key.

FieldCache

FieldCache.Bytes

Field values as 8-bit signed bytes

FieldCache.CacheEntry

EXPERT: A unique Identifier/Description for each item in the IFieldCache. Can be useful for logging/debugging.

@lucene.experimental

FieldCache.CreationPlaceholder

Placeholder indicating creation of this cache is currently in-progress.

FieldCache.Doubles

Field values as 64-bit doubles

FieldCache.Int16s

Field values as 16-bit signed shorts

NOTE: This was Shorts in Lucene

FieldCache.Int32s

Field values as 32-bit signed integers

NOTE: This was Ints in Lucene

FieldCache.Int64s

Field values as 64-bit signed long integers

NOTE: This was Longs in Lucene

FieldCache.Singles

Field values as 32-bit floats

NOTE: This was Floats in Lucene

FieldCacheDocIdSet

Base class for DocIdSet to be used with IFieldCache. The implementation of its iterator is very stupid and slow if the implementation of the MatchDoc(Int32) method is not optimized, as iterators simply increment the document id until MatchDoc(Int32) returns true. Because of this MatchDoc(Int32) must be as fast as possible and in no case do any I/O.

@lucene.internal

FieldCacheRangeFilter

A range filter built on top of a cached single term field (in IFieldCache).

FieldCacheRangeFilter builds a single cache for the field the first time it is used. Each subsequent FieldCacheRangeFilter on the same field then reuses this cache, even if the range itself changes.

This means that FieldCacheRangeFilter is much faster (sometimes more than 100x as fast) than building a TermRangeFilter, if using NewStringRange(String, String, String, Boolean, Boolean). However, if the range never changes, it is slower (around 2x as slow) than building a CachingWrapperFilter on top of a single TermRangeFilter.

For numeric data types, this filter may be significantly faster than NumericRangeFilter. Furthermore, it does not need the numeric values encoded by Int32Field, SingleField, Int64Field or DoubleField. But it has the problem that it only works with exactly one value per document (see below).

As with all IFieldCache based functionality, FieldCacheRangeFilter is only valid for fields which contain exactly one term for each document (except for NewStringRange(String, String, String, Boolean, Boolean) where 0 terms are also allowed). Due to a restriction of IFieldCache, for numeric ranges, all terms that do not have a numeric value are assumed to be 0.

Thus it works on dates, prices and other single value fields but will not work on regular text fields. It is preferable to use a NOT_ANALYZED field to ensure that there is only a single term.

This class does not have a constructor; use one of the available static factory methods, which create a correct instance for the different data types supported by IFieldCache.
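
For example, a numeric range via one of those factory methods (the field name and bounds are illustrative):

// Accepts documents whose cached Int32 "price" value is in [10, 100].
Filter priceFilter = FieldCacheRangeFilter.NewInt32Range("price", 10, 100, true, true);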

FieldCacheRangeFilter<T>

FieldCacheRewriteMethod

Rewrites MultiTermQuerys into a filter, using the IFieldCache for term enumeration.

This can be used to perform these queries against an unindexed docvalues field.

@lucene.experimental

FieldCacheTermsFilter

A Filter that only accepts documents whose single term value in the specified field is contained in the provided set of allowed terms.

This is the same functionality as TermsFilter (from queries/), except this filter requires that the field contains only a single term for all documents. Because of drastically different implementations, they also have different performance characteristics, as described below.

The first invocation of this filter on a given field will be slower, since a SortedDocValues must be created. Subsequent invocations using the same field will re-use this cache. However, as with all functionality based on IFieldCache, persistent RAM is consumed to hold the cache, and is not freed until the IndexReader is disposed. In contrast, TermsFilter has no persistent RAM consumption.

With each search, this filter translates the specified set of Terms into a private FixedBitSet keyed by term number per unique IndexReader (normally one reader per segment). Then, during matching, the term number for each docID is retrieved from the cache and then checked for inclusion using the FixedBitSet. Since all testing is done using RAM resident data structures, performance should be very fast, most likely fast enough to not require further caching of the DocIdSet for each possible combination of terms. However, because docIDs are simply scanned linearly, an index with a great many small documents may find this linear scan too costly.

In contrast, TermsFilter builds up a FixedBitSet, keyed by docID, every time it's created, by enumerating through all matching docs using DocsEnum to seek and scan through each term's docID list. While there is no linear scan of all docIDs, besides the allocation of the underlying array in the FixedBitSet, this approach requires a number of "disk seeks" in proportion to the number of terms, which can be exceptionally costly when there are cache misses in the OS's IO cache.

Generally, this filter will be slower on the first invocation for a given field, but subsequent invocations, even if you change the allowed set of Terms, should be faster than TermsFilter, especially as the number of Terms being matched increases. If you are matching only a very small number of terms, and those terms in turn match a very small number of documents, TermsFilter may perform faster.

Which filter is best is very application dependent.
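
A minimal sketch (the field name and terms are illustrative):

// Accepts only documents whose single-term "category" field matches one of these.
Filter categories = new FieldCacheTermsFilter("category", "books", "music", "games");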

FieldComparer

FieldComparer.ByteComparer

Parses field's values as bytes (using GetBytes(AtomicReader, String, FieldCache.IByteParser, Boolean)) and sorts by ascending value

FieldComparer.DocComparer

Sorts by ascending docID

FieldComparer.DoubleComparer

Parses field's values as doubles (using GetDoubles(AtomicReader, String, FieldCache.IDoubleParser, Boolean)) and sorts by ascending value

FieldComparer.Int16Comparer

Parses field's values as Int16s (using GetInt16s(AtomicReader, String, FieldCache.IInt16Parser, Boolean)) and sorts by ascending value

NOTE: This was ShortComparator in Lucene

FieldComparer.Int32Comparer

Parses field's values as Int32s (using GetInt32s(AtomicReader, String, FieldCache.IInt32Parser, Boolean)) and sorts by ascending value

NOTE: This was IntComparator in Lucene

FieldComparer.Int64Comparer

Parses field's values as Int64s (using GetInt64s(AtomicReader, String, FieldCache.IInt64Parser, Boolean)) and sorts by ascending value

NOTE: This was LongComparator in Lucene

FieldComparer.NumericComparer<T>

Base FieldComparer class for numeric types

FieldComparer.RelevanceComparer

Sorts by descending relevance. NOTE: if you are sorting only by descending relevance and then secondarily by ascending docID, performance is faster using TopScoreDocCollector directly (which all overloads of Search(Query, Int32) use when no Sort is specified).

FieldComparer.SingleComparer

Parses field's values as Singles (using GetSingles(AtomicReader, String, FieldCache.ISingleParser, Boolean)) and sorts by ascending value

NOTE: This was FloatComparator in Lucene

FieldComparer.TermOrdValComparer

Sorts by field's natural Term sort order, using ordinals. This is functionally equivalent to FieldComparer.TermValComparer, but it first resolves the strings to their relative ordinal positions (using the index returned by GetTermsIndex(AtomicReader, String, Single)), and does most comparisons using the ordinals. For medium to large results, this comparer will be much faster than FieldComparer.TermValComparer. For very small result sets it may be slower.

FieldComparer.TermValComparer

Sorts by field's natural Term sort order. All comparisons are done using CompareTo(BytesRef), which is slow for medium to large result sets but possibly very fast for very small results sets.

FieldComparer<T>

Expert: a FieldComparer compares hits so as to determine their sort order when collecting the top results with TopFieldCollector. The concrete public FieldComparer classes here correspond to the SortField types.

This API is designed to achieve high performance sorting, by exposing a tight interaction with FieldValueHitQueue as it visits hits. Whenever a hit is competitive, it's enrolled into a virtual slot, which is an Int32 ranging from 0 to numHits-1. The FieldComparer is made aware of segment transitions during searching in case any internal state it's tracking needs to be recomputed during these transitions.

A comparer must define these functions:

  • Compare(Int32, Int32): Compare a hit at 'slot a' with hit 'slot b'.
  • SetBottom(Int32): This method is called by FieldValueHitQueue to notify the FieldComparer of the current weakest ("bottom") slot. Note that this slot may not hold the weakest value according to your comparer, in cases where your comparer is not the primary one (ie, is only used to break ties from the comparers before it).
  • CompareBottom(Int32): Compare a new hit (docID) against the "weakest" (bottom) entry in the queue.
  • SetTopValue(Object): This method is called by TopFieldCollector to notify the FieldComparer of the top most value, which is used by future calls to CompareTop(Int32).
  • CompareTop(Int32): Compare a new hit (docID) against the top value previously set by a call to SetTopValue(Object).
  • Copy(Int32, Int32): Installs a new hit into the priority queue. The FieldValueHitQueue calls this method when a new hit is competitive.
  • SetNextReader(AtomicReaderContext): Invoked when the search is switching to the next segment. You may need to update internal state of the comparer, for example retrieving new values from the IFieldCache.
  • Item[Int32]: Return the sort value stored in the specified slot. This is only called at the end of the search, in order to populate Fields when returning the top results.

@lucene.experimental

FieldComparerSource

Provides a FieldComparer for custom field sorting.

@lucene.experimental

FieldDoc

Expert: A ScoreDoc which also contains information about how to sort the referenced document. In addition to the document number and score, this object contains an array of values for the document from the field(s) used to sort. For example, if the sort criteria was to sort by fields "a", "b" then "c", the fields object array will have three elements, corresponding respectively to the term values for the document in fields "a", "b" and "c". The class of each element in the array depends on the type of values in the terms of each field (for example, Int32, Single, or String).

Created: Feb 11, 2004 1:23:38 PM

@since lucene 1.4

FieldValueFilter

A Filter that accepts all documents that have one or more values in a given field. This Filter requests the IBits from the IFieldCache and builds the bits if they are not present.

FieldValueHitQueue

FieldValueHitQueue.Entry

FieldValueHitQueue<T>

Expert: A hit queue for sorting hits by terms in more than one field. Uses FieldCache.DEFAULT for maintaining internal term lookup tables.

@lucene.experimental @since 2.9

Filter

Abstract base class for restricting which documents may be returned during searching.

FilteredDocIdSet

Abstract decorator class for a DocIdSet implementation that provides on-demand filtering/validation mechanism on a given DocIdSet.

Technically, this same functionality could be achieved with ChainedFilter (under queries/), however the benefit of this class is it never materializes the full bitset for the filter. Instead, the Match(Int32) method is invoked on-demand, per docID visited during searching. If you know few docIDs will be visited, and the logic behind Match(Int32) is relatively costly, this may be a better way to filter than ChainedFilter.

FilteredDocIdSetIterator

Abstract decorator class of a DocIdSetIterator implementation that provides on-demand filter/validation mechanism on an underlying DocIdSetIterator. See DocIdSetIterator.

FilteredQuery

A query that applies a filter to the results of another query.

Note: the bits are retrieved from the filter each time this query is used in a search - use a CachingWrapperFilter to avoid regenerating the bits every time.

@since 1.4

FilteredQuery.FilterStrategy

Abstract class that defines how the filter (DocIdSet) is applied during document collection.

FilteredQuery.RandomAccessFilterStrategy

A FilteredQuery.FilterStrategy that conditionally uses a random access filter if the given DocIdSet supports random access (returns a non-null value from Bits) and UseRandomAccess(IBits, Int32) returns true. Otherwise this strategy falls back to a "zig-zag join" (LEAP_FROG_FILTER_FIRST_STRATEGY) strategy.

FuzzyQuery

Implements the fuzzy search query. The similarity measurement is based on the Damerau-Levenshtein (optimal string alignment) algorithm, though you can explicitly choose classic Levenshtein by passing false to the transpositions parameter.

This query uses MultiTermQuery.TopTermsScoringBooleanQueryRewrite as default. So terms will be collected and scored according to their edit distance. Only the top terms are used for building the BooleanQuery. It is not recommended to change the rewrite mode for fuzzy queries.

At most, this query will match terms up to MAXIMUM_SUPPORTED_DISTANCE edits. Higher distances (especially with transpositions enabled), are generally not useful and will match a significant amount of the term dictionary. If you really want this, consider using an n-gram indexing technique (such as the SpellChecker in the suggest module) instead.

NOTE: terms of length 1 or 2 will sometimes not match because of how the scaled distance between two terms is computed. For a term to match, the edit distance between the terms must be less than the minimum length term (either the input term, or the candidate term). For example, FuzzyQuery on term "abcd" with maxEdits=2 will not match an indexed term "ab", and FuzzyQuery on term "a" with maxEdits=2 will not match an indexed term "abc".
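
A minimal sketch (the field name is illustrative):

// Matches terms within 2 edits of "lucene", e.g. "lucine" or "luecne".
Query fuzzy = new FuzzyQuery(new Term("body", "lucene"), 2);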

FuzzyTermsEnum

Subclass of TermsEnum for enumerating all terms that are similar to the specified filter term.

Term enumerations are always ordered by Comparer. Each term in the enumeration is greater than all that precede it.

FuzzyTermsEnum.LevenshteinAutomataAttribute

Stores compiled automata as a list (indexed by edit distance)

@lucene.internal

IndexSearcher

Implements search over a single IndexReader.

Applications usually need only call the inherited Search(Query, Int32) or Search(Query, Filter, Int32) methods. For performance reasons, if your index is unchanging, you should share a single IndexSearcher instance across multiple searches instead of creating a new one per-search. If your index has changed and you wish to see the changes reflected in searching, you should use OpenIfChanged(DirectoryReader) to obtain a new reader and then create a new IndexSearcher from that. Also, for low-latency turnaround it's best to use a near-real-time reader (Open(IndexWriter, Boolean)). Once you have a new IndexReader, it's relatively cheap to create a new IndexSearcher from it.

NOTE: IndexSearcher instances are completely thread safe, meaning multiple threads can call any of its methods, concurrently. If your application requires external synchronization, you should not synchronize on the IndexSearcher instance; use your own (non-Lucene) objects instead.
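
A hedged sketch of the reopen pattern described above ("reader" and "searcher" are assumed to be the currently shared instances):

// Only pay for a new reader/searcher when the index actually changed.
DirectoryReader newReader = DirectoryReader.OpenIfChanged(reader);
if (newReader != null)
{
    reader.Dispose();
    reader = newReader;
    searcher = new IndexSearcher(reader); // cheap relative to opening the reader
}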

IndexSearcher.LeafSlice

A class holding a subset of the IndexSearcher's leaf contexts to be executed within a single thread.

@lucene.experimental

LiveFieldValues<S, T>

Tracks live field values across NRT reader reopens. This holds a map for all updated ids since the last reader reopen. Once the NRT reader is reopened, it prunes the map. This means you must reopen your NRT reader periodically otherwise the RAM consumption of this class will grow unbounded!

NOTE: you must ensure the same id is never updated at the same time by two threads, because in this case you cannot in general know which thread "won".

MatchAllDocsQuery

A query that matches all documents.

MaxNonCompetitiveBoostAttribute

Implementation class for IMaxNonCompetitiveBoostAttribute.

@lucene.internal

MultiCollector

A ICollector which allows running a search with several ICollectors. It offers a static Wrap(ICollector[]) method which accepts a list of collectors and wraps them with MultiCollector, while filtering out the null ones.
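
A minimal sketch ("searcher" and "query" are assumed to exist):

// Collect the top 10 hits and the total hit count in a single pass.
TopScoreDocCollector top = TopScoreDocCollector.Create(10, true);
var counter = new TotalHitCountCollector();
searcher.Search(query, MultiCollector.Wrap(top, counter));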

MultiPhraseQuery

MultiPhraseQuery is a generalized version of PhraseQuery, with an added method Add(Term[]).

To use this class, to search for the phrase "Microsoft app*" first use Add(Term) on the term "Microsoft", then find all terms that have "app" as prefix using MultiFields.GetFields(IndexReader).GetTerms(string), and use Add(Term[]) to add them to the query.

Collection initializer note: To create and populate a MultiPhraseQuery in a single statement, you can use the following example as a guide:

var multiPhraseQuery = new MultiPhraseQuery() {
    new Term("field", "microsoft"), 
    new Term("field", "office")
};

Note that as long as you specify all of the parameters, you can use either Add(Term), Add(Term[]), or Add(Term[], Int32) as the method to use to initialize. If there are multiple parameters, each parameter set must be surrounded by curly braces.

MultiTermQuery

An abstract Query that matches documents containing a subset of terms provided by a FilteredTermsEnum enumeration.

This query cannot be used directly; you must subclass it and define GetTermsEnum(Terms, AttributeSource) to provide a FilteredTermsEnum that iterates through the terms to be matched.

NOTE: if MultiTermRewriteMethod is either CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE or SCORING_BOOLEAN_QUERY_REWRITE, you may encounter a BooleanQuery.TooManyClausesException exception during searching, which happens when the number of terms to be searched exceeds MaxClauseCount. Setting MultiTermRewriteMethod to CONSTANT_SCORE_FILTER_REWRITE prevents this.

The recommended rewrite method is CONSTANT_SCORE_AUTO_REWRITE_DEFAULT: it doesn't spend CPU computing unhelpful scores, and it tries to pick the most performant rewrite method given the query. If you need scoring (like FuzzyQuery), use MultiTermQuery.TopTermsScoringBooleanQueryRewrite, which uses a priority queue to only collect competitive terms and not hit this limitation.

Note that QueryParsers.Classic.QueryParser produces MultiTermQuerys using CONSTANT_SCORE_AUTO_REWRITE_DEFAULT by default.

MultiTermQuery.RewriteMethod

Abstract class that defines how the query is rewritten.

MultiTermQuery.TopTermsBoostOnlyBooleanQueryRewrite

A rewrite method that first translates each term into a SHOULD clause in a BooleanQuery, but the scores are only computed as the boost.

This rewrite method only uses the top scoring terms so it will not overflow the boolean max clause count.

MultiTermQuery.TopTermsScoringBooleanQueryRewrite

A rewrite method that first translates each term into a SHOULD clause in a BooleanQuery, and keeps the scores as computed by the query.

This rewrite method only uses the top scoring terms so it will not overflow the boolean max clause count. It is the default rewrite method for FuzzyQuery.

MultiTermQueryWrapperFilter<Q>

A wrapper for MultiTermQuery, that exposes its functionality as a Filter.

MultiTermQueryWrapperFilter<Q> is not designed to be used by itself. Normally you subclass it to provide a Filter counterpart for a MultiTermQuery subclass.

For example, TermRangeFilter and PrefixFilter extend MultiTermQueryWrapperFilter<Q>. This class also provides the functionality behind CONSTANT_SCORE_FILTER_REWRITE; this is why it is not abstract.

NGramPhraseQuery

This is a PhraseQuery which is optimized for n-gram phrase queries. For example, when you query "ABCD" on a 2-gram field, you may want to use NGramPhraseQuery rather than PhraseQuery, because NGramPhraseQuery will Rewrite(IndexReader) the query to "AB/0 CD/2", while PhraseQuery will query "AB/0 BC/1 CD/2" (where the notation is term/position).

Collection initializer note: To create and populate a PhraseQuery in a single statement, you can use the following example as a guide:

var phraseQuery = new NGramPhraseQuery(2) {
    new Term("field", "ABCD"), 
    new Term("field", "EFGH")
};

Note that as long as you specify all of the parameters, you can use either Add(Term) or Add(Term, Int32) as the method to use to initialize. If there are multiple parameters, each parameter set must be surrounded by curly braces.

NumericRangeFilter

LUCENENET specific static class to provide access to static methods without referring to the NumericRangeFilter<T>'s generic closing type.

NumericRangeFilter<T>

A Filter that only accepts numeric values within a specified range. To use this, you must first index the numeric values using Int32Field, SingleField, Int64Field or DoubleField (expert: NumericTokenStream).

You create a new NumericRangeFilter with the static factory methods, eg:

Filter f = NumericRangeFilter.NewFloatRange("weight", 0.03f, 0.10f, true, true);

Accepts all documents whose float valued "weight" field ranges from 0.03 to 0.10, inclusive. See NumericRangeQuery for details on how Lucene indexes and searches numeric valued fields.

@since 2.9

NumericRangeQuery

LUCENENET specific class to provide access to static factory methods of NumericRangeQuery<T> without referring to its generic closing type.

NumericRangeQuery<T>

A Query that matches numeric values within a specified range. To use this, you must first index the numeric values using Int32Field, SingleField, Int64Field or DoubleField (expert: NumericTokenStream). If your terms are instead textual, you should use TermRangeQuery.
NumericRangeFilter is the filter equivalent of this query.

You create a new NumericRangeQuery<T> with the static factory methods, eg:

Query q = NumericRangeQuery.NewFloatRange("weight", 0.03f, 0.10f, true, true);
matches all documents whose float-valued "weight" field ranges from 0.03 to 0.10, inclusive.

The performance of NumericRangeQuery<T> is much better than the corresponding TermRangeQuery because the number of terms that must be searched is usually far fewer, thanks to trie indexing, described below.

You can optionally specify a precisionStep when creating this query. This is necessary if you've changed this configuration from its default (4) during indexing. Lower values consume more disk space but speed up searching. Suitable values are between 1 and 8. A good starting point to test is 4, which is the default value for all Numeric* classes. See below for details.

This query defaults to CONSTANT_SCORE_AUTO_REWRITE_DEFAULT. With precision steps of <=4, this query can be run with one of the BooleanQuery rewrite methods without changing BooleanQuery's default max clause count.

How it works

See the publication about panFMP, where this algorithm was described (referred to as TrieRangeQuery):

Schindler, U, Diepenbroek, M, 2008. Generic XML-based Framework for Metadata Portals. Computers & Geosciences 34 (12), 1947-1955. doi:10.1016/j.cageo.2008.02.023

A quote from this paper: Because Apache Lucene is a full-text search engine and not a conventional database, it cannot handle numerical ranges (e.g., field value is inside user defined bounds, even dates are numerical values). We have developed an extension to Apache Lucene that stores the numerical values in a special string-encoded format with variable precision (all numerical values like doubles, longs, floats, and ints are converted to lexicographic sortable string representations and stored with different precisions; for a more detailed description of how the values are stored, see NumericUtils). A range is then divided recursively into multiple intervals for searching: The center of the range is searched only with the lowest possible precision in the trie, while the boundaries are matched more exactly. This reduces the number of terms dramatically.

For the variant that stores long values in 8 different precisions (each reduced by 8 bits) that uses a lowest precision of 1 byte, the index contains only a maximum of 256 distinct values in the lowest precision. Overall, a range could consist of a theoretical maximum of 7*255*2 + 255 = 3825 distinct terms (when there is a term for every distinct value of an 8-byte-number in the index and the range covers almost all of them; a maximum of 255 distinct values is used because it would always be possible to reduce the full 256 values to one term with degraded precision). In practice, we have seen up to 300 terms in most cases (index with 500,000 metadata records and a uniform value distribution).

Precision Step

You can choose any precisionStep when encoding values. Lower step values mean more precisions and so more terms in the index (and a larger index). The number of indexed terms per value is (these are generated by NumericTokenStream):

indexedTermsPerValue = ceil(bitsPerValue / precisionStep)
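
For example, under the default precisionStep of 4, a 64-bit Int64 value produces ceil(64/4) = 16 indexed terms, and a 32-bit Int32 value produces ceil(32/4) = 8.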

As the lower precision terms are shared by many values, the additional terms only slightly grow the term dictionary (approx. 7% for precisionStep=4), but have a larger impact on the postings (the postings file will have more entries, as every document is linked to indexedTermsPerValue terms instead of one). The formula to estimate the growth of the term dictionary in comparison to one term per value:

\mathrm{termDictOverhead} = \sum\limits_{i=0}^{\mathrm{indexedTermsPerValue}-1} \frac{1}{2^{\mathrm{precisionStep}\cdot i}}

On the other hand, if the precisionStep is smaller, the maximum number of terms to match reduces, which optimizes query speed. The formula to calculate the maximum number of terms that will be visited while executing the query is:

\mathrm{maxQueryTerms} = \left[ \left( \mathrm{indexedTermsPerValue} - 1 \right) \cdot \left(2^\mathrm{precisionStep} - 1 \right) \cdot 2 \right] + \left( 2^\mathrm{precisionStep} - 1 \right)

For longs stored using a precision step of 4, maxQueryTerms = 15*15*2 + 15 = 465, and for a precision step of 2, maxQueryTerms = 31*3*2 + 3 = 189. But the faster search speed is reduced by more seeking in the term enum of the index. Because of this, the ideal precisionStep value can only be found out by testing. Important: You can index with a lower precision step value and test search speed using a multiple of the original step value.

Good values for precisionStep depend on usage and data type:

  • The default for all data types is 4, which is used when no precisionStep is given.
  • Ideal value in most cases for 64 bit data types (long, double) is 6 or 8.
  • Ideal value in most cases for 32 bit data types (int, float) is 4.
  • For low cardinality fields larger precision steps are good. If the cardinality is < 100, it is fair to use Int32.MaxValue (see below).
  • Steps >=64 for long/double and >=32 for int/float produce one token per value in the index and querying is as slow as a conventional TermRangeQuery. But they can be used to produce fields that are solely used for sorting (in this case simply use Int32.MaxValue as the precisionStep). Using Int32Field, Int64Field, SingleField or DoubleField for sorting is ideal, because building the field cache is much faster than with text-only numbers. These fields have one term per value and therefore also work with term enumeration for building distinct lists (e.g. facets / preselected values to search for). Sorting is also possible with range query optimized fields using one of the above precisionSteps.

Comparisons of the different types of RangeQueries on an index with about 500,000 docs showed that TermRangeQuery in boolean rewrite mode (with raised BooleanQuery clause count) took about 30-40 secs to complete, TermRangeQuery in constant score filter rewrite mode took 5 secs and executing this class took <100ms to complete (on an Opteron64 machine, Java 1.5, 8 bit precision step). This query type was developed for a geographic portal, where the performance for e.g. bounding boxes or exact date/time stamps is important.

@since 2.9

PhraseQuery

A Query that matches documents containing a particular sequence of terms. A PhraseQuery is built by QueryParser for input like "new york".

This query may be combined with other terms or queries with a BooleanQuery.

Collection initializer note: To create and populate a PhraseQuery in a single statement, you can use the following example as a guide:

var phraseQuery = new PhraseQuery() {
    new Term("field", "microsoft"), 
    new Term("field", "office")
};

Note that as long as you specify all of the parameters, you can use either Add(Term) or Add(Term, Int32) as the method to use to initialize. If there are multiple parameters, each parameter set must be surrounded by curly braces.

PositiveScoresOnlyCollector

A ICollector implementation which wraps another ICollector and makes sure only documents with scores > 0 are collected.

PrefixFilter

A Filter that restricts search results to values that have a matching prefix in a given field.

PrefixQuery

A Query that matches documents containing terms with a specified prefix. A PrefixQuery is built by QueryParser for input like app*.

This query uses the CONSTANT_SCORE_AUTO_REWRITE_DEFAULT rewrite method.

PrefixTermsEnum

Subclass of FilteredTermsEnum for enumerating all terms that match the specified prefix filter term.

Term enumerations are always ordered by Comparer. Each term in the enumeration is greater than all that precede it.

Query

The abstract base class for queries.

Instantiable subclasses are:

  • TermQuery
  • BooleanQuery
  • WildcardQuery
  • PhraseQuery
  • PrefixQuery
  • MultiPhraseQuery
  • FuzzyQuery
  • RegexpQuery
  • TermRangeQuery
  • NumericRangeQuery
  • ConstantScoreQuery
  • DisjunctionMaxQuery
  • MatchAllDocsQuery

See also the family of Span Queries (Lucene.Net.Search.Spans) and additional queries available in the Queries module

QueryRescorer

A Rescorer that uses a provided Query to assign scores to the first-pass hits.

@lucene.experimental
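
A hedged sketch ("searcher", "cheapQuery" and "expensiveQuery" are assumed to exist):

// First pass: a cheap query across the entire index.
TopDocs firstPass = searcher.Search(cheapQuery, 100);
// Second pass: re-rank those 100 hits, weighting the new scores by 2.0.
TopDocs rescored = QueryRescorer.Rescore(searcher, firstPass, expensiveQuery, 2.0, 100);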

QueryUtils

Utility class for sanity-checking queries.

QueryUtils.FCInvisibleMultiReader

This is a MultiReader that can be used for randomly wrapping other readers without creating FieldCache insanity. The trick is to use an opaque/fake cache key.

QueryWrapperFilter

Constrains search results to only match those which also match a provided query.

This could be used, for example, with a NumericRangeQuery on a suitably formatted date field to implement date filtering. One could re-use a single CachingWrapperFilter(QueryWrapperFilter) that matches, e.g., only documents modified within the last week. This would only need to be reconstructed once per day.
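
A hedged sketch of that pattern ("weekAgo" and "now" are assumed to be Int64 date values in the same encoding used at index time):

// Wrap a date-range query as a filter and cache the resulting bits so they
// are computed only once per reader for the lifetime of the filter.
Filter modifiedLastWeek = new CachingWrapperFilter(
    new QueryWrapperFilter(
        NumericRangeQuery.NewInt64Range("modified", weekAgo, now, true, true)));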

RandomSimilarityProvider

ReferenceContext<T>

ReferenceContext<T> holds a reference instance and ensures it is properly de-referenced from its corresponding ReferenceManager<G> when Dispose() is called. This class is primarily intended to be used with a using block.

LUCENENET specific

ReferenceManager

LUCENENET specific class used to provide static access to ReferenceManager.IRefreshListener without having to specify the generic closing type of ReferenceManager<G>.

ReferenceManager<G>

Utility class to safely share instances of a certain type across multiple threads, while periodically refreshing them. This class ensures each reference is closed only once all threads have finished using it. It is recommended to consult the documentation of ReferenceManager<G> implementations for their MaybeRefresh() semantics.

@lucene.experimental

ReferenceManagerExtensions

RegexpQuery

A fast regular expression query based on the Lucene.Net.Util.Automaton package.

  • Comparisons are fast
  • The term dictionary is enumerated in an intelligent way, to avoid comparisons. See AutomatonQuery for more details.

The supported syntax is documented in the RegExp class. Note this might be different than other regular expression implementations. For some alternatives with different syntax, look under the sandbox.

Note this query can be slow, as it needs to iterate over many terms. In order to prevent extremely slow RegexpQuerys, a RegExp term should not start with the expression .*

@lucene.experimental

Rescorer

Re-scores the topN results (TopDocs) from an original query. See QueryRescorer for an actual implementation. Typically, you run a low-cost first-pass query across the entire index, collecting the top few hundred hits perhaps, and then use this class to mix in a more costly second pass scoring.

See Rescore(IndexSearcher, TopDocs, Query, Double, Int32) for a simple static method to call to rescore using a 2nd pass Query.

@lucene.experimental

ScoreCachingWrappingScorer

A Scorer which wraps another scorer and caches the score of the current document. Successive calls to GetScore() will return the same result and will not invoke the wrapped Scorer's GetScore() method, unless the current document has changed.

This class might be useful due to the changes done to the ICollector interface, in which the score is not computed for a document by default, only if the collector requests it. Some collectors may need to use the score in several places, however all they have in hand is a Scorer object, and might end up computing the score of a document more than once.

ScoreDoc

Holds one hit in TopDocs.

Scorer

Expert: Common scoring functionality for different types of queries.

A Scorer iterates over documents matching a query in increasing order of doc Id.

Document scores are computed using a given Similarity implementation.

NOTE: The values float.NaN, float.NegativeInfinity and float.PositiveInfinity are not valid scores. Certain collectors (eg TopScoreDocCollector) will not properly collect hits with these scores.

Scorer.ChildScorer

A child Scorer and its relationship to its parent. The meaning of the relationship depends upon the parent query.

@lucene.experimental

ScoringRewrite<Q>

Base rewrite method that translates each term into a query, and keeps the scores as computed by the query.

@lucene.internal - Only public to be accessible by spans package.

SearchEquivalenceTestBase

Simple base class for checking search equivalence. Extend it, and write tests that create random terms (all terms are single characters a-z), and use AssertSameSet() and AssertSubsetOf().

SearcherFactory

Factory class used by SearcherManager to create new IndexSearchers. The default implementation just creates an IndexSearcher with no custom behavior:

    public IndexSearcher NewSearcher(IndexReader r)
    {
        return new IndexSearcher(r);
    }

You can pass your own factory instead if you want custom behavior, such as:

  • Setting a custom scoring model: Similarity
  • Parallel per-segment search: IndexSearcher(IndexReader, TaskScheduler)
  • Return custom subclasses of IndexSearcher (for example that implement distributed scoring)
  • Run queries to warm your IndexSearcher before it is used. Note: when using near-realtime search you may want to also set MergedSegmentWarmer to warm newly merged segments in the background, outside of the reopen path.
@lucene.experimental

SearcherLifetimeManager

Keeps track of current plus old IndexSearchers, disposing the old ones once they have timed out.

Use it like this:

    SearcherLifetimeManager mgr = new SearcherLifetimeManager();

Per search-request, if it's a "new" search request, then obtain the latest searcher you have (for example, by using SearcherManager), and then record this searcher:

    // Record the current searcher, and save the returned
    // token into user's search results (eg as a  hidden
    // HTML form field):
    long token = mgr.Record(searcher);

When a follow-up search arrives, for example the user clicks next page, drills down/up, etc., take the token that you saved from the previous search and:

    // If possible, obtain the same searcher as the last
    // search:
    IndexSearcher searcher = mgr.Acquire(token);
    if (searcher != null) 
    {
        // Searcher is still here
        try 
        {
            // do searching...
        } 
        finally 
        {
            mgr.Release(searcher);
            // Do not use searcher after this!
            searcher = null;
        }
    } 
    else 
    {
        // Searcher was pruned -- notify user session timed
        // out, or, pull fresh searcher again
    }

Finally, in a separate thread, ideally the same thread that's periodically reopening your searchers, you should periodically prune old searchers:

    mgr.Prune(new PruneByAge(600.0));

NOTE: keeping many searchers around means you'll use more resources (open files, RAM) than a single searcher. However, as long as you are using OpenIfChanged(DirectoryReader), the searchers will usually share almost all segments and the added resource usage is contained. When a large merge has completed, and you reopen, because that is a large change, the new searcher will use higher additional RAM than other searchers; but large merges don't complete very often and it's unlikely you'll hit two of them in your expiration window. Still you should budget plenty of heap in the runtime to have a good safety margin.

SearcherLifetimeManager.PruneByAge

Simple pruner that drops any searcher that is more than the specified number of seconds older than the newest searcher.

SearcherManager

Utility class to safely share IndexSearcher instances across multiple threads, while periodically reopening. This class ensures each searcher is disposed only once all threads have finished using it.

Use Acquire() to obtain the current searcher, and Release(G) to release it, like this:

IndexSearcher s = manager.Acquire();
try 
{
    // Do searching, doc retrieval, etc. with s
} 
finally 
{
    manager.Release(s);
    // Do not use s after this!
    s = null;
}

In addition you should periodically call MaybeRefresh(). While it's possible to call this just before running each query, this is discouraged since it penalizes the unlucky queries that do the reopen. It's better to use a separate background thread, that periodically calls MaybeRefresh(). Finally, be sure to call Dispose() once you are done.

@lucene.experimental

SetCollector

Just collects document ids into a set.

ShardSearchingTestBase

Base test class for simulating distributed search across multiple shards.

ShardSearchingTestBase.NodeState

ShardSearchingTestBase.NodeState.ShardIndexSearcher

Matches docs in the local shard but scores based on aggregated stats ("mock distributed scoring") from all nodes.

ShardSearchingTestBase.SearcherAndVersion

An IndexSearcher and associated version (lease)

ShardSearchingTestBase.SearcherExpiredException

Thrown when the lease for a searcher has expired.

Sort

Encapsulates sort criteria for returned hits.

The fields used to determine sort order must be carefully chosen. Documents must contain a single term in such a field, and the value of the term should indicate the document's relative position in a given sort order. The field must be indexed, but should not be tokenized, and does not need to be stored (unless you happen to want it back with the rest of your document data). In other words:

document.Add(new Field("byNumber", x.ToString(CultureInfo.InvariantCulture), Field.Store.NO, Field.Index.NOT_ANALYZED));

Valid Types of Values

There are four possible kinds of term values which may be put into sorting fields: Int32s, Int64s, Singles, or Strings. Unless SortField objects are specified, the type of value in the field is determined by parsing the first term in the field.

Int32 term values should contain only digits and an optional preceding negative sign. Values must be base 10 and in the range Int32.MinValue through Int32.MaxValue, inclusive. Documents which should appear first in the sort should have low value integers, later documents high values (i.e. the documents should be numbered 1..n where 1 is the first and n the last).

Int64 term values should contain only digits and an optional preceding negative sign. Values must be base 10 and in the range Int64.MinValue through Int64.MaxValue, inclusive. Documents which should appear first in the sort should have low value integers, later documents high values.

Single term values should conform to values that can be parsed as Single (except that NaN and Infinity are not supported). Documents which should appear first in the sort should have low values, later documents high values.

String term values can contain any valid String, but should not be tokenized. The values are sorted according to their comparable natural order. Note that using this type of term value has higher memory requirements than the other types.

Object Reuse

One of these objects can be used multiple times and the sort order changed between usages.

This class is thread safe.

Memory Usage

Sorting uses caches of term values maintained by the internal HitQueue(s). The cache is static and contains an Int32 or Single array of length IndexReader.MaxDoc for each field name for which a sort is performed. In other words, the size of the cache in bytes is:

4 * IndexReader.MaxDoc * (# of different fields actually used to sort)

For String fields, the cache is larger: in addition to the above array, the value of every term in the field is kept in memory. If there are many unique terms in the field, this could be quite large.

Note that the size of the cache is not affected by how many fields are in the index or might be used to sort, but only by the ones actually used to sort a result set.

Created: Feb 12, 2004 10:53:57 AM

@since lucene 1.4

SortField

Stores information about how to sort documents by terms in an individual field. Fields must be indexed in order to sort by them.

Created: Feb 11, 2004 1:25:29 PM

@since lucene 1.4

SortRescorer

A Rescorer that re-sorts according to a provided Sort.

SurrogateIndexSearcher

Implements search over a single IndexReader.

Applications usually need only call the inherited Search(Query, Int32) or Search(Query, Filter, Int32) methods. For performance reasons, if your index is unchanging, you should share a single SurrogateIndexSearcher instance across multiple searches instead of creating a new one per-search. If your index has changed and you wish to see the changes reflected in searching, you should use OpenIfChanged(DirectoryReader) to obtain a new reader and then create a new SurrogateIndexSearcher from that. Also, for low-latency turnaround it's best to use a near-real-time reader (Open(IndexWriter, Boolean)). Once you have a new IndexReader, it's relatively cheap to create a new SurrogateIndexSearcher from it.

NOTE: SurrogateIndexSearcher instances are completely thread safe, meaning multiple threads can call any of its methods, concurrently. If your application requires external synchronization, you should not synchronize on the SurrogateIndexSearcher instance; use your own (non-Lucene) objects instead.

SurrogateIndexSearcher.LeafSlice

A class holding a subset of the SurrogateIndexSearchers leaf contexts to be executed within a single thread.

@lucene.experimental

TermCollectingRewrite<Q>

TermQuery

A Query that matches documents containing a term. This may be combined with other terms with a BooleanQuery.

TermRangeFilter

A Filter that restricts search results to a range of term values in a given field.

This filter matches documents containing terms that fall into the supplied range. It is not intended for numerical ranges; use NumericRangeFilter instead.

If you construct a large number of range filters with different ranges but on the same field, FieldCacheRangeFilter may have significantly better performance.

@since 2.9

TermRangeQuery

A Query that matches documents within a range of terms.

This query matches documents containing terms that fall into the supplied range. It is not intended for numerical ranges; use NumericRangeQuery instead.

This query uses the CONSTANT_SCORE_AUTO_REWRITE_DEFAULT rewrite method.

@since 2.9

TermRangeTermsEnum

Subclass of FilteredTermsEnum for enumerating all terms that match the specified range parameters.

Term enumerations are always ordered by Comparer. Each term in the enumeration is greater than all that precede it.

TermStatistics

Contains statistics for a specific term

@lucene.experimental

TimeLimitingCollector

The TimeLimitingCollector is used to timeout search requests that take longer than the maximum allowed search time limit. After this time is exceeded, the search thread is stopped by throwing a TimeLimitingCollector.TimeExceededException.
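
A hedged sketch ("searcher" and "query" are assumed to exist; at the timer's default resolution the tick budget corresponds roughly to milliseconds):

// A background TimerThread advances the clock that the collector checks.
Counter clock = Counter.NewCounter(true);
var timerThread = new TimeLimitingCollector.TimerThread(clock);
timerThread.Start();

TopScoreDocCollector top = TopScoreDocCollector.Create(10, true);
try
{
    searcher.Search(query, new TimeLimitingCollector(top, clock, 1000));
}
catch (TimeLimitingCollector.TimeExceededException)
{
    // The hits collected before the timeout are still available from "top".
}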

TimeLimitingCollector.TimeExceededException

Thrown when elapsed search time exceeds allowed search time.

TimeLimitingCollector.TimerThread

Thread used to timeout search requests. Can be stopped completely with StopTimer()

@lucene.experimental

TopDocs

Represents hits returned by Search(Query, Filter, Int32) and Search(Query, Int32).

TopDocsCollector<T>

A base class for all collectors that return a TopDocs output. This collector allows easy extension by providing a single constructor which accepts a PriorityQueue<T> as well as protected members for that priority queue and a counter of the number of total hits.

Extending classes can override any of the methods to provide their own implementation, as well as avoid the use of the priority queue entirely by passing null to TopDocsCollector(PriorityQueue<T>). In that case however, you might want to consider overriding all methods, in order to avoid a NullReferenceException.

TopFieldCollector

A ICollector that sorts by SortField using FieldComparers.

See the Create(Sort, Int32, Boolean, Boolean, Boolean, Boolean) method for instantiating a TopFieldCollector.

@lucene.experimental

TopFieldDocs

Represents hits returned by Search(Query, Filter, Int32, Sort).

TopScoreDocCollector

A ICollector implementation that collects the top-scoring hits, returning them as a TopDocs. This is used by IndexSearcher to implement TopDocs-based search. Hits are sorted by score descending and then (when the scores are tied) docID ascending. When you create an instance of this collector you should know in advance whether documents are going to be collected in doc Id order or not.

NOTE: The values float.NaN and float.NegativeInfinity are not valid scores. This collector will not properly collect hits with such scores.

TopTermsRewrite<Q>

Base rewrite method for collecting only the top terms via a priority queue.

@lucene.internal - Only public to be accessible by spans package.

TotalHitCountCollector

Just counts the total number of hits.

Weight

Expert: Calculate query weights and build query scorers.

The purpose of Weight is to ensure searching does not modify a Query, so that a Query instance can be reused.

IndexSearcher dependent state of the query should reside in the Weight.

AtomicReader dependent state should reside in the Scorer.

Since Weight creates Scorer instances for a given AtomicReaderContext (GetScorer(AtomicReaderContext, IBits)) callers must maintain the relationship between the searcher's top-level IndexReaderContext and the context used to create a Scorer.

A Weight is used in the following way:

  1. A Weight is constructed by a top-level query, given an IndexSearcher (CreateWeight(IndexSearcher)).
  2. The GetValueForNormalization() method is called on the Weight to compute the query normalization factor QueryNorm(Single) of the query clauses contained in the query.
  3. The query normalization factor is passed to Normalize(Single, Single). At this point the weighting is complete.
  4. A Scorer is constructed by GetScorer(AtomicReaderContext, IBits).

@since 2.9

WildcardQuery

Implements the wildcard search query. Supported wildcards are *, which matches any character sequence (including the empty one), and ?, which matches any single character. '\' is the escape character.

Note this query can be slow, as it needs to iterate over many terms. In order to prevent extremely slow WildcardQueries, a Wildcard term should not start with the wildcard *.

This query uses the CONSTANT_SCORE_AUTO_REWRITE_DEFAULT rewrite method.
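
A minimal sketch (the field name is illustrative):

// '?' matches one character, '*' matches any sequence: "test", "text", "tests", ...
Query wildcard = new WildcardQuery(new Term("body", "te?t*"));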

Interfaces

FieldCache.IByteParser

Interface to parse bytes from document fields.

FieldCache.IDoubleParser

Interface to parse doubles from document fields.

FieldCache.IInt16Parser

Interface to parse Int16s from document fields.

NOTE: This was ShortParser in Lucene

FieldCache.IInt32Parser

Interface to parse Int32s from document fields.

NOTE: This was IntParser in Lucene

FieldCache.IInt64Parser

Interface to parse Int64s from document fields.

NOTE: This was LongParser in Lucene

FieldCache.IParser

Marker interface as super-interface to all parsers. It is used to specify a custom parser to SortField(String, FieldCache.IParser).

FieldCache.ISingleParser

Interface to parse Singles from document fields.

NOTE: This was FloatParser in Lucene

FuzzyTermsEnum.ILevenshteinAutomataAttribute

Reuses compiled automata across different segments, because they are independent of the index

@lucene.internal

IBoostAttribute

Add this IAttribute to a TermsEnum returned by GetTermsEnum(Terms, AttributeSource) and update the boost on each returned term. This makes it possible to control the boost factor for each matching term in SCORING_BOOLEAN_QUERY_REWRITE or TopTermsRewrite<Q> mode. FuzzyQuery is using this to take the edit distance into account.

Please note: this attribute is intended to be added only by the TermsEnum to itself in its constructor and consumed by the MultiTermQuery.RewriteMethod.

@lucene.internal

ICollector

Expert: Collectors are primarily meant to be used to gather raw results from a search, and implement sorting or custom result filtering, collation, etc.

Lucene's core collectors are derived from Collector. Likely your application can use one of these classes, or subclass TopDocsCollector<T>, instead of implementing ICollector directly:

  • TopDocsCollector<T> is an abstract base class that assumes you will retrieve the top N docs, according to some criteria, after collection is done.
  • TopScoreDocCollector is a concrete subclass TopDocsCollector<T> and sorts according to score + docID. This is used internally by the IndexSearcher search methods that do not take an explicit Sort. It is likely the most frequently used collector.
  • TopFieldCollector subclasses TopDocsCollector<T> and sorts according to a specified Sort object (sort by field). This is used internally by the IndexSearcher search methods that take an explicit Sort.
  • TimeLimitingCollector, which wraps any other Collector and aborts the search if it's taken too much time.
  • PositiveScoresOnlyCollector wraps any other ICollector and prevents collection of hits whose score is <= 0.0

ICollector decouples the score from the collected doc: the score computation is skipped entirely if it's not needed. Collectors that do need the score should implement the SetScorer(Scorer) method, to hold onto the passed Scorer instance, and call GetScore() within the collect method to compute the current hit's score. If your collector may request the score for a single hit multiple times, you should use ScoreCachingWrappingScorer.

NOTE: The doc that is passed to the collect method is relative to the current reader. If your collector needs to resolve this to the docID space of the Multi*Reader, you must re-base it by recording the docBase from the most recent SetNextReader(AtomicReaderContext) call. Here's a simple example showing how to collect docIDs into an OpenBitSet:

[Serializable] private class MySearchCollector : ICollector
{
    private readonly OpenBitSet bits;
    private int docBase;

    public MySearchCollector(OpenBitSet bits)
    {
        if (bits == null) throw new ArgumentNullException("bits");
        this.bits = bits;
    }

    // ignore scorer
    public void SetScorer(Scorer scorer)
    { 
    }

    // accept docs out of order (for a BitSet it doesn't matter)
    public bool AcceptsDocsOutOfOrder
    {
        get { return true; }
    }

    public void Collect(int doc)
    {
        bits.Set(doc + docBase);
    }

    public void SetNextReader(AtomicReaderContext context)
    {
        this.docBase = context.DocBase;
    }
}

IndexSearcher searcher = new IndexSearcher(indexReader);
OpenBitSet bits = new OpenBitSet(indexReader.MaxDoc);
searcher.Search(query, new MySearchCollector(bits));

Not all collectors will need to rebase the docID. For example, a collector that simply counts the total number of hits would skip it.

NOTE: Prior to 2.9, Lucene silently filtered out hits with score <= 0. As of 2.9, the core ICollectors no longer do that. It's very unusual to have such hits (a negative query boost, or function query returning negative custom scores, could cause it to happen). If you need that behavior, use PositiveScoresOnlyCollector.

@lucene.experimental

@since 2.9

IFieldCache

Expert: Maintains caches of term values.

Created: May 19, 2004 11:13:14 AM

@lucene.internal

@since lucene 1.4

IMaxNonCompetitiveBoostAttribute

Add this IAttribute to a fresh AttributeSource before calling GetTermsEnum(Terms, AttributeSource). FuzzyQuery is using this to control its internal behaviour to only return competitive terms.

Please note: this attribute is intended to be added by the MultiTermQuery.RewriteMethod to an empty AttributeSource that is shared for all segments during query rewrite. This attribute source is passed to all segment enums on GetTermsEnum(Terms, AttributeSource). TopTermsRewrite<Q> uses this attribute to inform all enums about the current boost that is not competitive.

@lucene.internal

ITopDocsCollector

LUCENENET specific interface used to reference TopDocsCollector<T> without referencing its generic type.

ReferenceManager.IRefreshListener

Use to receive notification when a refresh has finished. See AddListener(ReferenceManager.IRefreshListener).

SearcherLifetimeManager.IPruner

See Prune(SearcherLifetimeManager.IPruner).

Enums

Occur

Specifies how clauses are to occur in matching documents.

SortFieldType

Specifies the type of the terms to be sorted, or special types such as CUSTOM
