Get tokens from text analysis

GET /{index}/_analyze

The analyze API performs analysis on a text string and returns the resulting tokens.

Generating an excessive number of tokens can cause a node to run out of memory. The index.analyze.max_token_count setting limits the number of tokens an analyze request can produce; if the request would generate more tokens than the limit, it returns an error. When the _analyze endpoint is called without an index, the limit is always 10000.
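The limit above can be raised or lowered per index with a settings update. A minimal sketch, assuming an existing index named `my-index` (the name and the value 20000 are illustrative):

```
PUT /my-index/_settings
{
  "index.analyze.max_token_count": 20000
}
```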

Required authorization

  • Index privileges: index

Parameters

Path Parameters

index (required) — type TypesIndexName = string

Index used to derive the analyzer. If specified, the analyzer or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer.

Query Parameters

index (optional) — type TypesIndexName = string

Index used to derive the analyzer. If specified, the analyzer or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer.

Request Body

application/json (required)

{
  analyzer?: string;
  attributes?: string[];
  char_filter?: TypesAnalysisCharFilter[];
  explain?: boolean;
  field?: TypesField;
  filter?: TypesAnalysisTokenFilter[];
  normalizer?: string;
  text?: IndicesAnalyzeTextToAnalyze;
  tokenizer?: TypesAnalysisTokenizer;
}

type TypesAnalysisCharFilter = TypesAnalysisCharFilterDefinition | string
type TypesField = string — path to a field or an array of paths; some APIs support wildcards in the path to select multiple fields
type TypesAnalysisTokenFilter = TypesAnalysisTokenFilterDefinition | string
type IndicesAnalyzeTextToAnalyze = string[] | string
type TypesAnalysisTokenizer = TypesAnalysisTokenizerDefinition | string
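A minimal request sketch using this body: it asks a built-in analyzer to tokenize a string (the index name `my-index` is illustrative, and any existing index would do since the analyzer parameter overrides the index default):

```
GET /my-index/_analyze
{
  "analyzer": "standard",
  "text": "The QUICK brown fox"
}
```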

Responses

200 application/json

{
  detail?: IndicesAnalyzeAnalyzeDetail;
  tokens?: IndicesAnalyzeAnalyzeToken[];
}

interface IndicesAnalyzeAnalyzeDetail {
  analyzer?: IndicesAnalyzeAnalyzerDetail;
  charfilters?: IndicesAnalyzeCharFilterDetail[];
  custom_analyzer: boolean;
  tokenfilters?: IndicesAnalyzeTokenDetail[];
  tokenizer?: IndicesAnalyzeTokenDetail;
}

interface IndicesAnalyzeAnalyzeToken {
  end_offset: number;
  position: number;
  positionLength?: number;
  start_offset: number;
  token: string;
  type: string;
}
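A response shaped like the schema above can be consumed by reading the tokens array. The payload below is an illustrative sketch matching the IndicesAnalyzeAnalyzeToken shape, not output captured from a live cluster:

```python
# Illustrative _analyze response; each entry follows IndicesAnalyzeAnalyzeToken.
response = {
    "tokens": [
        {"token": "the", "start_offset": 0, "end_offset": 3,
         "position": 0, "type": "<ALPHANUM>"},
        {"token": "quick", "start_offset": 4, "end_offset": 9,
         "position": 1, "type": "<ALPHANUM>"},
        {"token": "brown", "start_offset": 10, "end_offset": 15,
         "position": 2, "type": "<ALPHANUM>"},
    ]
}

# Collect the emitted terms, ordered by their position in the token stream.
terms = [t["token"] for t in sorted(response["tokens"],
                                    key=lambda t: t["position"])]
print(terms)  # ['the', 'quick', 'brown']
```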