Get tokens from text analysis
POST /_analyze

The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive number of tokens may cause a node to run out of memory. The `index.analyze.max_token_count` setting lets you limit the number of tokens that can be produced; if more tokens than this limit are generated, an error occurs. When the `_analyze` endpoint is called without a specified index, it always uses 10000 as its limit.
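As a minimal illustration (the analyzer name and sample text here are arbitrary, not taken from this page), a request to `POST /_analyze` naming a built-in analyzer might carry a body like:

```json
{
  "analyzer": "standard",
  "text": "The quick brown fox."
}
```

The response lists the tokens that the named analyzer produces for the given text.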
Required authorization
- Index privileges: `index`
Parameters

Query parameters

| Name | Type | Description |
|---|---|---|
| `index` | `TypesIndexName = string` | Index used to derive the analyzer. If specified, the … |
Request Body

application/json (required)
{
  analyzer?: string;
  attributes?: string[];
  char_filter?: TypesAnalysisCharFilter[];
  explain?: boolean;
  field?: TypesField;
  filter?: TypesAnalysisTokenFilter[];
  normalizer?: string;
  text?: IndicesAnalyzeTextToAnalyze;
  tokenizer?: TypesAnalysisTokenizer;
}
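Instead of naming a prebuilt analyzer, the request body can assemble one inline from the `tokenizer`, `filter`, and `char_filter` properties. A sketch using common built-in components (the specific component choices are illustrative, not prescribed by this page):

```json
{
  "tokenizer": "whitespace",
  "filter": ["lowercase"],
  "char_filter": ["html_strip"],
  "text": "<p>The QUICK brown fox</p>"
}
```

This is useful for trying out an analysis chain before committing it to an index's settings.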
Responses

200 application/json

{
  detail?: IndicesAnalyzeAnalyzeDetail;
  tokens?: IndicesAnalyzeAnalyzeToken[];
}
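For illustration only (the token values and offsets depend entirely on the analyzer and input text), a successful response carrying the `tokens` array might look like:

```json
{
  "tokens": [
    {
      "token": "quick",
      "start_offset": 4,
      "end_offset": 9,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}
```

When `explain` is set to `true` in the request, the response instead populates the `detail` object with per-step output from each tokenizer and filter.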