# Get tokens from text analysis
`/{index}/_analyze`

The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive number of tokens may cause a node to run out of memory. The `index.analyze.max_token_count` setting lets you limit the number of tokens that can be produced; if more tokens than this limit are generated, an error is thrown. The `_analyze` endpoint without a specified index always uses `10000` as its limit.
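As a minimal sketch of calling this endpoint, the snippet below builds an `_analyze` request using only Python's standard library. The host `http://localhost:9200`, the index name `my-index`, and the helper `build_analyze_request` are all illustrative assumptions, not part of the API itself; the actual HTTP call is left commented out so the example does not require a running cluster.

```python
import json
from urllib.request import Request

ES_HOST = "http://localhost:9200"  # assumed local cluster (placeholder)

def build_analyze_request(text, index=None, analyzer="standard"):
    """Build an _analyze request, with or without an index in the path."""
    path = f"/{index}/_analyze" if index else "/_analyze"
    body = json.dumps({"analyzer": analyzer, "text": text}).encode()
    return Request(ES_HOST + path, data=body,
                   headers={"Content-Type": "application/json"})

req = build_analyze_request("Quick brown fox", index="my-index")
# On a live cluster, urllib.request.urlopen(req) would return a JSON
# body of the form {"tokens": [...]}.
print(req.full_url)  # http://localhost:9200/my-index/_analyze
```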
## Required authorization

- Index privileges: `index`
## Parameters

### Path parameters

| Name | Type | Description |
|---|---|---|
| `index` (required) | `string` | Index used to derive the analyzer. If specified, the `analyzer` or `field` parameter overrides this value. |
### Query parameters

| Name | Type | Description |
|---|---|---|
| `index` | `string` | Index used to derive the analyzer. If specified, the `analyzer` or `field` parameter overrides this value. |
## Request body

```
{
  analyzer?: string;
  attributes?: string[];
  char_filter?: (string | object)[];
  explain?: boolean;
  field?: string;
  filter?: (string | object)[];
  normalizer?: string;
  text?: string | string[];
  tokenizer?: string | object;
}
```

The `char_filter`, `filter`, and `tokenizer` entries accept either the name of a built-in component or an inline definition object; `text` accepts a single string or an array of strings.
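The body fields above can also define an ad-hoc analyzer inline, naming a `tokenizer` and a list of `filter`s instead of a prebuilt `analyzer`. The sketch below shows such a body and a small helper for pulling token strings out of a response; `sample_response` is an illustrative hand-written response, not output from a live cluster.

```python
# Ad-hoc analysis: define the tokenizer and filters inline instead of
# naming a prebuilt analyzer.
body = {
    "tokenizer": "whitespace",
    "filter": ["lowercase"],
    "text": "Quick Brown FOX",
    "explain": False,
}

# Typical shape of an _analyze response (illustrative sample).
sample_response = {
    "tokens": [
        {"token": "quick", "start_offset": 0, "end_offset": 5,
         "type": "word", "position": 0},
        {"token": "brown", "start_offset": 6, "end_offset": 11,
         "type": "word", "position": 1},
        {"token": "fox", "start_offset": 12, "end_offset": 15,
         "type": "word", "position": 2},
    ]
}

def extract_tokens(response):
    """Pull just the token strings out of an _analyze response."""
    return [t["token"] for t in response["tokens"]]

print(extract_tokens(sample_response))  # ['quick', 'brown', 'fox']
```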