Perform chat completion inference on the service

`POST /_inference/chat_completion/{inference_id}/_stream`

The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
It only works with the chat_completion task type.
NOTE: The chat_completion task type is only available within the _stream API and only supports streaming.
The chat completion inference API and the stream inference API differ in their response structure and capabilities. The chat completion inference API offers more comprehensive customization through additional fields and support for function calling.
To determine whether a given inference service supports this task type, see the documentation for that service.
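As a sketch, a minimal streaming chat completion request might look like the following. The inference endpoint ID `openai-completion` and the message content are hypothetical examples, not values defined by this API:

```
POST /_inference/chat_completion/openai-completion/_stream
{
  "messages": [
    {
      "role": "user",
      "content": "What is Elastic?"
    }
  ]
}
```

The response is streamed back incrementally as the model generates it, rather than returned as a single payload.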
Parameters
Path Parameters
| Name | Type |
|---|---|
| `inference_id` (required) — The inference ID. | string |
Query Parameters
| Name | Type |
|---|---|
| `timeout` — The amount of time to wait for the inference request to complete. | Duration: string \| "-1" \| "0" |
Request Body
```ts
{
  messages: Message[];               // required: the chat conversation so far
  model?: string;                    // model ID to use (service-dependent)
  max_completion_tokens?: number;    // upper bound on generated tokens
  stop?: string[];                   // sequences that stop generation
  temperature?: number;              // sampling temperature
  tool_choice?: string | ToolChoice; // controls which tool, if any, is called
  tools?: Tool[];                    // tool definitions for function calling
  top_p?: number;                    // nucleus sampling probability
}
// The Message, ToolChoice, and Tool element types are elided in this page;
// see the service documentation for their full definitions.
```
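To illustrate how the request body fields fit together, here is a minimal Python sketch that assembles a JSON body using the documented fields. The message content and parameter values are illustrative assumptions, not defaults of this API:

```python
import json

# Build a chat_completion request body using the fields documented above.
# All concrete values here are illustrative assumptions.
body = {
    "messages": [
        {"role": "user", "content": "What is Elastic?"}
    ],
    "temperature": 0.7,            # sampling temperature
    "max_completion_tokens": 256,  # cap on generated tokens
    "stop": ["\n\n"],              # stop generation at a blank line
    "top_p": 0.9,                  # nucleus sampling probability
}

# Serialize for the request; this is what would be sent to
# POST /_inference/chat_completion/{inference_id}/_stream
payload = json.dumps(body)
print(payload)
```

Optional fields such as `model`, `tools`, and `tool_choice` can be added to the same dictionary when the underlying service supports them.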