Perform chat completion inference on the service

POST /_inference/chat_completion/{inference_id}/_stream

The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally as they are generated, reducing perceived response time. It works only with the chat_completion task type.

NOTE: The chat_completion task type is available only through the _stream API and supports only streaming responses. The chat completion inference API and the stream inference API differ in their response structure and capabilities: the chat completion inference API offers more comprehensive customization through additional fields and function-calling support. To determine whether a given inference service supports this task type, see the documentation page for that service.
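As a sketch of how the endpoint above might be called, the helper below builds the request URL and payload. The host, inference ID, and API key shown are placeholders, not values from this document; only the path shape and the messages field come from the reference itself.

```typescript
// Builds a request against POST /_inference/chat_completion/{inference_id}/_stream.
// All concrete values passed in (base URL, inference ID, API key) are
// hypothetical examples.
function buildChatCompletionRequest(
  baseUrl: string,
  inferenceId: string,
  apiKey: string
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${baseUrl}/_inference/chat_completion/${inferenceId}/_stream`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `ApiKey ${apiKey}`,
      },
      // Minimal body: a single-turn conversation.
      body: JSON.stringify({
        messages: [{ role: "user", content: "Hello" }],
      }),
    },
  };
}
```

A call like `fetch(url, init)` with the returned values would then stream the response incrementally rather than returning one complete document.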

Parameters

Path Parameters

inference_id (required)
    The inference ID.
    Type: TypesId = string

Query Parameters

timeout (optional)
    Specifies the amount of time to wait for the inference request to complete.
    Type: TypesDuration = string | "-1" | "0"

Request Body

application/json required
interface InferenceTypesRequestChatCompletion {
  // The conversation, as a list of message objects.
  messages: InferenceTypesMessage[];
  model?: string;
  max_completion_tokens?: number;
  stop?: string[];
  temperature?: number;
  tool_choice?: InferenceTypesCompletionToolType;
  // A list of tools that the model can call.
  tools?: InferenceTypesCompletionTool[];
  top_p?: number;
}

// An object representing part of the conversation.
interface InferenceTypesMessage {
  content?: InferenceTypesMessageContent;
  role: string;
  tool_call_id?: TypesId;
  tool_calls?: InferenceTypesToolCall[];
}

type InferenceTypesCompletionToolType = InferenceTypesCompletionToolChoice | string;

interface InferenceTypesCompletionTool {
  type: string;
  function: InferenceTypesCompletionToolFunction;
}
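A concrete request body helps show how these fields fit together. The sketch below uses simplified local interfaces (message content is typed as a plain string here, though the schema also allows structured content), and the get_weather tool with its parameter schema is a hypothetical example, not part of this API's definition.

```typescript
// Simplified local mirror of the request-body schema above; field names
// match the reference, but the get_weather tool is a made-up example.
interface Message {
  role: string;
  content?: string; // simplified: structured content is also allowed
  tool_call_id?: string;
}

interface CompletionTool {
  type: string;
  function: { name: string; description?: string; parameters?: object };
}

interface ChatCompletionBody {
  messages: Message[];
  model?: string;
  max_completion_tokens?: number;
  stop?: string[];
  temperature?: number;
  tool_choice?: string;
  tools?: CompletionTool[];
  top_p?: number;
}

const body: ChatCompletionBody = {
  messages: [{ role: "user", content: "What is the weather in Paris?" }],
  temperature: 0.2,
  max_completion_tokens: 256,
  tool_choice: "auto",
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Look up current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
};

console.log(JSON.stringify(body));
```

Serializing this object with JSON.stringify produces a payload suitable for the application/json request body described above.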

Responses

200 application/json
interface TypesStreamResult {}
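The response schema above is an opaque stream result. Assuming the stream uses standard text/event-stream framing with a JSON chunk per "data:" field and an OpenAI-style "[DONE]" sentinel (verify this against your deployment's actual output), a minimal parser might look like:

```typescript
// Minimal parser for a text/event-stream payload. The framing assumed
// here (JSON per "data:" line, "[DONE]" terminator) is an assumption,
// not something this reference page specifies.
function parseSseChunks(raw: string): object[] {
  const chunks: object[] = [];
  for (const event of raw.split("\n\n")) {
    for (const line of event.split("\n")) {
      if (!line.startsWith("data:")) continue;
      const payload = line.slice("data:".length).trim();
      if (payload === "" || payload === "[DONE]") continue;
      chunks.push(JSON.parse(payload));
    }
  }
  return chunks;
}
```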