Operations
API endpoints available to AI agents
cancelEvalRun (Evals, authenticated)
POST /evals/{eval_id}/runs/{run_id}
Cancel an ongoing evaluation run.
createEval (Evals, authenticated)
POST /evals
Create the structure of an evaluation that can be used to test a model's performance. An evaluation is a set of testing criteria and a data source. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and data sources. For more information, see the [Evals guide](/docs/guides/evals).
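As an illustration of createEval, here is a minimal sketch of assembling a request body for POST /evals. The field names (`name`, `data_source_config`, `testing_criteria`) and the grader type shown are assumptions drawn from the description above, not the authoritative schema.

```python
# Hypothetical sketch: serializing a createEval request body.
# The field names and values below are assumptions, not the exact schema.
import json

def build_create_eval_body(name, data_source_config, testing_criteria):
    """Build the JSON body for POST /evals."""
    return json.dumps({
        "name": name,
        "data_source_config": data_source_config,
        "testing_criteria": testing_criteria,
    })

payload = build_create_eval_body(
    name="sentiment-check",                       # illustrative eval name
    data_source_config={"type": "custom"},        # assumed data source shape
    testing_criteria=[{"type": "string_check"}],  # assumed grader type
)
```

Once the evaluation is created, the returned `eval_id` is what the run endpoints below take as a path parameter.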
createEvalRun (Evals, authenticated)
POST /evals/{eval_id}/runs
Create a new evaluation run. This endpoint kicks off grading.
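The run endpoints all nest under an evaluation's ID. A small sketch of how those paths compose (the base URL and the example IDs are assumptions for illustration):

```python
# Minimal sketch of how the run paths nest under an evaluation ID.
# BASE_URL and the example IDs are assumptions, not values from this page.
from typing import Optional

BASE_URL = "https://api.openai.com/v1"

def eval_runs_url(eval_id: str, run_id: Optional[str] = None) -> str:
    """Collection URL for an eval's runs, or a specific run when run_id is given."""
    url = f"{BASE_URL}/evals/{eval_id}/runs"
    return f"{url}/{run_id}" if run_id else url

create_url = eval_runs_url("eval_abc123")             # createEvalRun / getEvalRuns target
run_url = eval_runs_url("eval_abc123", "run_xyz789")  # getEvalRun / cancelEvalRun / deleteEvalRun target
```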
deleteEval (Evals, authenticated)
DELETE /evals/{eval_id}
Delete an evaluation.
deleteEvalRun (Evals, authenticated)
DELETE /evals/{eval_id}/runs/{run_id}
Delete an evaluation run.
getEval (Evals, authenticated)
GET /evals/{eval_id}
Get an evaluation by ID.
getEvalRun (Evals, authenticated)
GET /evals/{eval_id}/runs/{run_id}
Get an evaluation run by ID.
getEvalRunOutputItem (Evals, authenticated)
GET /evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}
Get an evaluation run output item by ID.
getEvalRunOutputItems (Evals, authenticated)
GET /evals/{eval_id}/runs/{run_id}/output_items
Get a list of output items for an evaluation run.
getEvalRuns (Evals, authenticated)
GET /evals/{eval_id}/runs
Get a list of runs for an evaluation.
listEvals (Evals, authenticated)
GET /evals
List evaluations for a project.
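listEvals returns a collection, so callers typically page through it. A hedged sketch of building its query string; the `limit` and `after` cursor parameters are assumptions based on common list-endpoint conventions and may not match the actual API:

```python
# Hedged sketch of a query string for GET /evals.
# `limit` and `after` are assumed pagination parameters, not confirmed
# by this page.
from urllib.parse import urlencode

def list_evals_query(limit: int = 20, after=None) -> str:
    params = {"limit": limit}
    if after is not None:
        params["after"] = after  # cursor: ID of the last eval already seen
    return "/evals?" + urlencode(params)

q = list_evals_query(limit=10, after="eval_abc123")
```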
updateEval (Evals, authenticated)
POST /evals/{eval_id}
Update certain properties of an evaluation.