# Operations

API endpoints available to AI agents. Every operation in this section carries the `Evals` tag.

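The examples in this listing are minimal sketches using Python's `requests` library. They assume the standard `https://api.openai.com/v1` base URL and an API key in the `OPENAI_API_KEY` environment variable; neither is stated on this page, and error handling is reduced to `raise_for_status()`.

```python
import os

import requests

# Shared session for the sketches below. The base URL and bearer-token
# auth scheme are assumptions (standard OpenAI REST conventions), not
# taken from this page.
BASE = "https://api.openai.com/v1"
session = requests.Session()
session.headers.update({"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"})
```
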
**cancelEvalRun** · `POST /evals/{eval_id}/runs/{run_id}`

Cancel an ongoing evaluation run.

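Per the path above, cancellation is a `POST` to the run's own URL. A sketch using the shared session; `eval_id` and `run_id` are placeholder identifiers.

```python
def cancel_eval_run(eval_id: str, run_id: str) -> dict:
    # POST /evals/{eval_id}/runs/{run_id} cancels an in-flight run.
    resp = session.post(f"{BASE}/evals/{eval_id}/runs/{run_id}")
    resp.raise_for_status()
    return resp.json()
```
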
**createEval** · `POST /evals`

Create the structure of an evaluation that can be used to test a model's performance. An evaluation is a set of testing criteria and a datasource. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and datasources. For more information, see the [Evals guide](/docs/guides/evals).

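A hedged sketch of creating an eval. The `data_source_config` and `testing_criteria` field names and shapes are illustrative assumptions drawn from the description above (a datasource plus testing criteria); consult the [Evals guide](/docs/guides/evals) for the authoritative schema.

```python
def create_eval(name: str) -> dict:
    # POST /evals creates the eval's structure: a datasource plus testing
    # criteria. The payload below is an assumed shape, not the verified schema.
    payload = {
        "name": name,
        "data_source_config": {
            "type": "custom",
            "item_schema": {
                "type": "object",
                "properties": {
                    "question": {"type": "string"},
                    "expected": {"type": "string"},
                },
                "required": ["question", "expected"],
            },
        },
        "testing_criteria": [{
            "type": "string_check",
            "name": "exact match",
            "input": "{{sample.output_text}}",
            "reference": "{{item.expected}}",
            "operation": "eq",
        }],
    }
    resp = session.post(f"{BASE}/evals", json=payload)
    resp.raise_for_status()
    return resp.json()
```
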
**createEvalRun** · `POST /evals/{eval_id}/runs`

Create a new evaluation run. This is the endpoint that kicks off grading.

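A sketch of starting a run, reusing the session above. The request body shape (`name`, `data_source`) is an assumption, not something this listing specifies.

```python
def create_eval_run(eval_id: str, data_source: dict, name: str = "run") -> dict:
    # POST /evals/{eval_id}/runs kicks off grading against the given
    # datasource. Body fields here are assumed; see the Evals guide.
    resp = session.post(
        f"{BASE}/evals/{eval_id}/runs",
        json={"name": name, "data_source": data_source},
    )
    resp.raise_for_status()
    return resp.json()
```
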
**deleteEval** · `DELETE /evals/{eval_id}`

Delete an evaluation.

**deleteEvalRun** · `DELETE /evals/{eval_id}/runs/{run_id}`

Delete an evaluation run.

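Both delete operations are plain `DELETE` requests on the paths above; a combined sketch.

```python
def delete_eval(eval_id: str) -> dict:
    # DELETE /evals/{eval_id} removes the evaluation itself.
    resp = session.delete(f"{BASE}/evals/{eval_id}")
    resp.raise_for_status()
    return resp.json()


def delete_eval_run(eval_id: str, run_id: str) -> dict:
    # DELETE /evals/{eval_id}/runs/{run_id} removes a single run.
    resp = session.delete(f"{BASE}/evals/{eval_id}/runs/{run_id}")
    resp.raise_for_status()
    return resp.json()
```
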
**getEval** · `GET /evals/{eval_id}`

Get an evaluation by ID.

**getEvalRun** · `GET /evals/{eval_id}/runs/{run_id}`

Get an evaluation run by ID.

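The two fetch-by-ID operations are plain `GET`s; a common use is polling a run until grading finishes. The `status` field and the `queued`/`in_progress` values below are assumptions about the run object, which this listing does not describe.

```python
import time


def wait_for_run(eval_id: str, run_id: str, interval: float = 5.0) -> dict:
    # Poll GET /evals/{eval_id}/runs/{run_id} until the run leaves an
    # assumed non-terminal state, then return the final run object.
    while True:
        run = session.get(f"{BASE}/evals/{eval_id}/runs/{run_id}").json()
        if run.get("status") not in ("queued", "in_progress"):
            return run
        time.sleep(interval)
```
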
**getEvalRunOutputItem** · `GET /evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}`

Get an evaluation run output item by ID.

**getEvalRunOutputItems** · `GET /evals/{eval_id}/runs/{run_id}/output_items`

Get a list of output items for an evaluation run.

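A sketch of paging through a run's output items. The `after` cursor and the `data`/`has_more` response envelope mirror OpenAI's usual list conventions and are assumptions here.

```python
def iter_output_items(eval_id: str, run_id: str):
    # GET /evals/{eval_id}/runs/{run_id}/output_items, one page at a time,
    # following an assumed after/has_more cursor scheme.
    after = None
    while True:
        params = {"after": after} if after else {}
        page = session.get(
            f"{BASE}/evals/{eval_id}/runs/{run_id}/output_items", params=params
        ).json()
        items = page.get("data", [])
        yield from items
        if not page.get("has_more") or not items:
            return
        after = items[-1]["id"]
```
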
**getEvalRuns** · `GET /evals/{eval_id}/runs`

Get a list of runs for an evaluation.

**listEvals** · `GET /evals`

List evaluations for a project.

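The two list operations are plain `GET`s on the collection paths; a sketch that reads only the first page, with the `data` envelope assumed as above.

```python
def list_evals() -> list:
    # GET /evals lists evaluations for the project (first page only).
    return session.get(f"{BASE}/evals").json().get("data", [])


def list_eval_runs(eval_id: str) -> list:
    # GET /evals/{eval_id}/runs lists runs for one evaluation (first page only).
    return session.get(f"{BASE}/evals/{eval_id}/runs").json().get("data", [])
```
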
**updateEval** · `POST /evals/{eval_id}`

Update certain properties of an evaluation.

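Note that updates go through `POST` rather than `PATCH`. Which properties are mutable is not spelled out here, so the `name` field in this sketch is an assumption.

```python
def update_eval(eval_id: str, **changes) -> dict:
    # POST /evals/{eval_id} updates certain properties of the evaluation.
    resp = session.post(f"{BASE}/evals/{eval_id}", json=changes)
    resp.raise_for_status()
    return resp.json()


# Usage: rename an evaluation. "eval_123" is a placeholder ID.
updated = update_eval("eval_123", name="renamed eval")
```
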