Anthropic’s API surface centers on a few clearly related resource types: Models, Messages (single), Message Batches (bulk/async), Files, and Skills (with Skill Versions). Understand those relationships and the few cross-cutting differences (stable vs beta endpoints, pagination tokens, and workspace-scoped operations) and you can handle the common user requests cleanly.
How the domain is organized
- Models are read-only descriptors you choose when generating text. Use model IDs (aliases) from `models.list`/`models.get` to pick the model for completions or messages.
- Messages vs Message Batches: `messages.post` sends a single synchronous message/turn and returns the response in one call. `message_batches` is a separate workflow for sending many messages/inputs as a single batch. Creating a batch returns a batch object; results for each item are retrieved from a dedicated `results` endpoint. Batches are cancelable and deletable as a unit.
- Files are storeable artifacts (uploads, metadata, downloads). Upload to create a `file_id`, then reference that `file_id` from other operations that accept file artifacts (skill versions, exports, etc.). File operations include `upload_file`, `get_file_metadata`, `download_file`, and `delete_file`.
- Skills are first-class objects you create and manage. Each Skill can have multiple Skill Versions. Versions are immutable snapshots identified by a version identifier (not a semantic name); the version identifier is a Unix epoch timestamp string. You create a Skill, create one or more Skill Versions, list versions, and delete specific versions or the whole Skill.
Typical entry points
Start with these calls to get the IDs you will need for other operations:
- To pick a model: call `models.list` (or `models.get` if you already have an alias) to obtain a `model_id` or confirm model capabilities (see the sketch after this list).
- To manage Skills: call `list_skills` to find `skill_id` values. After you have a `skill_id`, call `list_skill_versions` to see available version IDs (the version strings are Unix epoch timestamps). To create a new deployable, call `create_skill` then `create_skill_version` for that `skill_id`.
- To work with Files: call `upload_file` to produce a `file_id`. Use `list_files` or `get_file_metadata` to discover existing `file_id`s. Use `download_file` to fetch content and `delete_file` to remove it.
- To run many messages at once: create a batch with `message_batches_post` and use `message_batches_results` to obtain per-item outcomes. You can `cancel` a running batch or `delete` it after you're done.
- For single-turn usage: `messages.post` or `complete.post` are the straightforward entry points for user requests that ask for a generated reply.
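As a concrete starting point, here is a minimal sketch of the model-discovery step using the official `anthropic` Python SDK, where the `models.list`/`models.get` operations map to `client.models.list()` and `client.models.retrieve(...)`. It assumes `ANTHROPIC_API_KEY` is set in the environment, and the model ID shown is illustrative.

```python
# Minimal model-discovery sketch with the Python SDK (pip install anthropic).
# Assumes ANTHROPIC_API_KEY is set; the model ID below is illustrative.
import anthropic

client = anthropic.Anthropic()

# models.list: enumerate available models and note their IDs for later calls.
for model in client.models.list(limit=20):
    print(model.id, model.display_name)

# models.get: confirm a specific model or alias before using it downstream.
model = client.models.retrieve("claude-sonnet-4-20250514")
print(model.id, model.created_at)
```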
Common user tasks and the required steps
- Create and use a Skill (typical user flow; a hypothetical orchestration sketch follows this list)
  - Create the Skill with `create_skill` (returns `skill_id`).
  - If you need artifacts (code, prompts, data), upload them with `upload_file` and keep the returned `file_id`s.
  - Create a Skill Version with `create_skill_version` for the `skill_id`. Provide any `file_id` references the version requires. The response contains the version identifier (an epoch timestamp string).
  - Call `get_skill_version` to inspect a version or `list_skill_versions` to enumerate them.
  - Delete a specific version with `delete_skill_version` by passing the exact version identifier. Delete the whole Skill with `delete_skill`.
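The same flow, sketched as Python orchestration. The client methods and field names below (`create_skill`, `upload_file`, `create_skill_version`, `file_ids`, and so on) are hypothetical: they simply mirror the operation names used in this document, so substitute the concrete calls your SDK or tooling provides.

```python
# Hypothetical orchestration of the Skill flow above. The client methods and
# field names mirror this document's operation names, not a specific SDK;
# adapt them to the concrete API surface you are calling.
def deploy_skill(client, name: str, artifact_paths: list[str]) -> tuple[str, str]:
    """Create a Skill, upload its artifacts, and publish one immutable version."""
    skill = client.create_skill(name=name)                 # returns an object with the skill_id
    file_ids = [client.upload_file(path)["id"] for path in artifact_paths]
    version = client.create_skill_version(
        skill_id=skill["id"],
        file_ids=file_ids,                                 # assumed parameter name
    )
    # The version identifier is an epoch-timestamp string, e.g. "1714070400";
    # keep it verbatim, since get/delete calls need the exact string.
    return skill["id"], version["id"]

# Later inspection/cleanup uses the same exact identifiers:
#   client.get_skill_version(skill_id, version_id)
#   client.delete_skill_version(skill_id, version_id)
#   client.delete_skill(skill_id)
```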
- Run a single message or completion (sketched below)
  - Choose a `model_id` from `models.list`/`models.get`.
  - Call `messages.post` (or `complete.post` for completion-style requests) with the chosen `model_id` and message content.
  - To estimate cost or input size beforehand, use `messages.count_tokens` with the same message structure.
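A minimal sketch of that flow with the Python SDK, where `messages.count_tokens` and `messages.post` correspond to `client.messages.count_tokens(...)` and `client.messages.create(...)`; the model ID is illustrative.

```python
# Single-turn sketch: count tokens with the exact payload, then send it.
import anthropic

client = anthropic.Anthropic()
model_id = "claude-sonnet-4-20250514"   # illustrative; pick one from models.list
messages = [{"role": "user", "content": "Summarize these notes in three bullets."}]

# Preview input size with the same structure you will send.
count = client.messages.count_tokens(model=model_id, messages=messages)
print("input tokens:", count.input_tokens)

# Send the synchronous message and read the generated reply.
response = client.messages.create(model=model_id, max_tokens=1024, messages=messages)
print(response.content[0].text)
```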
- Run large/parallel workloads with Message Batches (sketched below)
  - Create the batch with `message_batches_post`, supplying an array of inputs.
  - The batch object indicates state; fetch the batch with `message_batches_retrieve` for metadata and status.
  - Use `message_batches_results` to obtain per-item results (this is the endpoint that returns the actual outputs for each message in the batch).
  - If needed, `message_batches_cancel` will attempt to stop processing; `message_batches_delete` removes batch metadata.
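A sketch of the batch lifecycle with the Python SDK, where the batch operations above map to `client.messages.batches.create/retrieve/results`; the prompts, polling interval, and model ID are illustrative.

```python
# Batch lifecycle sketch: create, poll status, then fetch per-item results.
import time
import anthropic

client = anthropic.Anthropic()

batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"prompt-{i}",                # used to match results to inputs
            "params": {
                "model": "claude-sonnet-4-20250514",   # illustrative model ID
                "max_tokens": 256,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        for i, prompt in enumerate(["Hello", "Explain cursor pagination briefly"])
    ]
)

# Creation does not return outputs; poll until processing has ended.
while client.messages.batches.retrieve(batch.id).processing_status != "ended":
    time.sleep(30)

# The results endpoint streams one outcome per item.
for item in client.messages.batches.results(batch.id):
    print(item.custom_id, item.result.type)            # e.g. "succeeded" or "errored"
```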
- Upload and use files (sketched below)
  - Call `upload_file` to create files (returns `file_id` and metadata).
  - Use `get_file_metadata` or `list_files` to locate `file_id`s for reuse.
  - `download_file` returns raw content (string) for the file.
  - `delete_file` removes the file and metadata.
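A sketch of the file lifecycle using the Python SDK's beta files resource. The method names below (`upload`, `retrieve_metadata`, `download`, `delete`) reflect recent SDK versions, but the Files API is beta, so treat them as assumptions and check the SDK documentation.

```python
# File lifecycle sketch using the SDK's beta files resource (the Files API is
# beta; method names here follow recent SDK versions and may change).
import anthropic

client = anthropic.Anthropic()

# upload_file: returns a file object whose id is the file_id used elsewhere.
uploaded = client.beta.files.upload(
    file=("notes.txt", b"Quarterly planning notes ...", "text/plain"),
)
file_id = uploaded.id

# get_file_metadata / list_files: rediscover existing file_ids.
meta = client.beta.files.retrieve_metadata(file_id)
print(meta.filename, meta.size_bytes)

# download_file: the payload is the raw file content, not a JSON object.
content = client.beta.files.download(file_id)

# delete_file: remove the file and its metadata once it is no longer referenced.
client.beta.files.delete(file_id)
```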
Pagination and page-token nuances
- Two pagination styles appear across endpoints: cursor-style (`before_id`/`after_id`) and token-style (`page`/`next_page`).
  - Use `before_id`/`after_id` for endpoints that return those cursors (files, message batches list, models list in some variants).
  - Use `page`/`next_page` for skill lists and skill-version lists where the response provides a page token.
- Default limits vary (commonly 20) and some endpoints accept a `limit` up to 1000. Read the list response to know which token to pass next. A sketch of both paging loops follows.
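To make the two styles concrete, here is a sketch over plain HTTP so the token handling is explicit. The cursor fields (`has_more`, `last_id`) follow the standard list shape, and the token-style fields (`page`, `next_page`) follow the description above; verify both against the actual list response for the resource you are paging, and treat the paths as placeholders.

```python
# Pagination sketch over plain HTTP; field names follow the descriptions above
# and should be verified against the actual list response for each resource.
import os
import requests

BASE = "https://api.anthropic.com/v1"
HEADERS = {
    "x-api-key": os.environ["ANTHROPIC_API_KEY"],
    "anthropic-version": "2023-06-01",
}

def list_cursor_style(path: str, limit: int = 20):
    """Cursor-style paging (before_id/after_id), e.g. files or batch lists."""
    after_id = None
    while True:
        params = {"limit": limit, **({"after_id": after_id} if after_id else {})}
        page = requests.get(f"{BASE}{path}", headers=HEADERS, params=params).json()
        yield from page["data"]
        if not page.get("has_more"):
            break
        after_id = page["last_id"]          # cursor for the next request

def list_token_style(path: str, limit: int = 20):
    """Token-style paging (page/next_page), e.g. skill and skill-version lists."""
    token = None
    while True:
        params = {"limit": limit, **({"page": token} if token else {})}
        page = requests.get(f"{BASE}{path}", headers=HEADERS, params=params).json()
        yield from page["data"]
        token = page.get("next_page")       # opaque token for the next request
        if not token:
            break
```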
Beta vs stable endpoints
- There are both beta-prefixed endpoints and stable counterparts. They mirror functionality but responses and available request fields can differ. When you need the latest beta capabilities, call the beta path; otherwise prefer the stable endpoint names.
- Many endpoints expose an `anthropic-version` or `anthropic-beta` header to select behavior. When a specific behavior or field is required by the caller (e.g., new beta fields in requests or responses), include the corresponding version/beta header (see the sketch below).
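With the Python SDK, one way to do this is to attach the header per request via `extra_headers` (or set `default_headers` on the client). The beta flag value below is a placeholder; use whichever flag the feature you need documents.

```python
# Sketch: opting into beta behavior on a single request via extra_headers.
# The beta flag string is a placeholder; use the flag your target feature documents.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",       # illustrative model ID
    max_tokens=512,
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={"anthropic-beta": "some-beta-feature-2025-01-01"},  # placeholder flag
)
print(response.content[0].text)
```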
Authentication and workspace-scoped operations (what to watch for)
- Some resources are workspace-scoped and require the workspace API key header. File operations, message-batch deletes/results, and certain model/info endpoints commonly show an `x-api-key` header in their interface. If a call returns an authorization error or the response indicates workspace scope, include the workspace `x-api-key` header in subsequent calls (see the sketch below).
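As a concrete shape, a workspace-scoped call over plain HTTP looks like the sketch below. The environment variable name is a placeholder for wherever you keep the workspace-scoped key, and the Files beta flag value is an assumption to verify against the current docs.

```python
# Workspace-scoped call sketch over plain HTTP: x-api-key carries the
# (workspace) API key and anthropic-version pins API behavior.
import os
import requests

headers = {
    "x-api-key": os.environ["ANTHROPIC_WORKSPACE_API_KEY"],  # placeholder env var for the workspace key
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "files-api-2025-04-14",                # assumed beta flag for the Files API
}
resp = requests.get("https://api.anthropic.com/v1/files", headers=headers)
resp.raise_for_status()
print([f["id"] for f in resp.json()["data"]])
```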
Non-obvious, important gotchas
- Skill Version IDs are not human-friendly names; they are Unix epoch timestamps serialized as strings. When you create a version you must use the exact timestamp string to reference or delete it.
- Creating a batch and retrieving its results are separate steps: the batch creation response does not contain per-item outputs. Always call the `results` endpoint for the outputs of individual messages in the batch.
- The `download_file` endpoints return raw content as a string (not an object). Expect to find the file payload directly in the response body.
- Token counting and the actual send call use the same messaging structure. Use `messages.count_tokens` to preview token usage with the exact message payload you will send to `messages.post` or `message_batches_post`.
- Pagination token names differ by resource. Don’t assume `page` works everywhere; inspect the list response to see whether it returned `next_page` (token-style) or `after_id`/`before_id` (cursor-style) and pass the same token name back.
- Beta vs stable responses can differ in shape and field names. If a client or caller expects a particular field, confirm which endpoint version (beta vs stable) provides it.
Quick decision checklist
- Need a model? Call `models.list`/`models.get` to obtain a `model_id`.
- Need to upload or reference artifacts? Call `upload_file` → use the returned `file_id` in later calls.
- Need to run a single prompt? Use `messages.post` or `complete.post` with an appropriate `model_id`.
- Need many parallel prompts? `message_batches_post` → poll `message_batches_results` → optionally `cancel` or `delete`.
- Need to create an automated extension? `create_skill` → `create_skill_version` (versions are epoch timestamps).
Use these patterns as the operational map: models supply compute targets; files supply artifacts; skills + skill versions are deployable logic/artifact bundles; messages and message batches execute work. Following the entry points above will get you the IDs required by downstream calls and avoid the common pagination/versioning pitfalls.