Authorizations
Bearer authentication header of the form `Bearer <token>`, where `<token>` is your auth token.
Headers
anthropic-version (string, required)
The version of the API you want to use.
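
For illustration, here is a minimal sketch of a raw request carrying both the Authorization and `anthropic-version` headers, using Python's `requests` package. The token is a placeholder, and the body fields are documented under Body below.

```python
# Minimal request sketch. YOUR_AUTH_TOKEN is a placeholder, and
# "2023-06-01" is one published API version string.
import requests

response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "Authorization": "Bearer YOUR_AUTH_TOKEN",  # see Authorizations
        "anthropic-version": "2023-06-01",          # see Headers
        "content-type": "application/json",
    },
    json={
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello, Claude"}],
    },
)
print(response.json())
```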
Body
model (string, required)
The model to use for the request.

Example: `"claude-sonnet-4-20250514"`
messages (array, required)
Input messages.

Example with a single `user` message:

[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:

[
  {"role": "user", "content": "Hello there."},
  {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in plain English?"}
]
Example with a partially-filled response from Claude:

[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]
Each input message `content` may be either a single string or an array of content blocks, where each block has a specific `type`. Using a string for `content` is shorthand for an array of one content block of type `"text"`. The following input messages are equivalent:

{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
See input examples.
Note that if you want to include a system prompt, you can use the top-level `system` parameter; there is no `"system"` role for input messages in the Messages API.
There is a limit of 100,000 messages in a single request.
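
As a sketch of how these message shapes are sent in practice, the following reproduces the partially-filled-response example above with the official `anthropic` Python SDK (assumed installed, with `ANTHROPIC_API_KEY` set in the environment); the model continues directly from the assistant prefill.

```python
# Sketch: assistant prefill with the `anthropic` SDK. The client reads
# ANTHROPIC_API_KEY from the environment.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=16,
    messages=[
        {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
        {"role": "assistant", "content": "The best answer is ("},
    ],
)
print(message.content[0].text)  # continues the prefill, e.g. 'B)'
```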
max_tokens (integer, required)
The maximum number of tokens to generate before stopping.
Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.
Different models have different maximum values for this parameter. See models for details.
Required range: `x >= 1`
Example: `1024`
metadata (object, optional)
An object describing metadata about the request.
service_tier (string, optional)
Determines whether to use priority capacity (if available) or standard capacity for this request.

Anthropic offers different levels of service for your API requests. See service-tiers for details.

Available options: `auto`, `standard_only`
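
A short sketch of where `service_tier` sits in a request body; everything but the `service_tier` field is illustrative.

```python
# Sketch: request body carrying service_tier (one of the options above).
body = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "service_tier": "auto",  # or "standard_only" to skip priority capacity
    "messages": [{"role": "user", "content": "Hello, Claude"}],
}
```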
stop_sequences (array of strings, optional)
Custom text sequences that will cause the model to stop generating.

Our models will normally stop when they have naturally completed their turn, which will result in a response `stop_reason` of `"end_turn"`.

If you want the model to stop generating when it encounters custom strings of text, you can use the `stop_sequences` parameter. If the model encounters one of the custom sequences, the response `stop_reason` value will be `"stop_sequence"` and the response `stop_sequence` value will contain the matched stop sequence.
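
As an illustrative sketch (again assuming the `anthropic` Python SDK), here is a request with custom stop sequences and the corresponding checks on the response fields described above.

```python
# Sketch: custom stop sequences, plus inspecting stop_reason /
# stop_sequence on the response. "END" is an arbitrary example sequence.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    stop_sequences=["END"],
    messages=[{"role": "user", "content": "Count to ten, then say END."}],
)

if message.stop_reason == "stop_sequence":
    print("matched:", message.stop_sequence)  # "END"
else:
    print("stopped because:", message.stop_reason)  # e.g. "end_turn"
```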
stream (boolean, optional)
Whether to incrementally stream the response using server-sent events.

See streaming for details.
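
A minimal streaming sketch, assuming the `anthropic` Python SDK, whose `messages.stream` helper consumes the server-sent events and exposes the incremental text.

```python
# Sketch: streaming a response. text_stream yields text deltas as they
# arrive over the SSE connection.
import anthropic

client = anthropic.Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```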
system (string or array, optional)
System prompt.

A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. See our guide to system prompts.

Example:

[
  {
    "text": "Today's date is 2024-06-01.",
    "type": "text"
  }
]

Example (string shorthand): `"Today's date is 2023-01-01."`
temperature (number, optional)
Amount of randomness injected into the response.

Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0` for analytical / multiple choice, and closer to `1.0` for creative and generative tasks.

Note that even with `temperature` of `0.0`, the results will not be fully deterministic.

Required range: `0 <= x <= 1`
Example: `1`
thinking (object, optional)
Configuration for enabling Claude's extended thinking.

When enabled, responses include `thinking` content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your `max_tokens` limit.

See extended thinking for details.
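
A hedged sketch of enabling extended thinking and separating `thinking` blocks from `text` blocks in the response (`anthropic` Python SDK assumed; the budget value is illustrative).

```python
# Sketch: extended thinking. budget_tokens must be >= 1024 and counts
# against max_tokens, so max_tokens is set comfortably higher here.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Is 3199 prime?"}],
)

for block in message.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "text":
        print("[answer]", block.text)
```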
tool_choice (object, optional)
How the model should use the provided tools. The model can use a specific tool, any available tool, decide by itself, or not use tools at all. By default (`auto`), the model decides for itself whether to use tools.
tools (array, optional)
Definitions of tools that the model may use.

If you include `tools` in your API request, the model may return `tool_use` content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using `tool_result` content blocks.

There are two types of tools: client tools and server tools. The behavior described below applies to client tools. For server tools, see their individual documentation as each has its own behavior (e.g., the web search tool).

Each tool definition includes:

- `name`: Name of the tool.
- `description`: Optional, but strongly-recommended description of the tool.
- `input_schema`: JSON schema for the tool `input` shape that the model will produce in `tool_use` output content blocks.

For example, if you defined `tools` as:
[
  {
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
      "type": "object",
      "properties": {
        "ticker": {
          "type": "string",
          "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
        }
      },
      "required": ["ticker"]
    }
  }
]
And then asked the model "What's the S&P 500 at today?", the model might produce `tool_use` content blocks in the response like this:
[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "get_stock_price",
    "input": { "ticker": "^GSPC" }
  }
]
You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an input, and return the following back to the model in a subsequent `user` message:
[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "259.75 USD"
  }
]
Tools can be used for workflows that include running client-side tools and functions, or more generally whenever you want the model to produce a particular JSON structure of output.
See our guide for more details.
Example tool definition:

{
  "description": "Get the current weather in a given location",
  "input_schema": {
    "properties": {
      "location": {
        "description": "The city and state, e.g. San Francisco, CA",
        "type": "string"
      },
      "unit": {
        "description": "Unit for the output - one of (celsius, fahrenheit)",
        "type": "string"
      }
    },
    "required": ["location"],
    "type": "object"
  },
  "name": "get_weather"
}
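
Putting the walkthrough together, here is a sketch of the full client-tool loop under the same assumptions (`anthropic` Python SDK; `lookup_price` is a hypothetical stand-in for a real market-data call).

```python
# Sketch: the client-tool round trip. The model requests a tool call, the
# client runs it locally, and the result goes back in a user message.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"

tools = [{
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
        "type": "object",
        "properties": {"ticker": {"type": "string"}},
        "required": ["ticker"],
    },
}]

def lookup_price(ticker: str) -> str:
    # Hypothetical stand-in for a real quote lookup.
    return "259.75 USD"

messages = [{"role": "user", "content": "What's the S&P 500 at today?"}]
response = client.messages.create(
    model=MODEL, max_tokens=1024, tools=tools, messages=messages,
)

if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    result = lookup_price(tool_use.input["ticker"])
    # Echo the assistant turn, then answer it with a tool_result block.
    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": result,
        }],
    })
    response = client.messages.create(
        model=MODEL, max_tokens=1024, tools=tools, messages=messages,
    )

print(response.content[0].text)
```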
top_k (integer, optional)
Only sample from the top K options for each subsequent token.

Used to remove "long tail" low probability responses.

Recommended for advanced use cases only. You usually only need to use `temperature`.

Required range: `x >= 0`
Example: `5`
top_p (number, optional)
Use nucleus sampling.

In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by `top_p`. You should either alter `temperature` or `top_p`, but not both.

Recommended for advanced use cases only. You usually only need to use `temperature`.

Required range: `0 <= x <= 1`
Example: `0.7`
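
A brief sketch of that guidance (`anthropic` Python SDK assumed): set either `temperature` or `top_p` on a request, never both.

```python
# Sketch: two separate requests, each using exactly one sampling control.
import anthropic

client = anthropic.Anthropic()

analytical = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=64,
    temperature=0.0,  # near-deterministic, though not fully deterministic
    messages=[{"role": "user", "content": "Is 91 prime?"}],
)

creative = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=64,
    top_p=0.7,  # nucleus sampling: drop the low-probability tail
    messages=[{"role": "user", "content": "Write a haiku about the sun."}],
)
```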
Response
Message object.
id (string)
Unique object identifier.

The format and length of IDs may change over time.

Example: `"msg_013Zva2CMHLNnXjNJJKqJ2EF"`
type (string)
Object type.

For Messages, this is always `"message"`.
role (string)
Conversational role of the generated message.

This will always be `"assistant"`.
content (array of content blocks)
Content generated by the model.

This is an array of content blocks, each of which has a `type` that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input `messages` ended with an `assistant` turn, then the response `content` will continue directly from that last turn. You can use this to constrain the model's output.

For example, if the input `messages` were:
[
  {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The best answer is ("}
]
Then the response `content` might be:

[{"type": "text", "text": "B)"}]
Example:

[
  {
    "text": "Hi! My name is Claude.",
    "type": "text"
  }
]
model (string)
The model that handled the request.

Example: `"claude-sonnet-4-20250514"`
stop_reason (string or null)
The reason that we stopped.

This may be one of the following values:

- `"end_turn"`: the model reached a natural stopping point
- `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
- `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
- `"tool_use"`: the model invoked one or more tools
- `"pause_turn"`: we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
- `"refusal"`: when streaming classifiers intervene to handle potential policy violations

In non-streaming mode this value is always non-null. In streaming mode, it is null in the `message_start` event and non-null otherwise.

Available options: `end_turn`, `max_tokens`, `stop_sequence`, `tool_use`, `pause_turn`, `refusal`
stop_sequence (string or null)
Which custom stop sequence was generated, if any.

This value will be a non-null string if one of your custom stop sequences was generated.
usage (object)
Billing and rate-limit usage.

Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in `usage` will not match one-to-one with the exact visible content of an API request or response.

For example, `output_tokens` will be non-zero, even for an empty string response from Claude.

Total input tokens in a request is the sum of `input_tokens`, `cache_creation_input_tokens`, and `cache_read_input_tokens`.
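
A short sketch of applying that sum to a response's `usage` block (`anthropic` Python SDK assumed; the cache fields can be absent when prompt caching is not in play, hence the `or 0` guards).

```python
# Sketch: totaling input tokens per the formula above.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=64,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)

u = message.usage
total_input = (
    u.input_tokens
    + (u.cache_creation_input_tokens or 0)
    + (u.cache_read_input_tokens or 0)
)
print("input:", total_input, "output:", u.output_tokens)
```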