Messages
Model
The model that will complete your prompt.
Messages
Input messages. (JSON Array)
Refer to "How to make Messages?" below this page.
Max Tokens
The maximum number of tokens to generate before stopping.
Different models have different maximum values for this parameter.
Metadata
An object describing metadata about the request. (Optional)
Refer to "What is Metadata?" below this page.
Stop Sequences
Custom text sequences that will cause the model to stop generating. (Optional)
Refer to "What is Stop Sequences?" below this page.
System
System prompt. (Optional)
Temperature
Amount of randomness injected into the response. (Optional)
Defaults to 1. Required range: 0 ~ 1
Tools
Definitions of tools that the model may use. (Optional)
Refer to "What is Tools?" below this page.
Tools Choice
How the model should use the provided tools. (Optional)
Refer to "What is Tools Choice?" below this page.
Top K
Only sample from the top K options for each subsequent token. (Optional)
Required range: x > 0
Top P
Use nucleus sampling. (Optional)
Recommended for advanced use cases only. You usually only need to use temperature.
Required range: 0 ~ 1
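For illustration, the sketch below combines these parameters into a single request body. The model name, endpoint URL, and header names are assumptions based on the standard Anthropic Messages API; substitute the values your deployment actually uses.

```python
import requests

# Sketch of a Messages request; the model name, endpoint, and headers are
# placeholders assuming the standard Anthropic Messages API.
payload = {
    "model": "claude-3-5-sonnet-20241022",        # Model (placeholder)
    "max_tokens": 1024,                            # Max Tokens
    "messages": [                                  # Messages (JSON array)
        {"role": "user", "content": "Hello!"},
    ],
    "system": "You are a concise assistant.",      # System (optional)
    "temperature": 0.7,                            # Temperature, 0 ~ 1 (optional)
    "top_k": 40,                                   # Top K, x > 0 (optional)
    # "top_p": 0.9,                                # Top P; usually temperature alone is enough
}

response = requests.post(
    "https://api.anthropic.com/v1/messages",       # assumed endpoint
    headers={
        "x-api-key": "YOUR_API_KEY",               # assumed auth header
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json=payload,
    timeout=60,
)
print(response.json())
```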
The messages parameter is structured as an array of message objects, and each message follows the format below.
content
Message Content
role
Message Role
user: Represents the message or question sent by the user to the AI—this is the actual input the AI should respond to.
assistant: Represents the AI model’s response to the user’s input.
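For example, a multi-turn conversation alternates user and assistant messages in order (a sketch; the content strings are made up):

```python
# A messages array for a multi-turn conversation: each object carries a
# role ("user" or "assistant") and its content.
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "And roughly how many people live there?"},
]
```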
An object describing metadata about the request.
user_id
An external identifier for the user who is associated with the request.
This should be a uuid, hash value, or other opaque identifier.
Anthropic may use this id to help detect abuse. Do not include any identifying information such as name, email address, or phone number.
Maximum length: 256
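For example, you might derive the identifier by hashing an internal account ID, so that no personal information leaves your system (a sketch; the account ID is hypothetical):

```python
import hashlib

# Hash an internal account ID into an opaque user_id (64 hex characters,
# well under the 256-character maximum). No name, email, or phone number
# is sent with the request.
internal_account_id = "account-12345"  # hypothetical internal identifier
metadata = {
    "user_id": hashlib.sha256(internal_account_id.encode()).hexdigest(),
}
```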
Custom text sequences that will cause the model to stop generating.
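For example, if you pass stop_sequences, generation halts as soon as one of them is produced, and the response records which one fired (illustrative values):

```python
# Request: stop as soon as the model emits the marker "###END###".
stop_sequences = ["###END###"]

# Response (illustrative): when a custom stop sequence triggers,
# stop_reason is "stop_sequence" and stop_sequence names the match.
partial_response = {
    "stop_reason": "stop_sequence",
    "stop_sequence": "###END###",
}
```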
Definitions of tools that the model may use.
If you include tools in your API request, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using tool_result content blocks.
name
Name of the tool.
description
Optional, but strongly-recommended description of the tool.
input_schema
JSON Schema for the tool input shape that the model will produce in tool_use output content blocks.
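For example, a tool definition with a name, a description, and a JSON Schema for its input might look like this (a sketch; the get_weather tool is hypothetical):

```python
# A hypothetical "get_weather" tool. input_schema is a JSON Schema object
# describing the input the model will produce in tool_use content blocks.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Seoul"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
]
```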
How the model should use the provided tools. The model can use a specific tool, any available tool, decide by itself, or not use tools at all.
auto
The model decides on its own whether to use a tool and, if so, which one, based on the context.
any
The model must use one of the provided tools; it chooses which one.
tool
Forces the model to use one specific, named tool.
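In the standard Anthropic Messages API these options are expressed as an object with a type field; a sketch of the three forms (confirm the exact shape against the API you are calling):

```python
# Let the model decide whether to use a tool and which one.
tool_choice_auto = {"type": "auto"}

# Require the model to use one of the provided tools (it picks which).
tool_choice_any = {"type": "any"}

# Force one specific tool by name (the name is hypothetical).
tool_choice_specific = {"type": "tool", "name": "get_weather"}
```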
content[]
Content generated by the model.
content[].text
The text of the generated message.
content[].type
The type of the content block. For text responses this is "text".
id
Unique object identifier.
model
The model that handled the request.
role
Conversational role of the generated message.
This will always be "assistant".
stop_reason
The reason that we stopped.
This may be one of the following values:
"end_turn": the model reached a natural stopping point
"max_tokens": we exceeded the requested max_tokens or the model's maximum
"stop_sequence": one of your provided custom stop_sequences was generated
"tool_use": the model invoked one or more tools
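A caller will typically branch on stop_reason; a minimal sketch:

```python
def handle_stop_reason(response: dict) -> None:
    # Branch on why generation stopped (values described above).
    reason = response["stop_reason"]
    if reason == "end_turn":
        pass  # the reply is complete
    elif reason == "max_tokens":
        pass  # truncated: raise max_tokens or continue in a follow-up turn
    elif reason == "stop_sequence":
        print("stopped on:", response["stop_sequence"])
    elif reason == "tool_use":
        pass  # run the requested tool(s) and return tool_result blocks
```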
stop_sequence
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
type
Object type.
For Messages, this is always "message".
usage
Billing and rate-limit usage.
usage.input_tokens
The number of input tokens used by the request.
usage.output_tokens
The number of output tokens generated in the response.
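Putting these fields together, a successful response has roughly the following shape (illustrative values; the id, model name, and token counts are made up):

```python
# Illustrative response object showing the fields described above.
example_response = {
    "id": "msg_0123456789abcdef",              # hypothetical identifier
    "type": "message",
    "role": "assistant",                        # always "assistant"
    "model": "claude-3-5-sonnet-20241022",      # the model that handled the request
    "content": [
        {"type": "text", "text": "Hello! How can I help you today?"},
    ],
    "stop_reason": "end_turn",
    "stop_sequence": None,                      # null unless a custom stop sequence fired
    "usage": {
        "input_tokens": 12,                     # tokens in the request
        "output_tokens": 9,                     # tokens generated in the response
    },
}
```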