Chat Completions
Model
The name of the model that will complete your prompt.
Messages
A list of messages comprising the conversation so far.
Refer to "How to make Messages?" section below this page.
Max Tokens
The maximum number of completion tokens returned by the API. (Optional)
Frequency penalty
Decreases the likelihood of repetition based on how frequently a token has already appeared. (Optional)
Default: 1
Presence penalty
Positive values increase the likelihood of discussing new topics. (Optional)
Allowed range: 0 ~ 2. Default: 0
Temperature
The amount of randomness in the response. (Optional)
Allowed range: 0 ~ 2. Default: 0.2
Top K
The number of tokens to keep for top-k filtering. Limits the model to consider only the k most likely next tokens at each step. (Optional)
Default: 0
Top P
The nucleus sampling threshold. (Optional)
Allowed range: 0 ~ 1. Default: 0.9
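The parameters above can be combined into a single request body. The sketch below builds such a body in Python; the model name and parameter values are illustrative placeholders, not values taken from this document.

```python
# Sketch of a Chat Completions request body using the parameters
# documented above. "example-model" is a placeholder, not a real model name.
payload = {
    "model": "example-model",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
    "max_tokens": 256,        # optional cap on completion tokens
    "frequency_penalty": 1,   # default: 1
    "presence_penalty": 0,    # default: 0, allowed range 0 ~ 2
    "temperature": 0.2,       # default: 0.2, allowed range 0 ~ 2
    "top_k": 0,               # default: 0
    "top_p": 0.9,             # default: 0.9, allowed range 0 ~ 1
}
```

Only `model` and `messages` are required; each sampling parameter falls back to its default when omitted.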
The messages field is an array of message objects, each with the following fields:
content
The contents of the message in this turn of conversation.
role
The role of the speaker in this conversation.
system: Provides instructions that define the overall behavior and response style of the AI model. It sets the context for the conversation and establishes the AI's role.
user: Represents the message or question sent by the user to the AI. This is the actual input the AI should respond to.
assistant: Represents the response generated by the AI model. This is the AI's reply to the user's input.
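The three roles above can be combined into a multi-turn conversation. The sketch below shows such a messages array; the content strings are made-up examples, not from this document.

```python
# Example messages array: a system instruction, then alternating
# user and assistant turns from the conversation so far.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is nucleus sampling?"},
    # An earlier assistant reply can be included to give the model context:
    {"role": "assistant", "content": "It keeps only the most likely tokens whose probabilities sum to top_p."},
    {"role": "user", "content": "And what does top_k do?"},
]
```

The final entry is the user turn the model should respond to; earlier turns supply conversational context.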
id
Response ID
model
Model
object
Object type. Fixed value: chat.completion
created
Creation time (Unix timestamp)
citations[]
List of citation links for the generated response
choices[]
List of response choices based on the prompt request
choices[].index
Position of the response choice
choices[].finish_reason
Reason for stopping text generation
stop: Generation completed
length: Exceeded max length
choices[].message
Message Object
choices[].message.role
Message role: system, user, or assistant
choices[].message.content
Message Content
choices[].delta
Returned object during streaming response
choices[].delta.role
Role of the streaming message: system, user, or assistant
choices[].delta.content
Streaming Content
usage
Token usage information
usage.prompt_tokens
Number of tokens in the request message
usage.completion_tokens
Number of tokens in the response message
usage.total_tokens
Total number of tokens
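A non-streaming response body shaped like the fields documented above can be handled as follows. The sketch below is a hypothetical example; the ID, model name, citation URL, and token counts are placeholders, not real API output.

```python
# Hypothetical response body matching the documented fields.
response = {
    "id": "resp-123",                      # placeholder response ID
    "model": "example-model",              # placeholder model name
    "object": "chat.completion",           # fixed value
    "created": 1700000000,                 # Unix timestamp
    "citations": ["https://example.com"],  # placeholder citation link
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",       # "stop" or "length"
            "message": {"role": "assistant", "content": "Hello!"},
        }
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
}

# The generated text lives in the first choice's message.
answer = response["choices"][0]["message"]["content"]
```

During streaming, each chunk instead carries a `choices[].delta` object, and the partial text accumulates from `delta.content` across chunks.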
Refer to the model list to find all the models offered.