Chat completions

User Interface

Field
Description
Comment

Model

Name of the model to use.

Messages

A list of messages that make up the chat conversation.

Refer to the "How to make Messages?" section below on this page.

Deferred

If set to true, the request returns a request_id that can be used to retrieve the result later.

Default : false
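As a sketch of the deferred flow (the snake_case field names follow the OpenAI-compatible convention and are an assumption; the response shape is illustrative):

```json
// Request: with deferred set to true, a request_id is returned instead of a completion
{
  "model": "grok-2-latest",
  "messages": [{ "role": "user", "content": "Hello" }],
  "deferred": true
}

// Example response (illustrative):
{
  "request_id": "123456"
}
```

The completed result can then be fetched later using the returned request_id.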

Frequency penalty

Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. (Optional)

Number between -2.0 and 2.0. Default : 0

Logit Bias

A JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. (Optional)

Refer to the "What is Logit Bias?" section below on this page.

Logprobs

Whether to return log probabilities of the output tokens or not. If true, returns the log probability of each output token in the content of the message. (Optional)

Default : false

Max Completion Tokens

An upper bound for the number of tokens that can be generated for a completion, including both visible output tokens and reasoning tokens. (Optional)

N

How many chat completion choices to generate for each input message. (Optional)

Default : 1 Min : 1

Presence penalty

Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. (Optional)

Allow Range : -2.0 ~ 2.0

Default : 0

Response Format

An object specifying the format that the model must output. (Optional)

Options: None, Text, JSON Object, JSON Schema

JSON Schema

Setting to receive the response as a desired JSON object. (Optional)

Required when the response format is set to JSON Schema. Refer to the "What is JSON Schema?" section below on this page.

Seed

If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. (Optional)

Integer or null

Stop

Up to 4 sequences where the API will stop generating further tokens. (Optional)

Use commas (,) to separate multiple sequences. Example: stop,finish
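In the raw request body, the comma-separated UI value corresponds to an array of strings (a sketch; the snake_case field name follows the OpenAI-compatible convention and is an assumption):

```json
{
  "stop": ["stop", "finish"]
}
```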

Temperature

What sampling temperature to use. (Optional)

Allow Range : 0 ~ 2 Default : 1

Tools

A list of tools the model may call in JSON-schema. (Optional)

Refer to the "What is Tools?" section below on this page.

Tool Choice

Controls which (if any) tool is called by the model.

Refer to the "What is Tool Choice?" section below on this page.

Top Logprobs

An integer between 0 and 8 specifying the number of most likely tokens to return at each token position, each with an associated log probability. (Optional)

Top P

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. (Optional)

Allow Range : 0 ~ 1 Default : 1

User

A unique identifier representing your end-user, which can help xAI to monitor and detect abuse. (Optional)
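Putting the fields above together, a minimal request body might look like this (a sketch; snake_case field names follow the OpenAI-compatible convention and are an assumption, values are illustrative):

```json
{
  "model": "grok-2-latest",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello" }
  ],
  "temperature": 1,
  "top_p": 1,
  "n": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
```

Optional fields can simply be omitted, in which case the defaults listed above apply.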

How to make Messages?

A list of messages that make up the chat conversation. Different models support different message types, such as image and text.

Key
Description
Comment

content

Prompt Content

name

A unique identifier representing your end-user, which can help xAI to monitor and detect abuse. (Optional)

role

Role of the message

  • user: The message or question sent by the user to the AI. This is the input the AI should respond to.

  • assistant: The response generated by the AI model. This is the AI's reply to the user's input.

  • system: Instructions provided by the developer that guide the model's behavior.

Example:

[
  { 
    "role": "user",              
    "content": "Hello"
  },
  {
    "role": "assistant",         
    "content": "Nice to meet you. How can I help you?"
  },
  {
    "role": "user", 
    "content": "Could you explain automation workflows in English?"
  }
]

What is Logit Bias?

A feature that lets you directly adjust the probability of specific words (tokens) appearing in the output.

Example:

// To increase the likelihood of the word "Cat" appearing more frequently
// Token ID for "Cat" is 9240
{
  "9240": 50
}
// Value range: -100 to 100
// Positive values make the token more likely to appear
// Negative values suppress it; -100 effectively blocks the token

What is JSON Schema?

A setting that constrains the model to respond in a specific JSON format.

Example:

{
  "type": "json_schema",
  "json_schema": {
    "type": "object",
    "properties": {  // Define the structure and data types of the returned data
      "temperature": { "type": "number" },
      "condition": { "type": "string" },
      "humidity": { "type": "integer" },
      "city": { "type": "string" }
    },
    "required": ["temperature", "condition", "city"] // Values that must be included in the response
  }
}


// Example response based on the specified JSON Schema:

{
  "temperature": 12.5,
  "condition": "Clear",
  "humidity": 60,
  "city": "Seoul"
}

What is Tools?

A feature that allows the model to call specific functions. This means the model can directly generate JSON to invoke APIs or interact with external systems.

Example:

// Example tool that allows the model to fetch the current weather of a specific city

[
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Fetches the current weather for a specific city.",
      "parameters": {
        "type": "object",
        "properties": {  // Defines the tool's properties and data types
          "city": { "type": "string", "description": "Name of the city to get weather for" },
          "unit": { "type": "string", "enum": ["metric", "imperial"], "description": "Temperature unit (metric = Celsius, imperial = Fahrenheit)" }
        },
        "required": ["city"]  // Specifies required parameter(s)
      }
    }
  }
]

// Example conversation that includes the result of a tool call
{
  "model": "grok-2-latest",
  "messages": [
    { "role": "system", "content": "You are an AI that provides weather information." },
    { "role": "user", "content": "Tell me the weather in Seoul." },
    {
      "role": "tool",  // Returns the result of the defined tool call
      "tool_call_id": "123456",
      "name": "get_weather",
      "arguments": {
        "city": "Seoul",
        "unit": "metric"
      }
    },
    {
      "role": "assistant",
      "content": "The current weather in Seoul is clear, with a temperature of 12°C."
    }
  ]
}

What is Tool Choice?

An option that determines how the model uses tools.

Type Options

Key
Description

auto

The model can pick between generating a message or calling one or more tools. (default)

none

The model will not call any tool and instead generates a message.
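For illustration, the option keys above map onto the tool_choice field like this (a sketch; the forced-function form follows the OpenAI-compatible convention and is an assumption):

```json
// Let the model decide whether to call a tool (default)
{ "tool_choice": "auto" }

// Never call a tool; always generate a message
{ "tool_choice": "none" }

// Force a call to a specific tool (get_weather from the Tools example above)
{ "tool_choice": { "type": "function", "function": { "name": "get_weather" } } }
```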


Response Data

Key
Description

id

Unique object identifier.

model

Model name that handled the request.

created

Creation timestamp.

request_id

Returned when the deferred option is set. If deferred is not set, this value is not returned.

content[]

Response message content.

content[].type

Content Type

content[].text

Text response from the model.

content[].signature

Signature of the content

content[].thinking

Thinking content

content[].id

Tool call ID.

content[].input

Input to the tool call, following the input_schema.

content[].name

Name of the tool call to be used.

stop_reason

Reason the generation stopped.

stop_sequence

Custom stop sequence used to stop the generation.

usage

Token usage information.

usage.cache_read_input_tokens

Number of tokens retrieved from the cache for this request.

usage.input_tokens

Number of input tokens used

usage.output_tokens

Number of output tokens used
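Assembled from the fields above, a response might look like this (a sketch with illustrative values; the exact id format and stop_reason value are assumptions):

```json
{
  "id": "msg_123456",
  "model": "grok-2-latest",
  "created": 1735689600,
  "content": [
    { "type": "text", "text": "The current weather in Seoul is clear." }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "cache_read_input_tokens": 0,
    "input_tokens": 25,
    "output_tokens": 12
  }
}
```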
