Text Generation (Extended)
Model
The name of the Model to use for generating the completion.
Contents
The content of the current conversation with the model.
Refer to "How to make Contents?" section below this page.
System Instruction
Instructions that steer the model's behavior, provided separately from the conversation contents. (Optional)
Stop Sequence
Specifies the set of character sequences (up to 5) that will stop output generation. (Optional)
Refer to "What is Stop Sequence?" section below this page.
Max Output Tokens
Sets the maximum number of tokens to include in a candidate. (Optional)
Temperature
Controls the randomness of the output. Use higher values for more creative responses, and lower values for more deterministic responses. (Optional)
Allowed range: 0.0 ~ 2.0
Top K
Changes how the model selects tokens for output. (Optional)
Allowed range: 1 or greater
Top P
Changes how the model selects tokens for output. (Optional)
Allowed range: 0 ~ 1
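Putting these parameters together, a full request might look like the example below. This is only an illustrative sketch: the model name and values are placeholders, and grouping the sampling settings under generationConfig assumes a Gemini-style generateContent request body, which may differ from how this node exposes the fields.

{
  "model": "gemini-1.5-flash", // Example model name (placeholder; the model may also be selected outside the body)
  "systemInstruction": { // Optional: steers the model's behavior
    "parts": [ { "text": "You are a concise assistant." } ]
  },
  "contents": [
    { "role": "user", "parts": [ { "text": "Hello there!" } ] }
  ],
  "generationConfig": { // Optional sampling settings
    "stopSequences": ["stop"], // Up to 5 sequences
    "maxOutputTokens": 256,
    "temperature": 0.7, // 0.0 ~ 2.0
    "topK": 40, // 1 or greater
    "topP": 0.95 // 0 ~ 1
  }
}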
How to make Contents?
The message content is structured as an array of messages, with the following format:
role
The role of the message author. Allowed values: user, model
parts[]
Array of message items.
parts[].text
Text content of the message item.
[
  {
    "role": "user", // User
    "parts": [
      {
        "text": "Hello there!" // User input
      },
      {
        "text": "I'm chris!" // Multiple text parts can be included,
      }                      // and they are merged into a single string upon execution.
    ]                        // In this example, the prompt becomes "Hello there! I'm chris!"
  },
  {
    "role": "model", // AI Model
    "parts": [
      {
        "text": "Hello chris, how can I help you?"
      }
    ]
  }
]
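The optional System Instruction reuses the same parts structure but sits outside the conversation, so it is not treated as a user or model turn. A minimal sketch with an assumed example instruction:

{
  "parts": [
    {
      "text": "You are a friendly assistant. Keep answers short." // Example instruction (placeholder)
    }
  ]
}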
What is Stop Sequence?
Stops message generation immediately when one of the specified keywords is generated.
["result", "response", "stop"] // Input as an array of strings
// A maximum of 5 keywords can be set
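As an illustration (values assumed), a stop sequence cuts the generated text at the first matching keyword; the matched keyword itself is typically not included in the returned text.

// Stop sequences: ["result", "response", "stop"]
// Full model output without stop sequences: "Here is the response: the result is 42."
// Returned text with stop sequences applied:  "Here is the "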
Returns the text response generated from the prompt request.
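As a rough sketch of what the raw response can look like, assuming a Gemini-style generateContent response (the node may flatten this into a single text value):

{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          { "text": "Hello chris, how can I help you?" } // Generated text
        ]
      },
      "finishReason": "STOP" // Why generation ended (e.g., natural stop, max tokens, stop sequence)
    }
  ]
}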