Agent Inference
Validated on 9 Oct 2024 • Last edited on 26 Mar 2026
DigitalOcean Gradient™ AI Agentic Cloud lets you create multi-agent workflows to power your AI applications, so developers can integrate agents directly into those applications.
Note: The Agent Inference API uses a customer-specific base URL (the agent endpoint)
and is independent of the main DigitalOcean control-plane API (https://api.digitalocean.com).
https://{your-agent-url}
POST Create a model response for the given chat conversation
/api/v1/chat/completions
Authorizations:
bearer_auth
OAuth Authentication
To interact with the DigitalOcean API, you or your application must authenticate.
The DigitalOcean API handles this through OAuth, an open standard for authorization. OAuth allows you to delegate access to your account. Scopes can be used to grant full access, read-only access, or access to a specific set of endpoints.
You can generate an OAuth token by visiting the Apps & API section of the DigitalOcean control panel for your account.
An OAuth token functions as a complete authentication request. In effect, it acts as a substitute for a username and password pair.
Because of this, it is essential that you keep your OAuth tokens secure. Upon generation, the web interface displays each token only a single time to help prevent it from being compromised.
DigitalOcean access tokens begin with an identifiable prefix in order to distinguish them from other similar tokens.
- dop_v1_ for personal access tokens generated in the control panel
- doo_v1_ for tokens generated by applications using the OAuth flow
- dor_v1_ for OAuth refresh tokens
Scopes
Scopes act like permissions assigned to an API token. These permissions determine what actions the token can perform. You can create API tokens that grant read-only access, full access, or limited access to specific endpoints by using custom scopes.
Generally, scopes are designed to match HTTP verbs and common CRUD operations (Create, Read, Update, Delete).
| HTTP Verb | CRUD Operation | Scope |
|---|---|---|
| GET | Read | <resource>:read |
| POST | Create | <resource>:create |
| PUT/PATCH | Update | <resource>:update |
| DELETE | Delete | <resource>:delete |
For example, creating a new Droplet by making a POST request to the
/v2/droplets endpoint requires the droplet:create scope while
listing Droplets by making a GET request to the /v2/droplets
endpoint requires the droplet:read scope.
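To illustrate, a token created with only the droplet:read custom scope can list Droplets but not create them. A minimal curl sketch (the Droplet attributes in the POST body are illustrative values, not requirements):

# Succeeds with droplet:read — lists existing Droplets.
curl -X GET -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" "https://api.digitalocean.com/v2/droplets"

# Rejected without droplet:create — creating a Droplet requires that scope.
curl -X POST \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "example-droplet", "region": "nyc3", "size": "s-1vcpu-1gb", "image": "ubuntu-24-04-x64"}' \
"https://api.digitalocean.com/v2/droplets"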
Each endpoint below specifies which scope is required to access it when using custom scopes.
How to Authenticate with OAuth
To make an authenticated request, include a bearer-type Authorization header containing your OAuth token. All requests must be made over HTTPS.
Authenticate with a Bearer Authorization Header
curl -X $HTTP_METHOD -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" "https://api.digitalocean.com/v2/$OBJECT"
Creates a model response for the given chat conversation via a customer-provisioned agent endpoint.
Query Parameters
agent
required
Must be set to true for agent-based completion behavior. Default: true
Request Body: application/json
frequency_penalty
optional Nullable
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
logit_bias
optional Nullable
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
Child properties:
(additional properties)
optional
Additional properties are allowed.
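As a sketch, the request below biases a single token out of the completion entirely. The token ID 15339 is hypothetical; real IDs depend on the model's tokenizer:

# The token ID below is a placeholder; look up real IDs in the model's tokenizer.
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"messages": [{"role": "user", "content": "Say hello."}], "model": "ignored", "logit_bias": {"15339": -100}}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true"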
logprobs
optional Nullable
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
max_completion_tokens
optional Nullable
The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run.
max_tokens
optional Nullable
The maximum number of tokens that can be generated in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.
messages
required
A list of messages comprising the conversation so far.
Child properties:
content
optional Nullable
The contents of the message. Example: Hello, how are you?
reasoning_content
optional Nullable
The reasoning content generated by the model (assistant messages only).
refusal
optional Nullable
The refusal message generated by the model (assistant messages only).
role
required
The role of the message author. Example: user
tool_call_id
optional
Tool call that this message is responding to (tool messages only). Example: call_abc123
tool_calls
optional
The tool calls generated by the model (assistant messages only).
Child properties:
function
required
Child properties:
arguments
required
The arguments to call the function with, as generated by the model in JSON format. Example: {"location": "Boston"}
name
required
The name of the function to call. Example: get_weather
id
required
The ID of the tool call. Example: call_abc123
type
required
The type of the tool. Currently, only function is supported. Example: function
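When a completion returns tool_calls, the conversation continues by appending the assistant message and a tool message whose tool_call_id matches. A sketch using the get_weather example values above (the tool output string is illustrative):

curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"model": "ignored", "messages": [
{"role": "user", "content": "What is the weather in Boston?"},
{"role": "assistant", "content": null, "tool_calls": [{"id": "call_abc123", "type": "function", "function": {"name": "get_weather", "arguments": "{\"location\": \"Boston\"}"}}]},
{"role": "tool", "tool_call_id": "call_abc123", "content": "72F and sunny"}
]}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true"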
metadata
optional Nullable
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Child properties:
(additional properties)
optional
Additional properties are allowed.
model
required
Model ID used to generate the response. Example: llama3-8b-instruct
n
optional Nullable
How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. Example: 1
presence_penalty
optional Nullable
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
reasoning_effort
optional Nullable
Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
seed
optional Nullable
If specified, the system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.
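A reproducibility sketch: pinning seed along with a fixed temperature makes repeated requests return the same result on a best-effort basis:

curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"messages": [{"role": "user", "content": "Name three oceans."}], "model": "ignored", "seed": 42, "temperature": 0}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true"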
stop
optional
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
stream
optional Nullable
If set to true, the model response data will be streamed to the client as it is generated using server-sent events.
stream_options
optional Nullable
Options for streaming response. Only set this when you set stream to true.
Child properties:
include_usage
optional
If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array.
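A streaming sketch: curl -N turns off output buffering so the server-sent events print as they arrive, and include_usage adds the final usage chunk before the data: [DONE] sentinel:

curl -N -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"messages": [{"role": "user", "content": "Write a haiku about the sea."}], "model": "ignored", "stream": true, "stream_options": {"include_usage": true}}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true"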
temperature
optional Nullable
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. Example: 1
tool_choice
optional
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present. auto is the default if tools are present.
Option 1: function
function
required
Child properties:
name
required
The name of the function to call. Example: get_weather
type
required
The type of the tool. Example: function
tools
optional
A list of tools the model may call. Currently, only functions are supported as a tool.
Child properties:
function
required
Child properties:
description
optional
A description of what the function does, used by the model to choose when and how to call the function. Example: Get the current weather for a location.
name
required
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. Example: get_weather
parameters
optional
The parameters the function accepts, described as a JSON Schema object.
Child properties:
(additional properties)
optional
Additional properties are allowed.
type
required
The type of the tool. Currently, only function is supported. Example: function
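Putting tools and tool_choice together: the sketch below declares the get_weather function from the examples above and forces the model to call it (the parameters schema is illustrative):

curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"model": "ignored", "messages": [{"role": "user", "content": "What is the weather in Boston?"}],
"tools": [{"type": "function", "function": {"name": "get_weather", "description": "Get the current weather for a location.", "parameters": {"type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"]}}}],
"tool_choice": {"type": "function", "function": {"name": "get_weather"}}}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true"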
top_logprobs
optional Nullable
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
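A sketch requesting token-level log probabilities; note that logprobs must be true whenever top_logprobs is set:

curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"messages": [{"role": "user", "content": "Hello"}], "model": "ignored", "logprobs": true, "top_logprobs": 3}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true"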
top_p
optional Nullable
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Example: 1
user
optional
A unique identifier representing your end-user, which can help DigitalOcean monitor and detect abuse. Example: user-1234
Request: /api/v1/chat/completions
{
"frequency_penalty": 0,
"logprobs": false,
"max_completion_tokens": 0,
"max_tokens": 0,
"messages": [
{
"content": "Hello, how are you?",
"reasoning_content": "string",
"refusal": "string",
"role": "user",
"tool_call_id": "call_abc123",
"tool_calls": []
}
],
"model": "llama3-8b-instruct",
"n": 1,
"presence_penalty": 0,
"reasoning_effort": "none",
"seed": 0,
"stop": "string",
"stream": false,
"stream_options": {
"include_usage": true
},
"temperature": 1,
"tool_choice": "none",
"tools": [
{
"type": "function"
}
],
"top_logprobs": 0,
"top_p": 1,
"user": "user-1234"
}

curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"messages": [{"role": "user", "content": "What is the capital of Portugal?"}], "model": "ignored"}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true"

Responses
200
Successful chat completion. When stream is true, response is sent as Server-Sent Events (text/event-stream); otherwise a single JSON object (application/json) is returned.
ratelimit-limit
The default limit on the number of requests that can be made per hour and per minute. Current rate limits are 5000 requests per hour and 250 requests per minute.
ratelimit-remaining
The number of requests in your hourly quota that remain before you hit your request limit. See https://docs.digitalocean.com/reference/api/reference/#rate-limit for information about how requests expire.
ratelimit-reset
The time when the oldest request will expire. The value is given in Unix epoch time. See https://docs.digitalocean.com/reference/api/reference/#rate-limit for information about how requests expire.
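To watch these headers, dump them with curl -D while discarding the body; a sketch:

# -D - prints response headers to stdout; -o /dev/null discards the body.
curl -s -D - -o /dev/null -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"messages": [{"role": "user", "content": "ping"}], "model": "ignored"}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true" | grep -i '^ratelimit'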
application/json
choices
required
A list of chat completion choices. Can be more than one if n is greater than 1.
Child properties:
finish_reason
required
The reason the model stopped generating tokens: stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, or tool_calls if the model called a tool. Example: stop
index
required
The index of the choice in the list of choices. Example: 0
logprobs
required Nullable
Log probability information for the choice.
Child properties:
content
required Nullable
A list of message content tokens with log probability information.
Child properties:
bytes
required Nullable
A list of integers representing the UTF-8 bytes representation of the token. Can be null if there is no bytes representation for the token.
logprob
required
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely. Example: -1.2345
token
required
The token. Example: Hello
top_logprobs
required
List of the most likely tokens and their log probability, at this token position.
refusal
required Nullable
A list of message refusal tokens with log probability information.
Child properties:
bytes
required Nullable
A list of integers representing the UTF-8 bytes representation of the token. Can be null if there is no bytes representation for the token.
logprob
required
The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely. Example: -1.2345
token
required
The token. Example: Hello
top_logprobs
required
List of the most likely tokens and their log probability, at this token position.
message
required
A chat completion message generated by the model.
Child properties:
content
required Nullable
The contents of the message. Example: Hello! How can I help you today?
reasoning_content
required Nullable
The reasoning content generated by the model.
refusal
required Nullable
The refusal message generated by the model.
role
required
The role of the author of this message. Example: assistant
tool_calls
optional
The tool calls generated by the model, such as function calls.
Child properties:
function
required
id
required
The ID of the tool call. Example: call_abc123
type
required
The type of the tool. Example: function
created
required
The Unix timestamp (in seconds) of when the chat completion was created. Example: 1677649420
id
required
A unique identifier for the chat completion. Example: chatcmpl-abc123
model
required
The model used for the chat completion. Example: llama3-8b-instruct
object
required
The object type, which is always chat.completion.
usage
optional
Usage statistics for the completion request.
Child properties:
cache_created_input_tokens
required
Number of prompt tokens written to cache. Example: 0
cache_creation
required
Breakdown of prompt tokens written to cache.
Child properties:
ephemeral_1h_input_tokens
required
Number of prompt tokens written to 1h cache. Example: 0
ephemeral_5m_input_tokens
required
Number of prompt tokens written to 5m cache. Example: 0
cache_read_input_tokens
required
Number of prompt tokens read from cache. Example: 0
completion_tokens
required
Number of tokens in the generated completion. Example: 20
prompt_tokens
required
Number of tokens in the prompt. Example: 10
total_tokens
required
Total number of tokens used in the request (prompt + completion). Example: 30
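For cost tracking, the token counts can be pulled straight out of the usage object; a sketch assuming jq is installed:

curl -s -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AGENT_ACCESS_KEY" \
-d '{"messages": [{"role": "user", "content": "Hello"}], "model": "ignored"}' \
"https://$AGENT_URL/api/v1/chat/completions?agent=true" | jq '.usage'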
401
Authentication failed due to invalid credentials.
ratelimit-limit, ratelimit-remaining, ratelimit-reset
Rate-limit headers, as described for the 200 response.
application/json
id
required
A short identifier corresponding to the HTTP status code returned. For example, the ID for a response returning a 404 status code would be "not_found."
message
required
A message providing additional information about the error, including details to help resolve it when possible. Example: The resource you were accessing could not be found.
request_id
optional
Optionally, some endpoints may include a request ID that should be provided when reporting bugs or opening support tickets to help identify the issue. Example: 4d9d8375-3c56-4925-a3e7-eb137fed17e9
429
The API rate limit has been exceeded.
ratelimit-limit, ratelimit-remaining, ratelimit-reset
Rate-limit headers, as described for the 200 response.
application/json
Error object with required id and message fields and an optional request_id, as described for the 401 response.
500
There was a server error.
ratelimit-limit, ratelimit-remaining, ratelimit-reset
Rate-limit headers, as described for the 200 response.
application/json
Error object with required id and message fields and an optional request_id, as described for the 401 response.
default
There was an unexpected error.
ratelimit-limit, ratelimit-remaining, ratelimit-reset
Rate-limit headers, as described for the 200 response.
application/json
Error object with required id and message fields and an optional request_id, as described for the 401 response.
Response: 200
{
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "Hello! How can I help you today?",
"role": "assistant"
}
}
],
"created": 1677649420,
"id": "chatcmpl-abc123",
"model": "llama3-8b-instruct",
"object": "chat.completion",
"usage": {
"cache_created_input_tokens": 0,
"cache_creation": {
"ephemeral_1h_input_tokens": 0,
"ephemeral_5m_input_tokens": 0
},
"cache_read_input_tokens": 0,
"completion_tokens": 20,
"prompt_tokens": 10,
"total_tokens": 30
}
}

Response: 401
{
"id": "unauthorized",
"message": "Unable to authenticate you."
}

Response: 429
{
"id": "too_many_requests",
"message": "API rate limit exceeded."
}

Response: 500
{
"id": "server_error",
"message": "Unexpected server-side error"
}

Response: default
{
"id": "example_error",
"message": "some error message"
}