How to Build, Test, and Deploy Agents on DigitalOcean Gradient™ AI Platform Using Agent Development Kit
Validated on 9 Dec 2025 • Last edited on 17 Dec 2025
DigitalOcean Gradient™ AI Platform lets you build fully-managed AI agents with knowledge bases for retrieval-augmented generation, multi-agent routing, guardrails, and more, or use serverless inference to make direct requests to popular foundation models.
You can build, test, and deploy agent workflows from within your development framework using the Agent Development Kit (ADK). You can also add knowledge bases to your agent using the knowledge bases endpoint to give the agent access to custom data, view logs and traces, and run agent evaluations.
If you want to use the DigitalOcean Control Panel, CLI, or API instead, see How to Create Agents on DigitalOcean Gradient™ AI Platform.
Prerequisites
You must have the following to use the Agent Development Kit:
- Python version 3.13.
- Dependencies listed in requirements.txt at the root of the folder or repo to deploy.
- .env file with environment variables to use in agent deployment (see the example after this list).
- Model access key for authentication. Set the key in the GRADIENT_MODEL_ACCESS_KEY environment variable and add it to your .env file. For running your agent locally, you must also export the key in your terminal for it to be accessible to the application.
- Your account’s personal access token. The token must have all CRUD scopes for genai and read scope for project. Set the token in the DIGITALOCEAN_API_TOKEN environment variable and add it to your .env file to enable deploying the agent to your DigitalOcean account.
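For example, a minimal .env file with placeholder values looks similar to the following:

GRADIENT_MODEL_ACCESS_KEY=<your-model-access-key>
DIGITALOCEAN_API_TOKEN=<your-personal-access-token>

To run the agent locally, also export the model access key in your terminal session:

export GRADIENT_MODEL_ACCESS_KEY=<your-model-access-key>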
About Entrypoint
Your agent code must have an entrypoint function that starts with the @entrypoint decorator. The entrypoint tells the Agent Development Kit runtime how to host your agent code and is called when you invoke your agent.
The entrypoint function requires two parameters:
- payload is the first parameter and holds the request payload.
- context is the second parameter and holds the context that may be sent with the request, such as trace_ids.
The function can look similar to the following:
@entrypoint
def entry(payload, context):
    query = payload["prompt"]
    inputs = {"messages": [HumanMessage(content=query)]}
    result = workflow.invoke(inputs)
    return result

The content of the payload is determined by the agent. In this example, the agent requires the payload in the JSON body of the POST request to contain a prompt field.
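For example, a request body for this entrypoint looks like the following:

{"prompt": "How are you?"}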
Install Agent Development Kit
To start building an agent using the Agent Development Kit, you must first install the gradient-adk package using pip:
pip install gradient-adk
Installing the gradient-adk package automatically gives you access to the gradient CLI.
To view the version of the installed package, run:
gradient --version
Set Up a Project
You can either use a project for an existing agent or initialize a new project to build, test, and deploy your agent.
If you have an existing agent, you can bring it onto the Gradient AI Platform using the Agent Development Kit.
First, navigate to the agent folder and review requirements.txt to verify that the Agent Development Kit is included. The requirements.txt file must list gradient-adk and gradient as dependencies.
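For reference, a requirements.txt for an existing LangGraph agent might look similar to the following; gradient-adk and gradient are required, while the remaining packages depend on your agent's framework:

gradient-adk
gradient
langgraph
langchain-core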
Then, import the entrypoint decorator from the Agent Development Kit by adding from gradient_adk import entrypoint to your agent code. This import gives you the @entrypoint decorator, which you use to mark the entrypoint function in your agent code. For example, in an existing LangGraph agent, you can add the following import statement at the top of your main.py file:
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage, BaseMessage
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode, tools_condition
from gradient_adk import entrypoint

Finally, write your entrypoint function in the agent code. For more information about the entrypoint decorator, see entrypoint decorator.
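The following is a minimal sketch of how a LangGraph workflow and the entrypoint can fit together; the graph shape, node logic, and model calls are placeholders that depend on your agent:

from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from gradient_adk import entrypoint


class State(TypedDict):
    # Conversation history; add_messages appends new messages to the list.
    messages: Annotated[list, add_messages]


def respond(state: State):
    # Placeholder node: a real agent would call a model or tools here.
    return {"messages": [AIMessage(content="Hello from the example agent.")]}


graph = StateGraph(State)
graph.add_node("respond", respond)
graph.set_entry_point("respond")
graph.add_edge("respond", END)
workflow = graph.compile()


@entrypoint
def entry(payload, context):
    # The runtime passes the request body as payload; this agent expects a prompt field.
    inputs = {"messages": [HumanMessage(content=payload["prompt"])]}
    return workflow.invoke(inputs)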
Next, run the following command to create a Gradient configuration file:
gradient agent configure
The Gradient configuration file is required to run or deploy your agent. When prompted, enter the agent name, agent deployment name (such as production, staging, or beta), and the file your entrypoint lives in. For example, example-agent, staging, and main.py (if your agent code is in main.py), respectively. You see a Configuration complete message once the configuration completes. Next, run the agent locally.
You can initialize a new project for your agent. Navigate to the desired folder for your agent and run the following command:
gradient agent init
To provide an easy way for you to get started, the command creates the project folders and files (requirements.txt), sets up a base template for a simple LangGraph example agent that calls an openai-gpt-oss-120b model using serverless inference (main.py), and creates the Gradient configuration file that is required to run or deploy your agent.
When prompted, specify an agent workspace name and an agent deployment name. For example, example-agent and staging, respectively.
After the project initialization is complete, your project directory contains main.py, requirements.txt, a .env file, and the Gradient configuration file.
Next, update main.py to implement your agent and update the .env file with your GRADIENT_MODEL_ACCESS_KEY and DIGITALOCEAN_API_TOKEN. Then, run the agent locally.
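If you want your agent to call serverless inference directly, as the generated template does, the request is an OpenAI-compatible chat completion authenticated with your model access key. The following is a sketch only; the endpoint URL below is an assumption to verify against the serverless inference documentation, and the generated template may use a different client library:

import os

from openai import OpenAI

# Assumed serverless inference endpoint; verify against the current documentation.
client = OpenAI(
    base_url="https://inference.do-ai.run/v1",
    api_key=os.environ["GRADIENT_MODEL_ACCESS_KEY"],
)

response = client.chat.completions.create(
    model="openai-gpt-oss-120b",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)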
Run and Test Agents Locally
To run an agent, use the following command:
gradient agent run
This starts up a local server on localhost:8080 and exposes a /run endpoint that you can use to interact with your agent.
You see the following output:
Entrypoint: main.py
Server: http://0.0.0.0:8080
Agent: example_agent
Entrypoint endpoint: http://0.0.0.0:8080/run
To invoke the agent, send a POST request to the /run endpoint using curl. For example:
curl -X POST http://localhost:8080/run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "How are you?"}'

Your agent processes the request and returns a response, such as Hello! I am doing good, thank you for asking. How can I assist you today?.
To view more verbose debugging logs, use:
gradient agent run --verbose
Once you verify that your agent is working correctly, you can deploy it.
Deploy and Test Your Agent
Use the following command to deploy your agent:
gradient agent deploy
This starts the build and deployment, which takes between 1 minute and 5 minutes. If your agent fails to build or deploy, see Troubleshoot Build or Deployment Failures.
After the deployment completes, your terminal displays the deployment endpoint where the agent is running. The endpoint includes the workspace identifier (b1689852-xxxx-xxxx-xxxx-xxxxxxxxxxxx) and the deployment name (staging). For example:
✅ Deployment completed successfully! [01:20]
Agent deployed successfully! (example-agent/staging)
To invoke your deployed agent, send a POST request to https://agents.do-ai.run/b1689852-xxxx-xxxx-8c68-dce069403e97/v1/staging/run with your properly formatted payload.

To invoke your deployed agent and verify that it is running correctly, send a POST request to the deployment endpoint, passing the prompt in the request JSON body. For example:
curl -X POST \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
"https://agents.do-ai.run/v1/b1689852-xxxx-xxxx-8c68-dce069403e97/staging/run" \
-d '{"prompt": "hello"}'The agent processes your request and returns a response, such as "Hello! How can I assist you today?.
Deploying the agent also creates a new workspace in the DigitalOcean Control Panel. The workspace uses the workspace name you specified previously and is labeled Managed by ADK. From the workspace, you can view and manage agent deployments and run evaluations.
Troubleshoot Build or Deployment Failures
Builds or deployments can fail if you have any of the following issues:
- Python version other than 3.13.
- Missing requirements.txt file.
- The agent does not expose the /run endpoint on port 8080. This likely means you have not defined an entrypoint correctly, as the agent must pass a health check to finish deploying.
- Incorrect scope permissions for the DigitalOcean access token.
- Missing environment variables required by your agent in the .env file.
Check the Python version, the requirements.txt file, the entrypoint function definition, and all required environment variables in the .env file. Then, try building or deploying the agent again.
View Traces and Logs
If you have previously deployed your agent, your agent automatically captures traces locally. LangGraph agents capture the intermediate inputs and outputs of the nodes, while other agent frameworks capture the input and output of the agent itself. You can view these traces using:
gradient agent traces

You can view the agent’s logs using:
gradient agent logs

You can also view the logs and traces in the control panel.
View Agent Deployments in the DigitalOcean Control Panel
Agent deployments are organized in workspaces labeled Managed by ADK. These workspaces group agent deployments by development environments, such as production, staging, or test. However, you cannot move agent deployments from one workspace to another. To use the agent in another workspace, you must redeploy it to that workspace with the environment defined.
To view agent deployments, in the left menu of the control panel, click Agent Platform. In the Workspaces tab, click + to expand the workspace that has your agent deployment. Then, select an agent deployment to open its Overview page.
You can perform the following actions for the agent deployment:
- View agent insights and logs for the deployment in the Observability tab. See View Agent Insights and Logs for more information.
- View the current and past agent deployments in the Releases tab. The release information includes the deployment timestamps and statuses.
- Create test cases, run evaluations, and view preview evaluation runs. See Run Evaluations on Agent Deployments for more information.
- Destroy the agent deployment. See Destroy an Agent Deployment for more information.
Run Evaluations on Agent Deployments
You can create test cases and run evaluations on agent deployments that have deployed successfully at least once. The evaluation test cases belong to the ADK workspace and you can use them for any agent deployments within the workspace.
ADK agent evaluations use judge input and output tokens. These tokens are used by the third-party LLM-as-judge to score the agent behavior against the metrics defined in the test case. These costs are waived during public preview.
First, create an evaluation dataset. The evaluation datasets for agent deployments are similar to the evaluation datasets you use for agents built using the DigitalOcean Control Panel, CLI, or API, except for the following differences (see the example dataset after this list):
- You must provide the full JSON payload in the query column. The string values must be properly escaped.
- You can use multi-field queries.
- For the expected_response column in a ground truth dataset, you can provide either a properly escaped JSON payload or a string.
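For example, a ground truth dataset CSV with the full JSON payload in the query column and plain strings in the expected_response column might look like the following; the prompt field matches the example agent used throughout this guide, and quotes inside the JSON are escaped by doubling them per standard CSV rules:

query,expected_response
"{""prompt"": ""How are you?""}","I am doing well, thank you for asking."
"{""prompt"": ""Summarize what you can do.""}","A short summary of the agent's capabilities."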
Then, run an evaluation:
gradient agent evaluate

When prompted, enter the following information:
- Path to the dataset CSV file
- Evaluation run name
- Metric categories
- Star metric and threshold
Once an evaluation run finishes, you can view the top-level results in the terminal. Click the provided link to open the agent’s Evaluations tab in the control panel and view the detailed results.
Alternatively, you can create an evaluation dataset and run an evaluation in the control panel.
To review how the agent responded to each prompt, click an evaluation run in the control panel and then scroll down the page to the Scores tab to view all scores for the entire trace.
Agent deployments also have a trace view where you can see the individual spans (decisions/stopping points) during the agent’s journey from input to output. Locate the prompt you want to review details for, select the Queries tab, and then click Query details for that prompt. Click on each span to see the scores specific to that span. Only certain metrics are associated with certain spans. For example:
- Input span shows the input the agent received along with any scores associated with the input. The scores shown depend on the metrics you selected; only some scores relate to the input.
- LLM span shows any scores associated with the LLM decision making at this point, prior to any retrieval or tool calls.
- Tools called span provides scores for tool-call-specific metrics, as well as which tool was called and what happened during that tool call.
- Knowledge base span shows what data was retrieved from which knowledge base, and scores related to each retrieved source, if relevant.
- Output span shows the agent output and any metric scores relevant to the output.
Destroy an Agent Deployment
You can destroy an agent deployment only using the DigitalOcean Control Panel. To destroy an agent deployment from the control panel, in the left menu, click Agent Platform. From the Workspaces tab, select the workspace that contains the agent deployment you want to destroy, then select the agent deployment. Then, select Destroy agent deployment from the agent’s Actions menu. In the Destroy Agent Deployment window, type the agent’s name to confirm and then click Destroy.
Once all agent deployments within the workspace are destroyed, the workspace is also destroyed.