DigitalOcean Gradient™ AI Platform Reference
Validated on 17 Jan 2025 • Last edited on 23 Mar 2026
DigitalOcean Gradient™ AI Platform lets you build fully-managed AI agents with knowledge bases for retrieval-augmented generation, multi-agent routing, guardrails, and more, or use serverless inference to make direct requests to popular foundation models.
The DigitalOcean API
The DigitalOcean API lets you manage resources programmatically with standard HTTP requests. All actions available in the control panel are also available through the API.
You can use the API to create, delete, and manage knowledge bases and generative AI agents. You can also use it to add agent and function routes to agents, add data sources to knowledge bases, and start indexing jobs.
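As a minimal sketch of calling the API from Python with only the standard library: the snippet below builds an authenticated request to list knowledge bases. The `/v2/gen-ai/knowledge_bases` path and the bearer-token header reflect the public API reference, but verify them against the current API documentation before relying on them.

```python
import json
import os
import urllib.request

# Base URL for the DigitalOcean API.
API_BASE = "https://api.digitalocean.com"

def build_list_request(path: str, token: str) -> urllib.request.Request:
    """Return a GET request with the bearer-token header the API expects."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Assumed path for listing knowledge bases; check the API reference.
req = build_list_request(
    "/v2/gen-ai/knowledge_bases",
    os.environ.get("DIGITALOCEAN_TOKEN", ""),
)
# Sending the request is left to the caller, for example:
#   with urllib.request.urlopen(req) as resp:
#       data = json.load(resp)
```

The same helper works for other resource paths, such as `/v2/gen-ai/agents`, by swapping the path argument.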
You can use the Dedicated Inference API to manage your dedicated inference deployments. Dedicated Inference is available in public preview. You can opt in from the Feature Preview page. For more information, see Dedicated Inference API.
The DigitalOcean Command Line Client, doctl
doctl is the command-line interface for the DigitalOcean API. It supports most of the same actions available in the API and DigitalOcean Control Panel.
doctl supports managing Gradient AI Platform resources from the command line. See the doctl documentation or use doctl gradient --help for more information.
The Gradient Command Line Interface, gradient
Use gradient, the CLI that ships with the Agent Development Kit, to build, test, and deploy agent workflows from within your development environment.
The DigitalOcean Gradient™ AI Platform SDK
Use the official DigitalOcean Gradient™ AI Platform SDK to manage Gradient AI Platform resources, including knowledge bases and generative AI agents, from Python applications.
More Resources
- Reference for chunking parameters, their recommendations, and their constraints across supported embedding models.
- Understand the information agent tracing captures and how it helps you debug and optimize your agents.
The DigitalOcean Gradient™ AI Platform API endpoints are organized into the following groups:
- GradientAI Platform (87 endpoints): The API lets you build GPU-powered AI agents with pre-built or custom foundation models, function and agent routes, and RAG pipelines with knowledge bases.
- Dedicated Inference (13 endpoints): Dedicated Inference delivers scalable production-grade LLM hosting on DigitalOcean. Create, list, get, update, and delete Dedicated Inference instances; manage accelerators, CA certificate, sizes, GPU model config, and access tokens.
- Agent Inference (1 endpoint): DigitalOcean Gradient™ AI Agentic Cloud allows you to create multi-agent workflows to power your AI applications, so developers can integrate agents into their AI applications.
- Serverless Inference (5 endpoints): DigitalOcean Gradient™ AI Agentic Cloud provides access to serverless inference models, which you can call by providing an inference key.
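To illustrate the serverless inference flow described above, the sketch below builds an OpenAI-style chat completion request authenticated with an inference key. The `inference.do-ai.run` host and the request shape are assumptions drawn from OpenAI-compatible conventions; confirm the endpoint URL and available model names in the Serverless Inference reference.

```python
import json
import urllib.request

# Assumed serverless-inference endpoint; verify against the current docs.
INFERENCE_URL = "https://inference.do-ai.run/v1/chat/completions"

def build_chat_request(inference_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Return a POST request carrying an OpenAI-style chat payload."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        INFERENCE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {inference_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is left to the caller, for example:
#   with urllib.request.urlopen(build_chat_request(key, model, "Hello")) as resp:
#       reply = json.load(resp)
```

Because the payload follows the widely used chat-completions shape, existing OpenAI-compatible client code can typically be pointed at the serverless inference endpoint by changing only the base URL and key.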