How to Use Claude Code and Other Agentic Workflows on DigitalOcean
Validated on 27 Apr 2026 • Last edited on 27 Apr 2026
Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can browse the available foundation models (both DigitalOcean-hosted and third-party commercial models), compare model capabilities and pricing, route inference requests to the best-fit model, and run inference on serverless or dedicated deployments.
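If the endpoint also exposes an OpenAI-style model listing route (an assumption on our part, not stated above), you can inspect the catalog from the command line before picking a model:

```shell
# Assumption: the inference endpoint exposes an OpenAI-compatible
# /v1/models route that lists the models in the catalog.
curl https://inference.do-ai.run/v1/models \
  --header "Authorization: Bearer $MODEL_ACCESS_KEY"
```

The response, if available, is a JSON list of model identifiers you can use in the `model` field of subsequent requests.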
To use Claude Code or similar agentic workflows on DigitalOcean, point the tool at DigitalOcean's unified inference endpoint, which is fully compatible with Anthropic's Messages API and tool-use schema. Setting `ANTHROPIC_BASE_URL` to the DigitalOcean inference endpoint `https://inference.do-ai.run/v1/messages` routes requests through DigitalOcean instead of directly through Anthropic, which helps you avoid vendor lock-in.
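For a tool that reads its configuration from the environment, the setup above amounts to exporting two variables before launching it. A minimal sketch, assuming the client reads the API key from an `ANTHROPIC_API_KEY`-style variable (check your client's documentation for the exact variable name):

```shell
# Point the Anthropic-compatible client at DigitalOcean's endpoint.
export ANTHROPIC_BASE_URL="https://inference.do-ai.run/v1/messages"

# Authenticate with your DigitalOcean model access key; the variable
# name the client expects may differ (assumption shown here).
export ANTHROPIC_API_KEY="$MODEL_ACCESS_KEY"
```

With those set, requests the client makes against the Anthropic Messages API are served by DigitalOcean, as in the curl example below.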
curl https://inference.do-ai.run/v1/messages \
  --header "x-api-key: $MODEL_ACCESS_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "content-type: application/json" \
  --data '{
    "model": "anthropic-claude-4.6-sonnet",
    "max_tokens": 4096,
    "tools": [
      {
        "name": "read_file",
        "description": "Read the contents of a file from the local filesystem.",
        "input_schema": {
          "type": "object",
          "properties": { "path": { "type": "string" } },
          "required": ["path"]
        }
      }
    ],
    "messages": [
      {"role": "user", "content": "Refactor the authentication logic in src/auth.ts."}
    ]
  }'
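When the model decides to call the `read_file` tool, its response has `stop_reason` set to `tool_use` and a `tool_use` content block naming the tool and its input. Your agent runs the tool locally and sends the output back in a follow-up request as a `tool_result` block. A sketch of that handling step, using a canned illustrative response rather than captured output (the `toolu_01` id and the use of `python3` for JSON parsing are assumptions for the example):

```shell
# Illustrative tool_use response; a real one comes from the curl call above.
RESPONSE='{"stop_reason":"tool_use","content":[{"type":"tool_use","id":"toolu_01","name":"read_file","input":{"path":"src/auth.ts"}}]}'

# Pull out the tool call id and the requested path (python3 for JSON parsing).
TOOL_ID=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["content"][0]["id"])')
FILE_PATH=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["content"][0]["input"]["path"])')

# Run the tool locally: read the file the model asked for.
TOOL_RESULT=$(cat "$FILE_PATH" 2>/dev/null || echo "file not found")

# The agent then repeats the /v1/messages request with two extra turns
# appended: the assistant's tool_use turn, and a user turn containing
# {"type": "tool_result", "tool_use_id": "'$TOOL_ID'", "content": ...}
# so the model can continue with the file contents in context.
echo "$TOOL_ID"
echo "$FILE_PATH"
```

This read-run-reply loop repeats until the model returns a response whose `stop_reason` is no longer `tool_use`.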