How to Use Coding Agents With DigitalOcean

Validated on 27 Apr 2026 • Last edited on 27 Apr 2026

Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, and compare model capabilities and pricing. You can also use routing to match inference requests to the best-fit model, and run inference using serverless or dedicated deployments.

Coding agents, such as Codex and Claude Code, can use the inference endpoint https://inference.do-ai.run as a drop-in proxy to run inference requests on DigitalOcean.

Prerequisites

  1. Create a model access key. Keys use the sk-do-... format.

  2. Export the key in your shell profile as MODEL_ACCESS_KEY so that it persists across sessions:

    echo 'export MODEL_ACCESS_KEY="sk-do-..."' >> ~/.zshrc
    source ~/.zshrc

    Use ~/.bashrc or ~/.config/fish/config.fish instead of ~/.zshrc if you use another shell.

  3. Retrieve the model IDs for available models from the Model Catalog:

    curl -s \
      -H "Authorization: Bearer $MODEL_ACCESS_KEY" \
      https://inference.do-ai.run/v1/models \
      | jq '.data[].id'
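To confirm the endpoint and your key work end to end before configuring an agent, you can send a minimal chat completion request. This is a quick sketch; it assumes openai-gpt-4.1 is among the IDs the catalog returned:

    curl -s https://inference.do-ai.run/v1/chat/completions \
      -H "Authorization: Bearer $MODEL_ACCESS_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "openai-gpt-4.1",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }' | jq -r '.choices[0].message.content'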

Set Up Coding Agents

The following sections describe agent-specific setup.

Claude Code

Claude Code is Anthropic’s agentic coding tool. It communicates using the Anthropic Messages API.

Install Claude Code

To install on macOS, Linux, or WSL:

curl -fsSL https://claude.ai/install.sh | bash

You can also use npm:

npm install -g @anthropic-ai/claude-code

See the Claude Code overview for more installation options.

Verify the installation:

claude --version

Connect Claude Code to DigitalOcean

Connect Claude Code to DigitalOcean using one of the following options.

Run the following command to write ~/.claude/settings.json:

cat << 'EOF' > ~/.claude/settings.json
{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "apiKeyHelper": "printenv MODEL_ACCESS_KEY",
  "env": {
    "ANTHROPIC_BASE_URL": "https://inference.do-ai.run",
    "CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS": "1"
  },
  "model": "sonnet",
  "availableModels": [
    "haiku",
    "sonnet",
    "opus"
  ],
  "forceLoginMethod": "console"
}
EOF
Alternatively, open ~/.claude/settings.json in your editor and add the following:

{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "apiKeyHelper": "printf %s\"$MODEL_ACCESS_KEY\"",
  "env": {
    "ANTHROPIC_BASE_URL": "https://inference.do-ai.run",
    "CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS": "1"
  },
  "model": "sonnet",
  "availableModels": [
    "haiku",
    "sonnet",
    "opus"
  ],
  "forceLoginMethod": "console"
}

Reload your shell:

source ~/.zshrc

Warning
If you previously signed in to Claude Code with Anthropic, run /logout inside a Claude Code session so cached credentials do not override this configuration.
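Before starting Claude Code, you can confirm the key is visible to new shells. A quick check (the head pipe avoids printing the full secret):

# Should print sk-do- if the key is exported correctly
printenv MODEL_ACCESS_KEY | head -c 6; echo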

Run Claude Code

cd /path/to/your/project
claude

For a non-interactive request, use:

claude -p "Explain what this codebase does in two sentences."
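Print mode also accepts piped input, which is useful in scripts. For example, a hedged sketch that asks for a review of uncommitted changes (assumes you are in a git repository):

# Pipe context into a one-shot prompt
git diff | claude -p "Review this diff and flag any obvious bugs."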

Run Claude Code with another model:

claude --model "openai-gpt-4.1"

Select the text style you want. If Claude Code shows the following prompt, select Yes so that it uses your DigitalOcean key.

Do you want to use this API key?
  > 1. Yes
    2. No (recommended)

Troubleshooting

Do the following if you see any of these issues:

Issue: Auth error or 401
Check: Confirm MODEL_ACCESS_KEY is a valid model access key (starts with sk-do-) and is exported, and that ANTHROPIC_BASE_URL is set before starting Claude Code.

Issue: ANTHROPIC_BASE_URL ignored on setup
Check: Ensure you are on the latest version by running npm install -g @anthropic-ai/claude-code. Run /logout in an existing session before reconfiguring.

Issue: Interactive mode connects to Anthropic directly
Check: Add export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 to your shell profile to prevent Claude Code from contacting Anthropic for non-API traffic.

Issue: Warning about an Anthropic key
Check: This appears when both ANTHROPIC_API_KEY and ANTHROPIC_AUTH_TOKEN are set to your API key. Unset the variable you do not use.

Issue: Claude Code does not authorize with DigitalOcean
Check: The BETA tag can cause this. As a workaround, run export CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1.
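To isolate agent problems from endpoint problems, test the key against the endpoint directly, outside Claude Code. If this sketch prints 401, fix the key; if it prints 200, revisit the Claude Code configuration:

curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $MODEL_ACCESS_KEY" \
  https://inference.do-ai.run/v1/models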

Cline

Cline is an AI coding agent that can read and write files, run shell commands, and work through complex coding tasks. It is available as a VS Code extension and a standalone CLI for automation and CI/CD. Both interfaces use the OpenAI-compatible Chat Completions API.

Install the Cline VS Code Extension

The VS Code extension is the standard setup for day-to-day development.

  1. In VS Code, open Extensions, search for Cline, and install it.
  2. Open the Cline panel using the sidebar icon or Cline: Open from the Command Palette.
  3. Click Bring your own API key. Set API Provider to OpenAI Compatible. Then, specify the following:
    • Base URL: https://inference.do-ai.run/v1
    • API Key: your model access key
    • Model ID: a value from /v1/models (for example, openai-gpt-4.1 or anthropic-claude-4.6-sonnet).

Then, click Save.

The standalone CLI is useful for headless environments, scripting, and CI/CD pipelines. Setting up Cline this way also configures the extension.

Install Cline CLI

Installing the Cline CLI requires Node.js 20 or later (22 recommended).

npm install -g cline

Verify the installation:

cline --version

Configure Cline

Run cline auth to store your credentials:

cline auth -p openai \
  -k "$MODEL_ACCESS_KEY" \
  -b "https://inference.do-ai.run/v1" \
  -m "openai-gpt-4.1"

In some versions of Cline CLI, the custom base URL is not persisted correctly by cline auth for the openai provider (cline/cline#6924). If you get auth errors pointing at api.openai.com, write the config directly. Replace <YOUR_MODEL_ACCESS_KEY> with your actual key:

mkdir -p ~/.cline/data
cat > ~/.cline/data/globalState.json <<'EOF'
{
  "apiProvider": "openai",
  "openAiBaseUrl": "https://inference.do-ai.run/v1",
  "openAiModelId": "openai-gpt-4.1"
}
EOF
cat > ~/.cline/data/secrets.json <<'EOF'
{
  "openAiApiKey": "<YOUR_MODEL_ACCESS_KEY>"
}
EOF
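After writing these files by hand, it is worth validating them before launching Cline. A small sketch using jq (checks syntax, and confirms the key field is present without printing the secret):

jq . ~/.cline/data/globalState.json                   # fails loudly on malformed JSON
jq 'has("openAiApiKey")' ~/.cline/data/secrets.json   # prints true if the key field exists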

Run Cline

cd /path/to/your/project
# Launch the Cline CLI
cline

# Use the terminal UI version of Cline
cline --tui

# Run with a different model
cline --tui -m "openai-gpt-4.1"

Use Cline Kanban Mode

To use Kanban mode:

  1. Install the Cline CLI.
  2. Click Settings in the top right corner, then select the agent you want to use. The default agent is cline; if you have Codex or Claude Code installed, you can select those instead.
  3. When prompted, grant permission for the agent.

Codex CLI

Codex CLI is OpenAI’s open-source terminal-based coding agent. It communicates using OpenAI-compatible APIs, which the DigitalOcean AI inference endpoint supports natively.

Install Codex CLI

npm i -g @openai/codex

Verify the installation:

codex --version

Configure Codex for DigitalOcean

Codex reads ~/.codex/config.toml. Create or update this file using one of two options.

Run the following:

mkdir -p ~/.codex
cat > ~/.codex/config.toml <<'EOF'
# GLOBAL DEFAULT
model_provider = "openai_custom"
model = "openai-gpt-4.1"
preferred_auth_method = "apikey"
model_reasoning_effort = "high"
web_search = "disabled"

[model_providers.openai_custom]
name = "OpenAI Compatible"
base_url = "https://inference.do-ai.run/v1"
env_key = "MODEL_ACCESS_KEY"
wire_api = "responses"
query_params = {}
EOF

Alternatively, open ~/.codex/config.toml in your editor and add the following:

# GLOBAL DEFAULT
model_provider = "openai_custom"
model = "${MODEL}"
preferred_auth_method = "apikey"
model_reasoning_effort = "high"
web_search = "disabled"

[model_providers.openai_custom]
name = "OpenAI Compatible"
base_url = "${INFERENCE_PROXY_BASE_URL}/v1"
env_key = "MODEL_ACCESS_KEY"
wire_api = "responses"
query_params = {}

The config file has the following parameters:

  • model_provider: The model provider to use. This must match the name of the [model_providers.*] table, here openai_custom.
  • model: Model ID from /v1/models. For example, openai-gpt-4.1.
  • model_reasoning_effort: Reasoning effort level: low, medium, or high.
  • base_url: The DigitalOcean AI inference endpoint, https://inference.do-ai.run/v1.
  • env_key: Name of the environment variable that holds your API key. Leave this set to MODEL_ACCESS_KEY so Codex reads your DigitalOcean model access key.

Set Your API Key

echo 'export MODEL_ACCESS_KEY="sk-do-..."' >> ~/.zshrc
source ~/.zshrc

Start Codex

cd /path/to/your/project
# Run Codex
codex

Run Codex With Another Model

codex -m "openai-gpt-4o-mini"

Codex prompts you for a task. All requests are routed through DigitalOcean AI.
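Recent Codex CLI versions also support non-interactive runs, which is convenient for scripting. This is a hedged sketch; the exact subcommand may vary by version:

# One-shot, non-interactive task routed through DigitalOcean
codex exec "List the TODO comments in this repository."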

Troubleshooting

Do the following if you see any of these issues:

Issue: Auth error or 401
Check: Verify MODEL_ACCESS_KEY is exported in your shell and the key is valid. Run echo $MODEL_ACCESS_KEY to confirm.

Issue: Model not found or 404
Check: Re-retrieve the model IDs from the /v1/models endpoint and fix the model value in config.toml.
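For model-not-found errors, a quick way to find a valid ID is to filter the catalog. A sketch reusing the earlier /v1/models call:

curl -s -H "Authorization: Bearer $MODEL_ACCESS_KEY" \
  https://inference.do-ai.run/v1/models \
  | jq -r '.data[].id' | grep -i gpt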

Cursor

Cursor is an AI-powered code editor (IDE) built as a fork of VS Code. It enables developers to write, refactor, and understand code faster using integrated LLMs such as GPT-4, Claude, and Gemini.

Note
Cursor does not support custom providers like DigitalOcean through the CLI.

Access Model Settings

In the Cursor IDE, open Cursor settings by clicking Cursor in the top-left corner and selecting the gear icon. Then, select Settings.

Configure With DigitalOcean API Key

  1. Navigate to the Models tab, scroll down, and click on API Keys.
  2. Enter your Model Access Key in the OpenAI API Key field.
  3. Toggle the OpenAI API Key button and click Enable OpenAI API Key.
  4. Toggle the Override OpenAI Base URL button.
  5. Enter https://inference.do-ai.run/v1 in the Override OpenAI Base URL.

Manually Add Custom Models

Note
DigitalOcean has not tested Cursor IDE with models other than OpenAI models.

  1. Scroll up to the Model Names list.
  2. Click on View All Models.
  3. Click + Add Custom Model.
  4. Type the model ID (for example openai-gpt-4o). Press Enter to save it.
  5. Toggle the newly added model to On.

OpenClaw

See OpenClaw for product details.

Install OpenClaw

curl -fsSL https://openclaw.ai/install.sh | bash

You can also install using npm:

npm i -g openclaw

Verify the installation:

openclaw --version

Then, install the Gateway daemon:

openclaw onboard --install-daemon

Accept the risk prompt by selecting Yes. Then, choose Quick Start:

  • When prompted for an AI Provider, select OpenAI.
  • When asked for an API key, type test and press Enter. You will overwrite this later.
  • When asked for a default model, select any model and press Enter.
  • When asked to select a channel, select skip for now. If you get an error, enter openclaw onboard --mode local --skip-channels and go through the selections again.
  • When asked for a search provider, select skip for now.
  • When asked Configure skills, select No.
  • When asked Enable hooks, select skip for now.
  • When asked How do you want to hatch your bot?, select Do this later.

Once onboarding finishes, the daemon runs in the background with the placeholder values you entered.

Set Your API Key

Add the model access key to your ~/.zshrc (or ~/.bashrc):

export MODEL_ACCESS_KEY="sk-do-..."

Then, reload your shell:

source ~/.zshrc

Configure the Generated Configuration File

Open the configuration file in your preferred text editor (such as nano):

nano ~/.openclaw/openclaw.json

Then, customize the file. Replace the models and agents sections with your custom configuration, and add your env block. Remove the auth section. Your file should look like the following, with <your-model-access-key> replaced by your model access key:

{
  "env": {
    "shellEnv": {
      "enabled": true,
      "timeoutMs": 15000
    }
  },
  "models": {
    "providers": {
      "digitalocean": {
        "baseUrl": "https://inference.do-ai.run/v1",
        "apiKey": "<your-model-access-key>",
        "api": "openai-completions",
        "models": [
          { "id": "openai-gpt-5", "name": "GPT-5" },
          { "id": "openai-gpt-4o", "name": "GPT-4o" },
          { "id": "openai-gpt-4.1", "name": "GPT-4.1" }
        ]
      },
      "do-anthropic": {
        "baseUrl": "https://inference.do-ai.run/v1",
        "apiKey": "<your-model-access-key>",
        "api": "anthropic-messages",
        "models": [
          { "id": "claude-opus-4-6", "name": "Claude Opus 4.6" },
          { "id": "claude-sonnet-4-5", "name": "Claude Sonnet 4.5" }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "digitalocean/openai-gpt-5"
      }
    }
  }
}

Save and exit the file by pressing Ctrl+X, Y, Enter.

Change the primary model to the model you want to use. For example, replace digitalocean/openai-gpt-5 with do-anthropic/claude-sonnet-4-5.
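If you prefer to script the change, here is a hedged jq sketch that rewrites the primary model (it writes to a temporary file first so a jq failure cannot truncate the config):

jq '.agents.defaults.model.primary = "do-anthropic/claude-sonnet-4-5"' \
  ~/.openclaw/openclaw.json > /tmp/openclaw.json \
  && mv /tmp/openclaw.json ~/.openclaw/openclaw.json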

Validate and Restart Daemon

Whenever you modify openclaw.json or the API keys in your .zshrc, validate the schema and restart the background daemon so it loads the changes into memory. To fix any syntax and formatting issues, run:

openclaw doctor --fix

Restart the Gateway daemon:

openclaw gateway start

Test the Connection

Ping your default agent to verify the daemon successfully read your environment variable and routed the request through the inference API.

openclaw agent --agent main -m "Hello, what model are you?"

OpenClaw agents cache conversation state. If you test the main agent and later change your default model in openclaw.json, the main agent ignores the change to maintain its conversation history. To test a new default model, clear the session:

rm ~/.openclaw/agents/main/sessions/sessions.json

Then, test the connection again using openclaw agent --agent main -m "Hello, what model are you?".
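Putting the reset, restart, and test together, here is a convenience sketch built only from the commands above:

rm -f ~/.openclaw/agents/main/sessions/sessions.json   # clear cached session state
openclaw gateway start                                 # restart the daemon
openclaw agent --agent main -m "Hello, what model are you?"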

OpenCode

See OpenCode for product details.

Install OpenCode

curl -fsSL https://opencode.ai/install | bash

You can also install using Homebrew on macOS or Linux:

brew install anomalyco/tap/opencode

Verify the installation:

opencode --version

Set Your API Key

Add this to your ~/.zshrc (or ~/.bashrc):

export MODEL_ACCESS_KEY="sk-do-..."

Then reload:

source ~/.zshrc

Configure OpenCode

Configure OpenCode using one of the following options.

Run the following command to write ~/.config/opencode/opencode.json:

mkdir -p ~/.config/opencode && cat << 'EOF' > ~/.config/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "digitalocean": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "DigitalOcean Gradient",
      "options": {
        "baseURL": "https://inference.do-ai.run/v1"
      },
      "models": {
        "openai-gpt-5.2": { "name": "GPT-5.2" },
        "openai-gpt-5": { "name": "GPT-5" },
        "openai-gpt-5.1-codex-max": { "name": "GPT-5.1 Codex Max" },
        "openai-gpt-4.1": { "name": "GPT-4.1" },
        "openai-o3": { "name": "OpenAI o3" },
        "deepseek-r1-distill-llama-70b": { "name": "DeepSeek R1 Distill Llama 70B" },
        "alibaba-qwen3-32b": { "name": "Qwen3 32B" },
        "llama3.3-70b-instruct": { "name": "Llama 3.3 70B Instruct" },
        "kimi-k2.5": { "name": "Kimi K2.5" },
        "glm-5": { "name": "glm-5" },
        "minimax-m2.5": {"name": "MiniMax M2.5"}
      }
    },
    "do-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "options": {  
      "baseURL": "https://inference.do-ai.run/v1",
      "authToken": "{env:MODEL_ACCESS_KEY}",
      "setCacheKey": true
	   },
      "models": {
        "claude-opus-4-6": { "name": "Claude Opus 4.6" },
        "claude-opus-4-5": { "name": "Claude Opus 4.5" },
        "claude-sonnet-4-5": { "name": "Claude Sonnet 4.5" },
        "claude-sonnet-4-6": { "name": "Claude Sonnet 4" }
      }
    }
  },
  "model": "do-anthropic/claude-opus-4-6"
}
EOF

Alternatively, create the config directory manually:

mkdir -p ~/.config/opencode

Then, create ~/.config/opencode/opencode.json with the following contents:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "digitalocean": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "DigitalOcean Gradient",
      "options": {
        "baseURL": "https://inference.do-ai.run/v1"
      },
      "models": {
        "openai-gpt-5.2": { "name": "GPT-5.2" },
        "openai-gpt-5": { "name": "GPT-5" },
        "openai-gpt-5.1-codex-max": { "name": "GPT-5.1 Codex Max" },
        "openai-gpt-4.1": { "name": "GPT-4.1" },
        "openai-o3": { "name": "OpenAI o3" },
        "deepseek-r1-distill-llama-70b": { "name": "DeepSeek R1 Distill Llama 70B" },
        "alibaba-qwen3-32b": { "name": "Qwen3 32B" },
        "llama3.3-70b-instruct": { "name": "Llama 3.3 70B Instruct" },
        "kimi-k2.5": { "name": "Kimi K2.5" },
        "glm-5": { "name": "glm-5" },
        "minimax-m2.5": {"name": "MiniMax M2.5"}
      }
    },
    "do-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "options": {  
      "baseURL": "https://inference.do-ai.run/v1",
      "authToken": "{env:MODEL_ACCESS_KEY}",
      "setCacheKey": true
	   },
      "models": {
        "claude-opus-4-6": { "name": "Claude Opus 4.6" },
        "claude-opus-4-5": { "name": "Claude Opus 4.5" },
        "claude-sonnet-4-5": { "name": "Claude Sonnet 4.5" },
        "claude-sonnet-4-6": { "name": "Claude Sonnet 4" }
      }
    }
  },
  "model": "do-anthropic/claude-opus-4-6"
}

Replace do-anthropic/claude-opus-4-6 in the model field with the model you want to use.
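Before launching, you can validate that the file is well-formed JSON and that your key variable resolves. A quick sketch:

jq . ~/.config/opencode/opencode.json >/dev/null && echo "config OK"
printenv MODEL_ACCESS_KEY >/dev/null && echo "MODEL_ACCESS_KEY is set"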

Reload Your Shell and Run OpenCode

Run:

source ~/.zshrc

Then, run OpenCode:

opencode

Install the OpenCode IDE Extension

To install OpenCode on VS Code and popular forks like Cursor, Windsurf, and VSCodium:

  1. Open the integrated terminal in VS Code.
  2. Run opencode to install the extension automatically.

If you want to use your own IDE when you run /editor or /export from the TUI, set export EDITOR="code --wait".
