How to Retrieve Available Models
Validated on 27 Apr 2026 • Last edited on 27 Apr 2026
Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models (both DigitalOcean-hosted and third-party commercial models), compare model capabilities and pricing, use routing to match inference requests to the best-fit model, and run inference using serverless or dedicated deployments.
The following examples show how to retrieve the models available for serverless inference using cURL, the OpenAI Python SDK, the Gradient Python SDK, and PyDo.
Send a GET request to the /v1/models endpoint using your model access key. For example:
curl -X GET https://inference.do-ai.run/v1/models \
  -H "Authorization: Bearer $MODEL_ACCESS_KEY" \
  -H "Content-Type: application/json"

This returns a list of available models with their corresponding model IDs (id):
...
{
    "created": 1752255238,
    "id": "alibaba-qwen3-32b",
    "object": "model",
    "owned_by": "digitalocean"
},
{
    "created": 1737056613,
    "id": "anthropic-claude-3.5-haiku",
    "object": "model",
    "owned_by": "anthropic"
},
...

Using the OpenAI Python SDK:

from openai import OpenAI
from dotenv import load_dotenv
import os
load_dotenv()
client = OpenAI(
    base_url="https://inference.do-ai.run/v1/",
    api_key=os.getenv("MODEL_ACCESS_KEY"),
)
models = client.models.list()
for m in models.data:
    print("-", m.id)

Using the Gradient Python SDK:

from gradient import Gradient
from dotenv import load_dotenv
import os
load_dotenv()
client = Gradient(model_access_key=os.getenv("MODEL_ACCESS_KEY"))
models = client.models.list()
print("Available models:")
for model in models.data:
    print(f" - {model.id}")

Using PyDo:

from pydo import Client
from dotenv import load_dotenv
import os
load_dotenv()
client = Client(token=os.getenv("MODEL_ACCESS_KEY"))
models = client.inference.list_models()
print("Available models:")
for model in models["data"]:
print(f" - {model['id']}")