How to Retrieve Available Models
Validated on 10 Apr 2026 • Last edited on 16 Apr 2026
DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare capabilities and pricing, and run inference using serverless or dedicated deployments. DigitalOcean Gradient AI Inference Hub is in private preview. You can contact support for questions or assistance.
The following examples show how to retrieve the models available for serverless inference using cURL, the OpenAI Python client, the Gradient Python SDK, and PyDo.
Send a GET request to the /v1/models endpoint using your model access key. For example:
curl -X GET https://inference.do-ai.run/v1/models \
  -H "Authorization: Bearer $MODEL_ACCESS_KEY" \
  -H "Content-Type: application/json"

This returns a list of available models with their corresponding model IDs (id):
...
{
"created": 1752255238,
"id": "alibaba-qwen3-32b",
"object": "model",
"owned_by": "digitalocean"
},
{
"created": 1737056613,
"id": "anthropic-claude-3.5-haiku",
"object": "model",
"owned_by": "anthropic"
},
...

To retrieve the same list with the OpenAI Python client, point the client at the inference endpoint:

from openai import OpenAI
from dotenv import load_dotenv
import os
load_dotenv()
client = OpenAI(
base_url="https://inference.do-ai.run/v1/",
api_key=os.getenv("MODEL_ACCESS_KEY"),
)
models = client.models.list()
for m in models.data:
    print("-", m.id)

With the Gradient Python SDK:

from gradient import Gradient
from dotenv import load_dotenv
import os
load_dotenv()
client = Gradient(model_access_key=os.getenv("MODEL_ACCESS_KEY"))
models = client.models.list()
print("Available models:")
for model in models.data:
    print(f" - {model.id}")

With PyDo:

from pydo import Client
from dotenv import load_dotenv
import os
load_dotenv()
client = Client(token=os.getenv("MODEL_ACCESS_KEY"))
models = client.inference.list_models()
print("Available models:")
for model in models["data"]:
    print(f" - {model['id']}")
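Whichever client you use, the models response is a plain list of objects with id, created, object, and owned_by fields, so you can post-process it locally, for example to group model IDs by provider. A minimal sketch, where the group_by_owner helper and the sample entries are illustrative (the sample mirrors the response shape shown above), not part of any SDK:

```python
def group_by_owner(models: list[dict]) -> dict[str, list[str]]:
    """Map each owned_by value to the list of model IDs it owns."""
    grouped: dict[str, list[str]] = {}
    for model in models:
        grouped.setdefault(model["owned_by"], []).append(model["id"])
    return grouped

# Sample entries shaped like the /v1/models response above.
sample = [
    {"created": 1752255238, "id": "alibaba-qwen3-32b",
     "object": "model", "owned_by": "digitalocean"},
    {"created": 1737056613, "id": "anthropic-claude-3.5-haiku",
     "object": "model", "owned_by": "anthropic"},
]

for owner, ids in group_by_owner(sample).items():
    print(f"{owner}: {', '.join(ids)}")
```

The same helper works on `models.data` from the OpenAI or Gradient clients if you first convert each entry to a dict (for example with the model object's model_dump() method in recent OpenAI SDK versions).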