How to Browse Models in Model Catalog

Validated on 27 Apr 2026 • Last edited on 27 Apr 2026

Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, both DigitalOcean-hosted and third-party commercial models; compare model capabilities and pricing; use routing to match inference requests to the best-fit model; and run inference using serverless or dedicated deployments.

Model Catalog provides a list of available foundation models and lets you filter and review model attributes before deployment, so you can compare pricing, capabilities, and supported features to determine which model fits your use case. You can also browse Model Catalog through an MCP server using the Model Catalog MCP tools.

Note
We don’t guarantee the accuracy, safety, or reliability of third-party models listed. You are responsible for evaluating model suitability for your specific use cases.

To view the Model Catalog, go to the DigitalOcean Control Panel. In the left menu, click INFERENCE, and then click Model Catalog.

Browse the Model Catalog

In the Model Catalog tab, you see all available commercial and DigitalOcean-hosted foundation models. The catalog displays the following information for each model:

  • Name: The model name, provider, and model ID used for API requests.
  • Type: The model classification, such as text, multimodal, image generation, and embedding.
  • Capabilities: Supported features such as reasoning, agentic tasks & tool use, coding, chat, and more.
  • Benchmarks: Standardized scores that let you compare model performance across common evaluation suites.
  • Price: Input and output token pricing.
  • Availability: How the model can be deployed, such as Serverless or Dedicated.

Click the Name or Price header to sort the catalog in ascending or descending order, or use the Search by Model name bar at the top of the catalog to find a model by name.

Filter Model Catalog

To filter the model catalog by certain attributes, in the top-right, click Filters, and then click the attributes you want to filter the catalog by. You can filter the following:

  • Availability: How the model can be deployed, such as Serverless or Dedicated.
  • Provider: The organization that develops or hosts the model, such as OpenAI, Meta, DeepSeek, Anthropic, Alibaba, or Mistral AI.
  • Type: The model classification, such as Reasoning, Chat, Image, or Embedding.

Click Reset to defaults to clear the filters you have chosen, or click a model in the catalog to open its model card. For a full list of supported models, see our models page.

View Model Card

Each model card includes a Model Details tab with information about the model and an API Usage tab with endpoint and usage guidance to help you evaluate suitability for your workload.

To try the model, in the top-right of the model card, click Launch Playground to test and compare models in the Model Playground.

View Model Details

The Model Details tab includes:

  • Description: The model's intended use cases and key characteristics.
  • Governing Terms: The applicable terms, such as AI Model Terms, Service Terms, or Terms of Service Agreements.
  • Type: Model classification such as text, multimodal, text & vision reasoning, image generation, embedding.
  • Content Length: Maximum supported context window or input token limit.
  • Parameters: Approximate number of model parameters, indicating model size if available.
  • Input Cost: Price per input token. See the pricing page for more information.
  • Output Cost: Price per output token generated by the model. See the pricing page for more information.
  • Input Capabilities: Supported input modalities, such as text, documents, image, audio, video.
  • Output Capabilities: Types of outputs the model can generate, such as text, image, video, audio, embeddings.

Use this information to compare models based on cost, feature support, and deployment options.

The Availability section shows how you can deploy the model:

  • Serverless Inference: To use the model with serverless inference, on the right, click Create key, and then follow the steps in set up serverless inference to authenticate and send requests to the model.
  • Dedicated Inference: To deploy the model with dedicated inference, on the right, click Deploy endpoint, and then follow the steps in set up dedicated inference to configure and deploy the endpoint.
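As a rough sketch of the serverless flow once you have a key, the request below builds an authenticated call to the model. The base URL, key value, and model ID here are placeholders, not values from this guide; copy the real endpoint and model ID from the model card.

```python
import json
import urllib.request

# Placeholder values: replace with the endpoint, key, and model ID
# shown on the model card after you click Create key.
BASE_URL = "https://inference.example.com/v1"
API_KEY = "your-model-access-key"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated chat completions request for serverless inference."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",  # the key from Create key
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send the request:
# req = build_chat_request("example-model-id", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The bearer-token header is the only authentication the sketch assumes; the setup guide linked above covers creating and managing keys.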

View API Usage

The API Usage tab includes:

  • CHAT COMPLETIONS: Shows how to send chat-style prompts to the model using the /v1/chat/completions endpoint. This section includes required parameters such as model, messages, temperature, and max_completion_tokens, along with example requests.

For complete request examples and parameter details, see Send Prompt to a Model Using the Chat Completions API.
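For orientation, a minimal request body using the parameters the tab lists might look like the following. The model ID is a placeholder; use the ID from the Name column of the catalog.

```python
import json

# Illustrative body for POST /v1/chat/completions.
# "example-model-id" is a placeholder, not a real model ID.
payload = {
    "model": "example-model-id",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this paragraph."},
    ],
    "temperature": 0.7,            # sampling randomness; lower is more deterministic
    "max_completion_tokens": 256,  # upper bound on generated tokens
}

print(json.dumps(payload, indent=2))
```

The `messages` list carries the conversation turns in order, so multi-turn context is sent with every request.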
