Getting Started with Inference

Validated on 20 Apr 2026 • Last edited on 23 Apr 2026

Inference provides a single control plane for managing inference workflows. Its Model Catalog lets you browse available foundation models, including both DigitalOcean-hosted and third-party commercial models, and compare their capabilities and pricing. From there, you can use routing to match each inference request to the best-fit model and run inference on serverless or dedicated deployments.

Inference Quickstart

Browse models, test them in the Model Playground, and send serverless or dedicated inference requests in a few minutes.
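As a rough sketch of the last step, serverless inference requests typically use an OpenAI-compatible chat completions format. The endpoint URL and model slug below are assumptions for illustration; check the Model Catalog and your account's API settings for the actual values.

```python
import json

# Assumed serverless inference endpoint (verify against your account's docs).
ENDPOINT = "https://inference.do-ai.run/v1/chat/completions"

# Example request payload in OpenAI-compatible chat completions format.
# The model slug here is hypothetical; pick one from the Model Catalog.
payload = {
    "model": "llama3.3-70b-instruct",
    "messages": [
        {"role": "user", "content": "Write a haiku about the ocean."}
    ],
    "max_tokens": 128,
}

print(json.dumps(payload, indent=2))

# To send the request, POST this payload to ENDPOINT with an
# "Authorization: Bearer <your-model-access-key>" header, for example
# with curl or Python's urllib.request.
```

The response follows the same chat completions shape, with the generated text under `choices[0].message.content`.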
