Getting Started with Inference
Validated on 20 Apr 2026 • Last edited on 23 Apr 2026
Browse models, test them in the Model Playground, and send serverless or dedicated inference requests in a few minutes.
Inference provides a single control plane for managing inference workflows. Its Model Catalog lists available foundation models, both DigitalOcean-hosted and third-party commercial, and lets you compare model capabilities and pricing. From there, you can route inference requests to the best-fit model and run inference on serverless or dedicated deployments.
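As a rough sketch of what a serverless inference request looks like, the snippet below builds (without sending) a chat-completion call against an OpenAI-compatible endpoint. The base URL and model slug here are placeholder assumptions, not confirmed values; substitute the endpoint and model name shown for your deployment in the Model Catalog, and supply a real access key.

```python
import os
import requests

# Assumed placeholder values -- replace with the endpoint and model slug
# shown in your Model Catalog.
BASE_URL = "https://inference.example.com/v1"
MODEL = "example-model-slug"

def build_inference_request(prompt: str, api_key: str) -> requests.PreparedRequest:
    """Construct (without sending) an OpenAI-compatible chat-completion request."""
    req = requests.Request(
        method="POST",
        url=f"{BASE_URL}/chat/completions",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        },
    )
    return req.prepare()

prepared = build_inference_request(
    "Hello!", os.environ.get("MODEL_ACCESS_KEY", "test-key")
)
print(prepared.url)
# To actually send it: requests.Session().send(prepared)
```

The request is prepared rather than sent so you can inspect the final URL, headers, and JSON body before spending tokens; swapping between serverless and dedicated deployments is then just a matter of changing `BASE_URL`.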
Inference Quickstart