Inference Details

Generated on 28 Apr 2026

Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can browse available foundation models, including both DigitalOcean-hosted and third-party commercial models, and compare their capabilities and pricing. You can also use routing to match inference requests to the best-fit model, and run inference using serverless or dedicated deployments.
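As a concrete illustration of running serverless inference, the sketch below assembles a request for an OpenAI-compatible chat completions endpoint. This is a minimal sketch under assumptions: the endpoint URL, model name, and environment variable shown here are illustrative placeholders, not confirmed values from this page; consult the API reference for the actual endpoint and model keys.

```python
import json
import os

# Assumption: serverless inference exposes an OpenAI-compatible
# chat completions API. This URL is a placeholder for illustration.
INFERENCE_URL = "https://inference.example.com/v1/chat/completions"


def build_inference_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for one inference call."""
    return {
        "url": INFERENCE_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }


# Hypothetical model key and env var name, shown for illustration only.
req = build_inference_request(
    model="example-model",
    prompt="Hello!",
    api_key=os.environ.get("INFERENCE_API_KEY", "sk-placeholder"),
)
print(req["url"])
```

The request could then be sent with any HTTP client (for example, `requests.post(req["url"], headers=req["headers"], data=req["body"])`), with the access key generated from your account's API settings.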

Inference Features

Inference provides a single interface for managing inference workflows.

Inference Pricing

Inference itself has no cost. Pricing depends on the features you use within Inference, such as serverless inference or dedicated deployments, and the models and resources used for those requests.

Inference Availability

Regional datacenter availability for Inference.

Inference Limits

Limits and known issues for Inference.
