Use Serverless Inference
Learn what serverless inference is and how it differs from dedicated inference.
Validated on 27 Apr 2026 • Last edited on 27 Apr 2026
Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can:

- View available foundation models, including both DigitalOcean-hosted and third-party commercial models.
- Compare model capabilities and pricing.
- Use routing to match inference requests to the best-fit model.
- Run inference using serverless or dedicated deployments.
- Synchronous and asynchronous API endpoints for serverless inference.
- Create, scope, and manage model access keys for foundation models, inference routers, and batch inference, with VPC restrictions and team-owner visibility.
- How to retrieve models available for serverless inference.
- Send prompts and use reasoning with the Chat Completions API.
- Send prompts with the Responses API.
- Use prompt caching with the Chat Completions and Responses APIs.
- Use reasoning with the Chat Completions and Responses APIs.
- Generate or edit images from text prompts.
- Generate image, audio, or text-to-speech output using fal models.
- How to use serverless inference after updating a model.
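As a rough illustration of the Chat Completions workflow listed above, the sketch below builds an OpenAI-style request body and posts it with a model access key. The base URL, model name, and environment variable names here are placeholders, not confirmed values; check the Model Catalog and the API reference for the endpoint and models available to your account.

```python
import json
import os
import urllib.request

# Placeholder values for illustration only; substitute the real
# serverless inference endpoint and a model slug from the Model Catalog.
BASE_URL = os.environ.get("INFERENCE_BASE_URL", "https://example.invalid/v1")
MODEL = "example-model-slug"


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style Chat Completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def send_chat(prompt: str, api_key: str) -> dict:
    """POST the request to the chat completions endpoint and parse the reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # model access key
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the request body without sending it, for inspection.
    body = build_chat_request("Summarize serverless inference in one sentence.")
    print(json.dumps(body, indent=2))
```

Because the endpoint follows the OpenAI Chat Completions shape, an OpenAI-compatible client library pointed at the same base URL and key would work equally well; the stdlib version above just avoids extra dependencies.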