Use Serverless Inference
Learn what serverless inference is and how it differs from dedicated inference.
Validated on 10 Apr 2026 • Last edited on 16 Apr 2026
DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can browse available foundation models (both DigitalOcean-hosted and third-party commercial models), compare capabilities and pricing, and run inference using serverless or dedicated deployments.

DigitalOcean Gradient AI Inference Hub is in private preview. Contact support with questions or for assistance.
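As a minimal sketch of what a serverless inference call can look like, the example below builds an OpenAI-style Chat Completions request and sends it with a model access key as a bearer token. The base URL `https://inference.do-ai.run/v1`, the model slug `llama3.3-70b-instruct`, and the `DO_MODEL_ACCESS_KEY` environment variable are assumptions for illustration, not values confirmed by this page; substitute the endpoint and model slug shown in your Model Catalog.

```python
import json
import os
import urllib.request

# ASSUMPTION: endpoint URL and model slug are illustrative placeholders;
# use the values from your Model Catalog instead.
BASE_URL = "https://inference.do-ai.run/v1"


def build_chat_request(prompt, model="llama3.3-70b-instruct"):
    """Build an OpenAI-style Chat Completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send_chat_request(body, access_key):
    """POST the request body to the serverless Chat Completions endpoint."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {access_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


body = build_chat_request("What is serverless inference?")

# Only send the request if a model access key is configured.
key = os.environ.get("DO_MODEL_ACCESS_KEY")
if key:
    reply = send_chat_request(body, key)
    print(reply["choices"][0]["message"]["content"])
```

Because the request and response shapes follow the Chat Completions convention, the same sketch works with any OpenAI-compatible client library by pointing its base URL at the serverless endpoint.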
- Synchronous and asynchronous API endpoints for serverless inference.
- Create and edit model access keys to use serverless inference endpoints.
- How to retrieve models available for serverless inference.
- Send prompts and use reasoning with the Chat Completions API.
- Send prompts with the Responses API.
- Use prompt caching with the Chat Completions and Responses APIs.
- Use reasoning with the Chat Completions and Responses APIs.
- Generate or edit images from text prompts.
- Generate image, audio, or text-to-speech using fal models.
- How to use serverless inference after updating a model.