DigitalOcean Gradient™ AI Inference Hub How-Tos

Generated on 17 Mar 2026

DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare capabilities and pricing, and run inference using serverless or dedicated deployments. DigitalOcean Gradient AI Inference Hub is in public preview and enabled for all users. You can contact support for questions or assistance.

Manage Model Catalog

Browse Models in Model Catalog

Identify the right model for your use case by filtering available foundation models by capabilities and price.

Use Model Playground

Test and Compare Models Using the Model Playground

Test and compare foundation models in the Model Playground.

Manage Inference Deployments

Use Serverless Inference

Create model access keys that allow you to send requests to foundation models without creating an agent.
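Serverless inference exposes an OpenAI-compatible chat completions API that you authenticate against with a model access key. As a minimal sketch, the snippet below builds the headers and JSON body for such a request; the endpoint URL, model name, and key are placeholders (copy the real values from the Model Catalog and your access key page), so treat this as an illustration of the request shape rather than a definitive client.

```python
import json

# Hypothetical endpoint -- check the Gradient AI Inference Hub docs for the real URL.
INFERENCE_URL = "https://inference.example.do-ai.run/v1/chat/completions"


def build_chat_request(access_key: str, model: str, prompt: str) -> tuple[dict, bytes]:
    """Build headers and an OpenAI-style chat body for a serverless inference call.

    access_key is a model access key, not a DigitalOcean API token.
    """
    headers = {
        "Authorization": f"Bearer {access_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps(
        {
            "model": model,  # e.g. a model slug from the Model Catalog
            "messages": [{"role": "user", "content": prompt}],
        }
    ).encode()
    return headers, body


headers, body = build_chat_request("YOUR_MODEL_ACCESS_KEY", "example-model", "Hello")
```

You can send the prepared request with any HTTP client (`urllib.request`, `requests`, or an OpenAI-compatible SDK pointed at the serverless base URL).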

Use Dedicated Inference


Deploy open-source and commercial LLMs on dedicated GPUs as an inference endpoint.
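A dedicated deployment gives you your own inference endpoint, which you call the same way as serverless inference but at the deployment's URL. The sketch below prepares (without sending) a chat completion request to a dedicated endpoint; the endpoint URL, model name, and access key are all hypothetical placeholders taken from nothing in this page, so substitute the values shown on your deployment's detail page.

```python
import json
import urllib.request

# Hypothetical values -- copy the real endpoint URL and access key
# from your dedicated deployment's page in the control panel.
ENDPOINT = "https://example-deployment.do-ai.run/v1/chat/completions"
ACCESS_KEY = "YOUR_MODEL_ACCESS_KEY"


def make_request(prompt: str) -> urllib.request.Request:
    """Prepare an OpenAI-style POST request for a dedicated inference endpoint."""
    payload = {
        "model": "example-model",  # hypothetical; use the model you deployed
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {ACCESS_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# To actually call the endpoint once it is deployed:
# with urllib.request.urlopen(make_request("Hello")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the deployment runs on dedicated GPUs, you are billed for the provisioned capacity rather than per-token serverless usage, so size the deployment to your expected load.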
