# DigitalOcean Gradient™ AI Inference Hub How-Tos

Generated on 17 Mar 2026

DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare capabilities and pricing, and run inference using serverless or dedicated deployments.

DigitalOcean Gradient AI Inference Hub is in [public preview](https://docs.digitalocean.com/platform/product-lifecycle/index.html.md#public-preview) and enabled for all users. You can [contact support](https://cloudsupport.digitalocean.com) for questions or assistance.

## Manage Model Catalog

[Browse Models in Model Catalog](https://docs.digitalocean.com/products/inference-hub/how-to/browse-model-catalog/index.html.md): Identify the right model for your use case by filtering available foundation models by capabilities and price.

## Use Model Playground

[Test and Compare Models Using the Model Playground](https://docs.digitalocean.com/products/inference-hub/how-to/use-model-playground-inference/index.html.md): Test and compare foundation models in the Model Playground.

## Manage Inference Deployments

[Use Serverless Inference](https://docs.digitalocean.com/products/inference-hub/how-to/use-serverless-inference-deployments/index.html.md): Create model access keys that allow you to send requests to foundation models without creating an agent.

## Use Dedicated Inference

[How to Use Dedicated Inference on DigitalOcean Gradient™ AI Inference Hub](https://docs.digitalocean.com/products/inference-hub/how-to/use-dedicated-inference/index.html.md): Deploy open-source and commercial LLMs on dedicated GPUs as an inference endpoint.
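
With a model access key from Serverless Inference, requests can be sent to a foundation model over HTTP. The sketch below, using only the Python standard library, shows the general shape of such a request; the base URL and model slug are illustrative assumptions — check the Inference Hub documentation for the actual endpoint and the Model Catalog for available model names.

```python
import json
import urllib.request

# Assumption: illustrative base URL; confirm against the Inference Hub docs.
BASE_URL = "https://inference.example.do-ai.run/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request authorized with a model access
    key, sent as a Bearer token in the Authorization header."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "llama3.3-70b-instruct" is a placeholder model slug for illustration.
req = build_chat_request("YOUR_MODEL_ACCESS_KEY", "llama3.3-70b-instruct", "Hello!")
print(req.full_url)
# To actually send the request: urllib.request.urlopen(req) with a valid key.
```

Because the access key authorizes the request directly, no agent needs to be created; the key alone routes the call to the chosen foundation model.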