# DigitalOcean Gradient™ AI Inference Hub

DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare capabilities and pricing, and run inference using serverless or dedicated deployments.

DigitalOcean Gradient AI Inference Hub is in [public preview](https://docs.digitalocean.com/platform/product-lifecycle/index.html.md#public-preview) and enabled for all users. You can [contact support](https://cloudsupport.digitalocean.com) for questions or assistance.

- [Browse Models in Model Catalog](https://docs.digitalocean.com/products/inference-hub/how-to/browse-model-catalog/index.html.md): Identify the right model for your use case by filtering available foundation models by capabilities and price.
- [Use Model Playground](https://docs.digitalocean.com/products/inference-hub/how-to/use-model-playground-inference/index.html.md): Test and compare foundation models in the Model Playground.
- [Use Serverless Inference](https://docs.digitalocean.com/products/inference-hub/how-to/use-serverless-inference-deployments/index.html.md): Create model access keys that allow you to send requests to foundation models without creating an agent.
- [Deploy to Dedicated Inference Endpoints](https://docs.digitalocean.com/products/inference-hub/how-to/use-dedicated-inference/index.html.md): Deploy open-source and commercial LLMs on dedicated GPUs as an inference endpoint.

## Latest Updates

### 16 March 2026

- DigitalOcean Gradient™ AI Inference Hub is now available in [public preview](https://docs.digitalocean.com/platform/product-lifecycle/index.html.md#public-preview) and is enabled for all users.
Inference Hub provides access to a [catalog of foundation models](https://docs.digitalocean.com/products/inference-hub/how-to/browse-model-catalog/index.html.md) with support for [serverless inference](https://docs.digitalocean.com/products/inference-hub/how-to/use-serverless-inference-deployments/index.html.md) and [dedicated inference](https://docs.digitalocean.com/products/inference-hub/how-to/use-dedicated-inference/index.html.md), along with a [Model Playground for testing models](https://docs.digitalocean.com/products/inference-hub/how-to/use-model-playground-inference/index.html.md) before deployment. During the public preview period, features and model availability may change. For more information, see [the full release notes](https://docs.digitalocean.com/release-notes/).
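As a rough illustration of the serverless workflow described above, the sketch below builds a request to a chat model using a model access key. It is a minimal sketch, not a definitive client: the base URL, the OpenAI-compatible `/chat/completions` request shape, the Bearer-token header, and the model slug are all assumptions for illustration; see the Serverless Inference how-to for the actual endpoint and key format.

```python
import json

# Assumed base URL for serverless inference; confirm the real value in the
# Serverless Inference how-to before use.
BASE_URL = "https://inference.do-ai.run/v1"


def build_chat_request(model, messages, access_key):
    """Build the URL, headers, and JSON body for a chat completions call.

    Assumes an OpenAI-compatible API where the model access key is sent
    as a Bearer token.
    """
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {access_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body


url, headers, body = build_chat_request(
    "llama3.3-70b-instruct",  # hypothetical model slug from the Model Catalog
    [{"role": "user", "content": "Hello"}],
    "YOUR_MODEL_ACCESS_KEY",  # placeholder; create a real key in the control panel
)
```

The tuple returned here can be sent with any HTTP client (for example, `urllib.request` from the standard library); no agent needs to be created first, which is the point of serverless inference.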