# DigitalOcean Gradient™ AI Inference Hub Quickstart

DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare capabilities and pricing, and run inference using serverless or dedicated deployments.

DigitalOcean Gradient AI Inference Hub is in [public preview](https://docs.digitalocean.com/platform/product-lifecycle/index.html.md#public-preview) and enabled for all users. You can [contact support](https://cloudsupport.digitalocean.com) for questions or assistance.

## Browse the Model Catalog

1. To access the Model Catalog, go to the [DigitalOcean Control Panel](https://cloud.digitalocean.com) and open the **Model Catalog** tab in **Inference Hub**.
2. Browse the available foundation models. For more information about supported models and their capabilities, see our [models page](https://docs.digitalocean.com/products/gradient-ai-platform/details/models/index.html.md).
3. Click a model to open its model card and view details such as capabilities, pricing, and deployment options.
4. To [test the model](#model-playground), click **Model Playground** in the top-right corner of the model card.

To learn more about browsing and filtering models, see [Browse Models in Model Catalog](https://docs.digitalocean.com/products/inference-hub/how-to/browse-model-catalog/index.html.md).

## Test a Model in the Model Playground

1. To access the [Model Playground](https://docs.digitalocean.com/products/inference-hub/how-to/use-model-playground-inference/index.html.md), go to the [DigitalOcean Control Panel](https://cloud.digitalocean.com), and then open the **Model Playground** tab in **Inference Hub**.
2. Select a foundation model. For more information about supported models and their capabilities, see our [models page](https://docs.digitalocean.com/products/gradient-ai-platform/details/models/index.html.md).
3. Enter a prompt and review the model's response.
4. Adjust settings such as temperature and token limits to test different outputs.

## Next Steps

Once you’ve browsed models in Inference Hub, you can continue working with the following features:

- [**Use Serverless Inference**](https://docs.digitalocean.com/products/inference-hub/how-to/use-serverless-inference-deployments/index.html.md): Create model access keys and send API requests to foundation models without managing infrastructure.
- [**Use Dedicated Inference**](https://docs.digitalocean.com/products/inference-hub/how-to/use-dedicated-inference/index.html.md): Host open-source or commercial LLMs on dedicated GPUs, scale them, and deploy them as inference endpoints.
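As a minimal sketch of what a serverless inference request with playground-style settings (temperature, token limit) might look like, the snippet below builds an OpenAI-style chat completion payload and sends it only when a model access key is configured. The base URL, model slug, and `DO_MODEL_ACCESS_KEY` environment variable are illustrative assumptions; check the model card and the serverless inference documentation for the actual endpoint and model names.

```python
import json
import os
import urllib.request

# Hypothetical values for illustration; take the real endpoint and model
# slug from the model card in the Model Catalog.
BASE_URL = "https://inference.do-ai.run/v1"
MODEL = "llama3.3-70b-instruct"

def build_chat_request(prompt, temperature=0.7, max_tokens=256):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # higher values produce more varied output
        "max_tokens": max_tokens,    # cap on the number of generated tokens
    }

payload = build_chat_request("Write a haiku about the ocean.")
print(json.dumps(payload, indent=2))

# Send the request only if a model access key is set in the environment.
key = os.environ.get("DO_MODEL_ACCESS_KEY")
if key:
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Tuning `temperature` and `max_tokens` here mirrors adjusting the same settings in the Model Playground, so you can reproduce a playground configuration programmatically once you have a model access key.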