How to Use Serverless Inference After Updating to Another Model

Validated on 10 Apr 2026 • Last edited on 16 Apr 2026

DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare capabilities and pricing, and run inference using serverless or dedicated deployments. DigitalOcean Gradient AI Inference Hub is in private preview. You can contact support for questions or assistance.

If you change the foundation model at any time, you must take the following steps:

  • Update the model ID in your code: Change the model ID parameter in your CLI and API calls, serverless inference requests, and ADK code to the new model's ID.

  • Review prompt logic: While new models are largely backward compatible, we recommend reviewing your system prompts, as the new model follows instructions more precisely. You may need to adjust your prompts to get the desired response format.
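The steps above can be sketched in code. The snippet below is a minimal illustration of centralizing the model ID so that switching foundation models is a one-line change rather than a search-and-replace across your codebase; the model IDs and the chat-style payload shape here are hypothetical examples, not a definitive request format.

```python
# Hypothetical model IDs used for illustration only.
OLD_MODEL_ID = "llama-3.1-8b-instruct"   # previous foundation model
NEW_MODEL_ID = "llama-3.3-70b-instruct"  # replacement foundation model

def build_inference_request(prompt: str, model_id: str = NEW_MODEL_ID) -> dict:
    """Build a chat-style serverless inference payload for the given model.

    Keeping model_id as a single parameter means a model switch only
    requires updating NEW_MODEL_ID, not every call site.
    """
    return {
        "model": model_id,
        "messages": [
            # Per the guidance above, review this system prompt after a
            # model switch, since a new model may follow it more literally.
            {"role": "system", "content": "Respond in plain text."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_inference_request("Summarize this support ticket.")
print(request["model"])  # → llama-3.3-70b-instruct
```

Keeping the model ID in one constant (or an environment variable) also makes it easy to roll back to the previous model while you re-validate prompt behavior.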
