Inference Release Notes

Validated on 1 May 2026

April 2026

28 April

  • Dedicated Inference is now in General Availability.

  • You can now browse Model Catalog through a DigitalOcean MCP server.

  • Batch inference lets you submit text-only batch jobs for OpenAI and Anthropic models. Using batch inference significantly reduces cost compared to real-time inference. For more information, see Use Batch Inference.
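As a rough illustration of what a text-only batch submission might look like, the sketch below builds a JSONL batch file in the OpenAI-style batch format. The endpoint path, model slug, and field names here are assumptions for the example, not confirmed values; see Use Batch Inference for the actual request shape.

```python
import json

# Sketch of a text-only batch file in OpenAI-style JSONL format.
# The "/v1/chat/completions" path and the model slug are illustrative
# assumptions, not confirmed values from these release notes.
requests = [
    {
        "custom_id": f"req-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "openai-gpt-oss-20b",  # example model slug
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(
        ["Summarize these release notes.", "List three batch use cases."]
    )
]

# One JSON object per line, as batch APIs typically expect.
jsonl = "\n".join(json.dumps(r) for r in requests)
print(jsonl.splitlines()[0])
```

Each line carries a `custom_id` so results can be matched back to requests once the asynchronous job completes.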

  • The following Google model is now available on DigitalOcean Inference for serverless inference:

    For more information, see the Available Models page.

  • Bring Your Own Models (BYOM) is now available in Model Catalog. You can import models from Hugging Face or from DigitalOcean Spaces buckets or folders. For details, see Import a Model.

  • Model Catalog is now in General Availability.

  • You can now evaluate models available for serverless inference, inference routers, and dedicated inference deployments using a judge model. Scoring includes metrics such as correctness, completeness, faithfulness to ground truth, and safety. This feature is in public preview; you can opt in from the Feature Preview page. For more information, see Evaluate Models.

  • We now support multimodal models for serverless inference. Multimodal models process and generate content across multiple data types, including images, audio, video, and text, enabling a much broader range of real-world applications such as document intelligence, voice agents, content generation, and accessibility tools. For more information, see Use Multimodal Inference.
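A minimal sketch of a multimodal chat request, assuming an OpenAI-style content-parts schema: the message mixes a text part with a base64 data-URL image part. The model slug and the exact part structure are assumptions; check Use Multimodal Inference for the supported shape.

```python
import base64

# Illustrative multimodal chat request using OpenAI-style content parts.
# The model slug and the content-part schema are assumptions, not the
# confirmed API shape.
image_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real image data
image_b64 = base64.b64encode(image_bytes).decode()

request_body = {
    "model": "example-multimodal-model",  # hypothetical slug
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
}
print(request_body["messages"][0]["content"][0]["text"])
```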

  • The Model Playground now supports the following features when testing and comparing models:

    • Uploading images from local storage

    • Generating multimodal artifacts, such as images, audio, and text-to-speech, from models that support it

    Read Test and Compare Models for more information.

  • The following NVIDIA model is now available on Inference for serverless inference:

    For more information, see the Available Models page.

  • You can now use DigitalOcean personal access tokens for authenticating serverless inference requests. You can use a personal access token as an alternative to a model access key when sending requests to the serverless inference API. Model access keys remain recommended when you need per-application scoping, VPC restriction, or credentials dedicated to inference workloads. For more information, see Serverless Inference Overview.
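In practice, switching to a personal access token mostly means changing the bearer credential on the request. The sketch below builds the headers; the base URL shown is an assumption for illustration, and the real endpoint is documented in the Serverless Inference Overview.

```python
import os

# Sketch: authenticating a serverless inference request with a
# DigitalOcean personal access token instead of a model access key.
# The base URL below is an assumed placeholder for the example.
token = os.environ.get("DIGITALOCEAN_TOKEN", "dop_v1_example")
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}
base_url = "https://inference.do-ai.run/v1"  # assumed endpoint
print(headers["Authorization"].split()[0])
```

Because the token goes in the standard `Authorization: Bearer` header, existing OpenAI-compatible clients can typically be pointed at the inference endpoint unchanged.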

  • The following models are now available on DigitalOcean AI Inference for serverless inference:

    For more information, see the Foundation models page.

  • As part of the DigitalOcean AI-Native Cloud, DigitalOcean AI Inference Hub is now named Inference.

  • Inference Router is now available in public preview and enabled for all users. With this feature, you can group multiple models into a model pool and configure routing rules and a selection policy for inference requests. We provide pre-built templates, or you can define custom task-matching logic using natural language, with configurable fallback support for reliability. For more information, see Inference Router.
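To make the pieces concrete, here is a hypothetical router configuration: a model pool, one natural-language routing rule, and a fallback model. Every field name is illustrative, not the actual API schema; the small helper only demonstrates the rule-then-fallback selection idea.

```python
# Hypothetical inference-router configuration. Field names are
# illustrative assumptions, not the real schema.
router_config = {
    "name": "support-router",
    "model_pool": ["llama3.3-70b-instruct", "openai-gpt-oss-20b"],
    "rules": [
        {
            # Natural-language task matching, as described in the note.
            "match": "requests that involve writing or debugging code",
            "route_to": "llama3.3-70b-instruct",
        }
    ],
    "selection_policy": "task-match",
    "fallback_model": "openai-gpt-oss-20b",
}

def select_model(config, matched_rule_index=None):
    """Pick the routed model when a rule matched; otherwise fall back."""
    if matched_rule_index is not None:
        return config["rules"][matched_rule_index]["route_to"]
    return config["fallback_model"]

print(select_model(router_config, 0))
print(select_model(router_config))
```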

  • DigitalOcean AI Inference now supports scoped model access keys. When you create a key, you can limit it to specific foundation models and inference routers, enable batch inference, and restrict it to a VPC network so that only requests from that VPC network can authenticate. Team owners can also view and manage keys created by other team members. Previously created keys continue to authenticate without changes. For more information, see Model Access Keys.
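A scoped key combines the restrictions described above into a single credential. The sketch below is an assumed request body only; every field name and the placeholder VPC ID are illustrative, and the Model Access Keys documentation has the real schema.

```python
# Illustrative request body for creating a scoped model access key.
# All field names and the VPC ID are assumptions for this sketch.
key_request = {
    "name": "analytics-batch-key",
    "allowed_models": ["llama3.3-70b-instruct"],
    "allowed_routers": [],
    "batch_inference_enabled": True,
    "vpc_uuid": "00000000-0000-0000-0000-000000000000",  # placeholder
}
print(sorted(key_request))
```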

3 April

  • The following models are deprecated from the Model Catalog:

    • Meta Llama 3.1 8B-Instruct
    • Mistral NeMo

    Migrate to the Llama 3.3 70B-Instruct (llama3.3-70b-instruct) and gpt-oss-20b (openai-gpt-oss-20b) models, respectively, to avoid service disruption.
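Migration usually amounts to swapping the model slug in each request. A minimal helper, assuming the deprecated slugs shown here (the note gives only the replacement slugs, so the old ones are illustrative guesses):

```python
# Map deprecated model slugs to their recommended replacements.
# The replacement slugs come from the deprecation note; the deprecated
# slugs on the left are assumed for illustration.
DEPRECATED_MODELS = {
    "llama3.1-8b-instruct": "llama3.3-70b-instruct",
    "mistral-nemo-instruct": "openai-gpt-oss-20b",
}

def migrate_model(slug: str) -> str:
    """Return the replacement for a deprecated slug, else the slug itself."""
    return DEPRECATED_MODELS.get(slug, slug)

print(migrate_model("llama3.1-8b-instruct"))
```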
