DigitalOcean AI Data Privacy
Validated on 27 Apr 2026 • Last edited on 27 Apr 2026
Inference provides a single control plane for managing inference workflows. It includes a Model Catalog where you can:

- View available foundation models, including both DigitalOcean-hosted and third-party commercial models.
- Compare model capabilities and pricing.
- Use routing to match inference requests to the best-fit model.
- Run inference using serverless or dedicated deployments.
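As a hedged sketch of what running serverless inference can look like, the snippet below assembles an OpenAI-compatible chat completion request. The base URL, access key, and model slug are illustrative placeholders, not confirmed platform values.

```python
# Hypothetical sketch: assembling a chat completion request for a
# serverless inference deployment. The endpoint URL, API key, and
# model name below are placeholders, not confirmed values.
import json
import urllib.request


def build_inference_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble (but do not send) an OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_inference_request(
    "https://inference.example.com/v1",  # placeholder base URL
    "MODEL_ACCESS_KEY",                  # placeholder credential
    "llama3.3-70b-instruct",             # placeholder model slug
    "Summarize our data retention policy in one sentence.",
)
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) returns the model's completion; the request body itself is what carries your input to the deployment.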
We do not store inputs or outputs on DigitalOcean infrastructure for any models.
For more information about DigitalOcean’s security practices, see our security page.
DigitalOcean Hosted Models
For the Llama, Mistral, and DeepSeek models, input is stored in the local browser and sent to an agent’s model for inference on DigitalOcean’s infrastructure. The returned output is then stored in the local browser’s storage and displayed in the agent’s interface. If you’ve configured your agent to use prior parts of the conversation as additional context for output, the agent accesses the browser storage as necessary to retrieve the context.
Customer data submitted to DigitalOcean-hosted models for inference is not used to train, retrain, or fine-tune any models on the DigitalOcean platform. This data is also not shared with any third parties for training or fine-tuning.
For custom interfaces and applications you have developed that use the DigitalOcean AI Platform, you choose where input and output data is stored.
Third-Party Models
We do not store agent input or output when using third-party model providers such as OpenAI and Anthropic.
When using OpenAI models on DigitalOcean AI Platform, OpenAI’s zero data retention policy excludes customer content from abuse monitoring logs, treats the `store` parameter for `/v1/responses` and `/v1/chat/completions` as false, and may still allow certain endpoints to store limited application state.
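As an illustration of the `store` parameter mentioned above, a request body that explicitly opts out of storage might look like the following. The model name and message content are placeholders.

```python
import json

# Hypothetical request body for /v1/chat/completions with the `store`
# parameter explicitly set to false, mirroring the zero-data-retention
# behavior described above. The model name is a placeholder.
body = {
    "model": "gpt-4o",   # placeholder model name
    "store": False,      # request that the completion not be stored
    "messages": [
        {"role": "user", "content": "Classify this support ticket."},
    ],
}
print(json.dumps(body, indent=2))
```

Under a zero data retention arrangement, `store` is treated as false even if a request omits it; setting it explicitly simply makes the intent visible in your own code.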
When using Anthropic models, Anthropic applies a Zero Retention Policy and deletes prompts and outputs after generation, except where required by law or to address malicious use. Anthropic may run risk classification models on prompts and outputs to detect potential Acceptable Use Policy or agreement violations.
Data sent to other third-party model providers is handled according to the applicable provider’s policies.
Batch Inferencing
For batch inference, we are a pass-through for OpenAI and Anthropic models and do not retain your batch inputs or outputs. OpenAI’s and Anthropic’s storage policies apply:
| Data | Retention |
|---|---|
| Output and error result files | 29–30 days from job completion |
| Prompts and request content | Not stored for training or any purpose beyond the current batch job |
| Policy-violating content (evidence) | Retained for up to 1 year for law enforcement compliance; never accessible to end users |
After the retention window, all input and output artifacts are permanently purged and cannot be recovered. Download your results before the retention window closes.
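To schedule downloads before results are purged, a small helper like the one below can compute the earliest purge time. This is a sketch under stated assumptions: it uses 29 days, the conservative end of the 29–30 day range above, and the job completion times are illustrative.

```python
# Hypothetical helper: check whether a batch job's result files are still
# within the retention window. Uses 29 days, the conservative end of the
# documented 29-30 day range; dates below are illustrative.
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION = timedelta(days=29)  # conservative end of the 29-30 day window


def results_expire_at(completed_at: datetime) -> datetime:
    """Return the earliest time the output/error files may be purged."""
    return completed_at + RETENTION


def results_available(completed_at: datetime, now: Optional[datetime] = None) -> bool:
    """True if the result files should still be downloadable."""
    now = now or datetime.now(timezone.utc)
    return now < results_expire_at(completed_at)


completed = datetime(2026, 4, 1, tzinfo=timezone.utc)
print(results_expire_at(completed))  # 2026-04-30 00:00:00+00:00
print(results_available(completed, datetime(2026, 5, 5, tzinfo=timezone.utc)))  # False
```

Checking against the conservative 29-day bound means you never rely on the extra day that may or may not be granted.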