DigitalOcean Gradient™ AI Data Privacy

Validated on 12 Feb 2025 • Last edited on 10 Nov 2025

DigitalOcean Gradient™ AI Platform lets you build fully managed AI agents with knowledge bases for retrieval-augmented generation, multi-agent routing, guardrails, and more, or use serverless inference to make direct requests to popular foundation models.

We do not store agent inputs or outputs on DigitalOcean infrastructure for any model. For more information about DigitalOcean’s security practices, see our security page.

DigitalOcean Hosted Models

For the Llama, Mistral, and DeepSeek models, input is stored in your browser’s local storage and sent to the agent’s model for inference on DigitalOcean’s infrastructure. The returned output is also stored in the browser’s local storage and displayed in the agent’s interface. If you’ve configured your agent to use prior parts of the conversation as additional context, the agent retrieves that context from the browser storage as needed. For custom interfaces and applications you develop with the Gradient AI Platform, you choose where to store this data, as in the sketch below.
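For example, a custom browser client could keep conversation history in localStorage and replay it as context on each request, so all conversation data stays on the client. The following is a minimal sketch, assuming an OpenAI-compatible chat completions endpoint; the endpoint URL, access key, and response shape are illustrative placeholders, not the platform’s documented API, so substitute the values shown for your agent.

```ts
// Minimal sketch of a custom browser client for a Gradient AI Platform agent.
// The endpoint URL, access key, and response shape are assumptions for
// illustration only; replace them with the values for your own agent.

type ChatMessage = { role: "user" | "assistant"; content: string };

const AGENT_ENDPOINT = "https://example-agent.agents.do-ai.run/api/v1/chat/completions"; // hypothetical
const AGENT_ACCESS_KEY = "YOUR_AGENT_ACCESS_KEY"; // placeholder
const STORAGE_KEY = "agent-conversation";

// The client, not DigitalOcean, decides where conversation history lives.
// Here it is kept in the browser's localStorage so prior turns can be
// replayed as context on the next request.
function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function saveHistory(history: ChatMessage[]): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}

async function askAgent(prompt: string): Promise<string> {
  // Include prior turns as context for the new request.
  const messages = [...loadHistory(), { role: "user" as const, content: prompt }];

  const response = await fetch(AGENT_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${AGENT_ACCESS_KEY}`,
    },
    body: JSON.stringify({ messages }),
  });
  if (!response.ok) {
    throw new Error(`Agent request failed: ${response.status}`);
  }

  // Assumed OpenAI-style response shape: { choices: [{ message: { content } }] }.
  const data = await response.json();
  const reply: string = data.choices[0].message.content;

  // Persist both turns client-side, consistent with the data flow above.
  saveHistory([...messages, { role: "assistant", content: reply }]);
  return reply;
}

// Usage: askAgent("Summarize my last ticket.").then(console.log);
```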

Third-Party Models

We do not store agent input or output for model providers like Anthropic and OpenAI. OpenAI’s Zero Data Retention policy applies when using OpenAI commercial models on the Gradient AI Platform. Data sent to other third-party models is stored in accordance with the model provider’s policies. For more information, see the specific model provider’s policy.
