GradientAI Data Privacy

Validated on 12 Feb 2025 • Last edited on 8 Jul 2025

GradientAI Platform lets you build fully managed AI agents with knowledge bases for retrieval-augmented generation, multi-agent routing, guardrails, and more, or use serverless inference to make direct requests to popular foundation models.

We do not store agent inputs or outputs on DigitalOcean infrastructure for any model. For more information about DigitalOcean’s security practices, see our security page.

DigitalOcean Hosted Models

For the Llama, Mistral, and DeepSeek models, input is stored in the browser’s local storage and sent to the agent’s model for inference on DigitalOcean’s infrastructure. The returned output is then also stored in the browser’s local storage and displayed in the agent’s interface. If you’ve configured your agent to use prior parts of the conversation as additional context for output, the agent retrieves that context from browser storage as needed. For custom interfaces and applications you have developed to use GradientAI Platform, you choose where to store this data.
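In a custom application, the conversation context lives wherever you decide to keep it rather than in browser storage. The sketch below illustrates one way to do this, keeping prior turns in application memory and replaying them as context on each request; the endpoint URL, access-key header, and payload shape here are assumptions modeled on an OpenAI-style chat API, not details confirmed by this page.

```python
# Hypothetical sketch: a custom client that owns its own conversation
# storage instead of relying on browser storage. The endpoint, credential,
# and payload shape are assumptions, not taken from this page.
import json
import urllib.request

AGENT_ENDPOINT = "https://example-agent.ondigitalocean.app/api/v1/chat/completions"  # assumed URL
AGENT_ACCESS_KEY = "your-agent-access-key"  # assumed credential


class ConversationStore:
    """Holds prior turns locally; you decide where this data lives
    (memory, a file, a database, etc.)."""

    def __init__(self):
        self.turns = []  # list of {"role": ..., "content": ...}

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})

    def as_context(self):
        # A copy of the prior turns, sent back to the agent as context.
        return list(self.turns)


def ask(store, user_message):
    """Send the stored conversation plus the new message to the agent."""
    store.add("user", user_message)
    payload = {"messages": store.as_context()}
    req = urllib.request.Request(
        AGENT_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {AGENT_ACCESS_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    store.add("assistant", reply)
    return reply
```

Because the client holds the history, swapping `ConversationStore` for a database-backed implementation changes where the data is retained without touching the request logic.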

Third-Party Models

We do not store input or output for third-party model providers like Anthropic. Data sent to third-party models is stored in accordance with that model provider’s policies. Reference your model provider’s policies for more information.
