GenAI Platform Limits Public Preview

DigitalOcean GenAI Platform lets you build GPU-powered AI agents with fully managed deployment. Agents can use pre-built or custom foundation models, incorporate function and agent routes, and implement retrieval-augmented generation (RAG) pipelines with knowledge bases.

Limits

  • Currently, the GenAI Platform is only available in the Toronto datacenter region.
  • You cannot currently use doctl, the official DigitalOcean CLI, to manage your GenAI Platform resources.
  • You cannot bring your own models for your Artificial Intelligence (AI) agents. You can see the models we currently offer on our model overview page.
  • You currently cannot change embedding models after creating a knowledge base.
  • In the Model Playground, teams have a daily token limit for each model. These tokens are separate from agent tokens and replenish every 24 hours. For example, if you use 500 tokens testing the DeepSeek model at 09:05 on Wednesday, those tokens are replenished at 09:05 on Thursday. Token limits are not exposed to users, but if you regularly reach this limit during testing, you can request an increase by contacting our Support team.
  • When creating a knowledge base (KB), you can only select a DigitalOcean Space where your data is located as the data source; you cannot upload local files directly. First upload your data to a DigitalOcean Space, then create the KB to attach to your agent using the GenAI Platform (see the sketch after this list).
  • Currently, we only support web functions for function routing from agents.
  • If you have a public agent that calls a private function, anyone who has the function’s URL can call the private function. We recommend setting your function to Secure Web Function, which enables authentication (see the example after this list). For more information on configuring the function’s access and security, see Access and Security in the Functions documentation.
  • To manage compute resources and ensure fair resource distribution, GenAI Platform has limits on resource creation and model usage. If you require a limit increase, contact support.
  • When web crawling a data source, we limit the total number of crawled pages to 5500 to prevent excessively large indexing jobs.
  • When web crawling a data source, you cannot currently re-index a previously crawled seed URL. To re-index the content, delete the seed URL as a data source and re-add it to start a new crawl.
  • Currently, you cannot customize detection rules for guardrails.
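
A minimal sketch of the knowledge base upload step described above, assuming a hypothetical Space named my-kb-space, credentials in the SPACES_KEY and SPACES_SECRET environment variables, and placeholder region and file names. Spaces exposes an S3-compatible API, so boto3 can upload the files that the KB later indexes from the Space.

```python
import os

import boto3

# Spaces speaks the S3 API; point boto3 at your Space's regional endpoint.
# The region, bucket, and file names below are hypothetical examples.
session = boto3.session.Session()
client = session.client(
    "s3",
    region_name="tor1",
    endpoint_url="https://tor1.digitaloceanspaces.com",
    aws_access_key_id=os.environ["SPACES_KEY"],
    aws_secret_access_key=os.environ["SPACES_SECRET"],
)

# Upload a local document so the knowledge base can use this Space as its data source.
client.upload_file("docs/faq.md", "my-kb-space", "faq.md")
print("Uploaded faq.md to my-kb-space")
```

With the data in the Space, you can then select that Space as the data source when creating the KB in the GenAI Platform.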
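The following sketch shows a client calling a secured web function, assuming the function was configured with a shared secret stored in the FUNCTION_SECRET environment variable; the URL is a placeholder. Secured web functions on the underlying OpenWhisk runtime expect the secret in the X-Require-Whisk-Auth header, so a caller with only the URL is rejected.

```python
import os

import requests

# Hypothetical function URL; copy the real one from the function's settings page.
FUNCTION_URL = "https://faas-tor1-example.doserverless.co/api/v1/web/fn-example/sample/hello"

# Requests without the correct X-Require-Whisk-Auth header receive an
# authentication error, so knowing the URL alone no longer grants access.
resp = requests.get(
    FUNCTION_URL,
    headers={"X-Require-Whisk-Auth": os.environ["FUNCTION_SECRET"]},
)
resp.raise_for_status()
print(resp.text)
```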