Test DigitalOcean Knowledge Base Retrieval Using RAG Playground
Validated on 14 Apr 2026 • Last edited on 27 Apr 2026
DigitalOcean Knowledge Bases let you store, index, and retrieve data from private files, websites, Spaces buckets, and other sources to power retrieval-augmented generation with your own content.
RAG Playground provides an interface where you can test how foundation models answer questions using content retrieved from a knowledge base. You can run queries against a knowledge base, review the generated answer, inspect the retrieved chunks, and adjust model settings to evaluate different results. You can use RAG Playground to test retrieval and responses before attaching a knowledge base to an agent.
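Under the hood, this follows the standard retrieve-then-generate pattern: fetch the chunks most relevant to the query, then pass them to the model as context. As a rough illustration only, and not DigitalOcean's actual implementation, a minimal sketch of that pattern with a naive keyword retriever and a placeholder OpenAI-compatible endpoint might look like:

```python
# Minimal retrieve-then-generate sketch. Illustrative only: the endpoint,
# model name, and keyword retriever are placeholders, not DigitalOcean's
# actual retrieval pipeline.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",  # placeholder OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

CHUNKS = [
    "VPN access requests go through the IT portal and take one business day.",
    "New hires complete onboarding paperwork before their start date.",
]

def search_knowledge_base(query: str, top_k: int = 2) -> list[str]:
    # Naive keyword-overlap retrieval; a real knowledge base uses
    # embeddings and a vector index instead.
    terms = set(query.lower().split())
    scored = sorted(
        CHUNKS,
        key=lambda chunk: len(terms & set(chunk.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(query: str) -> str:
    # Join the retrieved chunks into a context block and ask the model
    # to answer from that context only.
    context = "\n\n".join(search_knowledge_base(query))
    response = client.chat.completions.create(
        model="example-model",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content
```

RAG Playground runs this loop for you against the knowledge base's real index, so you can focus on evaluating the retrieved chunks and the generated answers.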
RAG Playground usage is billed based on the selected serverless inference model. For more information, see our pricing page.
To open RAG Playground, go to the DigitalOcean Control Panel. In the left menu, click Agent Platform, then click the Knowledge Bases tab. Click the knowledge base you want to test, and then click the RAG Playground tab.
Configure RAG Playground
To configure RAG Playground, click the Instructions tab on the right. Under System Instructions, enter instructions that define how the selected serverless inference model should respond, such as tone, format, constraints, or task-specific behavior. Keep instructions clear and specific, and include only the guidance needed for the model's role. For best practices and examples, see System Instructions Best Practices.
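For example, a hypothetical instruction set for a documentation assistant might look like this:

```
You are a support assistant for internal product documentation.
Answer only from the retrieved content. If the retrieved chunks do not
contain the answer, say so instead of guessing. Keep answers under
three sentences and use a neutral, professional tone.
```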
Then, click Update instructions to save them.
Then, click the Settings tab and configure these settings if needed (a sketch of the equivalent API parameters follows the list):
- Max Tokens: The maximum number of tokens the selected serverless inference model can generate in its response. Use the Max Tokens slider to set the limit. Increase it for longer, more detailed responses and lower it for shorter responses. Larger responses can increase usage and cost.
- Temperature: Controls how consistent or varied the selected serverless inference model’s responses are. Use the Temperature slider to set the value. Lower values produce more predictable responses, while higher values allow for more variation.
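These two settings map to the `max_tokens` and `temperature` parameters found in OpenAI-compatible chat APIs. As a hedged sketch, with a placeholder endpoint and model name rather than a documented DigitalOcean URL, the equivalent request-level configuration might look like:

```python
# Illustrative only: the endpoint and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="example-model",
    messages=[{"role": "user", "content": "Summarize the onboarding guide."}],
    max_tokens=256,   # caps response length, like the Max Tokens slider
    temperature=0.2,  # low value favors predictable output, like the Temperature slider
)
print(response.choices[0].message.content)
```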
After adding your system instructions and configuring your settings, run queries to compare the results across different configurations.
Test Queries Against a Knowledge Base
To test a query, in the top-left, click the model dropdown list, and then choose a model to use for testing your knowledge base. The selected model uses the content retrieved from the current knowledge base to generate a response.
In the Chat with your agent field of the message box, enter the prompt you want to test. For prompt-writing guidance and examples, see Prompts Best Practices. Then, click ↑ to send your prompt.
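For example, a prompt that targets a specific document in your knowledge base (the guide named here is hypothetical) might be:

```
According to the onboarding guide, what are the steps to request VPN access?
```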
After the response is generated, review the retrieved data below the message box. Each chunk shows its source, page number when available, and whether it was used to generate the response.
After testing your knowledge base, you can view indexing jobs, retrieve data, or manage data sources.