How to Test Agents
Validated on 9 Oct 2024 • Last edited on 20 May 2025
The DigitalOcean GenAI Platform lets you work with popular foundation models and build GPU-powered AI agents with fully-managed deployment, or send direct requests using serverless inference. Create agents that incorporate guardrails, functions, agent routing, and retrieval-augmented generation (RAG) pipelines with knowledge bases.
The Agent Playground lets you test your agent’s performance, including how it uses knowledge bases and agent routing. You can adjust model settings to see how they affect responses, then apply those changes to the agent if needed.
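You can also exercise an agent outside the control panel by sending requests directly to its endpoint. The following is a minimal sketch, assuming the agent exposes an OpenAI-style chat completions route at its endpoint URL and that you have an endpoint access key; check the agent’s Overview page for the actual endpoint and keys.

```python
import os
import requests

# Assumed values: take these from your agent's Overview page in the control panel.
AGENT_ENDPOINT = os.environ["AGENT_ENDPOINT"]      # the agent's endpoint URL
AGENT_ACCESS_KEY = os.environ["AGENT_ACCESS_KEY"]  # an endpoint access key for the agent

# Assumption: the agent endpoint exposes an OpenAI-style chat completions route.
url = f"{AGENT_ENDPOINT}/api/v1/chat/completions"

payload = {
    "messages": [
        {"role": "user", "content": "What plans do you offer for small teams?"}
    ],
    "stream": False,
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {AGENT_ACCESS_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# Print the agent's reply from the first choice in the response.
print(response.json()["choices"][0]["message"]["content"])
```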
Use Agent Playground in Control Panel
To open the Agent Playground, go to the DigitalOcean Control Panel and click GenAI Platform in the left-side menu, then click the Agents tab. On the Agents page, select the agent you want to test, then click the Playground tab on the agent’s page.
In the Agent Playground, you can modify the agent’s instructions in the Instructions tab and the model’s configuration in the Settings tab. You can adjust the following settings:
- Max Tokens: Defines the maximum number of output tokens the model can generate for a response.
  To find the maximum input tokens for a model, check the model’s documentation for its context length. For other model-specific details, see the models page.
- Temperature: Controls the model’s creativity, specified as a number between 0 and 1. Lower values produce more predictable and conservative responses, while higher values encourage creativity and variation.
- Top P: Defines the cumulative probability threshold for token selection, specified as a number between 0 and 1. Higher values allow for more diverse outputs, while lower values ensure focused and coherent responses.
- K-Value: Controls the number of candidate tokens considered when selecting the next token. Higher values increase the number of tokens considered, allowing for more diverse and creative responses.
- Retrieval Method: Provides agents with additional guidance for retrieving information and generating responses.
- Include Citations: Adds a Message Info link below each response that lets you see the sources, functions, and guardrails the agent used to generate the response.
- Agent Instructions: Context that informs the agent about its purpose and the types of information it should and shouldn’t retrieve.
See Configure Model Settings for more details about each setting.
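The same knobs roughly correspond to common inference parameters when you call a model or agent programmatically. The snippet below is an illustration only, assuming an OpenAI-style request body; the parameter names (max_tokens, temperature, top_p, top_k) and values are example assumptions, not settings pulled from your agent.

```python
# Illustrative mapping of playground settings to request parameters in an
# OpenAI-style chat completions call (names and values are assumptions).
payload = {
    "messages": [
        {"role": "user", "content": "Summarize our refund policy."}
    ],
    "max_tokens": 512,    # Max Tokens: cap on output tokens for the response
    "temperature": 0.3,   # Temperature: 0-1, lower values are more predictable
    "top_p": 0.9,         # Top P: cumulative probability cutoff for sampling
    "top_k": 40,          # K-Value: number of candidate tokens considered
    "stream": False,
}
```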
Adjusting any of these settings immediately applies the changes to the agent’s performance in the playground. To revert any changes you’ve made to the agent, click Reset to Current Settings.
Once you’re satisfied with your agent’s settings in the playground, apply them to the agent by clicking the Update Settings button. In the confirmation prompt that opens, type the agent’s name, then click Confirm to update the agent with the new settings.
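If you manage agents through the DigitalOcean API instead of the control panel, the equivalent change is an update request against the agent resource. The sketch below is an assumption-heavy example: the gen-ai agents path and the field names (temperature, top_p, k, max_tokens) are placeholders to verify against the current API reference before use.

```python
import os
import requests

API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]  # a DigitalOcean API token
AGENT_UUID = os.environ["AGENT_UUID"]         # the agent's ID from the control panel

# Assumed endpoint path and field names; confirm both in the API reference.
url = f"https://api.digitalocean.com/v2/gen-ai/agents/{AGENT_UUID}"

settings = {
    "temperature": 0.3,
    "top_p": 0.9,
    "k": 40,
    "max_tokens": 512,
}

response = requests.put(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=settings,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```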
View Agent Response Citations
You can view the knowledge base sources, functions, and guardrails the agent used to generate its response.
To see the agent’s response citations, select the Include Citations checkbox in the Settings tab. This adds a Message Info link below each response from the agent.
To see the citations for a response, click Message Info. This opens a window that displays the following information:
- Token Usage: The number of tokens used to generate the response, broken down into prompt tokens and response tokens.
- Retrieved data from knowledge base: A list of knowledge bases accessed by the agent and the subsequent files retrieved from each knowledge base to generate the response.
- Functions used: A list of serverless functions used by the agent to generate the response.
- Guardrails triggered: A list of guardrails that the agent encountered while generating the response, including the reason for triggering the guardrail.
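Similar details are available when you call the agent endpoint directly. The sketch below builds on the earlier request example and assumes the response follows an OpenAI-style shape with a usage block, and that the endpoint accepts flags such as include_retrieval_info to return retrieval details; treat those flag and field names as assumptions to verify against the GenAI Platform API reference.

```python
import os
import requests

AGENT_ENDPOINT = os.environ["AGENT_ENDPOINT"]
AGENT_ACCESS_KEY = os.environ["AGENT_ACCESS_KEY"]

# Assumed flags for returning retrieval, function, and guardrail details;
# confirm the exact parameter names in the API reference.
payload = {
    "messages": [{"role": "user", "content": "Which plan includes priority support?"}],
    "include_retrieval_info": True,
    "include_functions_info": True,
    "include_guardrails_info": True,
    "stream": False,
}

response = requests.post(
    f"{AGENT_ENDPOINT}/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {AGENT_ACCESS_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
data = response.json()

# Token usage, assuming an OpenAI-style usage block (prompt vs. completion tokens).
usage = data.get("usage", {})
print("Prompt tokens:", usage.get("prompt_tokens"))
print("Response tokens:", usage.get("completion_tokens"))

# Retrieval details, if the endpoint returns them under a top-level key.
print("Retrieval info:", data.get("retrieval"))
```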