How to Test Agents

Validated on 9 Oct 2024 • Last edited on 23 Apr 2025

DigitalOcean GenAI Platform lets you build GPU-powered AI agents with fully managed deployment. Agents can use pre-built or custom foundation models, incorporate function and agent routes, and implement RAG pipelines with knowledge bases.

The Agent Playground lets you test your agent’s performance, including how it uses knowledge bases (KBs) and agent routing. You can adjust model settings to see how they affect responses, then apply those changes to the agent if needed.

Note
All tokens used in the Agent Playground are charged the same rate as tokens used in live agent interactions.

Use Agent Playground in Control Panel

To open the Agent Playground, go to the DigitalOcean Control Panel and click GenAI Platform in the left-side menu. Click the Agents tab, select the agent you want to test, and then click the Playground tab on the agent's page.

In the Agent Playground, you can modify the agent’s instructions in the Instructions tab and the model’s configuration in the Settings tab. You can adjust the following settings:

  • Max Tokens: Defines the maximum number of tokens the model can process in a single input or output. For model-specific limits, see the models page.

  • Temperature: Controls the model’s creativity, specified as a number between 0 and 1. Lower values produce more predictable and conservative responses, while higher values encourage creativity and variation.

  • Top P: Defines the cumulative probability threshold for word selection, specified as a number between 0 and 1. Higher values allow for more diverse outputs, while lower values ensure focused and coherent responses.

  • K-Value: Controls the number of tokens to consider when selecting the next word. Higher values increase the number of tokens considered, allowing for more diverse and creative responses.

  • Retrieval Method: Provides agents with additional guidance for retrieving information and generating responses.

  • Include Citations: Adds a Message Info link below each response that allows you to see the sources, functions, and guardrails used by the agent to generate the response.

  • Agent Instructions: Context that informs the agent about its purpose and the types of information it should and shouldn’t retrieve.

See Configure Model Settings for more details about each setting.
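To build intuition for how Temperature, Top P, and K-Value interact, the following is a toy illustration in Python. It is not the platform's implementation; it only shows, on a made-up three-token distribution, how each setting reshapes which tokens the model can choose next.

```python
import math

def next_token_probs(logits, temperature=1.0, top_p=1.0, k=None):
    """Toy next-token filter illustrating Temperature, Top P, and K-Value."""
    # Temperature: divide logits before softmax; lower values sharpen
    # the distribution (more predictable), higher values flatten it.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}

    # K-Value: keep only the k most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if k is not None:
        ranked = ranked[:k]

    # Top P: keep the smallest prefix of tokens whose cumulative
    # probability reaches the top_p threshold.
    kept, cumulative = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize the surviving tokens into a proper distribution.
    norm = sum(p for _, p in kept)
    return {tok: p / norm for tok, p in kept}

# Hypothetical logits for three candidate next tokens.
logits = {"plans": 2.0, "pricing": 1.5, "weather": 0.2}
print(next_token_probs(logits, temperature=0.5, top_p=0.9, k=2))
```

With a low temperature and `k=2`, the unlikely "weather" token is cut entirely and the remaining probability mass concentrates on the top candidates, which is the "focused and coherent" end of the settings described above.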

Adjusting any of these settings immediately changes how the agent behaves in the playground. To revert any changes you've made, click Reset to Current Settings.

Once you’re satisfied with your agent’s settings in the playground, you can apply them to the agent by clicking the Update Settings button. This opens a confirmation prompt. Type the agent’s name to confirm the changes, then click Confirm. This updates the agent with the new settings.
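If you prefer to apply validated settings programmatically rather than through the Update Settings button, the DigitalOcean public API exposes GenAI agent endpoints. The sketch below only builds the request; the endpoint path (`/v2/gen-ai/agents/{uuid}`), the field names in the body, and all values are assumptions to confirm against the GenAI Platform API reference before use.

```python
import json

# Hypothetical placeholders; substitute your own token and agent UUID.
API_TOKEN = "your_do_api_token"
AGENT_UUID = "00000000-0000-0000-0000-000000000000"

# Assumed endpoint path, following DigitalOcean public API conventions.
url = f"https://api.digitalocean.com/v2/gen-ai/agents/{AGENT_UUID}"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

# The settings you validated in the playground; field names are assumed.
body = {
    "temperature": 0.7,
    "top_p": 0.9,
    "max_tokens": 512,
    "instruction": "You are a product support agent.",
}

# To send the update, uncomment the following lines:
# import requests
# resp = requests.put(url, headers=headers, json=body, timeout=30)
# resp.raise_for_status()

print(json.dumps(body, indent=2))
```

Keeping the request body in version control alongside your agent's instructions makes playground-tested settings reproducible across environments.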

View Agent Response Citations

You can view the KB sources, functions, and guardrails the agent used to generate its response.

To see the agent’s response citations, select the Include Citations checkbox in the Settings tab. This adds a Message Info link below each response from the agent.

To see the citations for a response, click Message Info. This opens a window that displays the following information:

  • Token Usage: The number of tokens used to generate the response, separated by the number of tokens used for the prompt and the number of tokens used for the response.
  • Retrieved data from knowledge base: A list of KBs accessed by the agent and the subsequent files retrieved from each KB to generate the response.
  • Functions used: A list of serverless functions used by the agent to generate the response.
  • Guardrails triggered: A list of guardrails that the agent encountered while generating the response, including the reason for triggering the guardrail.
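If you call the agent over its API instead of the playground, the prompt/response token split shown under Token Usage typically appears in the response body's `usage` object. The field names below follow the common OpenAI-compatible convention, and the response body here is a fabricated example for illustration; confirm the actual shape in the GenAI Platform API reference.

```python
# Illustrative response body in the OpenAI-compatible shape; the values
# are made up and the field names are an assumption to verify.
response_body = {
    "choices": [
        {"message": {"role": "assistant", "content": "Our plans start at $5/month."}}
    ],
    "usage": {
        "prompt_tokens": 42,       # tokens used for the prompt
        "completion_tokens": 17,   # tokens used for the response
        "total_tokens": 59,
    },
}

usage = response_body["usage"]
print(f"Prompt tokens:   {usage['prompt_tokens']}")
print(f"Response tokens: {usage['completion_tokens']}")
print(f"Total tokens:    {usage['total_tokens']}")
```

Tracking these numbers per request is useful for cost monitoring, since playground and live tokens are billed at the same rate.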
