Test and Compare Models Using the Model Playground
Validated on 5 Mar 2026 • Last edited on 16 Mar 2026
DigitalOcean Gradient™ AI Inference Hub provides a single control plane for managing inference workflows. It includes a Model Catalog where you can view available foundation models, including both DigitalOcean-hosted and third-party commercial models, compare capabilities and pricing, and run inference using serverless or dedicated deployments. DigitalOcean Gradient AI Inference Hub is in public preview and enabled for all users. You can contact support for questions or assistance.
The Model Playground in Inference Hub provides an interactive interface for testing and comparing foundation models before integrating them into your applications.
In the Model Playground, you can:
- Send prompts to different models and review their responses.
- Adjust model parameters such as temperature and token limits.
- Compare outputs across models to evaluate quality and suitability.
- Explore which models best fit your use case.
To open the Model Playground, go to the DigitalOcean Control Panel, click Inference Hub in the left menu, and then click Model Playground.
Test Models
To test a foundation model, on the Model Playground page, select a model from the dropdown list, then enter a prompt in the Type your message text box. To learn best practices for writing prompts, see Best Practices for Prompt Writing. Once the model responds, review the answer's length, style, and speed.
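Once you are happy with a prompt in the playground, you typically send the same prompt from your application. As a minimal sketch, assuming an OpenAI-style chat completion request format (the model name and field names here are illustrative assumptions, not confirmed by this guide), the request body might be built like this:

```python
def build_chat_request(model, message, max_tokens=512, temperature=0.7):
    """Build a chat completion payload in the commonly used
    OpenAI-style shape (assumed format, for illustration only)."""
    return {
        "model": model,  # hypothetical model identifier
        "messages": [{"role": "user", "content": message}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_request(
    "example-model-name",
    "Summarize the benefits of serverless inference.",
)
```

Iterating on the prompt text and parameters in the playground first lets you settle on values before hard-coding them in a payload like this.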
If the model responds with too much text or you want more variability in the responses, you can change the model settings. Click the settings icon to change the following model settings:
- Max Tokens: Defines the maximum number of tokens the model can generate in a response. For model-specific limits, see the models page.
- Temperature: Controls the model's creativity, specified as a number between 0 and 1. Lower values produce more predictable and conservative responses, while higher values encourage creativity and variation. Values are rounded to the nearest hundredth. For example, if you enter 0.255, the value is rounded to 0.26.
- Top P: Defines the cumulative probability threshold for word selection, specified as a number between 0 and 1. Higher values allow for more diverse outputs, while lower values ensure focused and coherent responses. Values are rounded to the nearest hundredth. For example, if you enter 0.255, the value is rounded to 0.26.
Next, evaluate the model's responses and, if needed, continue to adjust the settings iteratively.
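The validation and rounding behavior described above for Temperature and Top P can be sketched in a few lines. This is an illustrative helper, not part of any DigitalOcean SDK; it assumes the half-up rounding implied by the 0.255 → 0.26 example:

```python
from decimal import Decimal, ROUND_HALF_UP

def normalize_setting(value, name):
    """Validate a sampling setting (Temperature or Top P) and round it
    to the nearest hundredth, as the playground does. Uses Decimal with
    half-up rounding so 0.255 becomes 0.26, matching the documented example."""
    if not 0 <= value <= 1:
        raise ValueError(f"{name} must be between 0 and 1")
    return float(Decimal(str(value)).quantize(Decimal("0.01"),
                                              rounding=ROUND_HALF_UP))
```

For example, `normalize_setting(0.255, "Temperature")` returns `0.26`, while an out-of-range value such as `1.5` raises a `ValueError`.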
Compare Models
To compare different foundation models, on the Model Playground page, click Compare Another Model to open the comparison view. Select a model from the dropdown list and toggle the Sync Inputs option. In the text box, enter your question and press Enter. Compare the models' responses and metrics, and, if needed, iteratively adjust the model settings.