GradientAI Platform
GradientAI Platform lets you build fully-managed AI agents with knowledge bases for retrieval-augmented generation, multi-agent routing, guardrails, and more, or use serverless inference to make direct requests to popular foundation models.
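As a rough illustration of the serverless inference workflow, the sketch below sends a single chat request to the platform's inference endpoint using a model access key. It assumes an OpenAI-style chat completions API; the endpoint URL, model name, and environment variable name are placeholders, so consult the serverless inference guide for the exact values.

```python
import os
import requests

# Placeholder endpoint and model name for illustration only; see the
# serverless inference documentation for the actual URL and the list
# of available foundation models.
INFERENCE_URL = "https://inference.example.do-ai.run/v1/chat/completions"
MODEL = "example-foundation-model"

response = requests.post(
    INFERENCE_URL,
    headers={
        # A model access key created in the GradientAI Platform control panel
        # (environment variable name is a placeholder).
        "Authorization": f"Bearer {os.environ['GRADIENTAI_MODEL_ACCESS_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": "Summarize retrieval-augmented generation in one sentence.",
            }
        ],
    },
    timeout=30,
)
response.raise_for_status()

# Print the model's reply from the first choice in the response.
print(response.json()["choices"][0]["message"]["content"])
```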
The GradientAI Platform documentation covers the following areas:

- Quickstarts and intermediate tutorials to get started.
- How to accomplish specific tasks in detail, like creation, deletion, configuration, and management.
- Native and third-party tools, troubleshooting, and answers to frequently asked questions.
- Explanations and definitions of core concepts in GradientAI Platform.
- Features, plans and pricing, availability, limits, known issues, and more.
- Get help with technical support and answers to frequently asked questions.
Latest Updates
2 July 2025
- Agent tracing and conversation logs are now in public preview for GradientAI Platform. This allows you to review how your agents process prompts, including input and output content, tool calls, knowledge base retrievals, and processing times.
5 June 2025
- Serverless inference is now available on GradientAI Platform. Serverless inference lets you get direct responses from foundation models using a single API endpoint without creating an agent.
29 April 2025
- You can now view token usage and performance metrics for GradientAI agents.
For more information, see all GradientAI Platform release notes.