Best Practices for Context Management when Generating Code with AI Agents
Validated on 6 Aug 2025 • Last edited on 13 Aug 2025
A large language model’s (LLM) context is the data it uses when processing requests to generate more relevant responses. For example, an AI agent’s context can include data from the end user (like their prompt or the preceding conversation) and data from its configuration (like relevant documentation or available tool calls).
LLMs work best with focused, relevant information. Poor context can mean:
- Insufficient context, or too little information. Too little context may cause AI agents to hallucinate, increasing the likelihood of low-quality responses containing nonexistent APIs and packages, incorrect configurations, or generic boilerplate code.
- Context overflow, or too much information. Context overflow can cause AI agents to hit token limits and produce unfocused responses. Chroma’s research on context rot shows how model performance degrades as the input context grows.
We recommend the following structured approach for effective context management:
- Start with a comprehensive product requirements document (PRD).
- Divide the PRD into focused tasks.
- Execute each task one at a time with new context.
- Maintain a focused working context.
- Avoid mixing requirements, files, and goals.
- Use the App Platform llms.txt file.
Following these steps for context management can improve the quality of AI-generated code.
Start with a Clear Product Requirements Document (PRD)
Begin every project with a focused PRD that defines the following:
# Task Management App PRD
## Objective
Create a simple task management application for small teams.
## Core Features
- User authentication (email/password)
- Create, edit, delete tasks
- Assign tasks to team members
- Mark tasks as complete
## Technical Requirements
- Next.js frontend
- PostgreSQL database
- Deploy to DigitalOcean App Platform
- Support up to 50 concurrent users
## Out of Scope
- Real-time collaboration
- File attachments
- Advanced reporting
- Mobile app
Break Large Tasks into Focused Issues
Next, create specific, actionable issues based on the PRD.
The following prompt is not specific and leads to poor results:
Build a complete e-commerce platform with user management, product catalog,
shopping cart, payment processing, order management, inventory tracking, admin
dashboard, analytics, and mobile responsiveness.
Instead, prompt with one task at a time, including all relevant requirements and specifications:
Create a PostgreSQL database schema for product catalog with the following
entities:
- Products (id, name, description, price, stock_quantity, created_at)
- Categories (id, name, slug, description)
- Product_Categories (product_id, category_id)
Include appropriate indexes and constraints.
Work Issue-by-Issue with Focused Prompts
Process one issue at a time with clear, specific instructions:
## Current Task: User Authentication API
### Context
- Next.js application using App Router
- PostgreSQL database already configured
- Using bcrypt for password hashing
- JWT tokens for session management
### Requirements
- POST /api/auth/register endpoint
- POST /api/auth/login endpoint
- Input validation and error handling
- Password strength requirements
### Constraints
- Follow existing project structure in /app/api/
- Use the database connection pattern from /lib/db.js
- Return consistent JSON error format
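A constraint like "Return consistent JSON error format" is easier for an agent to follow when you show it the shape you expect. The sketch below is one way to pin that down; the names (ApiError, apiError) and the exact shape are illustrative assumptions, not part of an existing codebase:

```typescript
// Hypothetical helper for the "consistent JSON error format" constraint.
// The names (ApiError, apiError) and shape are illustrative assumptions.
interface ApiError {
  error: {
    code: string;
    message: string;
  };
}

function apiError(code: string, message: string): ApiError {
  return { error: { code, message } };
}

console.log(JSON.stringify(apiError("INVALID_INPUT", "Email is required")));
// → {"error":{"code":"INVALID_INPUT","message":"Email is required"}}
```

In a Next.js route handler, the helper's result could be returned with Response.json(apiError(...), { status: 400 }), keeping every endpoint's error shape identical.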
Maintain Focused Working Context
Different tasks require different types of context. For authentication tasks, you need user management patterns and security libraries. For database work, you need schema designs and ORM patterns. For API development, you need routing conventions and error handling approaches. Keep your working context tailored to the specific task at hand.
You can provide a user guide or documentation chapter as context. Alternatively, you can use Model Context Protocol (MCP) tools like Context7 to automatically fetch relevant context based on your current task.
Consider writing test cases first. Tests help AI agents understand your expected behavior and generate more accurate implementations.
Context Guidelines:
- One Task Focus: Don’t mix authentication with database schema changes
- Relevant Code Only: Include files directly related to current work
- Clear Objectives: State exactly what you want accomplished
- Start Fresh: Begin new sessions to clear context when switching between major features
Avoid Mixing Requirements, Files, and Goals
When working with language models, providing focused and relevant context helps generate accurate and useful responses. Instead of mixing goals and code, isolate one task at a time and include only the necessary details.
The following is an example of an unfocused prompt that mixes unrelated file context with requirements and goals:
Here's my entire codebase (50 files), I want to add user authentication, also
fix the database performance issues, and can you also help me deploy this to
production, plus I'm thinking about adding real-time features later, what do you
think about WebSockets vs Server-Sent Events?
In contrast, the following prompt focuses on one task, includes only the relevant code, and states specific requirements:
I need to add JWT-based authentication to my Next.js API. Here's the current
user model and database connection pattern:
[Relevant code files only]
Create login and register endpoints that:
1. Validate input data
2. Hash passwords with bcrypt
3. Return JWT tokens
4. Follow the existing error handling pattern
DigitalOcean App Platform llms.txt
llms.txt is a standardized file format designed to provide website contents in a format that is convenient for LLMs. An llms.txt file for App Platform is available at https://docs.digitalocean.com/products/app-platform/llms.txt.
This file provides a comprehensive overview of the App Platform documentation, including features, sample apps, best practices, and support articles.
To use the llms.txt file, download it and place it in your project root so AI agents can reference it.
Benefits of Using this llms.txt File
- Accurate Platform Context: AI agents get precise information about App Platform capabilities
- Reduces Hallucinations: Prevents AI from inventing non-existent APIs or features
- Better Architecture Decisions: AI suggests appropriate App Platform components for your use case
- Consistent Documentation References: AI provides links to actual DigitalOcean documentation
Conclusion
Effective context management is a skill that improves with practice. Follow these guidelines and use the DigitalOcean App Platform llms.txt file to get better results from AI agents and spend less time debugging generated code.