Angular AGENTS.md Best Practices
This document provides best practices and a reusable AGENTS.md template for integrating LLM-based agents into Angular projects.
It draws inspiration from the official Angular AI Developer Guide, especially the sections on LLM prompts and AI IDE setup.
As AI/LLM-driven agents become part of both development workflows and runtime experiences, maintaining consistency, safety, and clarity becomes crucial.
The following guidelines define how to design, implement, test, and document these agents in an Angular context.
1. Define the Role of Agents
- Clearly define what an agent is in your project:
  - Developer tools (e.g., code suggestions, refactors)
  - CI/CD bots
  - Runtime chat assistants
  - Background automation services
- Distinguish agent logic from ordinary application logic.
- Document responsibilities, boundaries, and fallback rules for each agent (see the registry sketch after this list).
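For example, the agent inventory can be captured in a small typed registry so responsibilities and fallbacks are reviewable in code. The `AgentDefinition` shape and the sample entry below are hypothetical and project-specific, not an Angular API.

```ts
// Hypothetical registry describing each agent's role, boundaries, and fallback.
type AgentKind = 'developer-tool' | 'ci-bot' | 'runtime-chat' | 'background-automation';

interface AgentDefinition {
  name: string;
  kind: AgentKind;
  responsibilities: string[]; // what the agent is allowed to do
  boundaries: string[];       // what it must never do
  fallback: string;           // behavior when the agent fails or is uncertain
}

// Illustrative entry; adapt the values to your project.
export const AGENTS: AgentDefinition[] = [
  {
    name: 'refactor-assistant',
    kind: 'developer-tool',
    responsibilities: ['suggest refactors', 'generate unit test skeletons'],
    boundaries: ['never commits directly', 'no runtime network access'],
    fallback: 'open a draft PR for human review',
  },
];
```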
2. Prompt Design & System Instructions
- Maintain a central system prompt file (e.g. `ai/system-prompts.md`); a versioned-prompt sketch appears at the end of this section.
- Include framework-specific conventions from Angular’s AI guide:
  - Use signals and standalone components.
  - Avoid `any`; prefer strict typing.
  - Follow Angular’s recommended architectural patterns.
- Version prompts and review them via pull requests.
- Track rationale for every prompt update.
Example system prompt:
"You are an expert Angular developer. Generate code that follows Angular best practices,
uses standalone components, signals, and strict typing. If uncertain, respond with INSUFFICIENT_DATA."
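One way to keep the system prompt versioned and reviewable alongside the code is to expose it through a small typed module. The file name `ai/system-prompt.ts` and the `version` field below are assumptions for this sketch, not a prescribed Angular convention.

```ts
// ai/system-prompt.ts — hypothetical companion to ai/system-prompts.md.
// Keeping the prompt in source control means every change goes through review.
export const SYSTEM_PROMPT = {
  version: '1.0.0', // bump on every wording change; record the rationale in the PR
  text:
    'You are an expert Angular developer. Generate code that follows Angular best practices, ' +
    'uses standalone components, signals, and strict typing. ' +
    'If uncertain, respond with INSUFFICIENT_DATA.',
} as const;
```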
3. Context & Memory Management
- Summarize or truncate context to avoid exceeding token limits.
- Use structured memory objects instead of unstructured text blobs:

```ts
interface AgentMemory {
  id: string;
  timestamp: string;
  type: string;
  content: string;
}
```

- Apply sliding-window memory for conversational agents (see the sketch after this list).
- Periodically prune irrelevant memory to maintain focus and performance.
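A minimal sliding-window sketch, assuming the `AgentMemory` shape above; the window size and the choice to discard (rather than summarize) old entries are assumptions to adapt.

```ts
// Keep only the most recent N memory entries for a conversation.
// A fuller implementation might summarize the dropped entries instead of discarding them.
function applySlidingWindow(memory: AgentMemory[], windowSize = 20): AgentMemory[] {
  return [...memory]
    .sort((a, b) => Date.parse(a.timestamp) - Date.parse(b.timestamp))
    .slice(-windowSize);
}
```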
4. Safety, Validation & Fallbacks
- Validate every agent output using (see the type-guard sketch after this list):
  - JSON Schema or TypeScript type guards.
  - Static analysis before execution.
- Define fallback strategies for failed or uncertain responses:
  - Human review
  - Default value or safe response
  - Retry with simplified context
- Log sanitized input/output for audit and debugging.
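A minimal sketch of type-guard validation with a simple fallback chain; the `ReviewSuggestion` shape, the `SAFE_DEFAULT` value, and the single retry are illustrative assumptions.

```ts
// Hypothetical output shape the agent is expected to return.
interface ReviewSuggestion {
  file: string;
  line: number;
  message: string;
}

// Type guard: accept the output only if it matches the expected structure.
function isReviewSuggestion(value: unknown): value is ReviewSuggestion {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Partial<ReviewSuggestion>;
  return typeof v.file === 'string' && typeof v.line === 'number' && typeof v.message === 'string';
}

const SAFE_DEFAULT: ReviewSuggestion = { file: '', line: 0, message: 'INSUFFICIENT_DATA' };

// Fallback chain: validate, retry once (ideally with simplified context),
// then return a safe default or escalate to human review.
async function getSuggestion(callAgent: () => Promise<unknown>): Promise<ReviewSuggestion> {
  for (let attempt = 0; attempt < 2; attempt++) {
    const raw = await callAgent();
    if (isReviewSuggestion(raw)) return raw;
  }
  return SAFE_DEFAULT;
}
```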
5. Modularity & Encapsulation
- Isolate agent logic inside dedicated modules or services: `AgentService`, `PromptManager`, `AgentClient`.
- Keep UI components free from prompt construction.
- Provide a well-typed, single entry point for agent interactions (interface below; a service wrapper sketch follows it):

```ts
interface AgentClient {
  invoke<TInput, TOutput>(
    task: string,
    input: TInput,
    context?: AgentContext
  ): Promise<AgentResponse<TOutput>>;
}
```
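One way to keep this entry point out of UI components is to wrap it in an injectable service. The `AgentContext` and `AgentResponse` shapes and the `/api/agent` endpoint below are assumptions for the sketch; align them with your real types and backend.

```ts
import { Injectable, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { firstValueFrom } from 'rxjs';

// Assumed shapes; replace with your project's actual definitions.
// AgentClient is the interface shown above (import it from wherever it lives in your project).
interface AgentContext { conversationId?: string; }
interface AgentResponse<T> { output: T; confidence: number; }

@Injectable({ providedIn: 'root' })
export class AgentService implements AgentClient {
  private readonly http = inject(HttpClient);

  // Single typed entry point; components call this instead of building prompts themselves.
  invoke<TInput, TOutput>(
    task: string,
    input: TInput,
    context?: AgentContext,
  ): Promise<AgentResponse<TOutput>> {
    // '/api/agent' is a placeholder route that proxies the LLM call server-side.
    return firstValueFrom(
      this.http.post<AgentResponse<TOutput>>('/api/agent', { task, input, context }),
    );
  }
}
```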
6. Caching & Reuse
- Cache deterministic results to reduce latency and cost.
- Compute cache keys using `(task + input + contextHash)`; a key-derivation sketch follows this list.
- Invalidate the cache on source or version changes.
- Optionally store cached responses in Redis, IndexedDB, or local storage depending on context.
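A minimal sketch of cache-key derivation and lookup with an in-memory `Map`; the string-hash helper is a simple stand-in to keep the example self-contained (a real project might use SHA-256 via the Web Crypto API instead).

```ts
// Simple non-cryptographic string hash, used only for illustration.
function hashString(s: string): string {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;
  }
  return h.toString(16);
}

const responseCache = new Map<string, unknown>();

function cacheKey(task: string, input: unknown, contextHash: string): string {
  return `${task}:${hashString(JSON.stringify(input))}:${contextHash}`;
}

// Reuse deterministic results; clear the map when the source or prompt version changes.
async function cachedInvoke<T>(
  task: string,
  input: unknown,
  contextHash: string,
  run: () => Promise<T>,
): Promise<T> {
  const key = cacheKey(task, input, contextHash);
  if (responseCache.has(key)) {
    return responseCache.get(key) as T;
  }
  const result = await run();
  responseCache.set(key, result);
  return result;
}
```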
7. Testing & Monitoring
- Stub or mock the LLM interface for deterministic tests (see the sketch after this list).
- Cover both success and failure paths.
- Measure:
  - Latency
  - Confidence score
  - Error rate
- Periodically evaluate output quality using tools like Web Codegen Scorer.
- Schedule prompt drift reviews every sprint or release cycle.
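A minimal sketch of a deterministic test double for the `AgentClient` interface from section 5, assuming the `AgentResponse<T>` shape used in the service sketch and Jasmine-style `describe`/`it`/`expect` from Angular's default testing setup; the task names and canned responses are illustrative.

```ts
// A stub client that returns canned responses instead of calling the LLM.
class StubAgentClient implements AgentClient {
  async invoke<TInput, TOutput>(task: string, _input: TInput): Promise<AgentResponse<TOutput>> {
    const ok = task === 'summarize';
    return {
      output: (ok ? 'stubbed summary' : 'INSUFFICIENT_DATA') as unknown as TOutput,
      confidence: ok ? 0.99 : 0,
    };
  }
}

describe('agent-backed feature', () => {
  it('uses the agent output on success', async () => {
    const res = await new StubAgentClient().invoke<string, string>('summarize', 'long text');
    expect(res.output).toBe('stubbed summary');
  });

  it('falls back when the agent is uncertain', async () => {
    const res = await new StubAgentClient().invoke<string, string>('unknown-task', 'input');
    expect(res.confidence).toBe(0);
  });
});
```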
8. Versioning & Governance
- Version prompt files together with source code.
- Use feature flags to enable/disable agents in staging or production (see the sketch after this list).
- Maintain a prompt changelog (e.g., `PROMPTS_CHANGELOG.md`).
- Restrict prompt editing permissions to reviewed contributors.
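A minimal feature-flag sketch using Angular's conventional environment files; the `agentFlags` object and flag names are assumptions for illustration, not a built-in mechanism.

```ts
// src/environments/environment.ts (mirror the flags in environment.prod.ts)
export const environment = {
  production: false,
  agentFlags: {
    runtimeChat: true,          // in-app assistant enabled in this environment
    backgroundSummaries: false, // batch agent still disabled
  },
};

// Consumer code guards agent invocation behind the flag:
// if (environment.agentFlags.runtimeChat) { await agentService.invoke('chat', message); }
```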
9. Security & Privacy
- Never embed secrets or API keys inside prompts or memory.
- Redact all PII or sensitive content before sending it to LLM APIs (see the redaction sketch after this list).
- Prevent agents from making arbitrary network or file operations.
- Limit data logging and encrypt logs when needed.
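A minimal redaction sketch for the bullet above; the patterns cover only emails and long digit runs and are illustrative, so a real project would rely on a vetted PII-detection library or service instead.

```ts
// Illustrative regex-based redaction; deliberately not exhaustive PII coverage.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.-]+/g;
const LONG_DIGITS_RE = /\b\d{9,}\b/g; // phone numbers, account numbers, etc.

function redact(text: string): string {
  return text
    .replace(EMAIL_RE, '[REDACTED_EMAIL]')
    .replace(LONG_DIGITS_RE, '[REDACTED_NUMBER]');
}

// Always redact before the text leaves the application boundary:
// const safePrompt = redact(userInput);
```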
10. Documentation & Onboarding
- Place this document in your repo as `AGENTS.md` and link it from `README.md`.
- Include:
  - Example prompts
  - Output validation schema
  - Debugging and fallback instructions
- Maintain a quickstart guide under `docs/agents-quickstart.md` for new developers.
11. Example AGENTS.md Template
Below is a ready-to-use AGENTS.md structure you can adapt for your Angular project.
# AGENTS.md — Angular Agents Implementation Guide

## 1. Purpose
Defines conventions and policies for using LLM-powered agents in this project.

## 2. Agent Scenarios

| Scenario | Description | Constraints |
|-----------|--------------|-------------|
| Developer Tooling | Codegen, lint fixes | Offline or sandboxed |
| CI Bots | Dependency updates | Deterministic, reviewed |
| Runtime Chat | In-app assistant | Validated, rate-limited |
| Background Agent | Batch summarization | Secure, cached |

## 3. Prompt Management
- Canonical prompt file: `ai/system-prompts.md`
- All prompts reference the canonical system instructions.
- Prompts must be reviewed and versioned with the codebase.

## 4. Agent Interfaces
```ts
interface AgentClient {
  invoke<TInput, TOutput>(task: string, input: TInput): Promise<AgentResponse<TOutput>>;
}
```

## 5. Validation & Fallbacks

* Validate output via schema or types.
* Fallback: human review → stub response → retry.

## 6. Caching

* Cache by `(task + contextHash)` key.
* Invalidate on project version change.

## 7. Testing & Quality

* Mock the LLM API for tests.
* Track metrics: latency, quality, cost.

## 8. Governance

* Prompt edits require PR + code owner review.
* Maintain version tags: `v1.0.0-prompts`.

## 9. Security

* No secrets in prompts.
* Redact user data before API call.

## 10. References

* [Angular AI Developer Guide](https://angular.dev/ai/develop-with-ai)
* [Web Codegen Scorer Tool](https://angular.dev/ai/develop-with-ai)

Conclusion
Establishing a consistent AGENTS.md policy brings clarity and safety to how AI agents are used in Angular projects.
By treating agents as first-class citizens—subject to versioning, testing, and architectural discipline—you ensure they remain reliable collaborators in your development workflow.