# 💻 Code Assistant Prompt
A code assistant prompt configures an LLM to act as an expert programming partner: writing, reviewing, debugging, and explaining code across multiple languages while enforcing best practices and clean architecture.
## Why This Matters
Developers spend 35–50% of their time reading and understanding existing code, not writing new code. A well-prompted code assistant accelerates the entire workflow: understanding legacy code, catching bugs before they ship, generating boilerplate, and enforcing team conventions. The prompt quality directly determines whether the assistant produces production-ready code or introduces subtle bugs.
## The Production Prompt
You are a senior software engineer with 12+ years of experience across full-stack web development, systems programming, and cloud infrastructure.
**Role:** Act as a pair programming partner who writes clean, production-ready code.
**Core Behaviors:**
1. **Write code that is:**
- Correct, handling edge cases and error states
- Readable, with meaningful variable/function names
- Maintainable, following SOLID principles where applicable
- Typed, using TypeScript types, Python type hints, or equivalent when the language supports it
- Documented, with concise docstrings/comments for non-obvious logic only
2. **When asked to review code:**
- Identify bugs, security vulnerabilities, and performance issues
- Rate severity: 🔴 Critical, 🟡 Warning, 🔵 Suggestion
- Provide the fix alongside each issue; never just point out problems
- Check for: null/undefined handling, race conditions, SQL injection, XSS, memory leaks
3. **When asked to debug:**
- Ask clarifying questions if the error context is incomplete
- Analyze the stack trace or error message step by step
- Identify the root cause before suggesting a fix
- Provide the minimal fix first, then suggest broader improvements
4. **When explaining code:**
- Break down complex logic line by line
- Explain the "why" behind design decisions, not just the "what"
- Use analogies for complex concepts when helpful
**Formatting Rules:**
- Always wrap code in fenced code blocks with the correct language identifier
- For multi-file changes, label each file clearly: `// filename.ts`
- If the solution requires multiple steps, number them
- Keep explanations concise; developers don't need hand-holding
**Constraints:**
- Never use deprecated APIs or libraries
- Default to the latest stable version of any language or framework unless specified
- If a question is ambiguous, state your assumption before answering
- If you're unsure about something, say so explicitly rather than guessing
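In practice, this system prompt is simply the `system` message of a chat-completion request. A minimal sketch, assuming an OpenAI-compatible `/v1/chat/completions` endpoint; the model name and endpoint are placeholders for whatever provider you use:

```typescript
// Build a chat-completion request that carries the production prompt above.
// Model name and endpoint are assumptions; swap in your provider's values.

const SYSTEM_PROMPT = "You are a senior software engineer..."; // full prompt above

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;
  temperature: number;
  messages: ChatMessage[];
}

function buildReviewRequest(code: string): ChatRequest {
  return {
    model: "gpt-4o", // placeholder; any capable code model works
    temperature: 0.1, // keep low for code analysis
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: "Review this code:\n\n" + code },
    ],
  };
}

// Usage (requires a provider API key):
// await fetch("https://api.openai.com/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
//   },
//   body: JSON.stringify(buildReviewRequest(snippet)),
// });
```

Keeping the request builder separate from the network call makes the payload easy to unit-test and to reuse across providers.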
## Bad vs. Improved Prompts
### ❌ Bad Prompt
```text
Fix this code:
function getData() {
  const res = fetch('/api/data')
  return res.json()
}
```
**Why it fails:** No language context, no error description, no expected vs. actual behavior, no environment info.
### ✅ Improved Prompt
You are an expert TypeScript/React developer.
Debug this function. It always returns undefined instead of the API data:
```typescript
async function getData(): Promise<UserData> {
  const res = fetch('/api/data')
  return res.json()
}
```

Environment: Next.js 14, TypeScript 5.3, Node 20
Expected: Returns parsed JSON data from the API
Actual: Returns undefined
Identify the root cause, provide the fix, and explain what went wrong.
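For reference, the root cause the assistant should find here: `fetch` returns a `Promise<Response>`, so it must be awaited before `.json()` can be called. A corrected sketch; the `UserData` shape is a hypothetical stand-in for whatever your API actually returns:

```typescript
// Hypothetical shape for illustration; the real UserData lives in your app.
interface UserData {
  id: number;
  name: string;
}

async function getData(): Promise<UserData> {
  const res = await fetch('/api/data'); // await the Response, not a Promise
  if (!res.ok) {
    throw new Error(`Request failed: HTTP ${res.status}`); // surface error states
  }
  return res.json(); // parsed JSON; the caller awaits the returned promise
}
```

Note the added `res.ok` check: a robust fix handles the error state, not just the missing `await`.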
## Try It Yourself
import PromptEditor from '@site/src/components/PromptEditor';
<PromptEditor
defaultBadPrompt="Fix this code:\nfunction getData() {\n const res = fetch('/api/data')\n return res.json()\n}"
defaultImprovedPrompt={`You are an expert TypeScript/React developer.\n\nDebug this function. It always returns undefined instead of the API data:\n\nasync function getData(): Promise<UserData> {\n const res = fetch('/api/data')\n return res.json()\n}\n\nEnvironment: Next.js 14, TypeScript 5.3, Node 20\nExpected: Returns parsed JSON data from the API\nActual: Returns undefined\n\nIdentify the root cause, provide the fix, and explain what went wrong.`}
/>
## Tips for Customization
| Customization | How to Modify the Prompt |
|---|---|
| **Language lock** | Add: "Respond only in Python 3.12+; do not suggest solutions in other languages" |
| **Style guide** | Add: "Follow the Airbnb JavaScript Style Guide" or "Follow PEP 8 with a 100-char line limit" |
| **Framework focus** | Add: "All React code must use functional components with hooks; no class components" |
| **Security mode** | Add: "For every code snippet, also list any potential security vulnerabilities (OWASP Top 10)" |
| **Test generation** | Append: "Also write unit tests for the generated code using {{test_framework}}" |
| **Junior-friendly** | Add: "Explain your code at a level appropriate for a developer with 1 year of experience" |
## Practice Challenge
:::tip[Challenge]
Copy a function from one of your real projects, ideally one that's messy or has known issues. Use the improved prompt pattern above to ask the assistant to review it. Compare the AI's suggestions against what you already know is wrong. Did it catch everything? Did it find issues you missed?
:::
## Real-World Scenario
**Scenario:** A development team wants to integrate an AI code review bot into their GitHub pull request workflow.
**Implementation approach:**
1. On PR creation, extract the diff and changed files
2. For each changed file, inject the code into the code assistant system prompt with the instruction: "Review this code change for bugs, security issues, and style violations"
3. Send to the LLM API with `temperature: 0.1` (near-deterministic for code analysis)
4. Parse the response into structured review comments (severity + line number + suggestion)
5. Post as inline PR review comments via the GitHub API
6. Track metrics: how often developers accept vs. dismiss AI suggestions
Production teams using this pattern report catching **15–25% more bugs** before merge compared to human-only review.
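Step 4 above, parsing the model's reply into structured comments, is where the severity scale from the production prompt pays off. A sketch, assuming the model was instructed to emit one finding per line in a fixed `severity Lnn: message` format (that line format is an assumption for this example, not a GitHub API requirement):

```typescript
// Parse an LLM review reply into structured comments ready for the
// GitHub review-comments API. Assumed reply format, one finding per line:
//   "🔴 L42: SQL built by string concatenation; use parameterized queries"

interface ReviewComment {
  severity: "critical" | "warning" | "suggestion";
  line: number;
  message: string;
}

const FINDING = /^(🔴|🟡|🔵)\s*L?(\d+):\s*(.+)$/;

const SEVERITY: Record<string, ReviewComment["severity"]> = {
  "🔴": "critical",
  "🟡": "warning",
  "🔵": "suggestion",
};

function parseReview(reply: string): ReviewComment[] {
  const comments: ReviewComment[] = [];
  for (const raw of reply.split("\n")) {
    const m = FINDING.exec(raw.trim());
    if (m) {
      comments.push({
        severity: SEVERITY[m[1]],
        line: Number(m[2]),
        message: m[3],
      });
    }
  }
  return comments; // lines that don't match the format are ignored
}
```

Enforcing a machine-parseable output format in the prompt is what makes step 5 (posting inline comments) reliable; free-form prose replies cannot be mapped to diff lines.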
## Interview Question
:::info[Interview Question]
**Q: How would you design a prompt for a code assistant that needs to support multiple programming languages without sacrificing quality?**
**A:** I'd use a layered prompt architecture:
1. **Base system prompt**: core behaviors (code quality standards, review methodology, output format) that are language-agnostic
2. **Language-specific context injection**: detect the language from the code block or file extension, then append language-specific rules, e.g. "For Python, enforce PEP 8 and type hints. For TypeScript, enforce strict mode and proper generic usage."
3. **Framework detection**: if imports reveal a framework (React, Django, etc.), inject framework best practices
4. **Dynamic examples**: include 1–2 few-shot examples in the detected language to anchor the model's output style

This modular approach scales to any language without a monolithic prompt that wastes tokens on irrelevant context.
:::
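The layered assembly described in that answer can be sketched as a small prompt builder. The rule strings and the detection logic below are illustrative, not exhaustive:

```typescript
// Layered prompt assembly: a language-agnostic base plus per-language
// rules appended at request time. Rule text here is illustrative only.

const BASE_PROMPT =
  "You are a senior software engineer. Write correct, readable, typed code.";

const LANGUAGE_RULES: Record<string, string> = {
  python: "For Python, enforce PEP 8 and type hints.",
  typescript: "For TypeScript, enforce strict mode and proper generic usage.",
};

const EXT_TO_LANG: Record<string, string> = { py: "python", ts: "typescript" };

// Naive detection: a fenced block's language tag, else a file extension.
function detectLanguage(input: string): string | undefined {
  const fence = /```(\w+)/.exec(input);
  if (fence) return fence[1].toLowerCase();
  const ext = /\.(\w+)$/.exec(input);
  return ext ? EXT_TO_LANG[ext[1].toLowerCase()] : undefined;
}

function buildSystemPrompt(userInput: string): string {
  const layers = [BASE_PROMPT]; // layer 1: always present
  const lang = detectLanguage(userInput);
  if (lang && LANGUAGE_RULES[lang]) {
    layers.push(LANGUAGE_RULES[lang]); // layer 2: only when detected
  }
  return layers.join("\n\n");
}
```

Framework detection and few-shot examples (layers 3 and 4) would slot in as further conditional pushes onto `layers`, keeping every request's prompt as small as the input allows.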
## Summary
:::note[Summary]
- A code assistant prompt must define **role**, **code quality standards**, **review methodology**, and **output formatting**
- Always provide **context**: language version, framework, environment, expected vs. actual behavior
- For debugging prompts, include the **error message**, **stack trace**, and **minimal reproduction**
- Use very low temperature (0.0–0.2) for code generation and debugging to ensure correctness
- Separate concerns: writing code, reviewing code, debugging, and explaining code each benefit from tailored instructions
:::