Claude Prompt Library

Claude for Engineering: Beyond Autocomplete

20 copy-paste prompts

20 Claude prompts for system design reviews, RFC drafts, architecture decisions, code review, debugging, and technical documentation — the deep work Claude handles better than most AI assistants.

Design & Architecture

5 prompts

System Design Document

1/20

<task>Write a system design document for [system]</task> <requirements> - Functional: [describe] - Scale: [users, requests, data volume] - SLAs: [latency, availability] - Constraints: [budget, team, tech stack] </requirements> <output> 1. Problem statement + goals 2. Non-goals (explicit scope limits) 3. High-level architecture diagram (describe components) 4. Data model 5. API design (core endpoints) 6. Scalability analysis (how it handles 10x, 100x load) 7. Failure modes + mitigations 8. Observability plan (metrics, logs, traces) 9. Security considerations 10. Migration strategy (if replacing existing) 11. Open questions + risks </output>

Writes complete system design documents with non-goals, diagrams, scalability, and observability.

💡

Pro tip: System design docs that skip "non-goals" scope-creep into hell. Explicitly naming what you won't do protects the design from stakeholder pressure. Non-goals are as important as goals.

Architecture Decision Record (ADR)

2/20

<task>Write an ADR for [decision]</task> <context> - Decision: [describe] - Forces: [competing concerns] - Options considered: [list] </context> <output> Standard ADR format: 1. Title + status (proposed/accepted/deprecated) 2. Context (what problem, what forces) 3. Decision (what we chose) 4. Alternatives considered (each option with pros/cons) 5. Consequences (positive + negative) 6. Compliance (what monitoring confirms decision is working) </output>

Writes Architecture Decision Records with full context, alternatives, consequences, and compliance monitoring.

💡

Pro tip: ADRs preserve why decisions were made, not just what. Two years later, someone will ask "why did we choose X?" Without an ADR, the answer is lost to team rotation. Write them for every significant decision.

API Design Review

3/20

<task>Design a REST API for [feature]</task> <requirements>[describe use cases]</requirements> <output> 1. Resource model (nouns + relationships) 2. Endpoints (method + path + request/response) 3. Authentication + authorization 4. Versioning strategy 5. Pagination, filtering, sorting 6. Error responses (format + codes) 7. Rate limiting 8. Idempotency 9. Example request/response for each endpoint 10. Deprecation strategy for future changes </output>

Designs REST APIs with resource model, auth, versioning, errors, rate limits, and idempotency.

💡

Pro tip: API design debt compounds faster than code debt. Claude is disciplined about REST principles — use it to pressure-test your design before shipping. Bad API decisions live forever.
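Idempotency (item 8 in the checklist above) is the piece teams most often skip, so here is a minimal sketch of the server-side guarantee. All names (`create_payment`, the in-memory `_responses` store) are invented for illustration:

```python
# Minimal idempotency sketch: repeated requests carrying the same
# idempotency key get back the stored first response instead of
# creating a duplicate resource.
_responses: dict[str, dict] = {}

def create_payment(idempotency_key: str, amount_cents: int) -> dict:
    if idempotency_key in _responses:
        # Replay: return the original result, create nothing new.
        return _responses[idempotency_key]
    payment = {"id": len(_responses) + 1, "amount_cents": amount_cents}
    _responses[idempotency_key] = payment
    return payment

first = create_payment("key-123", 500)
retry = create_payment("key-123", 500)  # e.g. the client retried after a timeout
assert first == retry and len(_responses) == 1
```

A real implementation would persist keys with a TTL and reject reuse of a key with a different request body, but the shape of the guarantee is the same.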

Database Schema Design

4/20

<task>Design database schema for [domain]</task> <requirements> - Entities: [list] - Relationships: [describe] - Scale: [rows, queries per second] - Database: [PostgreSQL, MySQL, etc.] </requirements> <output> 1. Tables with columns + types + constraints 2. Primary keys + foreign keys 3. Indexes (with justification for each) 4. Normalization level (3NF, denormalized, hybrid) 5. Common query patterns + their performance 6. Migration path for schema changes 7. Sharding / partitioning strategy (if applicable) 8. Soft-delete vs hard-delete policy 9. Audit/history considerations </output>

Designs database schemas with indexes, query patterns, migrations, and sharding strategies.

💡

Pro tip: Most database design mistakes come from not thinking about query patterns early. Design the schema to match the top 10 query patterns, not textbook 3NF. Performance matters more than purity.
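As a concrete sketch of designing for a query pattern, here is a hypothetical `orders` table with a composite index built to serve one dominant query. The table, columns, and index name are invented for illustration; SQLite stands in for whatever database you actually use:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INT, "
    "status TEXT, created_at TEXT)"
)

# Top query pattern: "recent orders for one user in one status".
# A composite index on exactly those columns serves it directly.
conn.execute(
    "CREATE INDEX idx_orders_user_status ON orders (user_id, status, created_at)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM orders "
    "WHERE user_id = ? AND status = ? ORDER BY created_at DESC",
    (42, "shipped"),
).fetchall()
# The plan should report a SEARCH using idx_orders_user_status
# rather than a full-table SCAN.
```

The point is the direction of design: the index exists because the query pattern exists, not because a column "seemed worth indexing".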

Microservices Decomposition

5/20

<task>Decompose this monolith into microservices</task> <current_system>[describe]</current_system> <pain_points>[what's broken today]</pain_points> <output> 1. Service boundaries (by domain, not by tech layer) 2. Data ownership per service 3. Inter-service communication (sync vs async) 4. Shared concerns (auth, logging, etc.) 5. Migration strategy (strangler fig pattern?) 6. Team structure implications (Conway's Law) 7. Failure modes introduced by distribution 8. Cost/benefit honest assessment 9. When NOT to decompose </output>

Decomposes monoliths into services with domain boundaries, data ownership, and honest cost/benefit analysis.

💡

Pro tip: Most microservices migrations are premature. Claude is honest about this — if your monolith works, fixing it often beats distributed system complexity. Always ask "what problem does this actually solve?" before decomposing.

XML tags are just the start. Learn the full Claude workflow.

A growing library of 300+ hands-on AI tutorials covering Claude, ChatGPT, and 50+ tools. New tutorials added every week.

Start 7-Day Free Trial

Code Review & Refactoring

5 prompts

Code Review (Thorough)

6/20

<task>Review this code as a senior engineer</task> <code>[paste]</code> <context>[what it does, who uses it]</context> <output> 1. Bugs (correctness issues) 2. Security issues (injections, auth bypasses, data leaks) 3. Performance concerns (N+1 queries, unbounded loops) 4. Readability / maintainability 5. Test coverage gaps 6. API design (if public interface) 7. Edge cases not handled 8. Would I approve this PR (yes/changes/no) 9. Nits (style, minor suggestions) </output> <constraints>Point to specific lines. Don't rewrite — suggest.</constraints>

Runs thorough code reviews flagging bugs, security, performance, edge cases, and approval decision.

💡

Pro tip: Claude is an especially good code reviewer because it's patient. Human reviewers miss things under time pressure; Claude doesn't get tired. Use it as a first-pass reviewer before senior-engineer time.

Refactor Plan

7/20

<task>Plan a refactor of [component]</task> <current_code>[paste or describe]</current_code> <goal>[what should be better after refactor]</goal> <output> 1. Current pain points (specific) 2. Target state (what good looks like) 3. Refactor approach (incremental steps vs big-bang) 4. Test strategy (how to ensure behavior preserved) 5. Ordered sequence of changes 6. Risks per step 7. Rollback plan 8. How to demonstrate value to stakeholders </output>

Plans refactors with pain-point identification, ordered steps, test strategy, and rollback.

💡

Pro tip: Refactors fail when they're big-bang. Incremental refactors that keep tests passing at every step are far safer. Claude helps decompose big refactors into small steps with clear invariants.
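A minimal sketch of that pattern, with invented names: step 0 pins today's behavior in a characterization test, then one small extract-the-rate-table change lands while the test stays green.

```python
# Legacy version: behavior we must preserve, exactly as-is.
def legacy_discount(cents: int, tier: str) -> int:
    if tier == "gold":
        return cents - cents * 20 // 100
    elif tier == "silver":
        return cents - cents * 10 // 100
    return cents

# Characterization cases: whatever the code does *today*, recorded verbatim.
CASES = [
    (10_000, "gold", 8_000),
    (10_000, "silver", 9_000),
    (10_000, "bronze", 10_000),
]

# One incremental step: extract the rate table without changing behavior.
RATES = {"gold": 20, "silver": 10}

def refactored_discount(cents: int, tier: str) -> int:
    return cents - cents * RATES.get(tier, 0) // 100

for cents, tier, expected in CASES:
    assert legacy_discount(cents, tier) == expected == refactored_discount(cents, tier)
```

Each subsequent step follows the same rhythm: one small change, rerun the characterization test, commit.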

Legacy Code Comprehension

8/20

<task>Help me understand this legacy code</task> <code>[paste]</code> <context>[what it's supposed to do, if known]</context> <output> 1. Plain-English explanation of what it does 2. Key design decisions visible 3. Assumptions the code makes 4. Places where behavior is surprising or undocumented 5. Dependencies (internal + external) 6. What to be careful modifying 7. Tests that should exist but probably don't </output>

Explains legacy code with plain English, design decisions, assumptions, and modification risks.

💡

Pro tip: Legacy code often has undocumented invariants that seem dumb but are critical. Claude is cautious about suggesting modifications until it understands the assumptions. Better than human devs who "just clean it up" and break prod.

Code Smell Hunt

9/20

<task>Identify code smells in this codebase</task> <code_samples>[paste several files]</code_samples> <output> Common smells to check: - Long methods / functions - Duplicate code - Feature envy - God classes - Primitive obsession - Data clumps - Inappropriate intimacy - Middle-man classes - Conditional complexity Per smell found: location, severity, refactor suggestion, priority. </output>

Hunts code smells with severity, locations, refactor suggestions, and prioritization.

💡

Pro tip: Not all code smells need fixing. Some live happily for years. Claude helps prioritize — which smells are causing real pain (bugs, slow development) vs which are just stylistic preferences.

Security Code Review

10/20

<task>Security-review this code</task> <code>[paste]</code> <context>[what it does, auth model]</context> <output> OWASP Top 10 checks: - Injection (SQL, command, XSS) - Broken authentication - Sensitive data exposure - XXE - Broken access control - Security misconfiguration - Insecure deserialization - Vulnerable components - Insufficient logging Per issue: severity, specific location, example exploit, remediation. </output>

Reviews code for OWASP Top 10 with severity, exploits, and remediations per finding.

💡

Pro tip: Security reviews by AI are a first-pass filter. Claude catches obvious vulnerabilities (unparameterized SQL, exposed secrets) but shouldn't replace security experts for sensitive systems. Use it to clean up low-hanging fruit before paid audits.
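The "unparameterized SQL" case is cheap to demonstrate. A minimal sketch in SQLite, with an invented table and input:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INT)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "x' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the WHERE clause,
# so the query matches every row in the table.
unsafe_rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'"
).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()

assert len(unsafe_rows) == 1  # injection matched the whole table
assert len(safe_rows) == 0    # nobody is literally named "x' OR '1'='1"
```

This is exactly the class of finding an AI first pass reliably catches; the subtle authorization-logic bugs are where the human expert still earns their fee.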

Debugging & Troubleshooting

5 prompts

Bug Hypothesis Generator

11/20

<task>Help me debug this issue</task> <symptom>[what's happening — error, bad output, crash]</symptom> <context>[when it started, what changed recently]</context> <code>[paste relevant code]</code> <logs>[paste error logs / stack traces]</logs> <output> 1. Top 5 hypotheses for the cause (ranked by likelihood) 2. For each: how to verify/eliminate 3. Additional data I should gather 4. Quick-win checks first (cheap to verify) 5. What's most likely if nothing else narrows it down </output>

Generates ranked debugging hypotheses with verification steps, data gathering, and quick wins.

💡

Pro tip: Debugging is hypothesis-driven. Random changes "to see what happens" waste hours. A structured hypothesis list forces methodical elimination. Claude thinks like a good debugger — systematic, skeptical, patient.

Stack Trace Interpretation

12/20

<task>Interpret this stack trace and suggest fixes</task> <stack_trace>[paste]</stack_trace> <language_framework>[specify]</language_framework> <output> 1. What the error means in plain English 2. Where exactly the error originates (root frame) 3. How the call chain reached that point 4. Likely root cause (not just symptom) 5. Common causes of this error 6. Immediate fixes 7. Deeper fixes to prevent recurrence 8. How to write a test that catches this </output>

Interprets stack traces with root cause, immediate fixes, deeper fixes, and regression test design.

💡

Pro tip: Stack traces tell you where, not why. Claude is good at bridging the gap — interpreting the trace + suggesting the actual cause + writing the regression test. Use it on any non-obvious error.
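Step 8 of the template above, turning the trace into a test, looks like this in miniature. Suppose the trace ended in `KeyError: 'email'`; the function and fix below are hypothetical:

```python
def format_user(user: dict) -> str:
    # Root-cause fix: the crash showed 'email' is optional in practice,
    # so stop assuming the key exists.
    return f"{user['name']} <{user.get('email', 'no email')}>"

def test_format_user_without_email():
    # Regression test pinning the exact input shape that produced the KeyError.
    assert format_user({"name": "alice"}) == "alice <no email>"

test_format_user_without_email()
```

The test encodes the input shape that crashed, so the same regression can never land silently again.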

Performance Profiling Review

13/20

<task>Analyze this performance profile</task> <profile_data>[paste or describe hot spots]</profile_data> <context>[what the code does, performance target]</context> <output> 1. Top bottlenecks ranked by impact 2. Per bottleneck: why it's slow, fix options 3. Quick wins (low effort, high impact) 4. Deeper architectural fixes 5. Where micro-optimization is pointless 6. Measurement plan to verify improvements </output>

Reviews performance profiles with ranked bottlenecks, fixes, and measurement plans.

💡

Pro tip: Most performance work is waste — optimizing code that doesn't matter to overall latency. Claude helps focus on real bottlenecks instead of favorites. Profile first, then optimize what moves the needle.
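In Python, "profile first" can be as small as this sketch with the standard-library `cProfile`; the two functions are invented stand-ins for real code:

```python
import cProfile
import io
import pstats

def build_report():        # stand-in for a suspected-but-innocent function
    return "ok"

def aggregate_metrics():   # stand-in for the actual hot spot
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
build_report()
aggregate_metrics()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
# The stats table ranks aggregate_metrics far above build_report:
# that is where optimization effort should go.
```

Pasting a report like this into the prompt above gives Claude real data to rank bottlenecks from, instead of guesses.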

Flaky Test Investigation

14/20

<task>Help me diagnose a flaky test</task> <test>[paste or describe]</test> <symptoms>[how often it fails, when, on what]</symptoms> <output> 1. Common causes of flaky tests (race conditions, time dependencies, shared state, external services, ordering) 2. Which causes fit this test's pattern 3. How to reproduce the flake deterministically 4. Specific fixes 5. If test should be deleted (some flaky tests are truly un-fixable) 6. How to prevent similar flakes in the codebase </output>

Diagnoses flaky tests with cause analysis, reproduction, fixes, and prevention strategies.

💡

Pro tip: Flaky tests corrupt CI trust. Teams ignore failures, then miss real bugs. Invest in fixing flakes before they multiply. Claude's list of common flake causes is usually where 90% of fixes live.
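Of those common causes, shared state is the easiest to show. A minimal sketch with invented names: two "tests" whose results would depend on run order, and the reset that fixes them:

```python
# Module-level cache shared across tests: a classic flake source, because
# whichever test runs first poisons the cache for the next one.
_cache: dict[str, str] = {}

def get_config(key: str, default: str) -> str:
    return _cache.setdefault(key, default)

def reset_state() -> None:
    # The fix: give every test a deterministic starting state
    # (in pytest this would live in a fixture).
    _cache.clear()

def test_prod_mode() -> None:
    reset_state()
    assert get_config("mode", "prod") == "prod"

def test_test_mode() -> None:
    reset_state()
    assert get_config("mode", "test") == "test"

# With the reset in place, run order no longer matters.
test_test_mode()
test_prod_mode()
```

Without `reset_state()`, whichever test ran second would fail only when the order flipped, which is exactly the "fails 1 in 20 runs on CI" pattern.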

Production Incident Postmortem

15/20

<task>Write a blameless postmortem</task> <incident> - What happened: [describe] - Impact: [users affected, duration] - Timeline: [key events] - Resolution: [what fixed it] </incident> <output> 1. TL;DR 2. Timeline (UTC) 3. Impact (specific) 4. Root cause (technical + systemic) 5. What went well (detection, response) 6. What went wrong (delays, gaps) 7. Action items (owner + date) 8. Lessons for similar incidents Tone: blameless, factual, focused on systems not individuals. </output>

Writes blameless postmortems with timeline, root cause, action items, and system-focused learning.

💡

Pro tip: Blameless postmortems require discipline. "User error" hides system failures that enabled the error. Good postmortems ask "what made this easy to do wrong?" — that's where real fixes live.

Documentation & Process

5 prompts

README Writer

16/20

<task>Write a great README for this project</task> <project>[paste code / describe]</project> <audience>[who uses this — team, public, newcomers]</audience> <output> - Project name + tagline - Why this exists (problem it solves) - Quick start (3-5 commands) - Usage examples (basic + advanced) - API / configuration reference - Architecture overview - Contributing guide (if open source) - License - Links to deeper docs Optimize for: new person can be productive in 10 min. </output>

Writes READMEs with quickstart, examples, architecture, and contributing guide.

💡

Pro tip: Most READMEs fail because authors write from their own knowledge level. Great READMEs assume new reader + 10 minutes. Test by giving to a new team member and timing how long before they're productive.

RFC / Design Proposal

17/20

<task>Write an RFC for [proposal]</task> <context>[problem we're solving]</context> <proposal>[describe]</proposal> <output> Standard RFC format: 1. Summary (1 paragraph) 2. Motivation (why we need this) 3. Guide-level explanation (how it looks to users) 4. Reference-level explanation (technical details) 5. Drawbacks 6. Rationale + alternatives considered 7. Prior art (how others solved this) 8. Unresolved questions 9. Future possibilities Tone: persuasive but honest about tradeoffs. </output>

Writes RFCs in Rust-style format with guide + reference explanation, drawbacks, and alternatives.

💡

Pro tip: Good RFCs invite dissent. Weak RFCs sell. Include a real "drawbacks" section — reviewers respect proposals that acknowledge costs honestly. Hiding tradeoffs signals weak thinking.

Runbook / Playbook

18/20

<task>Write a runbook for [operation]</task> <operation>[describe — incident response, deployment, rollback, etc.]</operation> <output> 1. When to use this runbook 2. Prerequisites (access, tools) 3. Step-by-step actions (literal commands or UI steps) 4. Expected outcome per step 5. What to do if a step fails 6. Escalation path 7. Post-action verification 8. Links to relevant logs / dashboards Written so that an on-caller at 3am can follow without cognitive load. </output>

Writes runbooks with literal steps, failure handling, and on-call-at-3am readability.

💡

Pro tip: Runbooks should be usable by someone sleep-deprived at 3am. No "you probably know how to do X" — spell it out literally. Test runbooks by giving them to someone who's never done the task.

API Documentation

19/20

<task>Document this API endpoint</task> <endpoint>[method + path]</endpoint> <code>[paste handler code]</code> <output> - Overview - Request format (path params, query params, headers, body with schema) - Response format (success + error cases with schema) - Example request (curl) - Example response - Error codes and meanings - Rate limits - Authentication requirements - Related endpoints - Common pitfalls </output>

Documents API endpoints with schemas, examples, errors, auth, and common pitfalls.

💡

Pro tip: API docs win when they include curl examples. Theoretical specs without working examples waste developer time. Always include copy-paste examples — they're the #1 thing API consumers actually use.

Onboarding Doc for New Engineer

20/20

<task>Build an onboarding doc for a new engineer joining [team]</task> <team>[describe — domain, stack, processes]</team> <output> - Team context + mission - Key stakeholders - Tech stack overview (what + why we chose it) - Codebase tour (key directories, critical services) - Local dev setup (step-by-step) - How we work (code review, deploys, on-call) - First week tasks (realistic, PR-able) - First month goals - Where to get help - Glossary of team-specific terms </output>

Builds engineer onboarding docs with stack overview, first-week tasks, and team glossary.

💡

Pro tip: Great onboarding docs answer the questions new engineers are embarrassed to ask. Include the "dumb" questions (what does X acronym mean?) — everyone has them, few will ask.

Frequently Asked Questions

How does Claude compare to Copilot and ChatGPT for engineering work?

Different tools, different strengths. Copilot excels at inline completion while coding. ChatGPT is fast for quick Q&A. Claude is best for deep analysis: long code reviews, system design docs, architecture decisions, and nuanced refactoring. Many engineers use all three: Copilot for flow, ChatGPT for speed, Claude for depth.
Can Claude write production-ready code?

Claude can draft production-quality code for well-specified tasks, but "production-ready" requires your team's standards: tests, observability, security review, performance tuning, and code review. Use Claude to accelerate drafting and initial review, and always human-review AI-generated code before deploying it. The bug you don't catch costs more than the time saved.
How well does Claude handle large codebases?

Claude's 200K-token context window (with extended context available) can analyze codebases of 50-100K lines, though not rewrite them wholesale. For larger codebases, share the relevant slices with clear context about the rest. Claude is particularly good at multi-file analysis within its context window.
Is it safe to paste proprietary code into Claude?

It depends on your tier. Consumer Claude subscriptions don't guarantee your data isn't used for training. The Anthropic API with a training opt-out, or an enterprise agreement, provides stronger guarantees. For sensitive code, use enterprise tiers or anonymize before pasting, and check your company's AI policy before feeding it proprietary code.
What are Claude's blind spots?

Recent libraries and frameworks. Claude's knowledge cutoff means brand-new APIs, freshly released libraries, or language features from the latest release may be unknown or incorrectly documented. When working with cutting-edge tech, paste current docs into the prompt; Claude adapts quickly to context you provide.
