20 Claude Prompts for Coding That Ship Better Code
XML-structured prompts that give Claude the context it needs to generate production-ready code — not toy examples that break the moment you test them.
Code Generation
4 prompts

Build a Feature From a Product Spec
1/20

<context>
I'm working on a [language/framework] project. The codebase uses [architecture pattern, e.g. MVC, clean architecture].
Here's the relevant existing code structure:
[paste file tree or key files]
</context>

<task>
Implement the following feature: [describe the feature in plain English]

Requirements:
1. [requirement 1]
2. [requirement 2]
3. [requirement 3]
</task>

<constraints>
- Follow the existing code style and patterns in the codebase
- Include error handling for edge cases
- No external dependencies unless absolutely necessary
- Write code that a mid-level developer can maintain
</constraints>

<format>
Return each file as a separate code block with the file path as the header.
Include inline comments only where the logic isn't self-evident.
</format>
Generates production-quality feature code that follows your existing codebase patterns.
Pro tip: Paste your actual file tree and 2-3 representative files in <context>. Claude's context window holds up to 200K tokens, and more context means more consistent output.
Generate a REST API Endpoint
2/20

<context>
Stack: [e.g. Node.js/Express, Python/FastAPI, Go/Gin]
Database: [e.g. PostgreSQL with Prisma ORM]
Auth: [e.g. JWT, session-based, API key]
Existing patterns: [paste one existing endpoint for reference]
</context>

<task>
Create a complete REST API endpoint for:
- Resource: [e.g. /api/users/:id/settings]
- Methods: [GET, PUT, DELETE]
- Include: route handler, validation, service layer, database queries
</task>

<constraints>
- Match the existing endpoint patterns exactly
- Validate all input (params, body, query strings)
- Return consistent error responses with proper HTTP status codes
- Include rate limiting considerations in comments
</constraints>
Creates a complete API endpoint matching your existing patterns — route, validation, service, and queries.
Pro tip: Paste one existing endpoint as a reference in <context>. Claude will mirror the exact patterns, naming conventions, and error handling style.
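To make the target concrete, here is a minimal, framework-agnostic sketch of the kind of output this prompt pushes for: validate everything first, and return errors in one consistent shape. The handler name, fields, and error envelope below are illustrative assumptions, not code from any particular stack.

```python
# Hypothetical handler sketch: validate params and body up front, and make
# every endpoint fail with the same (status_code, body) error envelope.

def error(status, code, message):
    """Uniform error envelope shared by all endpoints."""
    return status, {"error": {"code": code, "message": message}}

def update_settings(user_id, body):
    """PUT /api/users/:id/settings with explicit input validation."""
    if not isinstance(user_id, int) or user_id <= 0:
        return error(400, "invalid_param", "user_id must be a positive integer")
    if not isinstance(body, dict):
        return error(400, "invalid_body", "request body must be a JSON object")

    allowed = {"theme", "notifications"}
    unknown = set(body) - allowed
    if unknown:
        return error(422, "unknown_field", f"unexpected fields: {sorted(unknown)}")

    # A real service/database layer would go here; the sketch just echoes
    # the validated settings back.
    return 200, {"user_id": user_id, "settings": body}
```

The point is the shape, not the framework: when every endpoint returns the same error envelope, client code and tests only have to handle one failure format.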
Convert Code Between Languages
3/20

<context>
Source language: [e.g. Python]
Target language: [e.g. TypeScript]
Target framework: [e.g. Next.js with App Router]
</context>

<task>
Convert the following code to [target language], preserving the exact same logic and behavior:
[paste source code]
</task>

<constraints>
- Use idiomatic [target language] patterns — don't write [source language]-style code in [target language]
- Preserve all edge case handling
- Use the target language's type system fully (no 'any' types in TypeScript)
- Flag any features that don't have a direct equivalent and explain the workaround
</constraints>

<format>
Return the converted code, then a "Migration Notes" section listing any behavioral differences or caveats.
</format>
Converts code between languages using idiomatic patterns — not a line-by-line translation.
Pro tip: Claude handles large code conversions well. Paste entire files rather than snippets for more accurate translations.
Write a Database Schema and Migrations
4/20

<context>
Database: [e.g. PostgreSQL]
ORM: [e.g. Prisma, Drizzle, SQLAlchemy, raw SQL]
Existing schema: [paste current schema or describe existing tables]
</context>

<task>
Design and write the schema/migration for:
[describe the data model — entities, relationships, constraints]

Include:
1. Table definitions with proper types and constraints
2. Indexes for common query patterns
3. Migration file(s)
4. Seed data for development
</task>

<constraints>
- Use appropriate data types (don't default everything to VARCHAR)
- Add foreign key constraints with proper ON DELETE behavior
- Include created_at/updated_at timestamps
- Consider query performance — add indexes for fields used in WHERE and JOIN
- No duplicate data — normalize unless there's a performance reason not to
</constraints>
Generates a complete database schema with migrations, indexes, constraints, and seed data.
Pro tip: Ask Claude to generate this as an artifact — you get a clean migration file you can copy directly into your project.
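For a sense of what the constraints above look like in a real migration, here is a small sketch using Python's stdlib sqlite3 so it runs anywhere; the prompt targets your actual database, and the types and ON DELETE semantics carry over to PostgreSQL with minor changes. Table names, columns, and seed values are illustrative assumptions.

```python
import sqlite3

# Migration sketch: proper types, foreign keys with ON DELETE behavior,
# timestamps, and an index matched to a known query pattern.
MIGRATION = """
CREATE TABLE users (
    id         INTEGER PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE settings (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    key     TEXT NOT NULL,
    value   TEXT NOT NULL,
    UNIQUE (user_id, key)
);
-- Index for the common "all settings for this user" lookup.
CREATE INDEX idx_settings_user_id ON settings(user_id);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this opt-in
conn.executescript(MIGRATION)

# Seed data for development.
conn.execute("INSERT INTO users (email) VALUES ('[email protected]')")
conn.execute("INSERT INTO settings (user_id, key, value) VALUES (1, 'theme', 'dark')")

# ON DELETE CASCADE: deleting the user removes their settings too.
conn.execute("DELETE FROM users WHERE id = 1")
orphaned = conn.execute("SELECT COUNT(*) FROM settings").fetchone()[0]
```

The CASCADE choice is exactly the kind of decision the prompt's "proper ON DELETE behavior" constraint forces into the open; for audit-sensitive data you might want RESTRICT or soft deletes instead.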
XML tags are just the start. Learn the full Claude workflow.
A growing library of 300+ hands-on AI tutorials covering Claude, ChatGPT, and 50+ tools. New tutorials added every week.
Debugging & Code Review
4 prompts

Debug a Failing Function
5/20

<context>
Language: [language]
Environment: [e.g. Node.js 20, Python 3.12, browser]
What should happen: [expected behavior]
What actually happens: [actual behavior]
Error message (if any): [paste error]
</context>

<task>
Debug this code and explain exactly what's wrong:
[paste the code]

Provide:
1. The root cause (not just the symptom)
2. The fix with explanation
3. How to prevent this class of bug in the future
</task>

<constraints>
- Don't just say "the issue is on line X" — explain WHY it's wrong
- If there are multiple issues, list all of them, not just the first one
- Show the fixed code as a complete replacement, not just a diff
</constraints>
Gets to the root cause of bugs — not just the symptom — with a fix and prevention strategy.
Pro tip: Enable extended thinking for complex bugs. Claude will trace through the execution path step by step before identifying the issue.
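As a miniature example of the root-cause-not-symptom standard this prompt sets, here is a classic Python bug and the fix a good answer should produce. The function is a made-up stand-in, but the bug class is real.

```python
# BUG: a mutable default argument is evaluated ONCE at function definition,
# so every call without an explicit `tags` mutates the same shared list.
def add_tag_buggy(tag, tags=[]):
    tags.append(tag)
    return tags

first = add_tag_buggy("a")
second = add_tag_buggy("b")   # symptom: unexpectedly contains "a" as well

# Root cause: both calls share one list object created at def time.
# Fix: use a None sentinel and create a fresh list on each call.
def add_tag(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

Prevention, per item 3 of the prompt: never use mutable defaults; most linters (e.g. flake8's bugbear plugin) flag this pattern automatically.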
Review a Pull Request
6/20

<context>
Project: [describe project and its purpose]
This PR: [describe what this PR does]
Team coding standards: [mention any specific standards or link]
</context>

<task>
Review this code change as a senior developer. For each issue found, categorize as:
- 🔴 Blocker: Must fix before merge
- 🟡 Suggestion: Should fix, not blocking
- 💡 Nit: Style preference, take it or leave it

Focus on:
1. Logic errors or edge cases
2. Security vulnerabilities
3. Performance issues
4. Maintainability concerns
5. Missing tests or test gaps
</task>

<constraints>
- Be specific — cite the exact line and explain the issue
- For each issue, provide the fix, not just the problem
- Don't flag style issues that a linter should catch
- If the code is good, say so — don't invent problems
</constraints>

Here's the diff:
[paste the diff or code]
Gets a senior-level code review with categorized issues, specific fixes, and no filler feedback.
Pro tip: Claude gives better reviews with more context. Paste the full diff plus any related files the changes interact with.
Explain a Complex Codebase
7/20

<task>
I'm new to this codebase and need to understand how it works. Explain the following code as if you're onboarding a new team member:
[paste the code — can be multiple files]
</task>

<format>
Structure your explanation as:
1. **High-level overview** — What does this code do in one paragraph?
2. **Architecture** — How are the pieces connected? (include a simple ASCII diagram)
3. **Key concepts** — What patterns or abstractions does a reader need to understand?
4. **Data flow** — Walk through a typical request/operation step by step
5. **Gotchas** — What's non-obvious or likely to trip someone up?
</format>

<constraints>
- Assume I know [language] but not this specific codebase
- Don't explain basic language features — focus on architecture and design decisions
- If something looks like a workaround or tech debt, flag it
</constraints>
Gets an onboarding-quality walkthrough of unfamiliar code — architecture, data flow, and gotchas included.
Pro tip: Paste multiple related files at once. Claude handles large contexts and gives better explanations when it can see the full picture.
Find Security Vulnerabilities
8/20

<context>
Application type: [e.g. web app, API, CLI tool]
Framework: [e.g. Express, Django, Rails]
Auth method: [e.g. JWT, sessions, OAuth]
</context>

<task>
Audit this code for security vulnerabilities:
[paste code]

For each vulnerability found:
1. What it is (name the vulnerability type — XSS, SQLi, IDOR, etc.)
2. How it could be exploited (specific attack scenario)
3. The fix (show the corrected code)
4. Severity: Critical / High / Medium / Low
</task>

<constraints>
- Focus on OWASP Top 10 categories
- Don't flag theoretical issues that require unlikely conditions — focus on exploitable vulnerabilities
- If the code looks secure, say so and explain what it's doing right
- Check for: injection, auth bypass, data exposure, CSRF, insecure deserialization
</constraints>
A focused security audit with specific exploit scenarios, severity ratings, and fix code.
Pro tip: Enable extended thinking for security audits. Claude will methodically trace attack vectors rather than pattern-matching common issues.
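The "specific attack scenario plus corrected code" format the prompt demands looks like this in miniature. The example is a textbook SQL injection, sketched with stdlib sqlite3 and hypothetical function names so it runs standalone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def find_user_vulnerable(name):
    # VULNERABILITY (SQLi, Critical): attacker-controlled input is
    # interpolated straight into the query string.
    # Attack scenario: name = "' OR '1'='1" returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # FIX: parameterized query — the driver treats input as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

A good audit response names the category (injection), demonstrates the payload, rates severity, and shows the parameterized fix, exactly the four items the prompt enumerates.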
Testing
4 prompts

Generate a Test Suite
9/20

<context>
Language: [language]
Test framework: [e.g. Jest, pytest, Go testing, RSpec]
Code to test: [paste the function/module/class]
</context>

<task>
Write a comprehensive test suite covering:
1. Happy path — expected inputs produce expected outputs
2. Edge cases — boundary values, empty inputs, null/undefined
3. Error cases — invalid inputs, network failures, timeouts
4. Integration points — if it calls other services, mock them properly
</task>

<constraints>
- Each test should test ONE thing (clear test name = clear intent)
- Use descriptive test names: "should [expected behavior] when [condition]"
- Don't test implementation details — test behavior
- Group related tests with describe/context blocks
- Include setup and teardown where needed
</constraints>
Generates a complete test suite with happy paths, edge cases, error handling, and proper mocking.
Pro tip: Paste the actual function plus any types/interfaces it uses. Claude writes better tests when it can see the full contract.
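Here is what those constraints produce in practice: one behavior per test, names that read "should X when Y". The function under test is a hypothetical stand-in, and the tests are plain functions rather than any particular framework's API:

```python
def parse_price(text):
    """Parse a price string like '$1,234.50' into cents.

    Raises ValueError on empty input after stripping symbols.
    """
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price")
    return round(float(cleaned) * 100)

# Happy path: one expected behavior per test, spelled out in the name.
def test_should_parse_dollars_and_cents_when_input_is_well_formed():
    assert parse_price("$1,234.50") == 123450

# Edge case: padded input.
def test_should_ignore_surrounding_whitespace_when_input_is_padded():
    assert parse_price("  $5  ") == 500

# Error case: behavior on invalid input, not implementation details.
def test_should_raise_value_error_when_input_is_empty():
    try:
        parse_price("$")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Notice none of the tests peek at `cleaned` or other internals; they only exercise inputs and outputs, which is what "test behavior, not implementation details" means.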
Write End-to-End Tests
10/20

<context>
App type: [e.g. Next.js web app]
E2E framework: [e.g. Playwright, Cypress, Selenium]
Feature to test: [describe the user flow]
Test environment: [e.g. local dev, staging]
</context>

<task>
Write E2E tests for this user flow:
[describe step by step what the user does]

Include:
1. Happy path test
2. Error state test (what if something fails?)
3. Edge case test (what if the user does something unexpected?)
</task>

<constraints>
- Use realistic test data, not "test123"
- Wait for elements properly — no arbitrary sleep/delays
- Tests should be independent (no test depends on another test's state)
- Include proper selectors (prefer data-testid over CSS classes)
- Clean up test data after each test
</constraints>
Creates robust E2E tests with realistic data, proper waits, and independent test isolation.
Pro tip: Describe the user flow in plain English — Claude translates it to framework-specific test code better than if you try to write pseudo-code.
Generate Test Data and Fixtures
11/20

<task>
Generate test data for the following data model:
[paste your type definitions, schema, or describe the data structure]

Create:
1. A factory function that generates valid test data with sensible defaults
2. 5 realistic fixture records covering different scenarios (not just random data)
3. Edge case fixtures: empty strings, max-length values, Unicode, special characters
4. Invalid data fixtures for testing validation
</task>

<constraints>
- Use realistic names, emails, dates — not "John Doe" or "[email protected]"
- Date values should be relative (e.g. "3 days ago") not hardcoded
- Factory function should accept partial overrides for any field
- Cover the boundary values for any numeric fields
</constraints>

<format>
Return the factory function first, then the fixture data as named exports.
</format>
Generates realistic test data factories and fixtures — not lazy placeholder data.
Pro tip: Claude generates surprisingly realistic test data. For extra variety, ask: "Generate 20 more fixture records with different demographic profiles."
Add Tests to Legacy Code
12/20

<context>
Language: [language]
Test framework: [e.g. Jest, pytest]
This code has NO existing tests. It works in production but we need to add a test safety net before refactoring.
</context>

<task>
Create a test strategy and initial test suite for this legacy code:
[paste the code]

Approach:
1. First, identify what this code actually does (document the current behavior)
2. Write characterization tests that capture the CURRENT behavior (even if it has bugs)
3. Identify the riskiest areas that need tests most urgently
4. Flag any code that's untestable in its current form and suggest minimal refactors to make it testable
</task>

<constraints>
- Don't refactor the production code — only suggest the minimum changes needed for testability
- Characterization tests should pass against the current code, not idealized behavior
- Mark any tests that capture suspected bugs with a clear comment
- Focus on the public API, not internal implementation
</constraints>
Creates a test safety net for legacy code — characterization tests first, then targeted coverage.
Pro tip: This is a great use of extended thinking. Claude will analyze the code thoroughly before deciding what to test and in what order.
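Characterization testing in miniature looks like this: pin down what the legacy code does, bugs included. The `legacy_slugify` function is a made-up stand-in for real legacy code:

```python
def legacy_slugify(title):
    # Legacy behavior: consecutive spaces produce doubled hyphens.
    # Production URLs may already depend on this, so we pin it, not fix it.
    return title.lower().replace(" ", "-")

def test_characterize_simple_title():
    assert legacy_slugify("Hello World") == "hello-world"

def test_characterize_double_space_kept():
    # CAPTURES SUSPECTED BUG: double space -> double hyphen.
    # Do not "fix" this test until all callers are audited.
    assert legacy_slugify("Hello  World") == "hello--world"
```

The second test is the key move: it documents a probable bug without changing behavior, which is exactly what the "mark suspected bugs with a clear comment" constraint asks for.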
Architecture & Design
4 prompts

Design a System Architecture
13/20

<context>
Problem: [what are you building and why]
Scale: [expected users/requests/data volume]
Team: [team size and experience level]
Existing infrastructure: [cloud provider, current stack]
Budget constraints: [if any]
</context>

<task>
Design a system architecture that solves this problem:
[detailed description of requirements]

Include:
1. High-level architecture diagram (ASCII)
2. Component breakdown with responsibilities
3. Data flow for the 3 most common operations
4. Technology choices with reasoning (not just "use Kafka")
5. Trade-offs you're making and why
</task>

<constraints>
- Optimize for the stated scale — don't over-engineer for 10x growth that isn't certain
- Every technology choice must have a "why this over alternatives" justification
- Address: scalability, reliability, security, and developer experience
- Include a "what I'd add at 10x scale" section separately
</constraints>
Generates a complete system architecture with diagrams, trade-offs, and justified technology choices.
Pro tip: Enable extended thinking. Architecture decisions benefit from Claude reasoning through trade-offs before committing to a design.
Refactor Plan for Technical Debt
14/20

<context>
Codebase: [language/framework, approximate size]
Pain points: [what's causing problems — slow builds, bugs, hard to add features, etc.]
Team capacity: [how much time can you dedicate to refactoring]
</context>

<task>
Create a refactoring plan for this code:
[paste the code or describe the architecture]

Provide:
1. Assessment: What's the actual problem? (not just "it's messy")
2. Priority-ordered list of refactors with effort estimates (S/M/L)
3. For each refactor: what it fixes, risk level, and how to verify it worked
4. A migration strategy that doesn't require a big-bang rewrite
5. What to leave alone (not everything needs fixing)
</task>

<constraints>
- Each refactor must be deployable independently (no "refactor everything then ship")
- Include rollback strategies for risky changes
- Estimate effort realistically — account for testing and code review time
- Prioritize by: impact on team velocity > code quality > consistency
</constraints>
Creates a pragmatic refactoring plan with priorities, risk levels, and incremental migration steps.
Pro tip: Paste the actual problematic code. Claude gives much better refactoring advice with real code than with descriptions of code.
Design a Database Schema From Requirements
15/20

<context>
Application: [describe the application]
Database: [e.g. PostgreSQL, MongoDB, DynamoDB]
Expected scale: [number of records, query patterns, read/write ratio]
</context>

<task>
Design a database schema for these requirements:
[list requirements — entities, relationships, business rules]

Provide:
1. Entity-relationship diagram (ASCII)
2. Table definitions with types, constraints, and indexes
3. Explanation of normalization decisions (when you chose to denormalize and why)
4. Common query patterns and which indexes support them
5. Migration path if requirements change (what's easy to change vs. hard)
</task>

<constraints>
- Optimize for the stated query patterns, not theoretical perfection
- Include soft deletes if the business needs audit trails
- Use appropriate types (don't make everything a string)
- Consider: concurrent writes, eventual consistency needs, data growth
</constraints>
Designs a query-optimized database schema with ER diagrams, indexes, and migration flexibility.
Pro tip: Describe your query patterns explicitly — "we query users by email 100x more than by name." Claude optimizes indexes based on actual access patterns.
Evaluate Build vs Buy Decision
16/20

<context>
Problem: [what we need to solve]
Team: [size, skills, bandwidth]
Budget: [range for SaaS tools]
Timeline: [when do we need this working]
Current stack: [what we already use]
</context>

<task>
Evaluate whether we should build this in-house or use an existing solution:

For the BUILD option:
1. Estimated effort (dev weeks)
2. Ongoing maintenance cost
3. Risks and unknowns
4. What we gain (customization, control, no vendor lock-in)

For the BUY option:
1. Top 3 tool/service recommendations with pricing
2. Integration complexity
3. Limitations and lock-in risks
4. What we lose vs building

Recommendation: Which option and why, given our specific context.
</task>

<constraints>
- Be honest about hidden costs on both sides
- Factor in opportunity cost (what else could the team build with that time?)
- Consider: maintenance burden, hiring needs, vendor reliability
- Don't default to "build" just because we're developers
</constraints>
Gets an honest build-vs-buy analysis with hidden costs, trade-offs, and a clear recommendation.
Pro tip: Claude gives balanced analysis here — it won't default to "build" like most developer communities. Extended thinking helps weigh the trade-offs.
Documentation
4 prompts

Write API Documentation
17/20

<context>
API type: [REST, GraphQL, gRPC]
Audience: [external developers, internal team, partners]
Existing docs style: [if any — paste an example]
</context>

<task>
Write documentation for this API:
[paste the API code — routes, controllers, types]

Include for each endpoint:
1. Description (what it does and when to use it)
2. Request format (method, URL, headers, body with types)
3. Response format (success and error responses with examples)
4. Authentication requirements
5. Rate limits (if applicable)
6. Code example in [language] showing a real-world usage
</task>

<constraints>
- Examples should show realistic data, not "foo" and "bar"
- Document error responses as thoroughly as success responses
- Include curl examples for quick testing
- Note any non-obvious behavior or gotchas
</constraints>

<format>
Use markdown with clear headers. Each endpoint should be self-contained (no "see above" references).
</format>
Generates complete API docs with realistic examples, error responses, and code samples.
Pro tip: Claude generates this as a clean markdown artifact you can drop into your docs site. Ask for specific language examples: "add a Python and JavaScript example for each endpoint."
Write a Technical Design Document
18/20

<context>
Company: [company/team]
Project: [project name]
Author: [your name]
Reviewers: [who will review this]
</context>

<task>
Write a technical design document for:
[describe what you're building and why]

Sections:
1. Summary (2-3 sentences — what and why)
2. Background (what problem exists today)
3. Goals and non-goals (explicitly state what's out of scope)
4. Proposed design (with diagrams)
5. Alternatives considered (minimum 2, with reasons for rejection)
6. Rollout plan (how to ship safely)
7. Open questions (what still needs to be decided)
</task>

<constraints>
- Keep it under 2 pages (concise > comprehensive)
- Alternatives must be genuinely considered, not strawmen
- Include concrete metrics for success
- Open questions should have proposed answers, not just questions
</constraints>
Produces a concise, reviewable design doc with alternatives, rollout plan, and open questions.
Pro tip: After generating, ask Claude: "Play devil's advocate — what are the strongest arguments against this design?" Claude gives genuine critique, not soft pushback.
Generate README for a Project
19/20

<task>
Write a README.md for this project:
[paste your project structure, main files, or describe the project]

Include:
1. Project name and one-line description
2. Quick start (get running in under 2 minutes)
3. Prerequisites (with version numbers)
4. Installation steps (copy-paste ready)
5. Usage examples (3 common use cases)
6. Configuration (environment variables, config files)
7. Contributing guide (brief)
8. License
</task>

<constraints>
- Quick start must actually work — no "configure your environment" handwaving
- All code examples must be copy-pasteable (no placeholder paths)
- Include a "Troubleshooting" section for common issues
- Keep it scannable — use headers, code blocks, and bullet points
- No badges unless they provide useful info (build status, coverage)
</constraints>
Creates a README that actually helps people get started — not a template with TODOs.
Pro tip: Paste your package.json or requirements.txt so Claude can list exact prerequisites and commands.
Write Inline Code Documentation
20/20

<task>
Add documentation to this code. NOT line-by-line comments — strategic documentation that helps future developers understand the WHY:
[paste code]
</task>

<constraints>
- Only add comments where the code isn't self-explanatory
- Document: non-obvious decisions, business logic, workarounds, performance choices
- Use the language's standard doc format (JSDoc, docstrings, GoDoc, etc.)
- For public APIs: document parameters, return values, and error conditions
- Never comment WHAT the code does (the code shows that) — comment WHY
- Flag any code that SHOULD have a comment but you can't write one because the intent is unclear
</constraints>

<format>
Return the complete code with documentation added. Mark added documentation with a brief "// ADDED:" prefix on the first comment line so the developer can review additions.
</format>
Adds strategic documentation that explains the why — not obvious what-comments on every line.
Pro tip: Claude is excellent at identifying non-obvious code. It flags workarounds, implicit assumptions, and business rules that need documentation.
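A short before-and-after of what "document the WHY" means in practice. The retry helper, the load-balancer detail, and the backoff constant below are all hypothetical, written only to show the shape of a why-comment versus a what-comment:

```python
import time

def fetch_with_retry(fetch, attempts=3):
    """Call `fetch()` and retry on ConnectionError.

    Returns whatever `fetch` returns; re-raises after the final attempt.
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # WHY (not what): the upstream load balancer drops a small
            # fraction of connections during deploys; backing off between
            # retries keeps them from stampeding the recovering service.
            time.sleep(0)  # would be 2 ** attempt in production; 0 keeps the sketch fast
```

A what-comment here would say "retry up to 3 times", which the code already shows; the why-comment records the operational fact a future developer cannot recover from the code alone.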