Claude Prompt Library

100 Claude-Native Coding Prompts

100 copy-paste prompts

Code reviews. Refactoring. API design. Testing. Debugging. Built for how Claude actually writes code.

Code Reviews

10 prompts

General Code Review

1/100

<context>
Language: [LANGUAGE]
Framework: [FRAMEWORK]
Project type: [e.g. web app, API, CLI tool, library]
Team standards: [any coding standards or conventions your team follows]
</context>

<task>
Review the following code as a senior developer:

[PASTE CODE HERE]

For each issue:
1. The exact location (file/function/line)
2. What the issue is and why it matters
3. The suggested fix with corrected code
4. Severity: Blocker / Major / Minor / Nit
</task>

<constraints>
- Evaluate: correctness, readability, maintainability, testability, adherence to team standards
- Prioritize issues that could cause bugs in production over style preferences
- If the code is well-written, acknowledge what it does right
- Don't flag issues that an auto-formatter or linter would catch
- Limit to the 10 most impactful findings
</constraints>

<output_format>
Group findings by severity. End with a summary: "Approve", "Approve with suggestions", or "Request changes".
</output_format>

Gets a structured senior-level code review with severity-ranked findings and actionable fixes.

💡

Pro tip: Add your style guide to a Claude Project. Claude will enforce your team conventions automatically across every review.

Security-Focused Review

2/100

<context>
Application type: [e.g. SaaS web app, public API, internal tool]
Framework: [FRAMEWORK]
Auth mechanism: [e.g. JWT, session cookies, OAuth2, API keys]
Data sensitivity: [e.g. handles PII, financial data, health records]
</context>

<task>
Perform a security-focused code review:

[PASTE CODE HERE]

For each vulnerability:
1. Vulnerability type (e.g. XSS, SQLi, IDOR, SSRF)
2. OWASP Top 10 category
3. A concrete exploit scenario
4. Severity: Critical / High / Medium / Low
5. The fix with corrected code
</task>

<constraints>
- Focus on exploitable vulnerabilities, not theoretical risks
- Check for: injection, broken auth, sensitive data exposure, CSRF, insecure deserialization, SSRF, path traversal
- Verify input validation on all external inputs
- Check that secrets are not hardcoded or logged
</constraints>

A focused security audit mapped to OWASP Top 10, with exploit scenarios and severity ratings.

💡

Pro tip: Enable extended thinking for security reviews. Claude traces each input through the code to find injection points.

Performance Review

3/100

<context>
Language: [LANGUAGE]
Runtime: [e.g. Node.js 20, Python 3.12, JVM 21]
Scale: [e.g. handles 1K req/s, processes 10M records]
Known bottlenecks: [any performance issues already identified]
</context>

<task>
Review this code for performance issues:

[PASTE CODE HERE]

For each issue:
1. What the performance problem is
2. Why it matters at the stated scale
3. The estimated impact (e.g. O(n²) becomes O(n))
4. The optimized code
</task>

<constraints>
- Focus on issues that matter at the stated scale
- Check for: N+1 queries, unnecessary allocations, blocking I/O, missing indexes, unbounded data structures
- Distinguish between "measurable impact" and "technically suboptimal but irrelevant"
- If suggesting caching, specify the invalidation strategy
- Don't sacrifice readability for marginal gains
</constraints>

Finds performance bottlenecks that matter at your scale, with Big-O analysis and optimized code.

💡

Pro tip: Include your expected traffic or data volume. Claude prioritizes differently for 100 users vs. 100K users.
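The kind of rewrite this prompt asks for under item 3 ("O(n²) becomes O(n)") often looks like replacing a repeated array scan with a set lookup. A minimal sketch, with invented function and field names:

```javascript
// O(n * m): Array.includes rescans allowedIds for every user.
function activeUsersQuadratic(users, allowedIds) {
  return users.filter((u) => allowedIds.includes(u.id));
}

// O(n + m): build the Set once, then each membership check is O(1).
function activeUsersLinear(users, allowedIds) {
  const allowed = new Set(allowedIds);
  return users.filter((u) => allowed.has(u.id));
}
```

Both return the same result; only the second stays fast when both lists grow into the tens of thousands.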

SOLID Principles Review

4/100

<context>
Language: [LANGUAGE]
Project architecture: [e.g. clean architecture, hexagonal, MVC]
Module purpose: [describe what this module/class is responsible for]
</context>

<task>
Review this code for SOLID principle violations:

[PASTE CODE HERE]

For each violation:
1. Which principle is violated and why
2. The concrete consequence
3. A refactored version
4. Whether fixing it is worth the added complexity
</task>

<constraints>
- Be pragmatic — flag violations that cause real maintenance pain
- Show before/after code for each refactoring
- If the code intentionally trades SOLID purity for simplicity, acknowledge that
- Focus on the top 3 most impactful violations
</constraints>

Identifies SOLID violations that cause real maintenance problems, with pragmatic refactoring suggestions.

💡

Pro tip: Paste multiple related classes together. SOLID violations show up in how classes interact, not in isolation.

Review Pull Request Diff

5/100

<context>
Repository: [REPO NAME AND PURPOSE]
Base branch: [e.g. main]
PR description: [PASTE PR DESCRIPTION]
Related files not in diff: [paste any types or interfaces the diff references]
</context>

<task>
Review this pull request diff:

[PASTE GIT DIFF HERE]

Evaluate:
1. Does this change do what the PR description claims?
2. Are there logic errors, edge cases, or regressions?
3. Is the change complete — missing tests, docs, migrations?
4. Dead code, debugging artifacts, or TODOs left behind?

Categorize: BLOCKER / WARNING / NIT
</task>

<constraints>
- Review only what changed — don't critique existing code outside the diff
- Flag any backward compatibility breakage
- Check that error paths are tested
- If the diff is clean and correct, say so concisely
</constraints>

Reviews a PR diff for correctness, completeness, and regressions with severity-tagged findings.

💡

Pro tip: Paste the full git diff plus referenced files. Claude catches integration bugs when it sees both sides of an interface.

Error Handling Review

6/100

<context>
Language: [LANGUAGE]
Framework: [FRAMEWORK]
Error strategy: [e.g. exceptions, Result types, Go-style error returns]
Logging: [e.g. Sentry, Datadog, structured logging]
</context>

<task>
Review this code for error handling completeness:

[PASTE CODE HERE]

Check for:
1. Unhandled exceptions or rejected promises
2. Swallowed errors (catch blocks that silently fail)
3. Generic catch-all blocks that hide the real error
4. Missing validation on external inputs
5. Error messages that leak implementation details
6. Missing cleanup/rollback on failure
</task>

<constraints>
- For each gap, show the specific failure scenario
- Provide corrected code with proper error handling
- Don't wrap every line in try/catch — identify where errors need to be caught vs. propagated
- Verify async error paths are handled
</constraints>

Finds every error handling gap with specific failure scenarios and production-safe fixes.

💡

Pro tip: Paste the code along with its caller. Error handling bugs often happen at function boundaries.
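Item 2 above (swallowed errors) is the gap this review most often surfaces. A hypothetical before/after, with an invented `db.findUser` dependency standing in for any data layer:

```javascript
// Gap: the catch block swallows the error, so the caller can't tell
// "user not found" apart from "database down" — both return null.
async function getUserSilent(db, id) {
  try {
    return await db.findUser(id);
  } catch {
    return null; // swallowed: no log, no context, no rethrow
  }
}

// Fix: add context and rethrow, preserving the original error as `cause`
// (supported in Node 16.9+). The caller decides how to handle it.
async function getUser(db, id) {
  try {
    return await db.findUser(id);
  } catch (err) {
    throw new Error(`getUser failed for id=${id}: ${err.message}`, { cause: err });
  }
}
```

The rethrown error keeps the full failure chain for logging while letting callers distinguish real absence from infrastructure failure.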

Concurrency Review

7/100

<context>
Language: [LANGUAGE]
Concurrency model: [e.g. threads, async/await, goroutines, actors]
Shared state: [describe shared resources — databases, caches, in-memory state]
Scale: [e.g. 50 concurrent requests, 8 worker threads]
</context>

<task>
Review this code for race conditions and concurrency bugs:

[PASTE CODE HERE]

For each issue:
1. Describe the exact race condition
2. Explain the production symptoms
3. Provide the fix with corrected code
4. Explain why the fix works
</task>

<constraints>
- Check for: TOCTOU, lost updates, deadlocks, starvation, non-atomic compound operations
- Verify database transaction isolation levels
- Check that shared mutable state is protected
- Don't flag single-threaded code as having race conditions
</constraints>

Detects race conditions with step-by-step exploit scenarios and thread-safe fixes.

💡

Pro tip: Enable extended thinking for concurrency reviews. Claude simulates interleaved execution paths to find timing bugs.

Accessibility Review

8/100

<context>
Framework: [e.g. React, Vue, Svelte, plain HTML]
Component type: [e.g. form, modal, navigation, data table]
Target compliance: [e.g. WCAG 2.1 AA, Section 508]
</context>

<task>
Review this frontend code for accessibility issues:

[PASTE COMPONENT CODE HERE]

For each issue:
1. The WCAG success criterion violated
2. Who is affected
3. The fix with corrected code
4. How to manually test the fix
</task>

<constraints>
- Check for: missing alt text, form labels, ARIA attributes, keyboard navigation, focus management, color contrast
- Use semantic HTML first, ARIA as a supplement
- Include screen reader testing instructions
</constraints>

Finds WCAG violations with affected users and screen-reader-tested fixes.

💡

Pro tip: Ask Claude to generate the review as an artifact for a structured audit report you can add to your issue tracker.

Database Query Review

9/100

<context>
Database: [e.g. PostgreSQL 16, MySQL 8]
ORM: [e.g. Prisma, SQLAlchemy, raw SQL]
Table sizes: [e.g. users: 500K rows, orders: 10M rows]
Current indexes: [paste indexes or "unknown"]
</context>

<task>
Review these database queries for correctness and performance:

[PASTE QUERIES OR ORM CODE HERE]

For each query:
1. Is it correct?
2. What's the likely execution plan?
3. Will it perform well at the stated table sizes?
4. Missing indexes?
5. Can it be rewritten for better performance?
</task>

<constraints>
- Show the optimized query alongside the original
- Specify exact CREATE INDEX statements
- Flag N+1 patterns in ORM code
- Check for: unbounded queries, unnecessary JOINs, SELECT *
- Consider write performance impact of new indexes
</constraints>

Reviews SQL and ORM queries for correctness, missing indexes, and performance at your data scale.

💡

Pro tip: Include table sizes and read/write ratio. Claude adjusts recommendations based on your actual data scale.
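The N+1 pattern this prompt flags, sketched with a generic `query(sql, params)` function standing in for whatever driver or ORM you actually use (table and column names are invented):

```javascript
// N+1: one query for the orders, then one extra query PER order.
// At 10M orders this is catastrophic; even at 100 it adds 100 round trips.
async function ordersWithUsersN1(query) {
  const orders = await query('SELECT * FROM orders', []);
  for (const order of orders) {
    const rows = await query('SELECT * FROM users WHERE id = $1', [order.user_id]);
    order.user = rows[0];
  }
  return orders;
}

// Fixed: two queries total — batch the user lookup, join in memory.
async function ordersWithUsers(query) {
  const orders = await query('SELECT * FROM orders', []);
  const ids = [...new Set(orders.map((o) => o.user_id))];
  const users = await query('SELECT * FROM users WHERE id = ANY($1)', [ids]);
  const byId = new Map(users.map((u) => [u.id, u]));
  for (const order of orders) order.user = byId.get(order.user_id);
  return orders;
}
```

The fix keeps query count constant regardless of result size, which is exactly the property the review should verify.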

Code Smells Review

10/100

<context>
Language: [LANGUAGE]
Project age: [e.g. 6 months, 5-year legacy codebase]
Team size: [e.g. 3 developers]
Module purpose: [what this code does]
</context>

<task>
Review this code for code smells:

[PASTE CODE HERE]

For each smell:
1. Name the smell (e.g. Feature Envy, Shotgun Surgery, Primitive Obsession)
2. Where it appears
3. What maintenance problem it causes
4. A refactored version
5. Effort estimate: Quick Fix / Moderate / Major Refactor
</task>

<constraints>
- Focus on smells that hurt team velocity
- Prioritize by: bugs, slow onboarding, merge conflicts
- Don't suggest refactoring stable, rarely-changed code
- Limit to the top 5 most impactful smells
</constraints>

Identifies code smells that slow your team down, with effort-ranked refactoring suggestions.

💡

Pro tip: Describe how often this code changes. Claude skips low-ROI suggestions for stable code.


Refactoring

10 prompts

Extract Functions

11/100

<context>
Language: [LANGUAGE]
Function name: [FUNCTION NAME]
Current length: [e.g. 200 lines]
What it does: [brief description]
</context>

<task>
Refactor this function by extracting well-named helper functions:

[PASTE CODE HERE]

For each extraction:
1. The proposed function name and signature
2. What it encapsulates and why it belongs together
3. The full extracted function
4. The updated caller code
</task>

<constraints>
- Each extracted function should have a single responsibility
- Function names should read like documentation — no abbreviations
- Don't extract logic that is only used once if it doesn't improve clarity
- Preserve all existing behavior exactly
- Keep the public API identical
</constraints>

<output_format>
Show the complete refactored version, not just diffs.
</output_format>

Breaks a long function into well-named helpers with clear single responsibilities.

💡

Pro tip: Add your naming conventions to a Claude Project so extracted functions match your codebase style.

Reduce Cyclomatic Complexity

12/100

<context>
Language: [LANGUAGE]
Current complexity score: [if known, e.g. 24]
Maximum acceptable complexity: [e.g. 10]
</context>

<task>
Reduce the cyclomatic complexity of this function:

[PASTE CODE HERE]

For each technique applied:
1. What technique (early return, strategy pattern, lookup table, polymorphism, guard clauses)
2. Why it reduces complexity
3. The refactored code
</task>

<constraints>
- Preserve all existing behavior and edge cases
- Don't increase the number of classes unless complexity genuinely warrants it
- Prefer early returns and guard clauses over nested conditionals
- If a lookup table can replace a switch, use it
- Show the before/after complexity score if estimable
</constraints>

Reduces deeply nested conditionals and switch statements using guard clauses, lookup tables, and polymorphism.

💡

Pro tip: Enable extended thinking so Claude can explore multiple restructuring approaches before committing to one.
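The lookup-table technique the prompt's constraints mention ("If a lookup table can replace a switch, use it") looks like this in practice. A hypothetical before/after with invented tier names:

```javascript
// Before: every case adds a branch to the complexity score.
function shippingCostSwitch(tier) {
  switch (tier) {
    case 'standard': return 5;
    case 'express': return 15;
    case 'overnight': return 30;
    default: throw new Error(`unknown tier: ${tier}`);
  }
}

// After: the mapping becomes data. Adding a tier no longer touches
// control flow, and one guard clause handles the unknown case.
const SHIPPING_COST = { standard: 5, express: 15, overnight: 30 };

function shippingCost(tier) {
  const cost = SHIPPING_COST[tier];
  if (cost === undefined) throw new Error(`unknown tier: ${tier}`);
  return cost;
}
```

The cyclomatic complexity drops from one branch per tier to a constant 2, and new tiers become a one-line data change.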

Convert Callbacks to Async/Await

13/100

<context>
Language: [JavaScript / TypeScript / Node.js version]
Libraries used: [e.g. fs, http, third-party SDKs with callback APIs]
Error handling convention: [e.g. Node error-first callbacks, custom error classes]
</context>

<task>
Convert this callback-based code to async/await:

[PASTE CODE HERE]

For the refactored version:
1. Wrap callback APIs with promisify or manual Promise wrappers
2. Use async/await throughout
3. Handle errors with try/catch
4. Preserve all error cases from the original
</task>

<constraints>
- Use util.promisify where applicable
- Maintain the same function signatures where possible
- Handle parallel operations with Promise.all, not sequential awaits
- Don't lose error context during the conversion
- Add TypeScript types if the project uses TypeScript
</constraints>

Modernizes callback-based async code to clean async/await with proper error handling.

💡

Pro tip: Paste related utility functions alongside the main code. Claude will promisify them consistently.

DRY Up Repeated Code

14/100

<context>
Language: [LANGUAGE]
Files affected: [list files containing the duplication]
How many times the pattern is repeated: [e.g. 6 times across 4 files]
</context>

<task>
Identify and eliminate duplication in this code:

[PASTE CODE HERE]

For each duplication pattern:
1. What is repeated and where
2. The abstraction that eliminates it (utility function, base class, HOC, mixin, generic)
3. The shared implementation
4. Updated call sites
</task>

<constraints>
- Only abstract if the duplicated code truly does the same thing — don't create premature abstractions
- The abstraction must be simpler to use than the original duplication
- Keep the abstraction in the file/module closest to all its consumers
- Update all call sites in the provided code
</constraints>

Identifies duplicated patterns and creates clean abstractions that eliminate repetition without over-engineering.

💡

Pro tip: Paste all files with the duplication together. Claude finds patterns across files that single-file review misses.

Simplify Nested Conditionals

15/100

<context>
Language: [LANGUAGE]
Business rules: [describe what the conditionals are implementing]
</context>

<task>
Simplify this deeply nested conditional logic:

[PASTE CODE HERE]

Apply:
1. Guard clauses to eliminate nesting levels
2. De Morgan's laws to simplify boolean expressions
3. Consolidated conditions where branches do the same thing
4. Named boolean variables to explain compound conditions
</task>

<constraints>
- Preserve every business rule exactly — do not change behavior
- Each guard clause should have an explanatory comment if the condition isn't self-evident
- Show the nesting depth before and after
- If a truth table would help explain the logic, include one
</constraints>

Flattens deeply nested if/else trees into readable guard clauses without changing business logic.

💡

Pro tip: Describe the business rules in plain English. Claude validates the refactored logic against your intent.
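Techniques 1, 2, and 4 above combine naturally. A hypothetical before/after (the checkout rules are invented for illustration):

```javascript
// Before: two nesting levels and a compound negative condition.
function canCheckoutNested(cart, user) {
  if (cart.items.length > 0) {
    if (!(user.banned || !user.emailVerified)) {
      return true;
    }
  }
  return false;
}

// After: a guard clause removes the nesting, De Morgan rewrites
// !(banned || !verified) as (!banned && verified), and named booleans
// make each compound condition self-explanatory.
function canCheckout(cart, user) {
  const cartIsEmpty = cart.items.length === 0;
  if (cartIsEmpty) return false;

  const userInGoodStanding = !user.banned && user.emailVerified;
  return userInGoodStanding;
}
```

The two functions are behaviorally identical for every input combination; only the readability changes.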

Decompose God Class

16/100

<context>
Language: [LANGUAGE]
Class name: [CLASS NAME]
Current line count: [e.g. 1,200 lines]
How the class is used: [describe its main consumers]
</context>

<task>
Decompose this god class into focused, cohesive classes:

[PASTE CLASS CODE HERE]

Propose:
1. A decomposition plan — which responsibilities belong together
2. The new class names and their responsibilities
3. The full implementation of each new class
4. How the original consumers should be updated
</task>

<constraints>
- Each new class should have one reason to change
- Maintain backward compatibility through a facade if needed
- Don't just shuffle methods — identify true cohesion boundaries
- Show the dependency graph between new classes
</constraints>

Splits an oversized class into cohesive single-responsibility classes with a clear migration path.

💡

Pro tip: Enable extended thinking so Claude can analyze method cohesion before proposing the decomposition plan.

Imperative to Declarative

17/100

<context>
Language: [LANGUAGE]
Runtime: [e.g. Node.js, browser, Python 3.12]
Performance constraints: [e.g. called 10K times/sec, memory-sensitive]
</context>

<task>
Rewrite this imperative code in a declarative style:

[PASTE CODE HERE]

For the transformation:
1. Replace loops with map/filter/reduce where clearer
2. Replace mutation with immutable transformations
3. Extract predicates as named functions
4. Use method chaining where it reads naturally
</task>

<constraints>
- Don't sacrifice performance unnecessarily — note any tradeoffs
- Declarative doesn't mean "one-liner" — readable chains are fine
- Keep intermediate variable names descriptive
- If the imperative version is actually clearer for a given section, keep it
</constraints>

Transforms imperative loops and mutations into readable declarative pipelines.

💡

Pro tip: Specify your runtime. Claude avoids declarative patterns that create excess garbage collection pressure in tight loops.

Improve Naming

18/100

<context>
Language: [LANGUAGE]
Domain: [e.g. e-commerce, healthcare, fintech]
Naming convention: [e.g. camelCase, snake_case, Hungarian notation — whatever you use]
</context>

<task>
Improve the naming throughout this code:

[PASTE CODE HERE]

For each rename:
1. Old name → new name
2. Why the new name is clearer
3. What concept or intent it better expresses
</task>

<constraints>
- Names should express intent, not implementation
- Avoid: abbreviations, single letters (except loop counters), generic names (data, info, temp, obj)
- Boolean names should be predicates: isLoaded, hasPermission, canRetry
- Collections should be plural nouns
- Functions should be verb phrases
- List ALL renames — provide the full refactored code at the end
</constraints>

Renames variables, functions, and classes to express intent clearly using domain language.

💡

Pro tip: Include your domain glossary in a Claude Project. Claude uses your exact business terms instead of generic ones.

Remove Dead Code

19/100

<context>
Language: [LANGUAGE]
Version control available: [yes/no — if yes, note that removed code can be recovered]
Is this code exported/public: [yes/no]
</context>

<task>
Identify and remove dead code from this codebase:

[PASTE CODE HERE]

For each piece of dead code:
1. What it is (unused function, unreachable branch, commented-out block, obsolete feature flag)
2. Why you're confident it's dead
3. Any risk in removing it
4. The cleaned-up version
</task>

<constraints>
- Don't remove code that's dead in this file but exported and potentially used elsewhere
- Flag code that looks dead but might be called via reflection or dynamic dispatch
- Commented-out code should be removed, not kept "just in case"
- Feature flags that are permanently enabled can be inlined
</constraints>

Safely identifies and removes dead code — unused functions, unreachable branches, and obsolete flags.

💡

Pro tip: Paste your entry points alongside the dead code. Claude traces reachability from actual callers.

Introduce Design Pattern

20/100

<context>
Language: [LANGUAGE]
Problem you're solving: [describe the specific pain point — e.g. "adding a new payment provider requires changing 5 files"]
Constraints: [e.g. can't change the public API, must remain framework-agnostic]
</context>

<task>
Suggest and implement the most appropriate design pattern for this code:

[PASTE CODE HERE]

Provide:
1. Which pattern fits and why (explain the match between pattern and problem)
2. Alternative patterns considered and why you rejected them
3. Full implementation of the pattern applied to this code
4. How to add new variants without modifying existing code
</task>

<constraints>
- Don't apply a pattern just because it fits — show the concrete benefit
- The implementation must work with the existing code structure
- Include a brief explanation of the pattern for teammates who may not know it
</constraints>

Identifies the right design pattern for your specific problem and implements it in your actual code.

💡

Pro tip: Describe your pain point, not the pattern you think you need. Claude often finds a simpler solution.

API Design

10 prompts

Design REST API

21/100

<context>
Domain: [e.g. e-commerce, project management, healthcare]
Resources: [list the main entities — e.g. users, orders, products]
Consumers: [e.g. mobile app, third-party developers, internal frontend]
Auth: [e.g. JWT, API keys, OAuth2]
</context>

<task>
Design a RESTful API for this domain:

[DESCRIBE THE FEATURE OR SYSTEM]

Provide:
1. Resource hierarchy and URL structure
2. HTTP methods and status codes for each endpoint
3. Request/response schemas (JSON)
4. Authentication and authorization model
5. Filtering, sorting, and pagination conventions
</task>

<constraints>
- Follow REST conventions strictly: proper HTTP verbs, meaningful status codes, resource-oriented URLs
- Use plural nouns for collections
- Avoid verbs in URLs
- Version from day one: /v1/
- Design for the consumer, not the database schema
</constraints>

Designs a complete REST API with resource hierarchy, schemas, auth model, and pagination conventions.

💡

Pro tip: Ask Claude to generate the design as an OpenAPI artifact you can import directly into Postman or Stoplight.

Design GraphQL Schema

22/100

<context>
Domain: [e.g. social platform, SaaS dashboard]
Main entities: [list types — e.g. User, Post, Comment, Organization]
Primary consumers: [e.g. React web app, mobile client]
Auth model: [e.g. JWT with role-based access]
</context>

<task>
Design a GraphQL schema for this domain:

[DESCRIBE THE PRODUCT OR FEATURE]

Provide:
1. Type definitions for all entities
2. Query root fields with arguments
3. Mutation definitions with input types
4. Subscription definitions (if real-time is needed)
5. Pagination approach (cursor vs offset)
6. Error handling strategy
</task>

<constraints>
- Design for the client's data needs, not the server's data model
- Use connections pattern for paginated lists
- Input types should be separate from output types
- Avoid over-fetching — design fields the client actually needs
- Use enums for fixed value sets
</constraints>

Produces a complete GraphQL schema with types, queries, mutations, and client-optimized field design.

💡

Pro tip: Describe a specific UI screen you need to power. Claude designs the schema to minimize round trips for that view.

API Error Handling

23/100

<context>
API type: [REST / GraphQL]
Framework: [e.g. Express, FastAPI, NestJS]
Consumers: [e.g. mobile clients, third-party integrators]
Existing error format: [paste current error response if any]
</context>

<task>
Design a comprehensive error handling strategy for this API:

[DESCRIBE YOUR API OR PASTE EXISTING CODE]

Define:
1. Error response schema (error code, message, details, request ID)
2. Error code taxonomy (validation, auth, not found, rate limit, server error)
3. HTTP status code mapping
4. How validation errors surface field-level details
5. How to distinguish client errors from server errors
6. Error logging strategy (what to log vs. what to return)
</task>

<constraints>
- Error messages for clients must never expose stack traces or internal details
- Every error must have a stable machine-readable code, not just an HTTP status
- Validation errors must identify which field failed and why
- Include example responses for each error category
</constraints>

Designs a complete API error taxonomy with stable error codes, field-level validation, and safe error messages.

💡

Pro tip: Add your existing error format and Claude will extend it consistently rather than replacing what you have.

API Versioning Strategy

24/100

<context>
API type: [REST / GraphQL]
Current state: [e.g. no versioning yet, on v1, breaking changes needed]
Consumer types: [e.g. mobile apps with slow update cycles, third-party integrators, internal frontend]
Breaking change: [describe what needs to change]
</context>

<task>
Design an API versioning strategy for this situation:

[DESCRIBE YOUR API AND THE CHANGES NEEDED]

Cover:
1. Versioning mechanism (URL path, header, query param — recommend with rationale)
2. Deprecation policy and timeline
3. How to maintain multiple versions without code explosion
4. Migration guide for consumers
5. Sunset header implementation
</task>

<constraints>
- Recommend based on consumer type — mobile apps need longer deprecation windows than internal frontends
- Show concrete implementation code for the chosen mechanism
- Define what constitutes a "breaking change" for your API
- Include a deprecation notice template
</constraints>

Designs a versioning strategy tailored to your consumer types, with deprecation policy and migration guides.

💡

Pro tip: Describe your slowest-moving consumers. Claude calibrates the deprecation timeline to your real constraints.

Rate Limiting Design

25/100

<context>
API type: [REST / GraphQL]
Infrastructure: [e.g. single Node.js server, multi-instance behind load balancer, serverless]
Consumer types: [e.g. free tier users, paid subscribers, internal services]
Current traffic: [e.g. 500 req/min peak]
</context>

<task>
Design a rate limiting system for this API:

[DESCRIBE YOUR API AND USE CASES]

Define:
1. Rate limit tiers per consumer type
2. Algorithm choice (token bucket, sliding window, fixed window — with rationale)
3. Rate limit headers (X-RateLimit-*)
4. Response when limit is exceeded (429, Retry-After)
5. Implementation approach for your infrastructure
6. Exemptions (health checks, internal services)
</task>

<constraints>
- Distributed deployments need Redis or similar — don't use in-memory for multi-instance
- Include burst allowance to handle legitimate traffic spikes
- Rate limit by API key, not just IP (IPs change and are shared)
- Show the middleware implementation code
</constraints>

Designs a tiered rate limiting system with the right algorithm for your infrastructure and consumer types.

💡

Pro tip: Describe your paid vs. free tiers. Claude designs limits that protect infrastructure without degrading paid user experience.
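Of the algorithms listed in item 2, the token bucket is the one that naturally provides the burst allowance the constraints require. A minimal in-memory sketch (single instance only; the constraint about Redis for multi-instance deployments still applies, and the class name is invented):

```javascript
class TokenBucket {
  constructor({ capacity, refillPerSecond }) {
    this.capacity = capacity;             // burst allowance
    this.refillPerSecond = refillPerSecond; // sustained rate
    this.tokens = capacity;               // start full
    this.lastRefill = Date.now();
  }

  // Returns true if the request may proceed; callers respond with
  // 429 + Retry-After when it returns false.
  allow(now = Date.now()) {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}
```

In a real middleware you would keep one bucket per API key; for multi-instance deployments the same refill arithmetic moves into a Redis script so all instances share state.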

Auth and Authorization Flow

26/100

<context>
Application type: [e.g. SaaS with multi-tenancy, B2C mobile app, internal tool]
User types: [e.g. admins, members, guests, service accounts]
Permissions model: [e.g. RBAC, ABAC, simple boolean flags]
Tech stack: [e.g. Node.js + PostgreSQL, Python + Redis]
</context>

<task>
Design the authentication and authorization flow for this system:

[DESCRIBE YOUR PRODUCT AND ACCESS REQUIREMENTS]

Cover:
1. Authentication flow (login, token issuance, refresh, logout)
2. Token storage and transmission strategy
3. Permission model with example roles and policies
4. How authorization is enforced (middleware, decorators, policy engine)
5. Multi-tenancy isolation if applicable
6. Service-to-service auth
</task>

<constraints>
- JWT access tokens should be short-lived (15 min), refresh tokens longer
- Never store tokens in localStorage for web apps — use httpOnly cookies
- Authorization checks must happen server-side
- Show the database schema for roles and permissions
</constraints>

Designs a complete auth and authorization system with token lifecycle, permission model, and enforcement code.

💡

Pro tip: Describe your multi-tenancy model. Claude designs tenant isolation into the permission checks from the start.

Webhook System Design

27/100

<context>
Platform type: [e.g. SaaS, payment processor, e-commerce]
Events to emit: [list the event types — e.g. order.created, payment.failed]
Consumer types: [e.g. third-party developers, internal microservices]
Expected volume: [e.g. 10K events/day, 1M events/day]
</context>

<task>
Design a webhook delivery system:

[DESCRIBE YOUR PLATFORM AND USE CASES]

Cover:
1. Event payload schema
2. Delivery mechanism (at-least-once vs exactly-once)
3. Retry strategy with backoff
4. Signature verification (HMAC)
5. Consumer registration and management API
6. Dead letter queue handling
7. Observability — delivery logs and dashboards
</task>

<constraints>
- Always sign payloads — consumers must verify signatures
- Retry with exponential backoff, cap at 24 hours
- Include idempotency keys so consumers handle duplicates safely
- Delivery should be async — never block the main request
</constraints>

Designs a production-grade webhook system with retry logic, HMAC signatures, and delivery observability.

💡

Pro tip: Specify your event volume. Claude designs the queue infrastructure to match your throughput requirements.

Pagination Strategy

28/100

<context>
API type: [REST / GraphQL]
Data size: [e.g. up to 10M rows, stable dataset, frequently updated]
Sort requirements: [e.g. by date, by relevance score, user-defined]
Consumer: [e.g. infinite scroll UI, batch data export, admin table]
</context>

<task>
Design the pagination strategy for this API:

[DESCRIBE THE RESOURCE BEING PAGINATED]

Cover:
1. Pagination mechanism (cursor vs. offset — recommend with rationale)
2. Request parameters and response envelope
3. How to handle insertions/deletions during pagination
4. Deep pagination performance at scale
5. Total count: when to include it and when not to
</task>

<constraints>
- Cursor pagination for frequently updated data
- Offset pagination only for stable data or when jump-to-page is required
- Cursors must be opaque to clients — base64 encode internal state
- Page size caps: set a maximum, default to a reasonable value
- Show example request/response pairs
</constraints>

Recommends cursor vs. offset pagination based on your data characteristics and consumer needs.

💡

Pro tip: Describe your UI pattern — infinite scroll, jump-to-page, or data export. Claude optimizes the strategy for your actual use case.
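The "cursors must be opaque" constraint is simple to implement: serialize the internal pagination state and base64-encode it. A sketch (the keyset fields are an invented example):

```javascript
// Opaque cursor: clients see only a base64url string, so they can't
// depend on — or construct — its internal shape.
function encodeCursor(state) {
  return Buffer.from(JSON.stringify(state)).toString('base64url');
}

function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64url').toString('utf8'));
}

// Typical keyset state: last seen sort key plus id as a tiebreaker.
// The next-page query would then be something like:
//   WHERE (created_at, id) < ($1, $2) ORDER BY created_at DESC, id DESC
const next = encodeCursor({ createdAt: '2024-05-01T12:00:00Z', id: 4182 });
```

Because the cursor encodes a position rather than an offset, the query stays index-friendly at any depth, which is the deep-pagination property item 4 asks about.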

OpenAPI Spec Generator

29/100

<context>
Framework: [e.g. Express, FastAPI, NestJS, Django REST]
API version: [e.g. v1]
Auth mechanism: [e.g. Bearer JWT, API key header]
</context>

<task>
Generate a complete OpenAPI 3.1 specification for this API:

[PASTE ROUTE HANDLERS OR API DESCRIPTION]

Include:
1. Info block (title, version, description)
2. Server definitions
3. Security schemes
4. All paths with methods, parameters, request bodies, and responses
5. Reusable schemas in components
6. Error response schemas
</task>

<constraints>
- Use $ref to avoid repeating schemas
- Document all response codes, including 4xx and 5xx
- Mark required fields explicitly
- Add descriptions to all fields, not just types
- Output valid YAML
</constraints>

Generates a complete, valid OpenAPI 3.1 spec with reusable schemas and full error documentation.

💡

Pro tip: Ask Claude to generate this as an artifact. You can download it directly and import into Postman or Swagger UI.

Backward Compatibility Checker

30/100

<context>
API type: [REST / GraphQL]
Consumer types: [e.g. mobile apps, third-party integrators, internal clients]
Release timeline: [e.g. deploy in 2 weeks, mobile app update cycle is 6 weeks]
</context>

<task>
Analyze these API changes for backward compatibility:

Before:
[PASTE OLD API CONTRACT / SCHEMA]

After:
[PASTE NEW API CONTRACT / SCHEMA]

For each change:
1. Is it breaking? (yes/no — with explanation)
2. Who is affected and how
3. Migration options for consumers
4. Recommended approach: simultaneous support, versioning, or staged rollout
</task>

<constraints>
- Breaking = any change that requires consumer code to change to avoid errors
- Additive changes (new optional fields, new endpoints) are generally safe
- Removing fields, changing types, or making optional fields required are always breaking
- Factor in mobile app update cycles for your timeline recommendation
</constraints>

Identifies breaking vs. non-breaking API changes and recommends a safe rollout strategy for each.

💡

Pro tip: Include your mobile app release cycle. Claude factors in update lag when recommending deprecation timelines.


Testing

10 prompts

Write Unit Tests

31/100

<context> Language: [LANGUAGE] Test framework: [e.g. Jest, Vitest, pytest, JUnit] Assertion library: [e.g. built-in, Chai, AssertJ] Mocking library: [e.g. Jest mocks, unittest.mock, Mockito] Coverage target: [e.g. 80% line coverage, all public methods] </context> <task> Write unit tests for this code: [PASTE CODE HERE] For each test: 1. Descriptive test name following: "should [behavior] when [condition]" 2. Arrange / Act / Assert structure 3. One assertion per test where possible 4. All edge cases and error paths </task> <constraints> - Test behavior, not implementation — don't test private methods directly - Mock all external dependencies (network, database, filesystem) - Include: happy path, boundary values, null/empty inputs, error conditions - Don't test framework code — test your logic - Each test must be independent and runnable in isolation </constraints>

Writes comprehensive unit tests covering happy paths, edge cases, and error conditions.
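For reference, the naming and Arrange / Act / Assert constraints above look like this in practice. A minimal sketch: calculate_discount is a hypothetical function invented for illustration, using integer cents to avoid floating-point money bugs.

```python
# Hypothetical function under test: prices in integer cents.
def calculate_discount(price_cents: int, is_member: bool) -> int:
    """Members get 10% off; price must be non-negative."""
    if price_cents < 0:
        raise ValueError("price must be non-negative")
    return price_cents * 90 // 100 if is_member else price_cents


# "should [behavior] when [condition]" naming, Arrange / Act / Assert
# structure, one assertion per test.
def test_should_apply_ten_percent_discount_when_user_is_member():
    # Arrange
    price_cents = 10_000
    # Act
    result = calculate_discount(price_cents, is_member=True)
    # Assert
    assert result == 9_000


def test_should_raise_value_error_when_price_is_negative():
    try:
        calculate_discount(-1, is_member=False)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Tests written in this shape run under pytest or plain unittest unchanged, and each one is independent and runnable in isolation.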

💡

Pro tip: Add your test file conventions to a Claude Project. Claude will match your existing test structure and naming.

Integration Tests for API

32/100

<context> Framework: [e.g. Express, FastAPI, NestJS] Test framework: [e.g. Supertest + Jest, pytest + httpx] Database: [e.g. PostgreSQL — use test DB or in-memory?] Auth: [e.g. JWT — how to generate test tokens] </context> <task> Write integration tests for these API endpoints: [PASTE ROUTE HANDLERS OR ENDPOINT DESCRIPTIONS] For each endpoint, test: 1. Successful response with valid input 2. Input validation failures (400) 3. Auth failures (401/403) 4. Not found cases (404) 5. Concurrent access if relevant </task> <constraints> - Use a real test database, not mocks — integration tests should hit the actual DB - Reset database state between tests - Test the full HTTP request/response cycle - Include setup and teardown fixtures - Seed realistic test data </constraints>

Writes full integration tests that hit a real database and validate the complete request/response cycle.

💡

Pro tip: Paste your auth middleware alongside the routes. Claude writes helper functions to generate valid test tokens.

Test Strategy

33/100

<context> Project type: [e.g. REST API, React SPA, CLI tool, microservice] Tech stack: [STACK] Team size: [e.g. 4 developers] Current test coverage: [e.g. 0%, 40%, "unit tests only"] Deployment frequency: [e.g. daily, weekly] </context> <task> Design a testing strategy for this project: [DESCRIBE THE PROJECT AND ITS MAIN FEATURES] Define: 1. The testing pyramid — ratio of unit / integration / E2E tests 2. What to test at each level and what not to 3. Recommended frameworks and libraries with rationale 4. CI integration — what runs on PR, what runs on merge 5. Coverage targets by module type 6. How to test the hardest parts (auth, payments, real-time features) </task> <constraints> - Optimize for fast feedback in CI — slow tests don't get run - E2E tests for critical user paths only - Define clear rules for when to write unit vs. integration tests - Include a prioritized backlog for reaching target coverage </constraints>

Produces a testing pyramid strategy with framework recommendations and a prioritized coverage backlog.

💡

Pro tip: Describe your deployment frequency. Claude scales E2E test investment to your actual release risk.

Edge Case Tests

34/100

<context> Language: [LANGUAGE] Test framework: [FRAMEWORK] Domain: [e.g. payment processing, user authentication, file parsing] </context> <task> Generate exhaustive edge case tests for this function: [PASTE FUNCTION CODE HERE] Cover: 1. Boundary values (min, max, min-1, max+1) 2. Empty and null inputs 3. Malformed or unexpected input types 4. Concurrent access scenarios 5. Floating point precision issues (if numeric) 6. Unicode and special characters (if string) 7. Timezone edge cases (if date/time) </task> <constraints> - Each test should target a specific edge — don't combine multiple in one test - Include a comment explaining why each edge case matters - Focus on cases that could cause data corruption or security issues - Don't retest what unit tests already cover </constraints>

Generates exhaustive edge case tests covering boundaries, null inputs, encoding, and timing issues.

💡

Pro tip: Enable extended thinking. Claude systematically explores every input dimension rather than only obvious cases.

Mock and Stub Setup

35/100

<context> Language: [LANGUAGE] Test framework: [e.g. Jest, pytest, Go testing] Mocking library: [e.g. Jest mocks, unittest.mock, testify/mock] Dependencies to mock: [e.g. Stripe API, SendGrid, PostgreSQL, Redis] </context> <task> Set up mocks and stubs for this code's dependencies: [PASTE CODE WITH DEPENDENCIES] Provide: 1. Mock setup for each external dependency 2. Realistic test data that covers the contract of each dependency 3. How to simulate error responses 4. Shared fixtures for reuse across test files 5. How to verify the mock was called correctly </task> <constraints> - Mocks should match the actual interface of the dependency - Include both success and failure scenarios for each mock - Don't over-specify mocks — only assert on calls that matter to the test - Provide factory functions for test data, not hard-coded objects </constraints>

Creates a complete mock setup with realistic test data, error scenarios, and reusable fixtures.
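A sketch of those constraints using Python's standard-library unittest.mock. PaymentClient, checkout, and the response shape are hypothetical stand-ins for your real SDK; spec= keeps the mock aligned with the dependency's actual interface.

```python
from unittest.mock import Mock

# Hypothetical payment client interface; in a real suite, spec your SDK class.
class PaymentClient:
    def charge(self, amount_cents: int, customer_id: str) -> dict:
        raise NotImplementedError  # real implementation calls the network

def checkout(client, amount_cents, customer_id):
    """Code under test: charges the customer and reports the outcome."""
    try:
        result = client.charge(amount_cents, customer_id)
        return {"ok": True, "charge_id": result["id"]}
    except ConnectionError:
        return {"ok": False, "error": "payment_unavailable"}

# Success scenario: realistic stubbed response matching the contract.
success_client = Mock(spec=PaymentClient)
success_client.charge.return_value = {"id": "ch_123", "status": "succeeded"}

# Failure scenario: simulate a network error.
failing_client = Mock(spec=PaymentClient)
failing_client.charge.side_effect = ConnectionError("gateway down")

ok = checkout(success_client, 5000, "cus_42")
failed = checkout(failing_client, 5000, "cus_42")

# Verify only the call that matters to the test — don't over-specify.
success_client.charge.assert_called_once_with(5000, "cus_42")
```

The same pattern (spec'd mock, stubbed success, injected failure, targeted call assertion) carries over to Jest mocks or testify/mock.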

💡

Pro tip: Paste the SDK types alongside your code. Claude creates type-safe mocks that catch interface drift.

Database Tests

36/100

<context> Database: [e.g. PostgreSQL 16] ORM: [e.g. Prisma, SQLAlchemy, GORM] Test framework: [FRAMEWORK] Test database strategy: [e.g. Docker container, in-memory SQLite, shared test DB] </context> <task> Write database integration tests for this repository layer: [PASTE REPOSITORY/DAO CODE] Test: 1. Basic CRUD operations 2. Complex queries with filters and joins 3. Transaction rollback behavior 4. Constraint violations (unique, foreign key, not null) 5. Concurrent writes and optimistic locking 6. Migration compatibility </task> <constraints> - Use transactions to isolate tests and roll back after each - Seed realistic data volumes to catch query performance issues - Test that constraints are enforced, not just the happy path - Include a database setup/teardown fixture </constraints>

Tests your database layer including transactions, constraint violations, and concurrent write behavior.
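Two of the cases above (constraint violations and transaction rollback) sketched with in-memory SQLite. The prompt is right that you should run against your real engine; SQLite is used here only to keep the sketch self-contained.

```python
import sqlite3

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE)"
    )
    return conn

def test_unique_constraint_is_enforced():
    conn = make_db()
    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    try:
        # Second insert with the same email must be rejected by the DB itself.
        conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
        assert False, "expected IntegrityError"
    except sqlite3.IntegrityError:
        pass
    finally:
        conn.close()

def test_rollback_discards_uncommitted_writes():
    conn = make_db()
    conn.execute("INSERT INTO users (email) VALUES ('b@example.com')")
    conn.rollback()  # simulate a failed transaction
    count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 0  # the write never reached committed state
    conn.close()
```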

💡

Pro tip: Add your schema migrations to the context. Claude writes tests that verify migration behavior, not just current state.

End-to-End Tests

37/100

<context> Framework: [e.g. Playwright, Cypress, Selenium] Application URL: [local dev URL] Browser targets: [e.g. Chrome, Firefox, Safari] Critical user paths: [list the flows that must never break] </context> <task> Write end-to-end tests for these critical user flows: [DESCRIBE THE USER FLOWS] For each flow: 1. Full step-by-step test script 2. Assertions at each significant step 3. How to handle dynamic content (loading states, async data) 4. Cleanup after each test </task> <constraints> - Test only the critical paths — E2E tests are expensive - Use data-testid attributes for selectors, not CSS classes - Make tests independent — each test creates its own data - Add retries for flaky network interactions - Tests must pass in headless mode for CI </constraints>

Writes E2E tests for critical user flows with stable selectors and CI-compatible configuration.

💡

Pro tip: List your 3 most business-critical flows. Claude prioritizes those and avoids costly tests for low-risk paths.

Property-Based Tests

38/100

<context> Language: [LANGUAGE] Library: [e.g. fast-check, Hypothesis, QuickCheck, proptest] Function type: [e.g. pure function, parser, serializer, sort algorithm] </context> <task> Write property-based tests for this function: [PASTE FUNCTION CODE] For each property: 1. State the property in plain English 2. The generator(s) for input data 3. The property assertion 4. Why this property should always hold </task> <constraints> - Focus on universal properties: commutativity, idempotency, round-trip, invariants - Use realistic generators — not just random bytes - Include at least one shrinking example to show how failures are reported - Complement, don't replace, example-based tests </constraints>

Writes property-based tests that find edge cases your examples miss by generating thousands of inputs.
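A round-trip property — one of the universal properties the prompt names — looks like this. The codec is a hypothetical run-length encoder, and stdlib random stands in for a real generator library such as Hypothesis or fast-check, which would additionally shrink any failing input to a minimal counterexample.

```python
import random

# Hypothetical codec under test: a run-length encoder for strings.
def encode(s: str):
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def decode(runs) -> str:
    return "".join(ch * n for ch, n in runs)

# Property: decode(encode(s)) == s for every string (round-trip invariant).
def check_round_trip(trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        # Realistic generator: repeats, whitespace, and non-ASCII included.
        s = "".join(rng.choice("ab \né") for _ in range(rng.randrange(0, 30)))
        assert decode(encode(s)) == s, f"round-trip failed for {s!r}"
    return True
```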

💡

Pro tip: Enable extended thinking. Claude identifies non-obvious invariants before writing the generators.

Test Data Factory

39/100

<context> Language: [LANGUAGE] ORM/schema: [e.g. Prisma schema, SQLAlchemy models] Test framework: [FRAMEWORK] Entities needed: [list the main entities — e.g. User, Order, Product] </context> <task> Build a test data factory for these entities: [PASTE SCHEMA OR MODEL DEFINITIONS] Provide: 1. Factory functions for each entity with sensible defaults 2. Override support for specific fields 3. Relationship builders (e.g. createUserWithOrders) 4. Trait/state support (e.g. createUser.admin(), createOrder.cancelled()) 5. Database insertion helpers </task> <constraints> - Defaults should be valid and realistic, not "test", "[email protected]" - Factories must respect database constraints - Make it easy to create related entities in one call - Factories should be composable and reusable across all test files </constraints>

Creates a composable test data factory with realistic defaults, traits, and relationship builders.
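The factory shape the prompt asks for — defaults, overrides, traits, relationship builders — can be sketched like this. The User and Order models are hypothetical; in a real project the factories would mirror your ORM schema and insert through it.

```python
from dataclasses import dataclass
from itertools import count

# Hypothetical models standing in for your ORM entities.
@dataclass
class User:
    id: int
    email: str
    role: str = "member"

@dataclass
class Order:
    id: int
    user_id: int
    status: str = "pending"

_ids = count(1)  # unique, constraint-safe IDs across all factories

def create_user(**overrides) -> User:
    """Valid, unique defaults; any field can be overridden per test."""
    uid = next(_ids)
    defaults = {"id": uid, "email": f"user{uid}@example.com", "role": "member"}
    defaults.update(overrides)
    return User(**defaults)

def create_admin(**overrides) -> User:
    return create_user(role="admin", **overrides)  # trait/state support

def create_user_with_orders(n=2, **user_overrides):
    """Relationship builder: one call creates the user and related orders."""
    user = create_user(**user_overrides)
    orders = [Order(id=next(_ids), user_id=user.id) for _ in range(n)]
    return user, orders
```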

💡

Pro tip: Add this factory to a Claude Project. Claude will use it consistently when writing new tests across the codebase.

Regression Tests

40/100

<context> Language: [LANGUAGE] Test framework: [FRAMEWORK] Bug report or incident: [describe the bug that occurred] Affected version: [e.g. v2.3.1] </context> <task> Write regression tests for this bug: Bug description: [DESCRIBE THE BUG] Affected code: [PASTE THE FIXED CODE] Provide: 1. A failing test that reproduces the original bug 2. Confirmation the test passes after the fix 3. Related edge cases that could regress similarly 4. A test name that documents the bug clearly </task> <constraints> - The test must fail on the pre-fix code and pass on the fixed code - Name the test to include the bug/ticket ID for traceability - Cover the exact input that triggered the original bug - Add a comment linking to the incident or ticket </constraints>

Writes regression tests that permanently protect against a specific bug recurring.
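Naming and pinning work like this in practice. The bug, ticket ID, and cart_total function below are all invented for illustration: the test name carries the ticket, a comment links the incident, and the assertion uses the exact input that triggered the failure.

```python
# Hypothetical fix for BUG-1432: the pre-fix code indexed prices_cents[0]
# and raised IndexError when the cart was empty.
def cart_total(prices_cents) -> int:
    return sum(prices_cents)

# Regression test: ticket ID in the name, pinned to the triggering input.
def test_bug_1432_empty_cart_returns_zero_instead_of_crashing():
    # See incident BUG-1432: checkout returned 500 for empty carts.
    assert cart_total([]) == 0

# Related edge that could regress the same way.
def test_bug_1432_related_edge_single_item_cart():
    assert cart_total([999]) == 999
```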

💡

Pro tip: Paste both the buggy and fixed versions. Claude verifies the test fails on one and passes on the other.

Debugging

10 prompts

Diagnose Stack Trace

41/100

<context> Language: [LANGUAGE] Framework: [FRAMEWORK] Environment: [e.g. production, staging, local dev] Recent changes: [any deployments or code changes before this error appeared] Frequency: [e.g. happening on every request, 1% of requests, only under load] </context> <task> Diagnose this stack trace: [PASTE FULL STACK TRACE] Relevant code: [PASTE THE RELEVANT FUNCTION(S)] Provide: 1. What the error means 2. The most likely root cause 3. Step-by-step fix 4. How to verify the fix worked 5. How to prevent this class of error in future </task> <constraints> - Don't just explain the error type — identify the root cause in the pasted code - If multiple causes are possible, rank them by likelihood - If you need more context to be certain, say exactly what additional information would help </constraints>

Diagnoses a stack trace down to root cause with a ranked list of likely culprits and verified fixes.

💡

Pro tip: Enable extended thinking. Claude traces the execution path through the stack frames systematically.

Debug Memory Leak

42/100

<context> Language: [LANGUAGE] Runtime: [e.g. Node.js 20, JVM 21, Python 3.12] Symptoms: [e.g. memory grows 50MB/hour, OOM after 6 hours] Profiling data: [paste heap snapshot summary or memory profiler output if available] Load pattern: [e.g. 200 req/min, batch job running hourly] </context> <task> Help me find the memory leak in this code: [PASTE CODE / RELEVANT MODULES] Identify: 1. The likely leak source(s) 2. Why the memory isn't being released 3. The fix 4. How to verify the leak is resolved using profiling tools </task> <constraints> - Check for: unclosed resources, event listener accumulation, circular references, growing caches without eviction, closures holding large objects - Show before/after code for the fix - Recommend the specific profiling command to confirm the fix </constraints>

Traces a memory leak to its source and provides a fix with profiling commands to verify resolution.
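One leak class from the checklist — a cache that grows without eviction — and its fix, sketched manually with LRU eviction. (For pure functions, functools.lru_cache gives you this for free; the manual version shows the mechanism.)

```python
from collections import OrderedDict

class BoundedCache:
    """Fix for an unbounded module-level cache: evict least-recently-used."""

    def __init__(self, max_entries=1000):
        self._data = OrderedDict()
        self._max = max_entries

    def __len__(self):
        return len(self._data)

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return None

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self._max:
            self._data.popitem(last=False)  # evict least recently used

cache = BoundedCache(max_entries=3)
for i in range(10):
    cache.put(i, f"value-{i}")
# Memory stays bounded: only the 3 most recent entries survive.
```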

💡

Pro tip: Paste your heap snapshot summary. Claude pinpoints which object type is accumulating instead of giving generic advice.

Debug Race Condition

43/100

<context> Language: [LANGUAGE] Concurrency model: [e.g. async/await, threads, goroutines] Symptoms: [e.g. intermittent data corruption, occasional 500s under load, test passes locally but fails in CI] Reproduction rate: [e.g. 1 in 50 runs, only under 100 concurrent users] </context> <task> Find and fix the race condition in this code: [PASTE CODE] Provide: 1. The exact sequence of events that causes the race 2. Why it's hard to reproduce (timing sensitivity) 3. The fix 4. A test that can reliably detect this race </task> <constraints> - Draw a timeline diagram showing the interleaved execution if helpful - The fix should be minimal — don't introduce unnecessary locking - If a database transaction can solve it, prefer that over application-level locks - Explain why the fix eliminates the race, not just that it does </constraints>

Identifies the exact execution interleaving that causes a race condition and fixes it with a reproducing test.
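The classic lost-update race makes the pattern concrete: counter += 1 is a read-modify-write, so two threads can interleave between the read and the write. The minimal fix is a lock around exactly that section — no broader locking needed.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # the minimal fix: make the read-modify-write atomic
            counter += 1

# Four threads hammering the shared counter concurrently.
threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, the result is deterministic: 4 * 10_000 = 40_000.
# Without it, updates can be lost intermittently — the hallmark of a race.
```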

💡

Pro tip: Enable extended thinking. Claude simulates concurrent execution paths to isolate the exact interleaving.

Fix Failing Test

44/100

<context> Language: [LANGUAGE] Test framework: [FRAMEWORK] When did it start failing: [e.g. after a specific commit, always been flaky, failing since dependency upgrade] CI environment details: [e.g. Ubuntu, Node 20, timezone UTC] </context> <task> Fix this failing test: Test code: [PASTE TEST] Implementation code: [PASTE IMPLEMENTATION] Test output: [PASTE FULL TEST FAILURE OUTPUT] Determine: 1. Is the test wrong or is the implementation wrong? 2. What exactly is failing and why? 3. The fix — test code, implementation, or both 4. If it's a flaky test, how to make it deterministic </task> <constraints> - Don't fix a test by weakening its assertions — that just hides the bug - If the implementation is wrong, fix the implementation - For flaky tests: eliminate time dependencies, random data, and shared state </constraints>

Determines whether the test or implementation is wrong and fixes the actual root cause.

💡

Pro tip: Paste the full test output including the diff. Claude uses the exact failure message, not just the test code.

Debug Network and API Issues

45/100

<context> Client: [e.g. React app, mobile app, backend service] Server: [e.g. REST API, third-party service] Protocol: [HTTP/HTTPS, WebSocket, gRPC] Error observed: [e.g. CORS error, 401 on specific endpoints, timeout after 30s] </context> <task> Debug this network/API issue: Request details: [PASTE CURL COMMAND, REQUEST CODE, OR NETWORK TAB SCREENSHOT DESCRIPTION] Response received: [PASTE RESPONSE OR ERROR] Diagnose: 1. What's causing the error 2. Is it a client-side or server-side issue 3. Step-by-step fix 4. How to verify it's resolved </task> <constraints> - For CORS: explain which header is missing and why - For auth errors: check token format, expiry, and header name - For timeouts: distinguish between connection timeout vs. read timeout - Provide the exact code or header change needed </constraints>

Diagnoses network and API errors — CORS, auth, timeouts — with the exact fix and verification steps.

💡

Pro tip: Paste the full network request and response headers. Most API issues hide in a single malformed header.

Debug CSS Layout

46/100

<context> Browser: [e.g. Chrome 124, Safari 17] Framework: [e.g. Tailwind, plain CSS, styled-components] Screen size: [e.g. issue on mobile only, all sizes, 1280px viewport] Expected behavior: [describe what it should look like] Actual behavior: [describe what you're seeing] </context> <task> Debug this CSS layout issue: HTML: [PASTE HTML] CSS: [PASTE CSS] Diagnose: 1. What's causing the layout to break 2. Why the browser is rendering it this way 3. The fix with corrected CSS 4. Cross-browser considerations </task> <constraints> - Explain the box model, stacking context, or flexbox/grid rule that's causing the issue - Fix the root cause — don't use magic numbers or negative margins to compensate - Test the fix against the stated browser and screen size </constraints>

Finds the exact CSS property causing a layout bug and explains the box model or stacking context behind it.

💡

Pro tip: Describe what the layout should look like. Claude validates the fix against your design intent, not just syntactic correctness.

Debug Database Query Performance

47/100

<context> Database: [e.g. PostgreSQL 16] Table sizes: [e.g. orders: 5M rows, customers: 200K rows] Current query time: [e.g. 8 seconds] Target query time: [e.g. under 200ms] Existing indexes: [paste \d tablename output or index list] </context> <task> Debug and fix this slow query: Query: [PASTE SQL QUERY] EXPLAIN ANALYZE output (if available): [PASTE EXPLAIN ANALYZE] Diagnose: 1. Why the query is slow 2. Which part of the execution plan is the bottleneck 3. Exact indexes to create (CREATE INDEX statements) 4. Query rewrite if needed 5. Expected performance after the fix </task> <constraints> - Show the EXPLAIN ANALYZE interpretation, not just the fix - Specify index type: B-tree, GIN, partial, composite - Consider write performance cost of new indexes - If the query can't be made fast, suggest an alternative architecture (materialized view, denormalization) </constraints>

Interprets EXPLAIN ANALYZE output and provides exact index definitions and query rewrites.
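You can verify that an index actually changes the plan before and after the fix. A self-contained sketch using SQLite's EXPLAIN QUERY PLAN (Postgres's EXPLAIN ANALYZE plays the same role, and additionally reports real timings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total INT)"
)

def plan(sql):
    """Return the query plan as one string for easy inspection."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

query = "SELECT * FROM orders WHERE customer_id = 42"

before = plan(query)  # full table scan over orders
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # now searches via idx_orders_customer
```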

💡

Pro tip: Always include EXPLAIN ANALYZE output. Claude cannot accurately diagnose slow queries from SQL alone.

Trace Data Flow

48/100

<context> Language: [LANGUAGE] System components involved: [e.g. API → queue → worker → database → webhook] Input: [describe the data entering the system] Expected output: [describe what should come out] Actual output: [describe what's actually happening — wrong value, missing data, corruption] </context> <task> Trace this data flow to find where it goes wrong: [PASTE RELEVANT CODE ACROSS THE PIPELINE] For each stage: 1. What data enters 2. What transformation happens 3. What data exits 4. Where the data could be getting corrupted or lost </task> <constraints> - Follow the data from input to output step by step - Identify every transformation, serialization, and type conversion - Flag any implicit type coercions or lossy conversions - Show what to log at each stage to verify the fix </constraints>

Traces data through a multi-stage pipeline to find exactly where a value gets corrupted or lost.

💡

Pro tip: Label which stage produces the wrong output. Claude focuses the trace on the upstream transforms that feed that stage.

Debug Auth Failure

49/100

<context> Auth mechanism: [e.g. JWT, session cookies, OAuth2, API keys] Framework: [FRAMEWORK] Error: [e.g. 401 on all requests, token expires immediately, login loop] When it started: [e.g. after dependency upgrade, after env change, intermittent] </context> <task> Debug this authentication failure: Auth middleware/code: [PASTE AUTH CODE] Token generation code: [PASTE TOKEN GENERATION] Error/symptom: [DESCRIBE IN DETAIL] Diagnose: 1. What's failing and at which step 2. The most likely cause 3. How to verify the diagnosis 4. The fix </task> <constraints> - Check: token expiry, clock skew, signing key mismatch, cookie attributes, CORS headers - Trace the full auth lifecycle: issue → transport → validate - Don't suggest disabling auth checks as a debugging step </constraints>

Traces an auth failure through the full token lifecycle — issuance, transport, and validation.
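The signing-key mismatch from the checklist — the classic cause of "401 on every request" after a key rotation or environment change — can be demonstrated in miniature. This is a hypothetical HMAC token sketch, not a real JWT library; production code should use a maintained library.

```python
import hashlib
import hmac

def issue_token(payload: str, key: bytes) -> str:
    """Issuer side: sign the payload with the issuing key."""
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def validate_token(token: str, key: bytes) -> bool:
    """Validator side: recompute the signature and compare in constant time."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("user=42", b"issuer-key")
same_key_ok = validate_token(token, b"issuer-key")      # lifecycle intact
wrong_key_ok = validate_token(token, b"validator-key")  # key mismatch: every
                                                        # request fails with 401
```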

💡

Pro tip: Paste your token generation and validation code together. Auth bugs almost always involve a mismatch between the two.

Root Cause Analysis

50/100

<context> Incident type: [e.g. production outage, data loss, performance degradation] Duration: [e.g. 45 minutes] Impact: [e.g. 20% of users affected, all writes failed] Timeline: [list events with timestamps] </context> <task> Perform a root cause analysis for this incident: Incident description: [DESCRIBE WHAT HAPPENED] Available evidence: [PASTE LOGS, METRICS, ALERTS] Using the 5 Whys method: 1. Walk through each "Why" level 2. Identify the true root cause (not just the proximate cause) 3. Distinguish between root cause, contributing factors, and triggers 4. Propose fixes at each level 5. Suggest monitoring to detect recurrence </task> <constraints> - The root cause is never "human error" — find the systemic condition that made error possible - Propose both immediate fix and long-term prevention - Include a postmortem action item list with owners and deadlines </constraints>

Runs a structured 5-Whys root cause analysis with systemic fixes and a postmortem action list.

💡

Pro tip: Include your timeline and metrics. Claude distinguishes the root cause from contributing factors using the evidence.


Architecture

10 prompts

System Architecture Design

51/100

<context> Product type: [e.g. SaaS B2B, consumer mobile app, internal tool] Scale targets: [e.g. 10K users at launch, 1M users in 2 years] Team size: [e.g. 3 engineers] Budget constraints: [e.g. early-stage, minimize infrastructure cost] Non-functional requirements: [e.g. 99.9% uptime, GDPR compliance, sub-200ms response time] </context> <task> Design the system architecture for this product: [DESCRIBE THE PRODUCT AND ITS MAIN FEATURES] Provide: 1. Component diagram description 2. Technology choices with rationale 3. Data flow between components 4. How the architecture scales to the stated targets 5. What you'd build differently at 10x scale 6. The biggest architectural risks and mitigations </task> <constraints> - Match complexity to team size — a 3-person team shouldn't operate Kubernetes - Optimize for shipping speed at early stage, for scale at later stage - Explicitly state what's out of scope for v1 - Prefer managed services over self-hosted where practical </constraints>

Designs a right-sized system architecture matched to your team, budget, and growth targets.

💡

Pro tip: Enable extended thinking. Claude evaluates multiple architectural patterns before committing to a recommendation.

Microservices vs. Monolith

52/100

<context> Current state: [e.g. monolith, planning new system, existing microservices with pain points] Team structure: [e.g. 5 engineers on one team, 3 teams of 8] Deployment frequency: [e.g. once a week, multiple times daily] Main pain points: [e.g. slow deployments, scaling bottleneck, team coordination] </context> <task> Advise on monolith vs. microservices for this situation: [DESCRIBE THE SYSTEM AND CONTEXT] Cover: 1. Recommendation with rationale specific to this context 2. The specific benefits and costs of each approach here 3. If microservices: service decomposition boundaries 4. If monolith: how to structure it for future extraction 5. Migration path if changing approaches </task> <constraints> - Don't give a generic answer — tie the recommendation to the stated team and scale - Microservices have real operational costs: quantify them - "Modular monolith first" is often the right answer — say so if appropriate - If recommending microservices, define the service boundaries explicitly </constraints>

Gives a context-specific recommendation on architecture style with honest cost/benefit analysis.

💡

Pro tip: Describe your team boundaries, not just technical requirements. Conway's Law matters more than most architecture decisions.

Database Schema Design

53/100

<context> Database: [e.g. PostgreSQL 16] Domain: [e.g. multi-tenant SaaS, e-commerce, healthcare records] Scale: [e.g. 100K users, 50M events/month] Access patterns: [describe the main queries — reads and writes] </context> <task> Design the database schema for this system: [DESCRIBE THE DOMAIN AND ENTITIES] Provide: 1. Complete table definitions with types and constraints 2. Primary and foreign key strategy 3. Index strategy based on stated access patterns 4. Multi-tenancy isolation approach (if applicable) 5. Soft delete strategy (if needed) 6. Audit trail approach (if needed) </task> <constraints> - Normalize to 3NF by default, denormalize only where access patterns require it - UUIDs vs. serial integers — recommend based on use case - Timestamps: always store in UTC - Index every foreign key - Show the schema as valid SQL DDL </constraints>

Designs a normalized database schema with indexes, constraints, and multi-tenancy strategy as valid DDL.
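A small sample of DDL following those constraints — foreign keys on every relation, UTC timestamps, per-tenant uniqueness, and an index on the FK — with constraint enforcement verified. In-memory SQLite keeps the sketch runnable; production DDL would target your actual engine, e.g. PostgreSQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FKs by default
conn.executescript("""
CREATE TABLE tenants (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    tenant_id INTEGER NOT NULL REFERENCES tenants(id),
    email TEXT NOT NULL,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),  -- UTC timestamps
    UNIQUE (tenant_id, email)  -- emails unique per tenant, not globally
);
CREATE INDEX idx_users_tenant ON users (tenant_id);  -- index every FK
""")

conn.execute("INSERT INTO tenants (name) VALUES ('acme')")
conn.execute("INSERT INTO users (tenant_id, email) VALUES (1, 'a@acme.com')")

def violates_fk():
    """Inserting a user for a nonexistent tenant must be rejected by the DB."""
    try:
        conn.execute("INSERT INTO users (tenant_id, email) VALUES (999, 'x@x.com')")
        return False
    except sqlite3.IntegrityError:
        return True
```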

💡

Pro tip: Describe your read/write ratio per table. Claude indexes for your actual query patterns, not just textbook recommendations.

Event-Driven Architecture

54/100

<context> System type: [e.g. e-commerce, fintech, IoT platform] Current pain points: [e.g. tight coupling, scaling bottlenecks, temporal dependencies] Scale: [e.g. 10K events/sec, 1M events/day] Infrastructure: [e.g. AWS, GCP, self-hosted, Kafka, RabbitMQ available] </context> <task> Design an event-driven architecture for this system: [DESCRIBE THE SYSTEM AND ITS PROCESSES] Cover: 1. Event taxonomy (domain events, commands, integration events) 2. Event schema design and versioning 3. Message broker choice with rationale 4. Consumer group strategy 5. Error handling: dead letter queues, retry policy 6. Idempotency handling 7. Observability: event tracing and monitoring </task> <constraints> - Define the event contract clearly — events are a public API - Address eventual consistency explicitly: what UI/UX does it require? - Don't use events where a simple synchronous call is clearer - Show a concrete example for the most complex flow </constraints>

Designs an event-driven system with event taxonomy, broker selection, and idempotent error handling.
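The idempotency item matters because most brokers deliver at-least-once, so consumers will see duplicates. A minimal consumer-side sketch: the event shape and in-memory store are hypothetical, and a production consumer would persist the processed-ID set transactionally with the state change.

```python
# In-memory stand-ins for the consumer's state and dedup store.
balance = {"acct-1": 0}
processed_event_ids = set()

def handle_deposit(event):
    """Idempotent consumer: redelivered events are applied exactly once."""
    if event["event_id"] in processed_event_ids:
        return "skipped"  # duplicate delivery: no-op
    balance[event["account"]] += event["amount"]
    processed_event_ids.add(event["event_id"])
    return "applied"

event = {"event_id": "evt-001", "account": "acct-1", "amount": 100}
first = handle_deposit(event)   # normal delivery: applied
second = handle_deposit(event)  # broker redelivery: skipped, no double-credit
```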

💡

Pro tip: Describe your most complex business process. Claude maps it to a concrete event flow with all edge cases handled.

Caching Strategy

55/100

<context> Application type: [e.g. read-heavy API, real-time dashboard, e-commerce] Current performance issue: [e.g. DB CPU at 90%, p99 latency 2s] Data characteristics: [e.g. user profiles change rarely, product catalog updated hourly, prices change per-request] Infrastructure: [e.g. Redis available, CDN available, no additional infrastructure] </context> <task> Design a caching strategy for this system: [DESCRIBE THE SYSTEM AND ITS DATA ACCESS PATTERNS] For each cache layer: 1. What data to cache 2. Where to cache it (CDN, application, database) 3. Cache key design 4. TTL strategy 5. Invalidation approach 6. Consistency guarantees </task> <constraints> - Don't cache user-specific data in shared caches - Cache invalidation is the hard part — explain it explicitly - Estimate the cache hit ratio you expect - For financial or inventory data: explain consistency tradeoffs carefully </constraints>

Designs a layered caching strategy with invalidation logic, TTLs, and consistency guarantees.
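The per-entry TTL idea from the strategy above, sketched as an in-process cache. The keys, TTLs, and data are hypothetical examples of "different TTLs for different data types"; Redis gives you the same semantics out of process via SETEX.

```python
import time

class TTLCache:
    """Minimal cache where each entry carries its own expiry."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now + ttl_seconds)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None or entry[1] <= now:
            self._store.pop(key, None)  # expired: treat as a miss
            return None
        return entry[0]

cache = TTLCache()
# Different TTLs per data type: profiles change rarely, prices constantly.
cache.set("user:42:profile", {"name": "Ada"}, ttl_seconds=3600, now=0)
cache.set("price:sku-9", 1999, ttl_seconds=5, now=0)

fresh = cache.get("price:sku-9", now=4)  # within TTL: cache hit
stale = cache.get("price:sku-9", now=6)  # past TTL: miss, re-fetch required
```

The explicit now parameter exists only to make expiry testable without sleeping; real callers omit it.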

💡

Pro tip: Describe your data freshness requirements per entity. Claude designs different TTL strategies for different data types.

Tech Stack Choice

56/100

<context> Product type: [e.g. B2B SaaS, mobile app, data pipeline] Team existing skills: [e.g. Python, Java, JavaScript] Scale requirements: [e.g. launch with 100 users, scale to 100K] Time to market: [e.g. MVP in 3 months] Budget: [e.g. bootstrapped, seed-funded, enterprise budget] </context> <task> Recommend a tech stack for this product: [DESCRIBE THE PRODUCT AND ITS REQUIREMENTS] For each layer (frontend, backend, database, infrastructure): 1. Recommended choice 2. Why it fits this specific context 3. The main alternative and when you'd choose it instead 4. Key risks and mitigations </task> <constraints> - Match the stack to the team's existing skills — retraining time is a real cost - Boring technology is often the right choice — don't recommend new tech for its own sake - Consider hiring: can you find engineers for this stack? - Estimate operational cost at stated scale </constraints>

Recommends a practical tech stack matched to your team skills, timeline, and scale requirements.

💡

Pro tip: List your team's current skills. Claude weights stack recommendations toward what your team can ship quickly.

CI/CD Pipeline Design

57/100

<context> Source control: [e.g. GitHub, GitLab, Bitbucket] Deployment target: [e.g. AWS ECS, Kubernetes, Heroku, VPS] Team size: [e.g. 4 engineers] Deployment frequency goal: [e.g. multiple times daily, weekly releases] Current pain points: [e.g. slow builds, manual deployments, flaky tests blocking deploys] </context> <task> Design a CI/CD pipeline for this project: [DESCRIBE THE APPLICATION AND DEPLOYMENT REQUIREMENTS] Define: 1. Pipeline stages and what runs at each stage 2. PR checks vs. merge checks vs. deploy pipeline 3. Test parallelization strategy 4. Deployment strategy (blue/green, canary, rolling) 5. Rollback mechanism 6. Secrets management in CI 7. Estimated pipeline run time </task> <constraints> - PR pipeline must complete in under 5 minutes — fast feedback is critical - Never put secrets in pipeline YAML files - Deployment should be one command or one button - Include a rollback procedure that takes under 2 minutes </constraints>

Designs a fast CI/CD pipeline with parallel test stages, safe deployment strategies, and sub-2-minute rollback.

💡

Pro tip: Describe your current slowest step. Claude restructures the pipeline to parallelize or eliminate it.

Horizontal Scaling Design

58/100

<context> Current architecture: [describe your current setup] Current scale: [e.g. single server, handling 500 req/min] Target scale: [e.g. 50K req/min, 10x current] Current bottleneck: [e.g. CPU bound, database connections, in-memory session state] </context> <task> Design the path to horizontal scaling for this system: [DESCRIBE THE CURRENT ARCHITECTURE] Address: 1. Current bottlenecks that prevent horizontal scaling 2. What needs to be made stateless 3. Database scaling strategy (read replicas, sharding, connection pooling) 4. Session and cache distribution 5. Load balancing strategy 6. The step-by-step migration plan </task> <constraints> - Identify what needs to change before adding more instances - Stateful components (sessions, file storage, queues) must be externalized first - Database connections per instance × instance count must not exceed DB limits - Provide a migration path that doesn't require downtime </constraints>

Identifies scaling blockers and designs a stateless, horizontally scalable architecture with a zero-downtime migration path.

💡

Pro tip: Describe your current bottleneck metric. Claude focuses on the actual constraint, not a hypothetical future one.

Architecture Decision Record

59/100

<context> Decision to document: [e.g. "switch from REST to GraphQL", "adopt Kafka for event streaming"] Decision status: [Proposed / Accepted / Deprecated] Decision makers: [who was involved] </context> <task> Write an Architecture Decision Record (ADR) for this decision: Context and problem: [DESCRIBE THE PROBLEM BEING SOLVED] Options considered: [LIST THE OPTIONS EVALUATED] Write the ADR with: 1. Title (ADR-NNN: Short noun phrase) 2. Status 3. Context (problem and forces) 4. Decision (what was decided and why) 5. Consequences (positive, negative, neutral) 6. Alternatives considered and why they were rejected </task> <constraints> - An ADR is a record of a decision, not a proposal — write in past/present tense - Be honest about the negative consequences - Future readers should understand why this was right for the time, even if circumstances change - Keep it under 1 page </constraints>

Writes a clear, honest ADR that future engineers can use to understand why a decision was made.

💡

Pro tip: Ask Claude to generate the ADR as an artifact. You can drop it directly into your docs/architecture folder.

Data Pipeline Design

60/100

<context> Data sources: [e.g. PostgreSQL, Kafka events, third-party APIs, S3 files] Destination: [e.g. data warehouse, analytics DB, ML feature store] Volume: [e.g. 10GB/day, 1M events/hour] Latency requirement: [e.g. real-time, near-real-time <5min, daily batch] Infrastructure: [e.g. AWS, GCP, Azure, self-hosted] </context> <task> Design a data pipeline for this use case: [DESCRIBE THE DATA AND BUSINESS REQUIREMENTS] Cover: 1. Pipeline architecture (batch, streaming, lambda, kappa) 2. Technology choices with rationale 3. Data transformation strategy (ELT vs. ETL) 4. Schema evolution handling 5. Error handling and data quality checks 6. Monitoring and alerting 7. Estimated cost at stated volume </task> <constraints> - Match latency requirement to pipeline type — don't use streaming for batch use cases - Data quality checks must happen before data reaches consumers - Plan for schema changes in the source systems - Include idempotent reprocessing capability </constraints>

Designs a data pipeline with the right batch/streaming tradeoff, quality checks, and cost estimate.

💡

Pro tip: Specify your latency requirement precisely. Claude uses this to decide between Kafka, Spark, Airflow, or simpler batch jobs.

Documentation

10 prompts

Write a README

61/100

<context> Project type: [e.g. open source library, internal tool, API service, CLI] Target audience: [e.g. external developers, internal team, end users] Tech stack: [STACK] </context> <task> Write a README for this project: [DESCRIBE THE PROJECT OR PASTE KEY CODE/CONFIG] Include: 1. One-line description 2. Why it exists (problem it solves) 3. Quick start (get it running in under 5 minutes) 4. Installation 5. Usage with realistic examples 6. Configuration reference 7. Contributing guide (if open source) 8. License </task> <constraints> - The quick start should work copy-paste — test every command - Lead with value, not with implementation details - Use code blocks with language hints for all code samples - No badges unless they provide real information </constraints>

Writes a README that gets a developer productive in under 5 minutes with working copy-paste examples.

💡

Pro tip: Ask Claude to generate this as an artifact. You can review and edit the markdown directly.

Write API Documentation

62/100

<context> API type: [REST / GraphQL / SDK] Audience: [e.g. third-party developers, internal team] Format: [e.g. Markdown, OpenAPI, JSDoc] Existing docs: [none / partial — describe what exists] </context> <task> Write API documentation for these endpoints: [PASTE ROUTE HANDLERS, SCHEMA, OR ENDPOINT DESCRIPTIONS] For each endpoint/method: 1. Purpose (one sentence) 2. Request: method, URL, headers, parameters, body schema 3. Response: status codes, body schema 4. Error responses with codes 5. A working code example in [LANGUAGE] </task> <constraints> - Show realistic values in examples, not "string" or 123 - Document every possible error response - Code examples must be copy-pasteable and correct - Mark deprecated endpoints clearly </constraints>

Writes complete API documentation with realistic examples, error codes, and working code samples.

💡

Pro tip: Paste your OpenAPI spec alongside the route code. Claude enriches the spec with human-readable descriptions and examples.

Generate JSDoc and Docstrings

63/100

<context> Language: [e.g. TypeScript, Python, Java] Doc format: [e.g. JSDoc, Google docstrings, NumPy docstrings] Generator tool: [e.g. TypeDoc, Sphinx, Javadoc] </context> <task> Write documentation comments for these functions/classes: [PASTE CODE HERE] For each function/class: 1. Summary (one sentence describing what it does) 2. All parameters with types and descriptions 3. Return value and type 4. Exceptions/errors thrown 5. Usage example 6. Any gotchas or important behavior notes </task> <constraints> - Don't restate the code — explain intent and behavior - Document edge cases: what happens on null input, empty array, etc. - Use the exact doc format for the stated tool - Examples must be minimal and correct </constraints>

Generates complete, accurate doc comments that explain intent and edge cases, not just parameter types.

💡

Pro tip: Add your docstring format to a Claude Project. Claude maintains consistency with your existing documentation style.

Technical Design Document

64/100

<context> Audience: [e.g. engineering team, cross-functional stakeholders] Scope: [e.g. new feature, system redesign, API change] Timeline: [e.g. 6-week implementation] </context> <task> Write a technical design document for this feature: [DESCRIBE THE FEATURE OR SYSTEM CHANGE] Include: 1. Problem statement 2. Goals and non-goals 3. Proposed solution with rationale 4. Alternative approaches considered 5. System design (components, data flow, API changes) 6. Data model changes 7. Migration plan 8. Testing strategy 9. Open questions 10. Timeline and milestones </task> <constraints> - Non-goals are as important as goals — be explicit - The document should enable an engineer to start implementation without asking questions - Alternatives section must explain why they were rejected, not just list them - Open questions should have an owner and a deadline </constraints>

Writes a complete technical design doc that enables implementation to start without further clarification.

💡

Pro tip: Enable extended thinking. Claude explores edge cases and open questions before drafting, producing a more complete document.

Onboarding Guide

65/100

<context> Role: [e.g. backend engineer, full-stack developer] System complexity: [e.g. monolith, 5 microservices, 3 external integrations] Existing documentation: [describe what exists] Typical onboarding pain points: [what takes new engineers the longest to understand] </context> <task> Write a developer onboarding guide for this codebase: [DESCRIBE THE SYSTEM ARCHITECTURE AND KEY COMPONENTS] Cover: 1. System overview — what it does and why 2. Local dev environment setup (step-by-step) 3. Architecture overview with key components 4. How to make a change and deploy it 5. Common tasks with examples (add an endpoint, add a migration, etc.) 6. How to debug common issues 7. Who to ask when stuck </task> <constraints> - Every setup step must have a success verification command - Anticipate the top 5 things new engineers get stuck on - Link to relevant code files by path, not by description - The guide should get a new engineer making their first commit in one day </constraints>

Writes an onboarding guide that gets a new engineer making their first commit in one day.

💡

Pro tip: Add your codebase structure to the context. Claude references actual file paths so new engineers can navigate immediately.

Generate Changelog

66/100

<context> Project name: [PROJECT NAME] Version: [e.g. v2.4.0] Audience: [e.g. end users, developers, internal team] Format: [e.g. Keep a Changelog, GitHub releases, internal] </context> <task> Generate a changelog entry for this release: [PASTE GIT LOG OR LIST OF CHANGES] Format the changes as: - Added: new features - Changed: modifications to existing behavior - Deprecated: features to be removed - Removed: features that were removed - Fixed: bug fixes - Security: security patches For each entry: 1. User-facing description (not a commit message) 2. Upgrade notes if behavior changed 3. Migration steps if breaking changes </task> <constraints> - Write for the reader, not the developer — explain impact, not implementation - Breaking changes must be prominently marked - Don't include internal refactors that don't affect users - Keep each entry to one sentence </constraints>

Turns git log output into a user-facing changelog with upgrade notes and breaking change warnings.

💡

Pro tip: Paste your git log with commit messages. Claude filters internal commits and rewrites them in user-facing language.

Environment Setup Guide

67/100

<context> OS targets: [e.g. macOS, Ubuntu 22.04, Windows WSL2] Tech stack: [STACK] External services: [e.g. Stripe test account, AWS credentials, SendGrid] </context> <task> Write a local development environment setup guide: [DESCRIBE THE PROJECT AND ITS DEPENDENCIES] Cover: 1. Prerequisites with version requirements 2. Step-by-step installation 3. Environment variable setup with .env.example 4. Database setup and seeding 5. Running the development server 6. Running tests 7. Common setup errors and fixes </task> <constraints> - Every step must have a success verification command - Include the exact error messages for common failures, not just "it might not work" - Never put real credentials in examples — use placeholder values - Specify exact versions for all dependencies </constraints>

Writes a setup guide with verified steps, troubleshooting for common errors, and safe .env.example.

💡

Pro tip: List the top 3 things that broke during your last onboarding. Claude writes the troubleshooting section around your real errors.

Production Runbook

68/100

<context> System: [describe the production system] On-call audience: [e.g. engineers who may not know the system deeply] Critical paths: [e.g. checkout flow, auth service, data pipeline] Monitoring stack: [e.g. Datadog, Grafana, CloudWatch] </context> <task> Write a production runbook for this system: [DESCRIBE THE SYSTEM AND KNOWN FAILURE MODES] Include for each operation: 1. When to perform it (trigger conditions) 2. Step-by-step procedure with exact commands 3. Expected output at each step 4. How to verify success 5. Rollback procedure 6. Escalation path Operations to cover: - Restart service - Roll back deployment - Scale up/down - Clear cache - Handle database failover </task> <constraints> - Commands must be copy-pasteable — no pseudocode - Include the exact metrics and thresholds that trigger each procedure - Runbook must be usable by someone who doesn't know the system - Every procedure must have a "success" and "still broken" path </constraints>

Writes a production runbook with copy-pasteable commands, success verification, and escalation paths.

💡

Pro tip: Describe your last 3 incidents. Claude writes the runbook around your actual failure modes, not hypothetical ones.

Architecture Diagrams

69/100

<context> Diagram type: [e.g. system context, component, sequence, deployment] Audience: [e.g. engineers, non-technical stakeholders, new hires] Format: [e.g. Mermaid, PlantUML, text description for Lucidchart] </context> <task> Generate architecture diagram code for this system: [DESCRIBE THE SYSTEM ARCHITECTURE] Produce: 1. The diagram code in the specified format 2. A legend explaining any non-obvious notation 3. Key observations the diagram highlights </task> <constraints> - Keep diagrams focused — one diagram per concern - Use standard C4 model notation if doing context/component diagrams - Label all arrows with the protocol or data type - Don't try to show everything in one diagram </constraints>

Generates Mermaid or PlantUML diagram code for your system with C4 notation and labeled data flows.

💡

Pro tip: Ask Claude to generate this as an artifact. You can paste the Mermaid code directly into GitHub, Notion, or Miro.
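For reference, output from this prompt typically looks like the following — a minimal Mermaid flowchart for a hypothetical SPA-plus-API system (all components and origins are placeholders), with every arrow labeled by protocol or data type as the constraints require:

```mermaid
flowchart LR
  U[User browser] -->|HTTPS / JSON| API[API service]
  API -->|SQL over TCP| DB[(PostgreSQL)]
  API -->|HTTPS / REST| S3[(Object storage)]
  U -->|HTTPS| CDN[CDN static assets]
```

One diagram per concern: this shows system context only; a sequence diagram for a specific flow would be a separate artifact.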

Migration Guide

70/100

<context> Migration type: [e.g. v1 to v2 API, library upgrade, database schema change] Audience: [e.g. external API consumers, internal developers] Breaking changes: [list what changed] Migration effort estimate: [e.g. 1 hour, 1 day, 1 week] </context> <task> Write a migration guide for this change: Before (v1): [PASTE OLD API / CODE / SCHEMA] After (v2): [PASTE NEW API / CODE / SCHEMA] Cover: 1. Summary of what changed and why 2. Step-by-step migration instructions 3. Before/after code examples for each breaking change 4. Automated migration tools if available 5. How to run old and new versions in parallel during transition 6. Deadline for old version deprecation </task> <constraints> - Every breaking change must have a before/after code example - Include a migration checklist readers can check off - Estimate the effort required per change type - Provide a way to validate the migration succeeded </constraints>

Writes a migration guide with before/after examples, a migration checklist, and validation steps.

💡

Pro tip: Paste both the old and new code. Claude identifies every breaking change, including subtle behavioral ones you may have missed.

Performance

10 prompts

Optimize a Slow Function

71/100

<context> Language: [LANGUAGE] Runtime: [e.g. Node.js 20, Python 3.12] Profiling data: [paste profiler output, or describe: "called 10K times, avg 80ms"] Input characteristics: [e.g. array of 10K items, nested object 5 levels deep] </context> <task> Optimize this function for performance: [PASTE FUNCTION CODE] Provide: 1. Identify the bottleneck (algorithmic, I/O, memory allocation, etc.) 2. The optimized implementation 3. Time/space complexity before and after 4. Any tradeoffs (readability, memory for speed, etc.) 5. How to benchmark the improvement </task> <constraints> - Don't optimize at the cost of correctness — verify edge cases are preserved - Prefer algorithmic improvements over micro-optimizations - If the bottleneck is I/O, restructuring beats micro-optimization - Include the benchmark command to verify improvement </constraints>

Identifies the algorithmic bottleneck and delivers an optimized implementation with complexity analysis.

💡

Pro tip: Paste profiler output rather than describing symptoms. Claude focuses on the actual hot path, not guesswork.
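The kind of algorithmic fix this prompt targets can be sketched in TypeScript — a hypothetical lookup that replaces a repeated `Array.prototype.includes` scan (O(n·m)) with a one-time `Set` build (O(n+m)):

```typescript
// Hypothetical example: select the records whose IDs appear in `wantedIds`.
// Before: records.filter(r => wantedIds.includes(r.id)) rescans the ID
// array for every record. After: build a Set once, then each lookup is O(1).
interface Row { id: number; name: string }

export function findWanted(records: Row[], wantedIds: number[]): Row[] {
  const wanted = new Set(wantedIds);              // built once: O(m)
  return records.filter((r) => wanted.has(r.id)); // O(1) membership test per row
}
```

Edge cases are preserved: an empty `wantedIds` returns an empty result, and duplicate IDs are deduplicated by the `Set` without changing the output.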

Design a Caching Layer

72/100

<context> Application: [describe your system] Expensive operation: [e.g. DB query taking 500ms, third-party API call] Data characteristics: [e.g. per-user data, global catalog, session data] Cache infrastructure: [e.g. Redis, Memcached, in-memory, CDN] Consistency requirement: [e.g. eventual OK, must be fresh within 30 seconds] </context> <task> Design a caching layer for this operation: [PASTE THE CODE TO CACHE] Design: 1. Cache key schema 2. TTL and expiration strategy 3. Cache-aside, write-through, or write-behind — choose and justify 4. Cache invalidation on data change 5. Cache stampede prevention 6. Implementation code </task> <constraints> - Cache stampede (dog-pile effect) must be addressed - User-specific data must never bleed between users - Show the cache hit/miss code path - Include monitoring: what metrics to track </constraints>

Designs a caching layer with key schema, invalidation strategy, and stampede prevention.

💡

Pro tip: Describe your consistency requirement precisely. Cache invalidation gets much simpler once you know your staleness tolerance.
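The stampede prevention the constraints call for can be sketched as a single-flight wrapper: concurrent misses on the same key share one in-flight load instead of each hitting the expensive backend. This is a minimal in-process sketch under stated assumptions — a production version adds TTLs and an external store such as Redis:

```typescript
// Cache-aside with dog-pile prevention. `load` stands in for the
// expensive operation (DB query, third-party API call).
const cache = new Map<string, unknown>();
const inFlight = new Map<string, Promise<unknown>>();

export async function getOrLoad<T>(
  key: string,
  load: () => Promise<T>,
): Promise<T> {
  if (cache.has(key)) return cache.get(key) as T; // cache hit path
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>;      // join the in-flight load
  const p = load()
    .then((value) => {
      cache.set(key, value);                      // populate on success only
      return value;
    })
    .finally(() => inFlight.delete(key));         // allow future reloads
  inFlight.set(key, p);
  return p;
}
```

Because the key is part of the map, user-specific data never bleeds between users as long as the user ID is included in the cache key schema.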

Optimize Database Queries

73/100

<context> Database: [e.g. PostgreSQL 16] ORM: [e.g. Prisma, raw SQL] Table sizes: [e.g. orders: 8M rows] EXPLAIN ANALYZE: [paste if available] Current query time: [e.g. 3.2 seconds] </context> <task> Optimize these database queries: [PASTE QUERIES OR ORM CODE] For each query: 1. Current execution plan bottleneck 2. Exact CREATE INDEX statements 3. Optimized query rewrite 4. Expected improvement 5. Any data model changes that would help long-term </task> <constraints> - Show CREATE INDEX with all options (CONCURRENTLY, partial conditions, composite columns) - If N+1: show the batched replacement - Consider index bloat on write-heavy tables - If a query cannot be made fast, propose materialized view or denormalization </constraints>

Rewrites slow queries, provides exact index DDL, and estimates improvement for your data volume.

💡

Pro tip: Always include EXPLAIN ANALYZE. Claude cannot accurately recommend indexes without seeing the actual execution plan.
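For reference, the index DDL this prompt asks for typically has this shape — a hypothetical `orders` query served by a composite, partial index built without blocking writes (table and column names are placeholders):

```sql
-- Hypothetical query:
--   SELECT id, total FROM orders
--   WHERE customer_id = $1 AND status = 'pending'
--   ORDER BY created_at DESC LIMIT 20;
-- Composite + partial index, created without a write lock:
CREATE INDEX CONCURRENTLY idx_orders_customer_pending
  ON orders (customer_id, created_at DESC)
  WHERE status = 'pending';
```

The partial `WHERE` clause keeps the index small on write-heavy tables, and the column order matches the equality filter first, then the sort key.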

Frontend Performance Audit

74/100

<context> Framework: [e.g. React, Next.js, Vue] Target metrics: [e.g. LCP < 2.5s, FID < 100ms, CLS < 0.1] Current Lighthouse score: [if available] Device target: [e.g. mobile on 4G, desktop broadband] </context> <task> Perform a performance audit of this frontend code: [PASTE COMPONENT CODE, BUILD CONFIG, OR DESCRIBE THE PAGE] Identify and fix: 1. Render-blocking resources 2. Unnecessary re-renders 3. Large bundle imports 4. Missing code splitting 5. Unoptimized images 6. Layout shifts 7. Missing memoization </task> <constraints> - Prioritize by Core Web Vitals impact - Show the React DevTools or bundle analyzer finding alongside the fix - Code splitting: show the exact dynamic import syntax - Memoization: only where it actually saves renders </constraints>

Audits frontend code against Core Web Vitals with specific fixes for render blocking, bundle size, and re-renders.

💡

Pro tip: Paste your Lighthouse report JSON. Claude prioritizes fixes by their impact on your actual CWV scores.

Reduce Bundle Size

75/100

<context> Framework: [e.g. React, Next.js, Vue] Bundler: [e.g. webpack, Vite, Rollup] Current bundle size: [e.g. 1.2MB gzipped main chunk] Bundle analyzer output: [paste top modules by size if available] </context> <task> Reduce the JavaScript bundle size for this application: [PASTE PACKAGE.JSON AND/OR KEY IMPORT PATTERNS] Identify: 1. Largest dependencies and whether they can be replaced 2. Tree-shaking opportunities (bad: import lodash, good: import debounce from lodash/debounce) 3. Code splitting opportunities with exact dynamic import code 4. Dependencies used only server-side that are leaking to the client 5. Duplicate dependencies </task> <constraints> - Show exact before/after import syntax for each fix - Estimate size reduction for each recommendation - Don't replace stable dependencies with obscure alternatives - Test that tree-shaking is actually working — many libraries claim it but don't </constraints>

Identifies bundle bloat from bad imports, missing code splitting, and server-only leaks with size estimates.

💡

Pro tip: Paste your bundle analyzer output. Claude focuses on the actual large modules rather than hypothetical savings.

Optimize API Response Time

76/100

<context> Framework: [e.g. Express, FastAPI, NestJS] Current p50/p99: [e.g. 200ms / 1.8s] Target p99: [e.g. 300ms] Profiling available: [e.g. APM traces, yes/no] </context> <task> Optimize the response time of this API endpoint: [PASTE ROUTE HANDLER AND CALLED FUNCTIONS] Analyze and fix: 1. Serial operations that can be parallelized 2. Unnecessary data loading (overfetching from DB) 3. Missing DB indexes for this query path 4. N+1 query patterns 5. Synchronous operations that should be async 6. Response payload size (projection, pagination) </task> <constraints> - Parallelize independent operations with Promise.all or equivalent - Reduce DB round trips first — they dominate latency - Show the before/after execution timeline - Don't add caching without defining invalidation </constraints>

Restructures API handlers to parallelize operations, eliminate N+1 queries, and reduce DB round trips.

💡

Pro tip: Paste APM trace data if you have it. Claude identifies the actual slow segment rather than guessing.
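The serial-to-parallel restructuring in point 1 can be sketched as follows; the loaders are hypothetical stand-ins for independent DB or API calls:

```typescript
type Loader<T> = () => Promise<T>;

// Before: total latency is roughly the SUM of the three calls,
// because each await blocks the next.
export async function serial<A, B, C>(a: Loader<A>, b: Loader<B>, c: Loader<C>) {
  return [await a(), await b(), await c()] as const;
}

// After: total latency is roughly the MAX of the three calls —
// all promises start before any is awaited.
export async function parallel<A, B, C>(a: Loader<A>, b: Loader<B>, c: Loader<C>) {
  return Promise.all([a(), b(), c()]);
}
```

This only applies when the operations are truly independent; if call B needs the result of call A, they stay sequential.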

Memory Optimization

77/100

<context> Language: [LANGUAGE] Runtime: [e.g. Node.js, JVM, Python] Memory limit: [e.g. container limited to 512MB] Current usage: [e.g. peaks at 900MB, OOM after processing 10K records] Data volume: [e.g. processes files up to 5GB, 1M objects in memory] </context> <task> Optimize memory usage in this code: [PASTE CODE] Identify and fix: 1. Large objects loaded fully when streaming would work 2. Unnecessary data copies 3. Objects held in memory longer than needed 4. Memory-inefficient data structures 5. Missing pagination on large data sets </task> <constraints> - Stream large data instead of loading into memory - Show memory before/after estimates - Buffer/batch size should be tunable, not hardcoded - Don't sacrifice error handling for memory savings </constraints>

Converts memory-intensive code to streaming patterns and efficient data structures with before/after estimates.

💡

Pro tip: Include your memory limit. Claude designs the solution to fit within your container constraints, not just reduce usage generically.
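The streaming pattern the constraints require can be sketched as a generator that holds only one bounded batch in memory at a time, rather than materializing the full data set:

```typescript
// Yields records from `source` in chunks of `batchSize`. Peak memory is
// one batch, not the whole input; the batch size is tunable, not hardcoded.
export function* inBatches<T>(source: Iterable<T>, batchSize: number): Generator<T[]> {
  let batch: T[] = [];
  for (const item of source) {
    batch.push(item);
    if (batch.length === batchSize) {
      yield batch;  // hand off a bounded chunk to the consumer
      batch = [];   // drop the reference so the chunk can be GC'd
    }
  }
  if (batch.length > 0) yield batch; // flush the remainder
}
```

The same shape works with an async generator over a file or DB cursor, which is where the real memory savings come from.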

Implement Lazy Loading

78/100

<context> Framework: [e.g. React, Vue, Angular] Type of lazy loading needed: [e.g. route-level, component, images, data] Current behavior: [e.g. all components bundled in main chunk, all images load on page load] Target: [e.g. reduce initial load by 40%] </context> <task> Implement lazy loading for this application: [PASTE CURRENT ROUTING/COMPONENT CODE] Implement: 1. Route-level code splitting with dynamic imports 2. Component lazy loading with loading fallbacks 3. Image lazy loading with placeholder strategy 4. Data lazy loading on scroll or interaction 5. Prefetching strategy for likely-next resources </task> <constraints> - Show exact dynamic import syntax for your framework - Loading fallbacks must prevent layout shift - Prefetch on hover/focus, not on page load - Don't lazy-load above-the-fold content </constraints>

Implements route-level code splitting, component lazy loading, and image deferral with layout-shift-free fallbacks.

💡

Pro tip: Describe your above-the-fold content. Claude keeps it eagerly loaded while deferring everything below it.

Optimize Batch Processing

79/100

<context> Language: [LANGUAGE] Current throughput: [e.g. 500 records/second] Target throughput: [e.g. 5,000 records/second] Data source: [e.g. PostgreSQL, S3 CSV, Kafka] Bottleneck: [e.g. DB writes, CPU processing, I/O] </context> <task> Optimize this batch processing job: [PASTE PROCESSING CODE] Improve: 1. Batch size tuning 2. Parallel processing with worker pools 3. DB bulk operations instead of row-by-row 4. Memory-efficient streaming 5. Checkpointing for resumable processing 6. Progress reporting </task> <constraints> - Bulk DB inserts/updates are always faster than row-by-row - Worker pool size should be tunable, not hardcoded - Include checkpointing so the job can resume after failure - Show before/after throughput estimate </constraints>

Optimizes batch jobs with bulk DB operations, worker pools, and checkpointing for 10x throughput gains.

💡

Pro tip: Describe your bottleneck — CPU, I/O, or DB. Claude focuses optimization on the actual constraint, not the whole pipeline.
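The chunked bulk writes and checkpointing the constraints describe can be sketched like this; `writeBulk` and `saveCheckpoint` are hypothetical hooks standing in for your multi-row INSERT and checkpoint store:

```typescript
// Processes `records` in bulk chunks (one DB round trip per chunk instead
// of one per row) and saves a checkpoint after each chunk, so a crashed
// job resumes from `startAt` instead of restarting from zero.
export async function processResumably<T>(
  records: T[],
  chunkSize: number, // tunable, not hardcoded
  writeBulk: (chunk: T[]) => Promise<void>,
  saveCheckpoint: (index: number) => Promise<void>,
  startAt = 0,       // load the last checkpoint before calling
): Promise<number> {
  for (let i = startAt; i < records.length; i += chunkSize) {
    await writeBulk(records.slice(i, i + chunkSize));
    await saveCheckpoint(Math.min(i + chunkSize, records.length));
  }
  return records.length;
}
```

For the throughput target, `writeBulk` itself must be a genuine bulk operation; checkpointing only buys resumability, not speed.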

Write a Performance Benchmark

80/100

<context> Language: [LANGUAGE] Benchmark framework: [e.g. Benchmark.js, criterion, pytest-benchmark, JMH] What to benchmark: [describe the function or operation] Comparison: [e.g. old vs. new implementation, algorithm A vs. B] </context> <task> Write a performance benchmark for this code: [PASTE CODE TO BENCHMARK] Provide: 1. Benchmark setup that eliminates JIT warmup bias 2. Multiple input sizes to show scaling behavior 3. Statistical correctness (multiple runs, confidence intervals) 4. Memory allocation measurement where relevant 5. Interpretation: what the numbers mean for production </task> <constraints> - Prevent dead code elimination — use benchmark results - Include warmup iterations before measuring - Test at multiple input sizes, not just one - Translate microbenchmark results into real-world impact estimates </constraints>

Writes a statistically valid benchmark with warmup, multiple input sizes, and real-world impact interpretation.

💡

Pro tip: Describe your production input characteristics. Claude writes benchmark inputs that match your actual workload distribution.
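A minimal sketch of the warmup-and-sink shape the constraints describe — a real suite such as Benchmark.js or pytest-benchmark adds the statistics on top:

```typescript
// Runs `fn` untimed during warmup (so the JIT settles), then times `runs`
// iterations. The result is written to a sink so the JIT cannot
// dead-code-eliminate the work being measured.
export function bench(fn: () => unknown, warmup = 50, runs = 200): number {
  let sink: unknown;
  for (let i = 0; i < warmup; i++) sink = fn(); // warmup, not timed
  const start = performance.now();
  for (let i = 0; i < runs; i++) sink = fn();
  const elapsed = performance.now() - start;
  (globalThis as Record<string, unknown>).__benchSink = sink; // keep sink observable
  return elapsed / runs; // mean ms per iteration
}
```

Run it at several input sizes to see scaling behavior, not just one point.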

Security

10 prompts

Security Audit

81/100

<context> Application type: [e.g. SaaS web app, public API, internal tool] Tech stack: [STACK] Data sensitivity: [e.g. PII, financial data, health records] Compliance requirements: [e.g. SOC 2, HIPAA, PCI-DSS, GDPR] Last security review: [e.g. never, 6 months ago] </context> <task> Perform a security audit of this codebase: [PASTE CODE OR DESCRIBE THE SYSTEM] Check for: 1. OWASP Top 10 vulnerabilities 2. Authentication and authorization flaws 3. Input validation gaps 4. Sensitive data exposure 5. Dependency vulnerabilities 6. Security misconfigurations 7. Secrets in code or config For each finding: severity, exploit scenario, and fix. </task> <constraints> - Focus on exploitable findings, not theoretical risks - Compliance gaps should map to specific controls (SOC 2 CC6, PCI DSS 6.3, etc.) - Rank by CVSS score or practical impact - Provide a prioritized remediation backlog </constraints>

Audits code against OWASP Top 10 with exploitable findings, CVSS-ranked severity, and a remediation backlog.

💡

Pro tip: Enable extended thinking. Claude traces data flows through your code to find injection points that static analysis misses.

Design Auth Flow

82/100

<context> Application type: [e.g. SaaS, B2C mobile app, API] Auth requirements: [e.g. social login, SSO/SAML, MFA, magic links] Compliance: [e.g. SOC 2, HIPAA] Tech stack: [STACK] </context> <task> Design a secure authentication system for this application: [DESCRIBE THE PRODUCT AND USER TYPES] Cover: 1. Registration and login flow 2. Token strategy (access + refresh tokens) 3. MFA implementation 4. Password storage (hashing algorithm and parameters) 5. Session invalidation 6. Brute force and credential stuffing protection 7. Account recovery flow security </task> <constraints> - Passwords: bcrypt with cost factor 12 or Argon2id - Access tokens: short-lived (15 min), never stored client-side in localStorage - Refresh tokens: rotate on use, store hash in DB - Rate limit all auth endpoints - Account recovery must not be weaker than the primary auth </constraints>

Designs a secure auth system with hardened token strategy, MFA, and brute force protection.

💡

Pro tip: Specify your compliance requirements. Claude includes the specific controls needed for SOC 2 or HIPAA audit evidence.

Input Validation

83/100

<context> Language: [LANGUAGE] Framework: [FRAMEWORK] Input sources: [e.g. HTTP request body, query params, file uploads, webhook payloads] Validation library: [e.g. Zod, Joi, Pydantic, manual] </context> <task> Implement comprehensive input validation for these endpoints: [PASTE ROUTE HANDLERS OR DESCRIBE INPUTS] For each input: 1. Type validation 2. Format validation (regex, enum, range) 3. Business rule validation 4. Sanitization where needed 5. Error messages that are helpful but not exploitable </task> <constraints> - Validate on ingress — never trust external input - Server-side validation is mandatory even if client-side exists - Error messages must not leak schema or internal details - File uploads: validate type by magic bytes, not extension - Reject unexpected fields — don't silently ignore them </constraints>

Implements schema-based input validation with sanitization, magic-byte file validation, and safe error messages.

💡

Pro tip: Paste your Zod or Pydantic schema. Claude extends it with security-focused refinements beyond basic type checking.
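The magic-byte check the constraints mandate can be sketched for PNG uploads; the eight-byte signature is fixed by the PNG format, while the extension and Content-Type header are attacker-controlled:

```typescript
// The PNG file signature: 89 50 4E 47 0D 0A 1A 0A.
const PNG_MAGIC = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];

// Validates the upload by its leading bytes, not its filename.
export function looksLikePng(bytes: Uint8Array): boolean {
  return PNG_MAGIC.every((b, i) => bytes[i] === b);
}
```

The same pattern extends to other formats (JPEG `FF D8 FF`, PDF `%PDF`); a library covering many signatures is usually worth it in production.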

SQL Injection and XSS Review

84/100

<context> Language: [LANGUAGE] Database: [e.g. PostgreSQL, MySQL] Frontend framework: [e.g. React, Vue, plain HTML] User input surfaces: [e.g. search, comments, profile fields, file names] </context> <task> Review this code for SQL injection and XSS vulnerabilities: [PASTE CODE] For each vulnerability: 1. Exact location 2. Attack vector with a concrete payload example 3. Severity and impact 4. The secure replacement code </task> <constraints> - SQL injection: parameterized queries are the fix — not escaping - Stored XSS: find where user content is persisted and where it's rendered - Reflected XSS: find where input echoes back in responses - DOM XSS: find innerHTML, document.write, eval with user data - Show the exploit payload and the sanitized output </constraints>

Finds SQL injection and XSS vectors with concrete payloads and parameterized query fixes.

💡

Pro tip: Enable extended thinking. Claude traces each user input through storage and rendering paths to find stored XSS.
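What "sanitized output" means in practice can be sketched as HTML-body output encoding — roughly what template engines and React do automatically when you must render user text manually:

```typescript
// Encodes the five characters that matter in an HTML body context.
// Note: this covers element content; attribute, URL, and JS contexts
// each need their own encoding rules.
export function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")  // must run first, or it re-encodes the others
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

For SQL injection, by contrast, encoding is the wrong tool — parameterized queries are the fix, as the constraints state.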

RBAC Permissions Design

85/100

<context> Application type: [e.g. multi-tenant SaaS, internal tool] User types: [describe roles — e.g. owner, admin, member, viewer, billing-only] Resource types: [e.g. projects, documents, team members, billing] Multi-tenancy: [yes/no — describe isolation requirements] </context> <task> Design a role-based access control system: [DESCRIBE THE PERMISSION REQUIREMENTS] Provide: 1. Role and permission taxonomy 2. Database schema for roles and permissions 3. Permission check implementation 4. Middleware/decorator for enforcing access 5. How to handle cross-tenant access 6. Audit logging for access decisions </task> <constraints> - Default deny: if no explicit permission, access is denied - Permissions must be checked server-side, every time - Tenant isolation must be enforced at the query level, not just application level - Show how permission checks compose for nested resources </constraints>

Designs an RBAC system with schema, enforcement middleware, and tenant-level isolation at the query layer.

💡

Pro tip: Describe your most complex permission scenario. Claude designs the system to handle your edge cases, not just simple roles.
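The default-deny check in the constraints can be sketched like this; the role and permission names are hypothetical placeholders:

```typescript
type Permission = "project:read" | "project:write" | "billing:manage";

// Each role grants an explicit permission set; anything unlisted is denied.
const rolePermissions: Record<string, ReadonlySet<Permission>> = {
  viewer: new Set<Permission>(["project:read"]),
  member: new Set<Permission>(["project:read", "project:write"]),
  owner:  new Set<Permission>(["project:read", "project:write", "billing:manage"]),
};

export function can(role: string, permission: Permission): boolean {
  // Unknown role and unlisted permission both fall through to `false`.
  return rolePermissions[role]?.has(permission) ?? false;
}
```

A real system runs this server-side on every request (via middleware) and adds a tenant-ID filter at the query level, per the constraints.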

Secrets Management

86/100

<context> Infrastructure: [e.g. AWS, GCP, self-hosted, Docker Swarm] Current approach: [e.g. .env files, hardcoded, environment variables] Secrets types: [e.g. DB passwords, API keys, JWT secrets, TLS certificates] Team size: [e.g. 5 engineers] Compliance: [e.g. SOC 2, PCI-DSS] </context> <task> Design a secrets management system for this infrastructure: [DESCRIBE YOUR CURRENT SETUP AND PAIN POINTS] Cover: 1. Secrets storage and retrieval (vault solution) 2. Secrets rotation strategy per secret type 3. Access control (who/what can access which secret) 4. Audit logging 5. CI/CD integration — how secrets reach the pipeline 6. Developer workflow — how engineers use secrets locally 7. Incident response if a secret is compromised </task> <constraints> - Never store secrets in version control, even encrypted - Secrets in environment variables are better than hardcoded, but not a complete solution - Define rotation frequency by secret risk level - Every secret access must be auditable </constraints>

Designs a complete secrets management system with rotation schedules, audit logging, and incident response.

💡

Pro tip: Describe your CI/CD setup. Claude designs the secret injection flow that works with your actual pipeline.

CORS and CSP Configuration

87/100

<context> Application type: [e.g. SPA with separate API, server-rendered app, public API] Frontend origin(s): [e.g. https://app.example.com] API origin: [e.g. https://api.example.com] Third-party resources: [e.g. Google Fonts, Stripe.js, Intercom, analytics] </context> <task> Configure CORS and Content Security Policy for this application: [DESCRIBE THE APPLICATION ARCHITECTURE] Provide: 1. CORS configuration (allowed origins, methods, headers, credentials) 2. CSP header with all required directives 3. How to handle CDN and third-party scripts in CSP 4. Nonce or hash strategy for inline scripts 5. Report-only mode for testing CSP before enforcing </task> <constraints> - Never use wildcard (*) origins with credentials: true - CSP should block inline scripts except via nonce - Start with report-only, then enforce - Provide the exact header values, not just directive descriptions - Include frame-ancestors to prevent clickjacking </constraints>

Produces exact CORS and CSP header values for your architecture with third-party script handling.

💡

Pro tip: List every third-party script on your page. Claude writes the CSP to allow exactly those and nothing else.
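A minimal sketch of assembling the exact header value this prompt asks for; the third-party origin is a placeholder for whatever scripts your page actually loads:

```typescript
// Builds a CSP header value from a directive map, preserving order.
export function buildCsp(directives: Record<string, string[]>): string {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(" ")}`)
    .join("; ");
}

export const exampleCsp = buildCsp({
  "default-src": ["'self'"],
  "script-src": ["'self'", "https://js.stripe.com"], // exactly your third parties
  "frame-ancestors": ["'none'"],                     // clickjacking protection
});
// Ship as Content-Security-Policy-Report-Only first; enforce once reports are clean.
```
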

API Key Rotation

88/100

<context> Service: [describe the system with API keys] Key types: [e.g. customer API keys, internal service keys, third-party API keys] Current rotation: [e.g. never rotated, rotated manually, no process] Key usage: [e.g. 10K keys issued, used in customer integrations, webhooks] </context> <task> Design an API key rotation system: [DESCRIBE THE KEY MANAGEMENT REQUIREMENTS] Cover: 1. Key format and structure (prefix, entropy, checksum) 2. Storage: hash the key, never store plaintext 3. Rotation workflow — how to rotate without breaking integrations 4. Overlapping validity windows for zero-downtime rotation 5. Key revocation and emergency rotation 6. Audit logging per key usage </task> <constraints> - Never store API keys in plaintext — store a hash, show the key once - Keys must be revocable immediately - Support key prefixes for easy identification (e.g. sk_live_, pk_test_) - Rotation must not require downtime for the key owner </constraints>

Designs an API key system with hashed storage, zero-downtime rotation, and per-key audit logging.

💡

Pro tip: Describe your customer integration patterns. Claude designs the rotation window to match your customers' deployment cycles.
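The "hash the key, show it once" constraint looks like this in practice. A minimal Python sketch with a hypothetical `sk_live_` prefix: the plaintext key exists only at issuance, and verification compares hashes in constant time.

```python
import hashlib
import hmac
import secrets

PREFIX = "sk_live_"  # illustrative prefix; vary per environment

def issue_key():
    """Generate a key and return (plaintext_once, stored_hash).

    The plaintext is shown to the owner exactly once; only the
    SHA-256 hash is persisted.
    """
    plaintext = PREFIX + secrets.token_urlsafe(24)
    stored_hash = hashlib.sha256(plaintext.encode()).hexdigest()
    return plaintext, stored_hash

def verify_key(presented, stored_hash):
    digest = hashlib.sha256(presented.encode()).hexdigest()
    # constant-time comparison to avoid timing leaks
    return hmac.compare_digest(digest, stored_hash)
```

For zero-downtime rotation, a record would hold both the current and the previous hash during the overlap window, and `verify_key` would accept either until the window closes.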

Dependency Vulnerability Scan

89/100

<context> Language: [LANGUAGE] Package manager: [e.g. npm, pip, cargo, go mod] Deployment environment: [e.g. production server, Docker container] </context> <task> Analyze these dependencies for security vulnerabilities: [PASTE package.json, requirements.txt, go.mod, OR EQUIVALENT] For each vulnerable dependency: 1. CVE identifier 2. Vulnerability description 3. CVSS score and severity 4. Fixed version 5. Whether it's a direct or transitive dependency 6. Migration notes if the fix requires API changes </task> <constraints> - Prioritize by CVSS score and whether the vulnerable code path is reachable - Distinguish between "severity critical" and "exploitable in this context" - Flag dependencies that are abandoned (no updates in 2+ years) - Recommend a scanning tool to automate ongoing scans </constraints>

Identifies vulnerable dependencies with CVE IDs, CVSS scores, and migration notes for breaking upgrades.

💡

Pro tip: Include your lock file alongside your manifest. Claude identifies transitive dependency vulnerabilities, not just direct ones.

Incident Response Plan

90/100

<context> System type: [e.g. SaaS with PII, fintech, e-commerce] Team size: [e.g. 5-person engineering team] Compliance: [e.g. GDPR, HIPAA, PCI-DSS] On-call setup: [e.g. no formal on-call, PagerDuty, manual] </context> <task> Write a security incident response plan for this system: [DESCRIBE THE SYSTEM AND ITS MOST LIKELY INCIDENT TYPES] Cover: 1. Incident classification (P1/P2/P3) 2. Detection and alerting 3. Containment steps by incident type 4. Communication templates (internal, customer, regulator) 5. Evidence preservation 6. Regulatory notification timelines (GDPR 72-hour, etc.) 7. Post-incident review process </task> <constraints> - GDPR breaches must be reported within 72 hours — make this explicit - Containment steps must be actionable without deep system knowledge - Communication templates must be pre-approved, not written during an incident - Define "breach" clearly — not every security event is a reportable breach </constraints>

Writes an actionable incident response plan with containment steps, communication templates, and regulatory timelines.

💡

Pro tip: Describe your most likely incident scenarios. Claude writes the runbook for your actual risk profile, not generic threats.

Migration

10 prompts

Framework Upgrade

91/100

<context> Framework: [e.g. React, Next.js, Django, Spring Boot] Current version: [e.g. v14] Target version: [e.g. v15] Codebase size: [e.g. 50 components, 30 API routes] Key dependencies: [list major libraries that may also need upgrading] </context> <task> Plan and execute the upgrade from [CURRENT VERSION] to [TARGET VERSION]: [PASTE RELEVANT CODE SAMPLES OR DESCRIBE KEY PATTERNS USED] Provide: 1. All breaking changes that affect this codebase 2. Automated migration steps (codemods, CLI commands) 3. Manual migration steps with before/after code 4. Testing strategy to verify the upgrade 5. Rollback plan if the upgrade fails 6. Estimated effort per section </task> <constraints> - Apply automated codemods before manual changes - Test after each breaking change, not only at the end - List deprecated features separately from breaking changes - Rollback plan must be executable in under 10 minutes </constraints>

Plans a framework version upgrade with automated codemods, manual migration steps, and a rollback procedure.

💡

Pro tip: Paste examples of your most-used patterns. Claude identifies which breaking changes actually affect your codebase.

Database Migration

92/100

<context> Database: [e.g. PostgreSQL 16] ORM/migration tool: [e.g. Prisma, Flyway, Alembic, raw SQL] Change type: [e.g. add column, rename table, change column type, split table] Table size: [e.g. 10M rows] Downtime tolerance: [e.g. zero downtime, up to 5 minutes maintenance window] </context> <task> Write a safe database migration for this change: Current schema: [PASTE CURRENT SCHEMA] Target schema: [PASTE TARGET SCHEMA] Provide: 1. Migration script 2. Is this zero-downtime or does it require a maintenance window? 3. If zero-downtime: the expand/contract pattern steps 4. Data backfill strategy for existing rows 5. Rollback migration 6. Estimated runtime at stated table size </task> <constraints> - Never rename a column directly — use expand/contract - Backfill in batches to avoid lock escalation - Test the migration on a copy of production data first - Include the rollback migration alongside the forward migration </constraints>

Writes a safe database migration with expand/contract pattern, batched backfill, and rollback script.

💡

Pro tip: Include your table size and downtime tolerance. Claude chooses between direct migration and expand/contract based on your constraints.
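The expand phase with batched backfill can be sketched end to end. This toy example uses in-memory SQLite purely for illustration (real engines differ in locking and `ALTER TABLE` cost); the point is the pattern: add the column as nullable, then backfill in small committed batches so locks are released between batches.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(10)])

# Expand: add the new column as nullable (cheap, no table rewrite).
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill in small batches to avoid lock escalation on big tables.
BATCH = 3
while True:
    rows = conn.execute(
        "SELECT id, name FROM users WHERE display_name IS NULL LIMIT ?",
        (BATCH,),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET display_name = ? WHERE id = ?",
        [(name.title(), uid) for uid, name in rows],
    )
    conn.commit()  # commit per batch so locks are released

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE display_name IS NULL").fetchone()[0]
```

The contract phase (dropping the old column, adding NOT NULL) only runs after the backfill reports zero remaining rows and the application has stopped writing the old column.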

REST to GraphQL Migration

93/100

<context> Current REST API: [describe endpoints and consumers] GraphQL server: [e.g. Apollo Server, Strawberry, GraphQL Yoga] Consumer types: [e.g. React frontend, mobile app, partner integrations] Timeline: [e.g. 3 months, must not break existing consumers] </context> <task> Plan the migration from REST to GraphQL: [PASTE REST ROUTE DEFINITIONS] Provide: 1. GraphQL schema that covers all current REST capabilities 2. Resolver implementations for key queries and mutations 3. How to run both REST and GraphQL in parallel during migration 4. Consumer migration guide 5. Which REST endpoints can be retired and when 6. N+1 query prevention strategy (DataLoader) </task> <constraints> - Don't break existing REST consumers during migration - The GraphQL schema should be designed for clients, not as a 1:1 REST map - Implement DataLoader for all list resolvers from day one - Define a sunset date for each REST endpoint </constraints>

Plans a REST-to-GraphQL migration with parallel operation, DataLoader setup, and consumer sunset schedule.

💡

Pro tip: Describe your consumer types. Claude designs the migration order to migrate the most flexible consumers first.
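The N+1 prevention the constraints call for is the DataLoader pattern. DataLoader itself is a JavaScript library; this is a deliberately stripped-down Python sketch of the same idea: resolvers request keys individually, one batch function fetches them all together, and duplicates are deduplicated.

```python
class BatchLoader:
    """Collect per-item key requests, resolve them in one batch.

    Resolvers call load() once per item; a single batch_fn call then
    fetches every queued key, turning N+1 queries into 2.
    """
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # takes [keys], returns {key: value}
        self.queue = []
        self.cache = {}

    def load(self, key):
        self.queue.append(key)
        return lambda: self.cache[key]  # thunk, resolved after dispatch

    def dispatch(self):
        self.cache = self.batch_fn(sorted(set(self.queue)))
        self.queue.clear()

calls = []
def fetch_authors(ids):
    calls.append(ids)  # record each batched query for inspection
    return {i: f"author-{i}" for i in ids}

loader = BatchLoader(fetch_authors)
thunks = [loader.load(i) for i in (1, 2, 1, 3)]  # 4 loads, 1 duplicate
loader.dispatch()
results = [t() for t in thunks]
```

Real DataLoader implementations also scope the loader per request and dispatch automatically on the event loop; this sketch makes the dispatch explicit to show where the batching happens.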

JavaScript to TypeScript Migration

94/100

<context> Current state: [e.g. pure JS, JSDoc-annotated JS, mixed JS/TS] Framework: [e.g. Node.js, React, Next.js] Codebase size: [e.g. 50 files, 10K lines] Strictness target: [e.g. strict mode, noImplicitAny only] </context> <task> Plan the JavaScript to TypeScript migration: [PASTE KEY FILES OR DESCRIBE THE CODEBASE STRUCTURE] Provide: 1. tsconfig.json for a gradual migration (allowJs: true) 2. Migration order: which files to migrate first 3. Type annotation strategy for common patterns in this codebase 4. How to handle third-party packages without types 5. Common "any" patterns to avoid 6. Strictness ramp-up plan </task> <constraints> - Gradual migration: start with allowJs, tighten over time - Never use any as a permanent solution — use unknown or proper types - Migrate leaf files (no dependencies) before core modules - Add types to shared utilities first for maximum leverage </constraints>

Plans a gradual JS-to-TS migration with tsconfig strategy, file ordering, and strictness ramp-up plan.

💡

Pro tip: Paste your most complex utility files. Claude writes the type definitions for your hardest patterns first.
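A plausible starting-point tsconfig for the gradual phase looks roughly like this (all options shown are standard TypeScript compiler flags; the exact targets and paths are placeholders to adapt):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": false,
    "strict": false,
    "noImplicitAny": false,
    "target": "es2020",
    "module": "commonjs",
    "outDir": "dist"
  },
  "include": ["src"]
}
```

The ramp-up then flips flags one at a time as files are converted: first `noImplicitAny: true`, then `strictNullChecks`, and finally the full `strict: true`.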

Monolith to Microservices

95/100

<context> Current monolith: [describe the application] Pain points driving the migration: [e.g. deploy conflicts, scaling a specific feature, team autonomy] Team structure: [e.g. 3 teams of 6] Timeline: [e.g. 12-month migration] Infrastructure: [e.g. AWS, Kubernetes available] </context> <task> Plan the migration from monolith to microservices: [DESCRIBE THE MONOLITH ARCHITECTURE AND MODULES] Provide: 1. Service decomposition based on domain boundaries 2. Which service to extract first and why 3. The strangler fig pattern implementation 4. Data ownership strategy — how to split the database 5. Inter-service communication approach 6. Timeline and risk assessment per extraction </task> <constraints> - Extract the least-coupled service first to build the muscle - Don't split the database until the service boundary is proven - Distributed monolith is worse than a monolith — get service boundaries right - Each service must be independently deployable before the migration is done </constraints>

Plans a monolith-to-microservices migration using strangler fig with data ownership strategy and risk assessment.

💡

Pro tip: Describe the pain point driving the migration. Claude designs boundaries that solve your actual problem, not a canonical decomposition.
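The strangler fig pattern reduces to a routing decision at the edge. This toy Python sketch shows the shape (service names and URLs are invented for illustration): requests matching an extracted route prefix go to the new service, everything else falls through to the monolith, and the `EXTRACTED` table grows one entry per completed extraction.

```python
# Route table maintained at the proxy / API gateway layer.
EXTRACTED = {
    "/billing": "https://billing.internal",  # first extracted service
}
MONOLITH = "https://monolith.internal"

def route(path):
    """Return the upstream that should serve this request path."""
    for prefix, upstream in EXTRACTED.items():
        if path.startswith(prefix):
            return upstream
    return MONOLITH  # default: untouched monolith keeps serving
```

In practice this lives in nginx, Envoy, or the cloud load balancer rather than application code, but the migration mechanics are the same: cut traffic over one route prefix at a time, with instant rollback by deleting the entry.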

ORM Upgrade

96/100

<context> Current ORM: [e.g. Sequelize v6, TypeORM 0.2] Target ORM: [e.g. Prisma 5, Drizzle, SQLAlchemy 2.0] Database: [e.g. PostgreSQL] Codebase size: [e.g. 20 models, 50 query files] </context> <task> Plan the migration from [CURRENT ORM] to [TARGET ORM]: [PASTE CURRENT MODEL DEFINITIONS AND KEY QUERIES] Provide: 1. New schema definition equivalent 2. Query migration for the most common patterns 3. Transaction handling in the new ORM 4. Migration and seeder equivalent 5. Testing strategy during the migration 6. How to run both ORMs in parallel if needed </task> <constraints> - Migrate the schema definition first, then queries - Test each migrated query against the same database - Don't change business logic during the ORM migration - Document any behavior differences (e.g. eager vs. lazy loading defaults) </constraints>

Migrates ORM schema definitions and query patterns to a new ORM with behavior difference documentation.

💡

Pro tip: Paste your most complex queries. ORM migrations almost always break on edge cases in advanced query patterns.

State Management Migration

97/100

<context> Framework: [e.g. React, Vue] Current state management: [e.g. Redux, Vuex, MobX, Context API] Target state management: [e.g. Zustand, Pinia, Jotai, TanStack Query] App size: [e.g. 30 stores, 80 components] Main pain points: [e.g. boilerplate, performance, async complexity] </context> <task> Plan the migration to [TARGET STATE MANAGEMENT]: [PASTE CURRENT STORE DEFINITIONS AND KEY USAGE PATTERNS] Provide: 1. New store structure equivalent to current 2. Migration for async action patterns 3. Component update patterns 4. How to migrate incrementally without a big bang rewrite 5. Testing strategy for migrated state </task> <constraints> - Migrate one store at a time, not all at once - Keep existing tests passing throughout the migration - Async patterns: map thunks/sagas to the new async model explicitly - Don't change component behavior while migrating state management </constraints>

Migrates state management incrementally with store-by-store migration and async pattern mapping.

💡

Pro tip: Describe your most complex async flow. State management migrations always break on async edge cases first.

Legacy Code Modernization

98/100

<context> Language: [LANGUAGE] Framework: [FRAMEWORK or "no framework"] Code age: [e.g. 8-year-old codebase] Main problems: [e.g. untestable global state, callback hell, no types, deprecated APIs] Constraints: [e.g. can't break existing API, no downtime, limited test coverage] </context> <task> Create a modernization plan for this legacy code: [PASTE LEGACY CODE SECTIONS] Provide: 1. Current state assessment — what's most painful 2. Prioritized modernization backlog 3. Which patterns to migrate and in what order 4. How to introduce tests before refactoring 5. A strangler fig approach to replace components incrementally 6. What NOT to modernize (working code that doesn't change) </task> <constraints> - Don't rewrite working code — modernize only what's actively painful - Add tests before refactoring legacy code - The public API must remain stable throughout - Deliver value incrementally — no big bang rewrites </constraints>

Creates a prioritized modernization backlog with an incremental strangler fig approach and test-first refactoring.

💡

Pro tip: Describe which parts of the code change most frequently. Claude prioritizes modernizing high-churn areas for maximum ROI.
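"Introduce tests before refactoring" usually means characterization (golden master) tests: record what the legacy code actually does today, quirks included, and treat that as the spec while refactoring. A minimal Python sketch with an invented legacy function:

```python
def legacy_price(quantity, unit_cents):
    # Stand-in for untested legacy code we dare not change yet.
    total = quantity * unit_cents
    if quantity >= 10:
        total = int(total * 0.9)  # undocumented bulk discount
    return total

# Outputs recorded by running the legacy code once; they pin down
# current behavior - even the truncation quirk - as the spec.
GOLDEN = {(1, 500): 500, (10, 500): 4500, (12, 333): 3596}

def test_characterization():
    for args, expected in GOLDEN.items():
        assert legacy_price(*args) == expected

test_characterization()
```

Once these pass, any refactor that keeps them green preserves observable behavior; intentional behavior changes are made later, as separate commits that update the golden values explicitly.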

Cloud Provider Migration

99/100

<context> Current provider: [e.g. AWS] Target provider: [e.g. GCP, Azure, self-hosted] Services used: [e.g. EC2, RDS, S3, SQS, Lambda] Downtime tolerance: [e.g. zero downtime, 4-hour maintenance window] Data volume: [e.g. 500GB database, 2TB S3] Timeline: [e.g. 3 months] </context> <task> Plan the cloud provider migration: [DESCRIBE THE CURRENT ARCHITECTURE] Provide: 1. Service mapping: current service → target service equivalent 2. Data migration strategy (database, object storage, queues) 3. DNS cutover strategy 4. Running both providers in parallel during migration 5. Rollback plan 6. Cost comparison estimate 7. Critical path and timeline </task> <constraints> - Database migration is the highest-risk step — plan it in detail - DNS TTL must be reduced before cutover - Run both environments in parallel for at least one week before cutover - Define the exact rollback trigger conditions </constraints>

Plans a cloud provider migration with service mapping, parallel operation period, and DNS cutover strategy.

💡

Pro tip: Describe your highest-traffic time windows. Claude schedules the cutover for your lowest-risk period.

Database Engine Migration

100/100

<context> Current database: [e.g. MySQL 5.7] Target database: [e.g. PostgreSQL 16] Data volume: [e.g. 200GB, 50 tables] Downtime tolerance: [e.g. zero downtime, 2-hour window] Application tech stack: [STACK] </context> <task> Plan the migration from [CURRENT DB] to [TARGET DB]: [DESCRIBE THE SCHEMA AND CRITICAL TABLES] Provide: 1. Schema compatibility differences to address 2. Data type mapping between engines 3. Migration tool selection and configuration 4. Application changes needed (connection strings, SQL dialect, ORM config) 5. Data validation strategy post-migration 6. Zero-downtime migration approach (dual-write or CDC) 7. Rollback plan </task> <constraints> - Validate row counts and checksums after migration — don't assume success - Test the application fully on the new database before cutover - Dual-write period should be at least 1 week for high-stakes migrations - Document every SQL dialect difference found in the codebase </constraints>

Plans a database engine migration with data type mapping, dual-write strategy, and row-count validation.

💡

Pro tip: Paste your most complex stored procedures or queries. Engine migrations break on SQL dialect differences that simple schemas miss.
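The "validate row counts and checksums" constraint can be sketched as a table fingerprint compared across engines. Illustrative Python, with SQLite standing in for both source and target (a real cross-engine check must normalize type rendering, since e.g. MySQL and PostgreSQL may format the same value differently):

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table, key_col):
    """Return (row_count, checksum) for a table.

    Rows are read in primary-key order and hashed, so identical data
    on both sides yields identical fingerprints.
    """
    rows = conn.execute(
        f"SELECT * FROM {table} ORDER BY {key_col}").fetchall()
    checksum = hashlib.sha256(repr(rows).encode()).hexdigest()
    return len(rows), checksum

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for c in (src, dst):
    c.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER)")
    c.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 100), (2, 250)])
```

For large tables, run the fingerprint per key range so a mismatch points at a narrow slice of rows instead of forcing a full re-copy.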

Frequently Asked Questions

Is Claude good at coding?
Claude is one of the best AI models for coding. It handles large codebases (up to 200K tokens of context), follows complex multi-file instructions, and generates production-quality code — not just toy examples. It's particularly strong at code review, debugging, and understanding existing code patterns.

Which Claude model should I use for coding?
Claude Sonnet is the best everyday coding model — fast and capable. Use Claude Opus for complex architecture decisions, thorny debugging sessions, or when you need the highest quality output. Claude Haiku works for quick tasks like generating boilerplate or simple utilities.

Why do these prompts use XML tags?
Claude was specifically trained to parse XML tags like <context>, <task>, and <constraints>. They help Claude separate your background information from instructions from formatting requirements. For coding prompts, this means Claude understands what your codebase looks like (context), what you want built (task), and what rules to follow (constraints) — leading to more accurate code generation.

Do these prompts work outside the web interface?
Yes. These prompts work in Claude's web interface, API, Claude Code (CLI), and any IDE integration that uses Claude. The XML structure is processed the same way regardless of the interface.

How much code can I paste at once?
Claude supports up to 200,000 tokens of input — roughly 150,000 words or thousands of lines of code. You can paste entire files, full diffs, or multiple related files at once. More context generally leads to better output because Claude can match your existing patterns and understand the full picture.

Prompts are the starting line. Tutorials are the finish.

A growing library of 300+ hands-on tutorials on ChatGPT, Claude, Midjourney, and 50+ AI tools. New tutorials added every week.

14-day free trial. Cancel anytime.