Prompt Library

Design Better User Experiences with AI

36 copy-paste prompts

36 actionable ChatGPT prompts for every phase of UX — from research and personas to wireframes, testing, and accessibility audits.

User Research

5 prompts

User Interview Script Generator

1/36

I am conducting user research for [product/feature]. The target users are [describe user segment]. The research goal is to understand [specific question — e.g., why users abandon checkout, how they currently solve a problem, what frustrates them about existing tools]. Create a 30-minute user interview script with: (1) 3 warm-up questions that build rapport without leading the conversation, (2) 8-10 core questions that use open-ended phrasing (no yes/no questions), (3) 3 follow-up probes for when participants give surface-level answers, (4) 2 closing questions that capture anything I missed. For each question, add a note explaining what insight it is designed to uncover. Include a brief moderator guide with tips on when to stay silent and let the participant think.

Creates a research-backed interview script designed to uncover genuine user needs without leading or biasing responses.

💡 Pro tip: The best interview question is "Tell me about the last time you..." — it forces specific stories instead of hypothetical answers, which are unreliable.

Survey Design for Quantitative Research

2/36

I need to create a survey to measure [what you want to learn] from [target audience]. The survey will be distributed via [email, in-app, social media]. Target sample size: [number]. Design the survey: (1) write 12-15 questions covering [topics], (2) use the right question type for each — Likert scale for attitudes, multiple choice for behaviors, ranking for priorities, open-ended for discovery, (3) order questions from easy/engaging to complex/sensitive, (4) include 1-2 screening questions to filter out unqualified respondents, (5) add an attention check question to catch random clickers, (6) keep the estimated completion time under 5 minutes, and (7) flag any questions that have response bias risk (social desirability, acquiescence) and suggest neutral alternatives.

Builds a methodologically sound survey with proper question types, ordering, and bias controls.

💡 Pro tip: Every additional question reduces your completion rate by roughly 5-10 percent. If a question is nice-to-know but not need-to-know, cut it.
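The tip's drop-off rule compounds with every question you add. A quick sketch of that arithmetic — the 5-10 percent figure is the rule of thumb above, not a measured constant:

```python
def completion_rate(n_questions: int, drop_per_question: float) -> float:
    """Fraction of starters expected to finish, assuming a constant
    per-question drop-off rate (the tip's rule of thumb)."""
    return (1 - drop_per_question) ** n_questions

# A 15-question survey at 5% drop-off per question loses over half
# its starters; trimming to 10 questions recovers a meaningful share.
print(f"{completion_rate(15, 0.05):.0%}")  # about 46%
print(f"{completion_rate(10, 0.05):.0%}")  # about 60%
```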

Competitive UX Audit Framework

3/36

I want to audit the user experience of [number] competitors in the [industry] space. The competitors are: [list names]. My product is [describe]. Create a structured UX audit framework: (1) define 8-10 evaluation criteria (onboarding flow, navigation clarity, task completion speed, error handling, visual hierarchy, mobile experience, accessibility, content clarity, etc.), (2) create a scoring rubric for each criterion (1-5 scale with descriptions of what each score means), (3) design a task-based evaluation — list 5 common user tasks and instructions for testing each across all competitors, (4) include a screenshot annotation template for documenting findings, (5) suggest a comparison matrix format for presenting results, and (6) provide heuristic evaluation questions based on Nielsen's 10 usability heuristics for each competitor screen.

Provides a systematic framework for evaluating competitor UX across consistent criteria to identify gaps and opportunities.

💡 Pro tip: Do not just document what competitors do. Focus on what they do poorly — that is where your differentiation opportunity lives.

Research Synthesis and Insight Extraction

4/36

I have completed [number] user interviews for [project/product]. Here are my raw notes: [Paste interview notes or key quotes from each participant] Synthesize this research: (1) identify the top 5 recurring themes across participants with supporting quotes, (2) create an affinity diagram structure grouping related observations, (3) distinguish between stated needs (what users say they want) and latent needs (what their behavior reveals they actually need), (4) flag contradictions between participants and suggest why they might disagree, (5) prioritize findings by frequency (how many participants mentioned it) and severity (how much it impacts their experience), and (6) translate each key finding into a design implication — a specific, actionable statement about what the product should do differently.

Transforms messy interview notes into structured insights with clear design implications.

💡 Pro tip: Quotes are the currency of user research. Always tie your insights back to direct quotes — they are far more persuasive to stakeholders than your interpretation alone.

Jobs-to-be-Done Interview Analysis

5/36

I am using the Jobs-to-be-Done framework to understand why users [hire/choose/switch to] [product category]. Here are notes from [number] interviews where I asked about their switch from [old solution] to [new solution]: [Paste notes or summarize key responses] Analyze using the JTBD framework: (1) identify the core functional job (the task they need to accomplish), (2) identify emotional jobs (how they want to feel) and social jobs (how they want to be perceived), (3) map the push forces (frustrations with the old solution) and pull forces (attractions of the new solution), (4) map the anxieties (fears about switching) and inertia (habits keeping them with the old solution), (5) write 3-5 job stories in the format "When [situation], I want to [motivation], so I can [outcome]", and (6) prioritize which jobs represent the biggest opportunity based on frequency and intensity.

Applies the Jobs-to-be-Done framework to interview data to uncover the real reasons users switch products.

💡 Pro tip: Users do not buy products. They hire them to make progress in their lives. The job story format forces you to think about the situation and motivation, not just the feature request.


Wireframing & Prototyping

5 prompts

Low-Fidelity Wireframe Specification

6/36

I need to wireframe a [page type — e.g., dashboard, onboarding flow, settings page, checkout, search results] for [product]. The primary user goal on this page is [describe]. Secondary actions: [list]. The device target is [desktop/mobile/responsive]. Write a detailed wireframe specification: (1) define the content hierarchy — what appears first, second, third and why, (2) describe each UI element with position (top-left, center, below hero, etc.), approximate size, and purpose, (3) specify the interaction model — what happens when users click, hover, or scroll, (4) list all states the page needs: empty state, loading, populated, error, (5) define the information density — how many items are visible before scrolling, and (6) include annotations explaining design decisions. Format this so a designer can build the wireframe without a meeting.

Creates a detailed wireframe specification that communicates layout, hierarchy, and interaction logic to designers.

💡 Pro tip: The best wireframes are ugly on purpose. If you add visual polish too early, people critique the aesthetics instead of the structure and flow.

User Flow Mapping

7/36

Map the complete user flow for [task — e.g., signing up, purchasing, inviting a teammate, filing a report]. The product is [describe product]. The user starts from [entry point]. Define: (1) every screen or step in the happy path from start to completion, (2) all decision points where the user makes a choice and the branches that follow, (3) error states and how the user recovers from each one, (4) edge cases — what happens if the user has no data, hits a limit, loses connection, or has multiple accounts, (5) optional paths — where can users skip steps or take shortcuts, and (6) success confirmation — how does the user know they completed the task. Use a clear notation: [Screen Name] → [Action] → [Next Screen]. Mark decision points with diamonds and error states with red flags.

Documents every path, decision point, and edge case in a user flow to prevent design gaps that cause user confusion.

💡 Pro tip: Most design bugs live in the unhappy path. Spend as much time designing error recovery and edge cases as you spend on the happy path.

Component Specification for Design System

8/36

I need to specify a [component — e.g., data table, modal, dropdown, card, navigation bar, form field] for our design system. The component is used in [describe contexts]. Write a complete component specification: (1) anatomy — label every part of the component (container, label, icon, helper text, etc.), (2) variants — list all visual variants (sizes, states, themes) with when to use each, (3) states — define every state (default, hover, focus, active, disabled, error, loading) with visual descriptions, (4) behavior — interaction patterns including keyboard navigation, (5) content guidelines — character limits, truncation rules, placeholder text conventions, (6) accessibility requirements — ARIA attributes, screen reader behavior, focus management, and (7) do/don't examples — 3 correct uses and 3 common misuses with explanations.

Creates a thorough component specification covering anatomy, states, behavior, accessibility, and usage guidelines.

💡 Pro tip: Specify the disabled state for every interactive component. It is the most commonly forgotten state, and inconsistent disabled styles confuse users.

Prototype Test Plan

9/36

I have a [low-fi/high-fi] prototype of [feature/product] and want to test it with users before development. The prototype covers: [describe what screens/flows are included]. Create a prototype test plan: (1) define 3-5 task scenarios that test the core value proposition (write them as realistic user stories, not instructions), (2) specify success criteria for each task — completion rate, time on task, error rate, (3) design a think-aloud protocol with prompts for when participants go silent, (4) create a pre-test questionnaire to capture expectations, (5) create a post-test questionnaire including SUS (System Usability Scale) questions, (6) define the minimum number of participants needed and the recruitment criteria, and (7) build an observation template for note-taking during sessions. Include facilitation tips for staying neutral.

Plans a structured prototype test with task scenarios, success metrics, and observation templates.

💡 Pro tip: Write task scenarios as goals, not instructions. "You want to send money to a friend" lets users find their own path. "Click the transfer button" tests the button, not the design.
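The SUS questionnaire mentioned in step 5 of the test plan has a fixed scoring rule: odd-numbered items are positively worded and contribute response minus 1, even-numbered items are negatively worded and contribute 5 minus response; the sum is multiplied by 2.5 to yield a 0-100 score. A minimal scorer:

```python
def sus_score(responses: list[int]) -> float:
    """System Usability Scale score (0-100) from ten 1-5 Likert
    responses, given in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Items 1, 3, 5, 7, 9 (index 0, 2, ...) are positively worded:
        # contribution is r - 1. Items 2, 4, ... are negatively worded:
        # contribution is 5 - r.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
```

A score around 68 is commonly cited as the benchmark average; treat SUS as a comparative signal across iterations, not a pass/fail grade.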

Design Critique Preparation

10/36

I am presenting my design for [feature/page] to my team for critique. The design solves [user problem] for [user segment]. Here is a description of the design: [Describe the design, key screens, and design decisions] Help me prepare for the critique: (1) write a 2-minute presentation script that frames the problem, shows the solution, and explains key decisions, (2) list 5 areas where I want specific feedback with focused questions for each, (3) anticipate the 5 most likely criticisms and prepare thoughtful responses that explain my rationale, (4) identify 2-3 aspects of the design I am least confident about and frame them as open questions for the group, (5) suggest how to capture and organize feedback during the session, and (6) recommend a follow-up process for incorporating feedback without redesigning everything.

Prepares you to present your design confidently, direct feedback productively, and handle criticism with clear rationale.

💡 Pro tip: Frame your critique requests specifically. "What do you think?" invites vague opinions. "Is the hierarchy clear enough to complete the task in under 10 seconds?" invites useful feedback.

Usability Testing

5 prompts

Moderated Usability Test Script

11/36

I need to run a moderated usability test on [product/feature]. The test will be conducted [in-person/remotely via Zoom]. Participants: [describe user segment]. Duration: [30/45/60 minutes]. The main flows to test are: [list 3-5 key tasks]. Create a complete moderator script: (1) introduction (2 minutes) — explain the session purpose, get consent, and set expectations about thinking aloud, (2) warm-up (3 minutes) — questions about their current workflow and tools, (3) task scenarios (core of session) — for each task write a natural scenario, define success criteria, list observation checkpoints, and include follow-up probes, (4) wrap-up (5 minutes) — overall impressions, comparison to current tools, and anything they would change. Include exact moderator phrasing for redirecting stuck participants without giving hints.

Creates a professional moderator script that guides the session while leaving room for genuine user behavior.

💡 Pro tip: When a participant asks "Should I click this?" never answer. Redirect with "What would you do if I were not here?" Their hesitation is the finding.

Unmoderated Remote Test Design

12/36

Design an unmoderated remote usability test for [product/feature] using [tool — UserTesting, Maze, Lyssna, etc.]. I cannot be present during the test, so instructions must be self-explanatory. Create: (1) a screener with 5 questions to recruit the right participants (include disqualifying criteria), (2) an introduction script participants read before starting (set context without biasing), (3) 4-5 task scenarios written clearly enough that participants can self-guide (include screenshots if needed), (4) follow-up questions after each task measuring difficulty (1-7 scale), confidence, and open-ended "why," (5) a post-test survey covering overall satisfaction and feature prioritization, and (6) success metrics I should track in the tool (completion rate, time, clicks, misclicks). Estimate the number of participants needed for reliable results.

Designs a self-running remote usability test with clear instructions, proper metrics, and built-in quality controls.

💡 Pro tip: Unmoderated tests need crystal-clear task descriptions. Test your task wording on a colleague first — if they misunderstand the instructions, your participants will too.
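The participant estimate requested in the prompt above usually comes from the problem-discovery model: the chance that n participants surface a given problem is 1 - (1 - p)^n, where p is the probability a single participant encounters it. A sketch using Nielsen and Landauer's often-cited average of p ≈ 0.31 (your real p varies by product and task):

```python
import math

def discovery_rate(n_users: int, p: float = 0.31) -> float:
    """Expected share of usability problems found by n participants,
    where p is the per-participant encounter probability."""
    return 1 - (1 - p) ** n_users

def users_needed(target: float, p: float = 0.31) -> int:
    """Smallest n with discovery_rate(n, p) >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

print(f"{discovery_rate(5):.0%}")  # about 84% with 5 users
print(users_needed(0.95))          # 9 users for 95% discovery
```

This is also why the classic "5 users per round" guidance exists: the curve flattens quickly, so several small rounds beat one large one.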

Usability Test Results Analysis

13/36

I ran a usability test with [number] participants on [product/feature]. Here are my raw observations: [Paste or summarize: for each participant, what tasks they completed, where they struggled, notable quotes, errors encountered] Analyze these results: (1) calculate task completion rates and average time per task, (2) identify the top 5 usability issues ranked by severity (frequency x impact), (3) classify each issue using Nielsen's severity rating (cosmetic, minor, major, catastrophic), (4) map issues to specific UI elements or flows, (5) provide a recommended fix for each issue with estimated effort (quick fix, medium redesign, major rework), (6) identify any positive findings — what worked well and should be preserved, and (7) write an executive summary suitable for sharing with stakeholders and developers. Include participant quotes to illustrate key issues.

Transforms raw usability observations into a prioritized, actionable issues report with severity ratings and recommended fixes.

💡 Pro tip: A severity matrix (frequency × impact) prevents you from fixing easy problems while ignoring the hard ones. One catastrophic issue affecting 80 percent of users matters more than five cosmetic issues.
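The severity ranking in step 2 of the analysis prompt is easy to make concrete: score each issue as the share of participants who hit it, weighted by impact, then sort. The issue names and numbers below are invented for illustration; impact 1-4 loosely follows Nielsen's cosmetic-to-catastrophic scale:

```python
# (issue, participants affected, impact 1=cosmetic ... 4=catastrophic)
issues = [
    ("Checkout button hidden below fold", 8, 4),
    ("Tooltip text truncated", 2, 1),
    ("Search ignores typos", 5, 2),
]
total_participants = 10

def severity(affected: int, impact: int, n: int) -> float:
    """Frequency x impact: share of participants hitting the issue,
    weighted by how badly it blocks them."""
    return (affected / n) * impact

ranked = sorted(issues, key=lambda i: severity(i[1], i[2], total_participants),
                reverse=True)
for name, affected, impact in ranked:
    print(f"{severity(affected, impact, total_participants):.1f}  {name}")
```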

Heuristic Evaluation Checklist

14/36

Conduct a heuristic evaluation of [product/feature/page]. Here is a description of the interface: [Describe the interface in detail, or list key screens and elements] Evaluate against Nielsen's 10 Usability Heuristics: (1) Visibility of system status, (2) Match between system and real world, (3) User control and freedom, (4) Consistency and standards, (5) Error prevention, (6) Recognition rather than recall, (7) Flexibility and efficiency of use, (8) Aesthetic and minimalist design, (9) Help users recognize and recover from errors, (10) Help and documentation. For each heuristic: rate compliance (pass, partial, fail), describe specific violations found, provide a screenshot reference or screen name where the violation occurs, and recommend a concrete fix. Prioritize violations by severity.

Performs a systematic expert review using the industry-standard 10 heuristic framework with concrete findings and fixes.

💡 Pro tip: Heuristic evaluations work best with 3-5 evaluators. Each evaluator catches different issues, and the overlap validates the most critical problems.

Accessibility Usability Test Plan

15/36

I need to test the accessibility of [product/feature] with users who have disabilities. The assistive technologies I need to cover: [screen reader, keyboard-only, voice control, magnification, switch device]. Create an accessibility-focused usability test plan: (1) recruitment criteria — specify disability types, assistive technology experience levels, and sample size per group, (2) environment setup — what to configure before each session for each assistive technology, (3) 4-5 task scenarios written to be technology-agnostic (do not assume mouse or visual interaction), (4) observation checklist — what to watch for with each assistive technology (focus order, announcements, timing, gestures), (5) post-task questions specific to accessibility experience, and (6) a framework for reporting findings that maps issues to WCAG 2.2 success criteria. Include tips for moderating sessions with participants who have different communication needs.

Plans accessibility testing sessions tailored to different assistive technologies and disability types.

💡 Pro tip: Recruit actual assistive technology users, not colleagues pretending to use a screen reader. Experienced screen reader users navigate completely differently than sighted testers with VoiceOver turned on.

Personas & Journey Maps

5 prompts

Research-Backed Persona Builder

16/36

I need to create a UX persona for [product]. Here is my user research data: [Paste research findings — interview quotes, survey results, behavioral data, demographics] Build a research-backed persona that includes: (1) a realistic name, photo description, and demographic summary, (2) a one-sentence persona statement: "[Name] is a [role] who needs [need] because [motivation]", (3) goals — 3 primary goals (functional) and 2 emotional/social goals, (4) frustrations — 4-5 pain points supported by direct research quotes, (5) behaviors — typical workflow, tools used, frequency of use, (6) a "day in the life" scenario showing where [product] fits, (7) technology comfort level and preferred devices, and (8) decision-making factors — what influences their product choices. Do not invent characteristics. Everything must trace back to the research data I provided.

Creates a data-grounded persona that traces every characteristic back to actual research findings.

💡 Pro tip: The best personas include direct quotes from real users. If you cannot trace a persona trait back to research data, it is a guess dressed up as a persona.

Customer Journey Map

17/36

Map the end-to-end customer journey for [persona/user type] using [product/service]. The journey starts at [awareness trigger] and ends at [loyalty/advocacy stage]. For each stage of the journey: (1) define the stage name and what triggers the transition to the next stage, (2) list the user's actions and touchpoints (every interaction with your brand), (3) capture their thoughts and questions at this stage, (4) describe their emotional state (frustrated, excited, confused, confident) with a sentiment score, (5) identify pain points and friction, (6) list opportunities for improvement, and (7) note which team owns this touchpoint (marketing, product, support, sales). Create a visual-ready format with stages as columns and the dimensions above as rows. Highlight the top 3 "moments of truth" where the experience makes or breaks the relationship.

Creates a comprehensive journey map that captures actions, emotions, pain points, and improvement opportunities at every touchpoint.

💡 Pro tip: The most valuable part of a journey map is the emotional curve. When you see a drop from confident to frustrated, you have found the exact moment your product is failing the user.

Empathy Map Generator

18/36

Create an empathy map for [user type/persona] in the context of [task or scenario they are trying to accomplish]. Based on this research: [Paste relevant interview quotes, observations, survey responses] Fill in all four quadrants: (1) Says — direct quotes from users during interviews or support tickets, (2) Thinks — thoughts they might not say out loud, inferred from behavior and hesitations, (3) Does — observable actions and behaviors, (4) Feels — emotional responses inferred from tone, body language, and word choice. Then add: (5) Pain points — the biggest frustrations revealed across all four quadrants, (6) Gains — what success looks like for this user, and (7) a design implication for each pain point — one sentence starting with "The product should..." Use only data from the provided research. Flag any quadrant where you are making inferences rather than using direct evidence.

Builds an empathy map that separates observed evidence from inferred insights and translates both into design actions.

💡 Pro tip: The gap between "Says" and "Does" is where the most valuable insights live. Users often say they want one thing but behave differently — design for what they do, not what they say.

Scenario and Use Case Documentation

19/36

I need to document the key scenarios and use cases for [feature/product]. The primary users are: [list user types]. Create: (1) 5 primary scenarios covering the most common use cases, each written as a narrative: "[User] is in [context]. They want to [goal]. They [actions]. The system [response]. The outcome is [result]." (2) 3 edge case scenarios covering unusual but important situations, (3) 1 error scenario where something goes wrong and the user needs to recover, (4) for each scenario, specify the preconditions (what must be true before the scenario starts), the trigger (what initiates the scenario), and the postconditions (what is true after successful completion), (5) map each scenario to a specific user type/persona, and (6) prioritize scenarios by frequency and business impact for development sequencing.

Documents scenarios and use cases in a narrative format that helps designers and developers understand the full context of use.

💡 Pro tip: Write scenarios from the user perspective, not the system perspective. "Maria opens the app on her commute" is more useful than "The system displays the home screen."

Anti-Persona Definition

20/36

I need to define anti-personas for [product] — user types we are explicitly NOT designing for. This helps prevent scope creep and feature bloat. Based on our product positioning and target audience: Target audience: [describe ideal users] Product focus: [describe core value proposition] Common feature requests we say no to: [list] Create 3 anti-personas: for each one provide (1) a name and brief description, (2) why they are attracted to the product (what brings them in), (3) why the product is not right for them (the fundamental mismatch), (4) the features they would request that would dilute the product if we built them, (5) the cost of trying to serve them (engineering resources, support burden, UX complexity), and (6) where to redirect them instead (competitor or alternative solution that fits their needs). Include a one-sentence test: "If a feature request primarily serves [anti-persona name], we should say no."

Defines user types the product should not serve to protect focus and prevent feature creep driven by the wrong audience.

💡 Pro tip: Anti-personas are as important as personas. Every feature built for the wrong user makes the product slightly worse for the right user.

Information Architecture

5 prompts

Card Sorting Analysis

21/36

I ran a [open/closed/hybrid] card sort with [number] participants. The cards represented: [list content items or features]. Here are the grouping results: [Paste results — how participants grouped the cards, what they named the groups] Analyze the results: (1) identify the strongest clusters — items that were grouped together by 70%+ of participants, (2) identify "homeless" items that participants placed inconsistently, (3) calculate a similarity matrix showing which items are most often paired, (4) suggest an optimal navigation structure based on the clustering, (5) recommend labels for each category based on participant-generated names (look for the most common and clearest labels), (6) flag any items that should appear in multiple categories (consider cross-links), and (7) compare the results to the current navigation and highlight discrepancies.

Interprets card sort data to derive a user-validated navigation structure backed by similarity analysis.

💡 Pro tip: If an item appears in different groups across participants, it probably belongs in multiple places. Use cross-links or secondary navigation rather than forcing it into one category.
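The similarity matrix in step 3 of the card-sort prompt is just co-occurrence counting: for each pair of cards, the fraction of participants who placed both in the same group. A small sketch with invented cards and groupings:

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups (sets of card names).
# These three sorts are invented sample data.
sorts = [
    [{"Pricing", "Plans"}, {"Docs", "Tutorials"}],
    [{"Pricing", "Plans", "Docs"}, {"Tutorials"}],
    [{"Pricing", "Plans"}, {"Docs", "Tutorials"}],
]

def similarity_matrix(sorts):
    """For each card pair (alphabetical tuple), the fraction of
    participants who put both cards in the same group."""
    pairs = Counter()
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return {pair: count / len(sorts) for pair, count in pairs.items()}

for pair, score in sorted(similarity_matrix(sorts).items(), key=lambda kv: -kv[1]):
    print(f"{score:.0%}  {pair[0]} + {pair[1]}")
```

Pairs at or above roughly 70% are your strong clusters; pairs that score in the middle are the "homeless" items the prompt asks about.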

Site Map and Navigation Design

22/36

Design the information architecture for [product/website]. The content includes: [list all major content types, features, and pages]. The primary user tasks are: [list top 5 user goals]. Create: (1) a hierarchical site map showing all pages organized into no more than 7 top-level categories (Miller's Law), (2) for each category, list the sub-pages up to 3 levels deep, (3) define the primary navigation (always visible), secondary navigation (contextual), and utility navigation (account, settings, help), (4) specify which pages link to which other pages (cross-links), (5) design the breadcrumb structure, (6) recommend a search strategy — what should be searchable, what filters to offer, and (7) test the architecture against the top 5 user tasks — can each task be completed in 3 clicks or fewer?

Creates a complete information architecture with navigation hierarchy, cross-links, and task-based validation.

💡 Pro tip: The three-click rule is not about counting clicks — it is about cognitive load. Three confident clicks beat one click where the user has to read 40 options.

Content Audit and Gap Analysis

23/36

I need to audit the content on [product/site]. Here is the current structure: [Paste sitemap, page list, or navigation structure] Conduct a content audit: (1) catalog every page or content item with its purpose, audience, and current placement, (2) label each item as: keep as-is, update, merge, archive, or delete with reasoning, (3) identify content gaps — topics or tasks that users expect but that do not exist, (4) flag redundant content — pages or sections that overlap and should be consolidated, (5) check content against the top user tasks — is there a clear content path for each task, (6) assess findability — are items labeled clearly enough that users would look for them where they are placed, and (7) prioritize actions into a 30-60-90 day content improvement plan.

Maps all existing content, identifies gaps and redundancies, and creates a prioritized improvement roadmap.

💡 Pro tip: The most dangerous content is the outdated page that ranks well in search. Users find it, trust it, and make decisions based on wrong information. Audit for accuracy, not just existence.

Search UX Specification

24/36

Design the search experience for [product/site]. The searchable content includes: [describe content types — products, articles, users, settings, etc.]. Total items: [approximate number]. Common search queries: [list example searches]. Create a complete search UX specification: (1) search input design — placement, placeholder text, auto-suggestions behavior, (2) search results page layout — result card design for each content type, sort options, filters, (3) define the ranking logic — what signals determine result order (relevance, recency, popularity), (4) empty state — what happens when no results are found (suggestions, did-you-mean, popular searches), (5) type-ahead behavior — what appears as the user types and when, (6) filter and facet design — which filters to show, single vs multi-select, and (7) search analytics — what to track to improve search quality over time.

Specifies a complete search experience from input to results to analytics, covering both happy and empty states.

💡 Pro tip: Your search empty state is a UX emergency room. Users who search and find nothing are seconds from leaving. Always provide alternative paths — suggestions, categories, or popular items.
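The ranking-logic step in the search prompt (signal order by relevance, recency, popularity) is typically implemented as a weighted blend. A hedged sketch — the weights, the 30-day decay constant, and the log dampening below are invented starting points to calibrate against your search analytics, not a standard:

```python
import math

# Invented starting weights; tune against click-through data.
WEIGHTS = {"relevance": 0.6, "recency": 0.2, "popularity": 0.2}

def rank_score(relevance: float, days_old: float, views: int) -> float:
    """Blend a 0-1 text-relevance score with recency and popularity.
    Exponential decay and log dampening are assumptions, not a spec."""
    recency = math.exp(-days_old / 30)             # ~30-day decay
    popularity = min(math.log1p(views) / 10, 1.0)  # dampen runaway view counts
    return (WEIGHTS["relevance"] * relevance
            + WEIGHTS["recency"] * recency
            + WEIGHTS["popularity"] * popularity)

# Equal relevance and views: the fresher item should rank higher.
print(rank_score(0.8, days_old=2, views=120) > rank_score(0.8, days_old=200, views=120))
```

Logging the query, result position, and click for every search gives you the data to adjust these weights over time, which is exactly what step 7 of the prompt asks for.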

Taxonomy and Labeling System

25/36

I need to create a taxonomy for [product/content type — e.g., product categories, help articles, course topics, recipe tags]. The items to categorize: [list or describe the content]. The users are: [describe]. Create: (1) a hierarchical taxonomy with 2-3 levels of categories, (2) clear naming conventions — explain the labeling rules (noun vs verb, singular vs plural, user language vs internal jargon), (3) definitions for each category that eliminate ambiguity about where items belong, (4) cross-reference rules — when should an item appear in multiple categories, (5) governance rules — how to decide where new items go, who approves new categories, (6) a controlled vocabulary of approved terms to prevent synonym proliferation, and (7) a testing method — 5 sample items with the rationale for their placement to train other team members.

Builds a governed taxonomy with clear categories, naming rules, and decision criteria for consistent content organization.

💡 Pro tip: Use the words your users use, not the words your team uses internally. "Help Center" beats "Knowledge Base" if your users call it help.


Accessibility

5 prompts

WCAG 2.2 Compliance Audit

26/36

Audit the following interface for WCAG 2.2 Level AA compliance: [Describe the interface — page type, key elements, interactive components, forms, media, navigation] For each WCAG principle (Perceivable, Operable, Understandable, Robust), check: (1) list every applicable success criterion, (2) evaluate pass, partial, or fail for each, (3) for failures, describe the specific violation and which users it affects, (4) provide the exact fix with code or design changes, (5) rate the fix effort (quick, medium, major), and (6) prioritize fixes by impact on users with disabilities. Pay special attention to: color contrast ratios, keyboard navigation order, screen reader announcements, form label associations, error identification, and focus management. Include any Level AAA criteria that are easy wins.

Performs a thorough WCAG 2.2 audit organized by principle with specific violations, affected users, and prioritized fixes.

💡 Pro tip: Start with keyboard navigation testing — it catches the most issues in the least time. If you cannot complete every task with keyboard alone, screen reader users definitely cannot either.
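The contrast check in the audit is fully mechanical: WCAG defines relative luminance from linearized sRGB channels, and the contrast ratio as (L1 + 0.05) / (L2 + 0.05), with Level AA requiring at least 4.5:1 for normal text and 3:1 for large text. A direct implementation:

```python
def _linearize(c8: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG formula)."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio between two colors, from 1:1 to 21:1."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast((0x76, 0x76, 0x76), (0xFF, 0xFF, 0xFF))
print(f"{ratio:.2f}:1")  # about 4.54:1 — just passes AA for normal text
```

#767676 on white is roughly the lightest gray that still clears the 4.5:1 bar, which makes it a handy reference point when reviewing text colors.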

Accessible Form Design

27/36

I need to design an accessible form for [purpose — registration, checkout, application, contact]. The form fields are: [list all fields with types]. Design an accessible form specification: (1) proper label placement and association (visible labels, not just placeholders), (2) helper text and instruction placement for each complex field, (3) required field indication that does not rely on color alone, (4) error message design — inline vs summary, timing (real-time vs on submit), and screen reader announcement strategy using aria-live, (5) focus management — tab order, focus trapping in modals, focus return after dismissals, (6) keyboard shortcuts for common actions, (7) touch target sizes for mobile (minimum 44x44px), and (8) auto-fill support using correct autocomplete attributes. Include the HTML structure with ARIA attributes for one complex field as an example.

Creates an accessibility-first form specification covering labels, errors, focus management, and assistive technology support.

💡

Pro tip: Never use placeholder text as a label. When the user starts typing, the placeholder disappears and they cannot remember what the field is asking for.

Screen Reader Experience Optimization

28/36

I need to optimize the screen reader experience for [page/feature]. The interface contains: [list all interactive elements, content sections, dynamic content areas]. Write a complete screen reader optimization plan: (1) define the heading hierarchy (h1-h6) and landmark regions (nav, main, aside, footer), (2) specify ARIA roles, labels, and descriptions for each interactive component, (3) define the reading order and whether it matches the visual order, (4) handle dynamic content updates — which use aria-live polite vs assertive and which use focus management, (5) write alt text for all images (decorative images get alt=""), (6) specify how expandable/collapsible sections announce their state, (7) define skip links and how they work, and (8) test the complete flow by writing out what a screen reader would announce step by step as a user navigates from top to bottom.

Creates a detailed screen reader optimization plan with heading hierarchy, ARIA attributes, and a step-by-step announcement walkthrough.

💡

Pro tip: The best way to test screen reader experience is to close your eyes and listen. If you cannot complete the task by ear alone, rewrite the ARIA labels.
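A small part of step (1) can even be automated. The sketch below is an illustrative check, not part of any standard tool: given heading levels in document order, it flags a missing or duplicated h1 and any skipped level, both of which disorient screen reader users who navigate by headings.

```python
# Illustrative check for a sane heading hierarchy: exactly one h1,
# and no level skipped on the way down (h2 -> h4 is a red flag for
# screen reader users navigating by headings).
def check_heading_hierarchy(levels):
    """levels: heading levels in document order, e.g. [1, 2, 3, 2, 3]."""
    problems = []
    if levels.count(1) != 1:
        problems.append(f"expected exactly one h1, found {levels.count(1)}")
    prev = 0
    for i, level in enumerate(levels):
        if level > prev + 1:
            problems.append(f"heading {i}: h{prev} -> h{level} skips a level")
        prev = level
    return problems

print(check_heading_hierarchy([1, 2, 3, 2, 4]))
# flags the jump from h2 to h4
```

Feed it the heading levels scraped from your page; an empty list means the hierarchy is at least structurally sound, though only a human can judge whether the headings are meaningful.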

Inclusive Design Review

29/36

Review the following design for inclusivity across a range of user needs: [Describe the interface design — layout, colors, typography, interactions, content]. Assess against these dimensions: (1) visual accessibility — contrast ratios, text sizes, color dependence, dark mode support, (2) motor accessibility — target sizes, spacing between interactive elements, drag-and-drop alternatives, single-hand usability, (3) cognitive accessibility — reading level, information density, cognitive load, clear error recovery, progress indication, (4) situational limitations — bright sunlight, noisy environment, one-handed use, slow connection, (5) language and cultural inclusivity — jargon, idioms, date/time formats, name field flexibility, (6) age inclusivity — patterns that challenge older or very young users. For each dimension, rate the current design (good, needs work, poor) and provide specific improvements.

Reviews a design for inclusivity across visual, motor, cognitive, situational, cultural, and age-related dimensions.

💡

Pro tip: Designing for permanent disability helps everyone. Captions help deaf users and people in noisy airports. Large touch targets help motor-impaired users and people on bumpy trains.

Accessible Color System Generator

30/36

I need to create an accessible color system for [product/brand]. Current brand colors: [list hex codes and their uses — primary, secondary, accent, background, text]. Generate an accessible color system: (1) test all color combinations against WCAG AA (4.5:1 for normal text, 3:1 for large text) and AAA standards, (2) for any failing combinations, suggest adjusted colors that maintain brand feel while meeting contrast requirements, (3) create a color usage matrix showing which colors can be safely paired, (4) design a semantic color system (success, warning, error, info) that is accessible, (5) ensure every color-coded element also has a non-color indicator (icon, pattern, text), (6) provide dark mode equivalents that maintain the same contrast ratios, and (7) test the system for the three main types of color blindness (protanopia, deuteranopia, tritanopia) and flag any problematic combinations.

Builds a fully accessible color system with contrast testing, colorblind simulation, and dark mode equivalents.

💡

Pro tip: Never rely on color alone to convey meaning. Red and green are hard to tell apart for roughly 8 percent of men. Always pair color with a shape, icon, or text label.
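The contrast test in step (1) is simple enough to compute yourself. The sketch below implements the relative luminance and contrast ratio formulas from the WCAG 2.x specification; the hex colors in the usage line are just examples.

```python
# WCAG 2.x contrast ratio between two hex colors, using the relative
# luminance formula from the spec. Level AA requires >= 4.5:1 for
# normal text and >= 3:1 for large text.
def relative_luminance(hex_color):
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    # Ratio is symmetric: lighter luminance goes on top.
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # → 21.0, the maximum
```

Running every foreground/background pair in your palette through `contrast_ratio` gives you the usage matrix from step (3) directly: any pair below 4.5 should not carry normal-size text.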

Frequently Asked Questions

Can ChatGPT replace real user research?

No. ChatGPT can help you prepare for research (write interview scripts, design surveys, structure analysis), but it cannot replace talking to real users. It generates plausible user responses based on patterns in its training data, not actual behavior from your specific users. Use it to be better prepared for research, analyze data faster, and document findings more clearly — but always validate with real people.
Where should I start if I am new to UX?

Start with the User Research and Personas categories. Understanding your users is the foundation of all UX work. Then move to Information Architecture to structure your product logically, and Wireframing to translate that structure into screens. Skip Usability Testing until you have something to test. The prompts include enough context and explanation that you can learn the methodology as you use them.
Will these prompts make my product WCAG compliant?

These prompts help you identify and fix accessibility issues, but they are not a substitute for a professional accessibility audit. WCAG compliance requires testing with real assistive technology users and often needs specialist expertise for complex interfaces. Use these prompts as a starting point to catch the most common issues and build accessibility awareness into your process, then bring in specialists for formal compliance certification.
How do I prioritize which usability issues to fix first?

Use the severity-frequency matrix from the Usability Test Results Analysis prompt. Plot each issue on two axes: how many users are affected (frequency) and how badly it impacts their experience (severity). Fix high-frequency, high-severity issues first. Issues that affect many users but are minor annoyances come second. Rare but catastrophic issues come third. Rare and minor issues go on the backlog. This gives you the highest return on design effort.
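The severity-frequency ordering above can be sketched in a few lines. This is an illustrative implementation, not a standard formula: the 1-5 scale and the cutoff of 3 are assumptions you should tune to your own scoring scheme.

```python
# Severity-frequency prioritization: quadrant order follows the text —
# frequent+severe first, then frequent annoyances, then rare-but-severe,
# then the backlog. The 1-5 scale and cutoff of 3 are illustrative.
HIGH = 3  # assumed cutoff on a 1-5 scale

def quadrant(issue):
    frequent = issue["frequency"] >= HIGH
    severe = issue["severity"] >= HIGH
    if frequent and severe:
        return 0  # fix first
    if frequent:
        return 1  # many users, minor annoyance
    if severe:
        return 2  # rare but catastrophic
    return 3      # backlog

def prioritize(issues):
    # Within a quadrant, break ties by severity, then frequency.
    return sorted(issues, key=lambda i: (quadrant(i), -i["severity"], -i["frequency"]))

issues = [
    {"name": "label missing",    "frequency": 2, "severity": 5},
    {"name": "broken checkout",  "frequency": 5, "severity": 5},
    {"name": "typo",             "frequency": 1, "severity": 1},
    {"name": "confusing copy",   "frequency": 4, "severity": 2},
]
print([i["name"] for i in prioritize(issues)])
# → ['broken checkout', 'confusing copy', 'label missing', 'typo']
```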
Can ChatGPT create wireframes for me?

ChatGPT generates detailed wireframe specifications, component descriptions, and layout instructions, but it does not produce visual wireframes. Use the specifications it generates as input for design tools like Figma, Sketch, or even pen and paper. The value is in the thinking — content hierarchy, interaction patterns, edge cases — not the pixels. Many designers find that a well-written specification is faster to implement than a vague sketch.
