Claude Prompts for Hiring Decisions That Hold Up
20 copy-paste Claude prompts for hiring: interview design, structured evaluation, debrief facilitation, calibration prep, reference checks, and offer construction. Long-context Claude shines on multi-interview synthesis.
Interview Design
4 prompts
Interview Loop Design
1/20: Design interview loop for [role + level]. Output: stages (recruiter screen, hiring manager, peer panel, leadership, exec final), competencies tested per stage, interview format per stage (behavioral, technical, case, work sample), time per stage, who interviews, what each stage filters for. No redundancy.
Designs interview loops.
Pro tip: Interview loops with 3 panels asking same behavioral questions = exhausted candidate + no new signal. Each stage tests distinct competency = efficient + thorough.
Behavioral Question Generator
2/20: Generate behavioral interview questions for [competency]. Output: 5 STAR-format questions, what good answer reveals, what concerning answer reveals, 2 follow-up probes per question to verify depth, alternative phrasings if candidate doesn't engage. Specific to competency, not generic.
Generates STAR behavioral questions.
Pro tip: Specific behavioral questions = specific signal. "Tell me about leadership" = vague. "Tell me about a time you led without authority + how you got buy-in" = competency-specific = useful answer.
Work Sample Design
3/20: Design work sample for [role]. Constraints: under 4 hours candidate time, simulates real job, evaluatable against rubric. Output: brief, materials provided, time guidance, deliverables expected, rubric for evaluation. Test the actual job, not unrelated puzzles.
Designs work sample assignments.
Pro tip: 4 hours = ethical max. Real-job simulation > leetcode trick. Rubric pre-built = consistent evaluation. Common mistake: design the assignment, then evaluate by gut feel = bias.
Technical Interview Question Calibration
4/20: [Paste technical interview questions]. Calibrate difficulty: appropriate for [level]? Same difficulty across candidates? Are the questions likely to have leaked? Are they real-job-relevant or trivia? What signal does each question give vs what we need to know? Recommend swaps.
Calibrates technical interviews.
Pro tip: Same questions for 6 months = leaked. Refresh quarterly. Trivia questions test memorization, not capability. Real problems > textbook problems for actual signal.
Candidate Evaluation
4 prompts
Interview Notes to Scorecard
5/20: [Paste interview notes]. Convert to structured scorecard. Output: per competency tested, score 1-4 with anchor description, evidence from interview supporting score (specific quotes/moments), gaps still to assess, overall recommendation lean. Score before debrief, not during: order matters.
Converts interview notes to scorecards.
Pro tip: Scoring during debrief = group bias. Individual scoring before debrief = independent signal. Compare scores in debrief = real calibration. Order is the bias guardrail.
Resume + Background Pre-Screen
6/20: [Paste resume]. Pre-screen for [role]. Output: relevant experience score 1-5, gaps to probe in interview, claims to verify (specific titles, scope, outcomes), which interview format would surface the best signal, what about this candidate is unusual (good or concerning).
Pre-screens resumes systematically.
Pro tip: Pre-screen by gut = bias. Pre-screen by competency rubric = signal. Specific gaps to probe = focused interview, not exploration. Time-saver + bias-reducer.
Reference Check Synthesis
7/20: [Paste reference call notes from 2-3 references]. Synthesize patterns: confirmed strengths (multiple references), patterns of concern (multiple references), one-off feedback (single reference, possible bias), what surprises me, what aligns with interview signals.
Synthesizes reference check patterns.
Pro tip: Single reference = noise. Multiple references = signal. Patterns across references = deepest insight. Generic "great employee" answers from all = lazy referencing; specific consistent feedback = real signal.
Take-Home Evaluation
8/20: [Paste take-home submission]. Evaluate against rubric: technical correctness, code/work quality, judgment shown, communication of approach, what they prioritized + cut, time management. Score 1-4 per dimension. What I'd want to discuss in follow-up.
Evaluates take-home submissions.
Pro tip: Take-home eval = code quality + judgment shown. Working solution with poor code = junior thinking. Elegant solution to wrong problem = direction problems. Both matter.
Debriefs + Calibration
4 prompts
Debrief Facilitation Script
9/20: I'm facilitating a debrief for [candidate]. 4 interviewers, scores submitted independently. Output: agenda (15 min individual + 30 min discussion), order of speaking (lowest score first to surface concerns), specific concerns to dig into, decision call protocol, who has the tiebreaker.
Facilitates hiring debriefs.
Pro tip: Lowest score first = concerns surfaced early, not buried by groupthink. Highest score first = others suppress doubts. Order matters more than people realize.
Calibration Across Candidates
10/20: [Paste 3-5 candidate scorecards]. Calibrate: are scores consistent across candidates? Are individual interviewers consistently strict or lenient? Where do scores diverge significantly + why? Which candidate is strongest holistically + on what dimensions? Help me calibrate before the final decision.
Calibrates scoring across candidates.
Pro tip: Same interviewer scoring 3 candidates 4-4-4 = no calibration. 4-3-2 = signal. Knowing rater patterns + adjusting = better hiring than naive averaging.
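The adjustment this tip points at can be made concrete. Below is a minimal sketch, in Python with entirely hypothetical interviewers and scores, of centering each interviewer's scores on their own average before comparing candidates, instead of naively averaging raw scores:

```python
# Minimal sketch with hypothetical data: compare candidates against each
# interviewer's personal baseline instead of averaging raw scores.
from statistics import mean

# scores[interviewer][candidate] on the same 1-4 scale as the scorecards
scores = {
    "lenient_rater": {"cand_a": 4, "cand_b": 4},  # saw candidates A and B
    "strict_rater":  {"cand_b": 3, "cand_c": 3},  # saw candidates B and C
}

def rater_adjusted(scores):
    """Center each rater's scores on their own mean, then average per candidate."""
    adjusted = {}
    for rater, by_cand in scores.items():
        baseline = mean(by_cand.values())
        for cand, s in by_cand.items():
            adjusted.setdefault(cand, []).append(s - baseline)
    return {cand: round(mean(vals), 2) for cand, vals in adjusted.items()}

print(rater_adjusted(scores))
# Naive averaging ranks A (4.0) above C (3.0) purely because of who interviewed
# them; after adjusting for rater strictness, every candidate sits at baseline.
```

If you want Claude to reason about rater patterns explicitly, paste the adjusted numbers alongside the raw scorecards in the calibration prompt above.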
Tie-Breaker Decision
11/20: Two candidates remain, both qualified. [Paste both scorecards]. Help me decide: what each does better than the other, where each is weaker, which fits the role today + which fits the role tomorrow, which fits team chemistry, which is the risk-taking call vs the safe call. Force a choice.
Makes final tie-breaker decisions.
Pro tip: Two-good-candidates problem = force the choice on specific dimension that matters most. "Both great" = no signal. "X is better at Y because" = decision. Neither is wrong; one is more right for this hire.
No-Hire Decision Articulation
12/20: I'm saying no to [candidate]. Help me articulate why specifically: what gap (skill / experience / fit), what alternative I needed but didn't see, what would change my decision (if anything), what feedback I could share with candidate. Decision should be defensible internally.
Articulates no-hire decisions.
Pro tip: Vague no-hire ("not the right fit") = bias risk + unhelpful for candidate. Specific gap articulated = bias-checked + credible internally + defensible if challenged.
Offer + Closing
4 prompts
Compensation Offer Build
13/20: Build offer for [candidate]. Inputs: their current comp, market data, internal equity, level. Output: base, equity, bonus, signing bonus if any, dollar value of benefits, total comp transparency, where I have flexibility, opening offer vs final-offer ceiling.
Builds compensation offers.
Pro tip: Opening offer = give yourself room. Tight offer = no negotiation room = candidate either accepts (good for us, bad signal for them) or pushes back (we have nothing left). Build slack.
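To make "build slack" concrete, here is a minimal sketch with hypothetical numbers only: total the offer components from the prompt above and check how much room the opening number leaves before an assumed approved ceiling.

```python
# Minimal sketch, hypothetical numbers only: total the offer components and
# confirm the opening offer leaves negotiation room below the approved ceiling.
offer = {
    "base": 165_000,
    "equity_annualized": 40_000,
    "target_bonus": 16_500,
    "signing_bonus": 10_000,
    "benefits_value": 12_000,
}
approved_ceiling = 270_000  # assumed max total comp approved for the req

opening_total = sum(offer.values())
slack = approved_ceiling - opening_total

print(f"Opening total comp: ${opening_total:,}")   # $243,500
print(f"Room left before ceiling: ${slack:,}")     # $26,500
assert slack > 0, "Tight offer: nothing left if the candidate pushes back"
```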
Offer Letter + Pitch
14/20: Offer letter + closing pitch for [candidate]. Personalized opener referencing what excited them, role pitch (one paragraph, why this role specifically), comp summary, signing logistics, expected response timeline. Warm but professional. Final piece of selling; they're not in yet.
Writes offer letters with closing pitch.
Pro tip: Offer = legal doc + sales pitch. Pure legal = cold; pure sales = unprofessional. Both = candidate receives "we want you" in tone they want to feel. Most offer letters fail at the pitch.
Counter-Offer Response
15/20: Candidate received counter-offer from current employer. Help me respond: validate the choice they're facing, articulate why our offer + opportunity is the right move, address what counter-offer is actually about (money, fear, sunk cost), give them permission to decide what's right.
Handles counter-offers.
Pro tip: Counter-offer = current employer realizing they should've treated them better all along. 70% who accept counter-offer leave within a year anyway. Help candidate see the pattern.
Hiring Process Post-Mortem
16/20: Hire just landed for [role]. Help me run a post-mortem: what worked in our process, what slowed us down, where candidate feedback flagged friction, where a competitor would have moved faster, what to change next time. Process improvements compound across hires.
Runs hiring process post-mortems.
Pro tip: Hires-as-data = process improves. Each hire teaches: stage too long, question too vague, debrief inconsistent. Process compounds; first hire = learning curve, tenth hire = polished.