Claude Prompts for Job Descriptions That Attract the Right Candidates
20 copy-paste Claude prompts for writing JDs that filter rather than describe — level-calibrated, scope-specific, inclusive, and convertible into interview rubrics. Claude's structured output strength shines here.
JD Writing (4 prompts)
JD from Hiring Manager Brain Dump
1/20: [Paste hiring manager's rough notes about the role]. Write a structured JD. Output as XML: <role-summary>, <responsibilities-day-to-day>, <responsibilities-quarter>, <required-experience>, <preferred-experience>, <success-after-90-days>, <success-after-1-year>, <reporting-structure>, <compensation-range>, <work-style>, <growth-path>. Specific over generic.
Converts hiring manager notes into structured JDs.
Pro tip: XML output = easier to slice into ATS fields, internal docs, public posting. Different audiences see different sections without copy-paste pain.
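If you run this prompt through the API, the tagged output can be sliced programmatically. A minimal sketch in Python, assuming Claude returned the tags requested above; the tag list and the public-posting subset are illustrative, not a fixed schema:

```python
import re

# The tags requested in the prompt above.
JD_TAGS = [
    "role-summary", "responsibilities-day-to-day", "responsibilities-quarter",
    "required-experience", "preferred-experience", "success-after-90-days",
    "success-after-1-year", "reporting-structure", "compensation-range",
    "work-style", "growth-path",
]

def slice_jd(claude_output: str) -> dict:
    """Pull each tagged section out of Claude's response.

    Regex, not an XML parser: Claude's tagged output is XML-like but
    not guaranteed to be a well-formed document.
    """
    sections = {}
    for tag in JD_TAGS:
        m = re.search(rf"<{tag}>(.*?)</{tag}>", claude_output, re.DOTALL)
        if m:
            sections[tag] = m.group(1).strip()
    return sections

# Different audiences, different slices (illustrative subset): the public
# posting might skip reporting-structure and growth-path, for example.
PUBLIC_TAGS = [
    "role-summary", "responsibilities-day-to-day", "required-experience",
    "preferred-experience", "compensation-range", "work-style",
]

def public_posting(claude_output: str) -> str:
    sections = slice_jd(claude_output)
    return "\n\n".join(sections[t] for t in PUBLIC_TAGS if t in sections)
```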
90-Day + 1-Year Success Definition
2/20: Define success at 90 days + 1 year for [role]. Output: 90-day measurable outcomes (specific deliverables / relationships built / understanding gained), 1-year measurable outcomes (impact, capabilities developed, scope expansion). Outcomes-based not activities-based: what changes in the world if they succeed.
Writes success definitions for JDs.
Pro tip: The gap between activities-based ("attended 50 1:1s") and outcomes-based ("team Net Promoter Score improved 15 pts") is stark. Outcomes attract operators; activities attract clock-watchers. Choose your filter.
Differentiate Required vs Preferred
3/20: [Paste my current required + preferred lists]. Critique: which "required" items are actually preferred? Which "preferred" items are actually required if I'm honest? What am I missing? Goal: tight required list (otherwise I lose qualified candidates), generous preferred list (signal range without filtering).
Calibrates required vs preferred.
Pro tip: With inflated required lists, women and underrepresented candidates tend to apply only at a 100% match, while men apply at 60%. Tight required list = wider candidate pool with no quality drop.
Scope-Calibrated Description
4/20: I'm hiring [role] at [company stage / size]. The same role is different at a startup vs a scale-up vs an enterprise. Help me write a JD that signals scope correctly: budget responsibility, team size, decision authority, ambiguity tolerance, generalist-vs-specialist orientation. Filter mismatched candidates upfront.
Calibrates JD to company scope.
Pro tip: A senior hire from BigCo often fails at a startup: they expect infrastructure + specialists. A senior hire from a startup often fails at BigCo: they expect speed + autonomy. A JD that signals scope filters mismatched applicants.
Inclusive Language + Bias Audit (3 prompts)
Inclusive Language Audit
5/20: [Paste JD]. Audit for biased or exclusionary language. Flag: gendered terms (rockstar, ninja, guru), age-coded language (digital native, energetic), ableist phrasing ("must stand for X hours" unless essential), academic credentialism unnecessary for the role, jargon that excludes outsiders, unrealistic years-of-experience requirements. Suggest replacements.
Audits JDs for inclusive language.
Pro tip: Tools like Textio quantify this; Claude can run a similar scan and suggest replacements. Cleaner JDs = wider applicant pools = better hires (and a legally cleaner posting).
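For a deterministic first pass before the Claude audit, a simple term scan catches the obvious offenders. A minimal sketch, assuming a starter word list; the FLAGGED_TERMS entries are illustrative, not Textio's lexicon:

```python
import re

# Starter lists only; extend with your own style guide. These mirror
# the categories the prompt above asks Claude to flag.
FLAGGED_TERMS = {
    "gendered": ["rockstar", "ninja", "guru", "dominant", "aggressive"],
    "age-coded": ["digital native", "energetic", "recent graduate"],
    "credentialism": ["top-tier university", "ivy league"],
}

def scan_jd(jd_text: str) -> list:
    """Return (category, term) pairs found in the JD, case-insensitively."""
    hits = []
    for category, terms in FLAGGED_TERMS.items():
        for term in terms:
            if re.search(rf"\b{re.escape(term)}\b", jd_text, re.IGNORECASE):
                hits.append((category, term))
    return hits

print(scan_jd("We need a rockstar engineer, a true digital native."))
# [('gendered', 'rockstar'), ('age-coded', 'digital native')]
```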
Years-of-Experience Reality Check
6/20: The JD requires [X years experience]. Pressure-test it: is this real (the skill takes that long to build) or copy-paste from old JDs? What can substitute (e.g., 3 years + advanced cert = 5 years)? Drop unrealistic requirements that gate-keep without justification. A junior candidate with the right skills > a mediocre one with 7 years.
Pressure-tests years-of-experience requirements.
Pro tip: Most "5+ years" requirements = unjustified. Specific skill mastery + scope handled > arbitrary year count. Ask "would I hire someone with 3 years who has demonstrated X?"; if yes, drop the year requirement.
Salary Transparency Calibration
7/20: Help me draft the comp range for [role] at [company]. Considerations: market data, internal equity, likely candidate response at different bands. Output: the range to publish, rationale for the top vs bottom of the range, what variable comp + equity I should mention, language that frames the range honestly.
Calibrates published salary ranges.
Pro tip: Wide ranges ($90K-$180K) signal mistrust. Tight ranges signal honesty + filter mismatched candidates. Pay transparency laws are leveling the playing field anyway. Be early > be forced.
JD-to-Rubric Conversion (3 prompts)
JD → Interview Rubric
8/20: [Paste JD]. Convert into a structured interview rubric: required competencies, behaviors that demonstrate each competency at the target level, sample interview questions per competency, what a passing answer looks like, what a failing answer looks like, a 1-4 scoring scale with anchors. Exportable as XML for the ATS.
Converts JD into interview rubric.
Pro tip: A rubric created at JD-time = consistent hiring. A rubric improvised at interview-time = bias. The same person who writes the JD should write the rubric: alignment comes built in.
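A minimal sketch of running this conversion through the Anthropic Python SDK, so the rubric exists the moment the JD is finalized; the model name, prompt wording, and file path are placeholders to adapt:

```python
import anthropic

RUBRIC_PROMPT = """{jd}

Convert into a structured interview rubric: required competencies,
behaviors that demonstrate each competency at the target level, sample
interview questions per competency, what a passing answer looks like,
what a failing answer looks like, and a 1-4 scoring scale with anchors.
Output as XML with a <rubric> root so it can be imported into our ATS.
"""

def jd_to_rubric(jd_text: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # swap in the model you use
        max_tokens=4096,
        messages=[{"role": "user", "content": RUBRIC_PROMPT.format(jd=jd_text)}],
    )
    return response.content[0].text

with open("jd.txt") as f:  # placeholder path
    print(jd_to_rubric(f.read()))
```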
Take-Home Assignment from JD
9/20: [Paste JD]. Design a take-home assignment that tests the actual job. Constraints: under 4 hours of candidate time, simulates real work (not a trick puzzle), evaluatable against a rubric, doesn't exploit free labor. Output: assignment brief, evaluation criteria, time guidance.
Designs take-home assignments from JDs.
Pro tip: 4 hours = ethical limit. Anything longer = exploitation. "Real work simulation" > "leetcode puzzle." Test the actual job, not unrelated CS trivia.
Reference Check Questions from JD
10/20: [Paste JD]. Generate reference check questions tailored to this role. Output: 8-10 specific questions on the competencies the JD lists, calibration questions ("what stretch did you see them grow into?"), red-flag questions ("what would have been their ideal next role that you didn't have to offer?"). Go beyond verifying employment.
Generates JD-tailored reference questions.
Pro tip: A generic reference check ("are they good?") gets useless answers. Specific questions on the JD's competencies get useful signal. Tailored questions are the difference between rubber-stamping a hire and catching a bad one.
Internal + External Posting (4 prompts)
External Job Posting Adaptation
11/20: [Paste internal JD]. Adapt it for external posting. Differences from the internal version: more company context, more "why this role matters," a compelling opener, role intrigue + transparency, less internal jargon, clear application instructions, an EEO statement. Internal docs assume context external candidates don't have.
Adapts JDs for external job postings.
Pro tip: Internal JDs are written for HR + the manager. External JDs are written to compete for attention. Same role, different document; most companies, confusingly, post the internal version externally.
LinkedIn Job Post Variant
12/20: [Paste external JD]. Write a variant for LinkedIn: shorter (150-250 words), conversational opener, problem-led framing ("we have X problem; you solve it"), 3 bullets max for requirements, a single clear CTA. LinkedIn readers scroll past 5 pages of corporate JD.
Writes LinkedIn-optimized job posts.
Pro tip: LinkedIn is a scroll feed. A 5-page JD gets scrolled past; a 250-word problem-framed post gets clicked and applied to. Same role, different attention pattern, different copy.
Internal Posting + Promotion Path
13/20: Write an internal posting for [role] that existing employees can apply to. Output: how internal candidates are evaluated (vs external), which candidate profiles to encourage, what support current managers should provide, application timeline + transparency, who's on the hiring committee, how non-selected internal candidates will be supported afterward.
Writes internal job postings.
Pro tip: Internal applicants who don't get the role often quit if the rejection is handled badly. Transparency + support afterward = retention. Treating them like outside candidates with no follow-up = surprise resignations.
JD Sunset Check
14/20: [Paste JD that's been open 60+ days]. Help me assess whether the JD itself is the problem. Audit: is it realistic for the talent market, is comp competitive, are requirements over-engineered, is the role narrative compelling, how is the company perceived. Sometimes the JD is broken; staying open longer doesn't fix it.
Audits JDs that won't close.
Pro tip: For roles open 90+ days, fixing the JD beats more sourcing. A rewrite often unlocks the pipeline. "We can't find anyone" often means "we wrote a JD for a unicorn that doesn't exist."