# Hiring & Recruiting

> JD writing, resume triage, outbound talent sourcing, take-home grading, and reference calls.

Hiring is where most operators bleed time. Reading 200 resumes for the 5 that matter, scheduling 12 calls for the 2 worth a second round, writing the same role description three times because the first two attracted the wrong people. Agents collapse this work to the part that actually requires human judgment: deciding who you want to work with.

A few principles up front:

- **Filter mercilessly at the top.** The 200 → 5 cut is the highest-leverage place an agent works. Get that loop right and the rest is easy.
- **Score against a written rubric.** Not vibes. A rubric your agent can apply consistently beats your gut on day one.
- **Voice and culture come last in the pipeline, not first.** Skills filter early, judgment-based screens after.
- **Pay attention to the "almost" pile.** The candidate who scored 7/10 today is often a 9/10 hire for the next role. Keep the data.

This section is niche-neutral but flags one specialized lane — offshore VA hiring — that has its own playbook references in the field guide section. Use the general patterns here for any hire; tap that lane when you're specifically optimizing for offshore admin/CS/research roles.

## 1. Job Description Writing

### Tip 1.1 — JD writer that knows your company

**What it does:** You describe the role in a paragraph. Your agent — already loaded with your company context (mission, voice, comp band, existing JDs) — drafts a full job description that sounds like your company, gets the technical details right, and avoids the corporate JD slop everyone else writes.

**Why it wins:** Most JDs sound like they were written by HR for a job posting site. Yours should sound like your team. JDs are also one of the most copy-pasted corporate documents in the world — an agent trained on your actual voice produces something genuinely differentiated.

**Tools:** Your team voice skill, prior JDs as reference, your comp framework doc, the platform you'll post to (LinkedIn, Wellfound, Workable, your own site).

**How to wire it:**

1. Build a team-voice skill (same pattern as the per-platform voice skills in Marketing) trained on your existing JDs, your About page, your team-facing internal docs.
2. Maintain a comp-bands.md and a role-rubrics/ folder with one file per role archetype.
3. When you brief the agent on a new role, it asks 5-10 clarifying questions (must-have skills, nice-to-haves, what success looks like at 30/90/180 days, comp band).
4. Drafts the JD with your structure: opener, mission tie-in, what you'll do, what you've done, comp transparency, how to apply.
5. You edit and post.

**Example prompt to your agent:** Draft a JD for a [role]. Load my team-voice skill and pull from prior-jds/ for structure. Ask me 5-10 clarifying questions before drafting: must-have skills, nice-to-haves, success criteria at 30/90/180 days, comp band, in-office or remote, who they report to. Then draft a JD that uses my voice, includes a real comp band, and avoids buzzwords like "rockstar" or "ninja." Save to jds/<role-slug>-<date>.md.

**Watch out for:**

- Don't bury the comp band. Top candidates filter on comp transparency.
- Skills lists that are 27 bullets long get nobody applying. 5 must-haves, 3 nice-to-haves, done.
- Inclusive language matters. Have the agent pass the JD through an inclusive-language check (Textio-style rubric or a simple checklist).

**Skill file:** voice-skill-template, outreach-drafter
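The role-rubrics/ files mentioned above recur throughout this section (resume triage, sourcing, take-home grading all score against them). A minimal sketch of what one might contain — the headings, counts, and example criteria are illustrative assumptions, not a prescribed schema:

```markdown
<!-- role-rubrics/support-engineer.md — illustrative only; shape to your role -->
## Must-haves (scored 0/1 each)
- 2+ years in a customer-facing technical role
- Can read and write basic SQL
- Written English strong enough to own customer emails solo

## Nice-to-haves (scored 0-2 each)
- Has worked at a <20-person company
- Familiarity with our product category

## Red flags (immediate Pass)
- Cover letter is clearly copy-pasted or AI-generic
- Cannot overlap 4+ hours with our core time zone

## Tie-breakers
- Side projects or public writing that shows genuine curiosity
```

One file per role archetype keeps the agent's scoring consistent across hiring rounds and gives you something concrete to iterate on after each one.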

## 2. Inbound Resume Triage

### Tip 2.1 — Resume triage against rubric

**What it does:** Every inbound application gets parsed (resume + cover letter + answers to your screening questions), scored against the role's rubric, and sorted into Strong / Maybe / Pass piles. Each Strong gets a 3-bullet summary so you can decide which to advance in 15 seconds per candidate.

**Why it wins:** Recruiters charge $20k+ to do this. Your agent does it for cents. And more importantly, it does it consistently — no "I'm tired by candidate 47" effect.

**Tools:** Your ATS or application destination (Workable, Ashby, Greenhouse, or a Notion form + Drive), PDF/document parsers, your role rubric.

**How to wire it:**

1. Every new application lands in a known place (ATS or an inbox).
2. Per role, define a rubric in role-rubrics/<role>.md: must-haves (3-5), nice-to-haves (3-5), red flags (2-3), tie-breakers.
3. Agent parses the resume + answers to screening questions.
4. Scores against each rubric item: 0/1 for must-haves, 0-2 for nice-to-haves, immediate pass for red flags.
5. Sorts into Strong / Maybe / Pass. Strong candidates get a 3-bullet summary plus a 1-line "why this is interesting" note.

**Example prompt to your agent:** For role [slug], watch new applications in the ATS. For each: parse the resume and screening answers. Score against role-rubrics/<slug>.md — 0/1 on must-haves, 0-2 on nice-to-haves, immediate pass on red flags. Output: Strong (all must-haves + ≥ half nice-to-haves), Maybe (most must-haves + some nice-to-haves), Pass (everyone else). For each Strong, write a 3-bullet summary and a 1-line "why interesting." Send me a daily digest at 6pm with the day's Strong and Maybe lists.

**Watch out for:**

- The rubric is the whole game. If it's vague, the scoring is vague. Iterate on the rubric after each hiring round.
- Don't auto-reject. The Pass pile gets a polite rejection email later, but a human eye is the final filter for borderline calls.
- LLM resume parsing can hallucinate. Always have the agent quote the exact resume line for any score it gives.

**Skill file:** pre-call-research, scope-analyzer (same scoring pattern)
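The sort rule above (all must-haves + ≥ half nice-to-haves → Strong; most must-haves + some nice-to-haves → Maybe; any red flag → instant Pass) is simple enough to pin down in a few lines. A minimal sketch, assuming the agent emits per-item scores in a plain dict — the field names and the "most must-haves" threshold are assumptions, not a real ATS schema:

```python
# Hypothetical triage rule: 0/1 must-haves, 0-2 nice-to-haves,
# immediate Pass on any red flag.

def triage(scores: dict) -> str:
    """scores = {"must": [0|1, ...], "nice": [0..2, ...], "red_flags": [bool, ...]}"""
    if any(scores["red_flags"]):
        return "Pass"  # red flag: no further scoring
    must, nice = scores["must"], scores["nice"]
    must_hit = sum(must)
    nice_hit = sum(1 for n in nice if n > 0)
    if must_hit == len(must) and nice_hit >= len(nice) / 2:
        return "Strong"  # all must-haves + at least half the nice-to-haves
    if must_hit >= len(must) - 1 and nice_hit > 0:
        return "Maybe"   # "most" read here as all-but-one, plus some nice-to-haves
    return "Pass"
```

Writing the rule down like this, even informally, is what makes "iterate on the rubric after each hiring round" possible: you can see exactly which threshold moved a borderline candidate.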

### Tip 2.2 — Screening question generation per role

**What it does:** When you post a role, your agent generates 3-5 screening questions tailored to the rubric. The questions filter for actual signal — they're hard to fake, reveal thinking, and let you score candidates without a call.

**Why it wins:** Generic screening questions ("why do you want this job?") get generic answers and tell you nothing. Custom questions tied to the rubric pre-filter the field by 40-60% before you read a single resume.

**Tools:** Your rubric, your team voice.

**How to wire it:**

1. For each role, the agent generates 3-5 screening questions tied to specific rubric items.
2. At least one question is short-answer (filters writers from non-writers).
3. At least one question is scenario-based ("a customer reports X, walk me through how you'd diagnose").
4. Questions get bundled with the application form on the ATS or job board.
5. Agent scores answers as part of Tip 2.1.

**Example prompt to your agent:** For role [slug], read the rubric at role-rubrics/<slug>.md. Generate 3-5 screening questions: at least one short-answer essay (200 words), at least one scenario-based, the rest can be multiple choice or short text. Each question should tie to a specific must-have or nice-to-have. Output a clean list I can paste into the ATS application form.

**Watch out for:**

- Don't ask anything you wouldn't be happy to be asked. If the screen takes 30 minutes, top candidates won't bother.
- Trick questions optimize for people who like trick questions. Real-world scenarios beat puzzles.
- Update screening questions every role. Same questions = same applicant pool over time.

**Skill file:** voice-skill-template

## 3. Outbound Talent Sourcing

### Tip 3.1 — LinkedIn sourcing with ICP filters

**What it does:** Same playbook as outbound sales sourcing, but for talent. Your agent scrapes LinkedIn (or sources from a recruiting tool) for candidates matching specific filters — role, seniority, geo, current company, skills — enriches each one, and drafts a personalized first-touch message in your voice.

**Why it wins:** The best candidates aren't on job boards. They're working somewhere. Outbound sourcing reaches them. Doing it manually is too slow to compete; doing it with an agent puts you on parity with VC-backed recruiters.

**Tools:** A LinkedIn scraper (or a paid sourcing tool API — there are talent-side equivalents to Apollo), enrichment via the company's website, your outbound voice skill, your email or LinkedIn DM rail.

**How to wire it:**

1. Define the ICP per role: title patterns, seniority, geo, current/past companies, must-have skills.
2. Agent pulls a starting list (50-200 candidates).
3. Enriches each: LinkedIn profile detail, recent posts or activity, current role context.
4. Scores: fit (1-5) against the role rubric, signal (1-5) of "would they actually move."
5. For top scores, drafts a short personalized outbound: references a specific thing about their work, names the role with one line on why you think they'd be a fit, asks if open to a 15-min chat.
6. Send cadence: 5-10/day so deliverability and LinkedIn limits don't get you flagged.

**Example prompt to your agent:** For role [slug] with ICP at talent-icp/<slug>.json: source 100 candidates from LinkedIn using the burner-account scraper. Enrich each with profile detail and recent activity. Score fit (1-5) and movability (1-5). For candidates ≥ 4 on both, draft a personalized first-touch in my voice: reference one specific thing from their work, name the role with one line on why I think they'd fit, ask for a 15-min chat. Stage drafts. Send max 8/day across email and LinkedIn DM (split by where you can find a verified address).

**Watch out for:**

- LinkedIn limits aggressive outreach. Stay way under the daily message cap. Burner accounts get nuked.
- Movability is a guess. Don't waste outreach on someone who just started at a Series C unicorn 3 months ago.
- Don't copy-paste references — the candidate will know. The "specific thing" has to be specific.

**Skill file:** linkedin-scraper, outreach-drafter, pre-call-research
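The gate-then-cap logic (advance only candidates scoring ≥ 4 on both fit and movability, stage at most 8 drafts per day) can be sketched as below. The candidate dicts, threshold, and cap are assumptions for illustration — tune them to your own ICP config:

```python
# Hypothetical sourcing gate: filter on both scores, then apply the
# daily send cap so the cap trims the weakest candidates, not the newest.

def stage_outreach(candidates: list[dict], min_score: int = 4, daily_cap: int = 8) -> list[dict]:
    qualified = [
        c for c in candidates
        if c["fit"] >= min_score and c["movability"] >= min_score
    ]
    # Highest combined score first, so anything cut by the cap was the weakest fit.
    qualified.sort(key=lambda c: c["fit"] + c["movability"], reverse=True)
    return qualified[:daily_cap]
```

The ordering before the cap matters: without it, a low daily cap would drop candidates in arrival order rather than by quality.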

### Tip 3.2 — Offshore VA hiring lane (specialized)

**What it does:** A distinct lane for the offshore admin/CS/research VA hire. The agent runs on job-board sources favored by offshore talent (the big VA boards, niche country-specific ones), filters for English level, time-zone compatibility, and category match, and pulls the long-list down to a manageable 10-15 strong candidates with full work-sample tests attached.

**Why it wins:** Offshore VA hiring has a 100:1 noise ratio on the major boards. Without an agent doing the first cut, you're spending 20+ hours per hire. With one, it's 90 minutes.

**Tools:** The major VA job boards' application destinations, an English-level test, a paid work-sample assignment.

**How to wire it:**

1. Post a role with a clear paid work-sample test attached (e.g., "research these 20 prospects and put them in this template — $50 for the assignment").
2. Agent reviews applications: filters obvious bots, scores English from the cover letter, scores experience match.
3. Auto-sends the work-sample assignment with payment instructions to the top 20-30.
4. Grades submissions against a rubric (Tip 4.1) when they come back.
5. Final list of 5-10 goes to you for an interview.

**Example prompt to your agent:** For role [VA slug] posted on [board], every day at 8am: pull new applications. Filter out obvious bots and copy-paste applications. Score English from the cover letter (1-5) and experience match (1-5). For applicants ≥ 4 on both, auto-send the paid work-sample assignment with payment instructions. Track submission status. On submission, grade against the rubric (Tip 4.1). Surface the final 5-10 for interviews. Don't auto-reject — Pass applicants get a polite rejection email after a 48h grace.

**Watch out for:**

- Paid work-samples filter way better than unpaid. Pay $30-100 and the signal triples.
- Time-zone overlap matters. A 4am-your-time hire works for niche tasks, not for tasks that need real-time coordination.
- Cultural fit takes the longest to evaluate. Don't shortcut the final interview.

**Skill file:** outreach-drafter, scope-analyzer

**Field note:** Some operators have published their own playbooks on which specific countries, boards, and screening filters work best for offshore admin/CS hires. Worth bookmarking one or two such playbooks from operators you respect, but the agent flow above is niche-neutral — apply your own country/board preferences in the ICP config.

## 4. Take-Home Assignment Grader

### Tip 4.1 — Take-home grader against rubric

**What it does:** Candidates submit a take-home. Your agent runs it (if it's code, executes it; if it's writing, reads it; if it's research, fact-checks it), scores against the rubric, and produces a graded report: what they did well, what they missed, where they were average.

**Why it wins:** Grading take-homes is the most under-resourced step in hiring. Most teams either don't read them carefully or have a junior person eyeball them. An agent that grades against a rubric consistently, with citations, gets you fairer hiring decisions and saves 1-2 hours per candidate.

**Tools:** Your take-home submission destination, a sandboxed code runner if applicable, the rubric.

**How to wire it:**

1. Per take-home, define a rubric in take-homes/<role>/rubric.md: criteria (5-10), weighting, what "5/5" looks like vs "3/5" vs "1/5."
2. When a submission lands, agent runs the assignment (if code, in a sandbox; if writing, reads it cold; if research, spot-checks 3-5 claims).
3. Scores each criterion with a citation to the submission ("said 'X' on line Y, which matches/misses criterion Z because…").
4. Produces a graded report and a final score.
5. Optionally compares to the top 3 prior submissions for calibration.

**Example prompt to your agent:** Watch take-homes/<role>/submissions/. For each new submission: run it according to the rubric (execute code in sandbox / read writing / verify research). Score each rubric criterion with a citation to the exact part of the submission that justifies the score. Output: a graded report per submission with overall score, plus a side-by-side comparison to the top 3 prior submissions for the same role. Send me a daily summary of new grades.

**Watch out for:**

- Don't auto-eliminate on the agent's score. Use it as a strong prior, not a final word.
- Sandboxed code execution has security implications. Run untrusted code in throwaway containers with no network access.
- Calibrate the grader by feeding it your past hires' submissions and seeing if it ranks them in the order they performed on the job.

**Skill file:** security-audit (sandboxed-execution pattern), scope-analyzer
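The "throwaway container, no network access" rule is worth pinning down concretely. A minimal sketch using Docker from Python — the image, resource limits, entrypoint name, and timeout are assumptions; adapt to the language of the take-home:

```python
# Hypothetical sandbox runner for untrusted take-home code:
# ephemeral container, no network, read-only mount, hard CPU/memory caps.
import subprocess

def sandbox_cmd(submission_dir: str, entrypoint: str = "main.py") -> list[str]:
    """Build a docker command that isolates the submission from host and network."""
    return [
        "docker", "run", "--rm",          # container is destroyed after the run
        "--network", "none",              # no network access at all
        "--memory", "256m", "--cpus", "1",
        "-v", f"{submission_dir}:/work:ro",  # submission mounted read-only
        "python:3.12-slim",
        "python", f"/work/{entrypoint}",
    ]

def run_submission(submission_dir: str, timeout: int = 60):
    # timeout kills runaway submissions; captured output feeds the graded report
    return subprocess.run(sandbox_cmd(submission_dir),
                          capture_output=True, text=True, timeout=timeout)
```

`--network none` and the read-only mount are the load-bearing parts: a submission that tries to phone home or overwrite its own files simply fails, and the failure shows up in the captured output.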

## 5. Reference Calls

### Tip 5.1 — Reference call summarizer

**What it does:** You do the reference call (or have someone on your team do it). The agent records, transcribes, and produces a structured summary: strengths confirmed, concerns raised, specific stories, and a read on the reference's overall tone (positive / neutral / lukewarm).

**Why it wins:** Reference call notes are usually a half-page of bullets that don't get re-read. A structured summary that maps the reference's answers back to the rubric items is the single best artifact for the final hiring decision meeting.

**Tools:** A meeting note-taker (Fathom, Otter, Granola, Read.ai), your role rubric, your candidate file.

**How to wire it:**

1. Before the reference call, the agent loads the candidate file and the rubric, drafts a 6-8 question reference guide tailored to the specific candidate (gaps in their resume, things you want confirmed).
2. The reference call gets transcribed.
3. After the call, the agent produces a structured summary: per rubric item, what the reference said with a direct quote.
4. Sentiment read on the reference's overall tone, with caveats ("references are biased to positive").
5. Highlights any red flag — moments of pause, vague answers, careful word choice.

**Example prompt to your agent:** Before the reference call for candidate [name]: load their file and the role rubric. Draft 6-8 tailored questions that probe rubric items and any resume gaps. Send me the question list before the call. After the call ends and the transcript drops: produce a structured summary — per rubric item, what the reference said with a direct quote. Sentiment read with caveats. Flag any moments of hesitation, vagueness, or careful phrasing. Save to candidates/<slug>/reference-<name>.md.

**Watch out for:**

- References are biased. Cross-check claims across two references.
- "She's smart" is meaningless. Force the agent to flag answers that lack specifics as "vague — needs follow-up."
- Backchannel references (someone the candidate didn't name) tell you 10x more. Hard to scale, worth the time on senior hires.

**Skill file:** fathom-transcripts, meeting-prep, consultation-recap

## 6. Onboarding Doc Generation

### Tip 6.1 — Day-one onboarding doc per hire

**What it does:** When you make an offer, your agent generates the new hire's day-one onboarding doc: tools they need access to, who their first 1:1s are with, the 30/60/90-day plan you wrote in the JD turned into actual milestones, and links to relevant internal docs.

**Why it wins:** A new hire's first impression is set in the first 48 hours. A clean onboarding doc waiting in their inbox on day one is the cheapest possible signal that you've got it together. Most teams send a Notion link the night before and call it done.

**Tools:** Your role's JD (Tip 1.1), your team's tool inventory, your team's org chart.

**How to wire it:**

1. On the offer_accepted webhook from your ATS: agent kicks off doc generation.
2. Pulls the role JD's 30/60/90 plan, turns each stage into concrete week-by-week milestones.
3. Looks up the tools list for the role, generates an access-request list for the IT/ops person.
4. Identifies 3-5 first-week 1:1s based on the org chart.
5. Drafts a welcome email from you, with the onboarding doc attached.
6. Stages everything for your approval, then fires 5 days before the start date.

**Example prompt to your agent:** On offer_accepted for role [slug]: generate the new hire's onboarding doc. Pull the JD's 30/60/90 plan and turn it into week-by-week milestones with specific deliverables. List required tools and accesses from tool-inventory/<role>.json. Identify 3-5 week-one 1:1 candidates from the org chart. Draft a welcome email from me. Stage everything in onboarding/<hire-name>/. Five days before the start date, send the welcome email and ping ops to provision the access list.

**Watch out for:**

- Tool access on day one is the highest-friction bottleneck. Hit ops 7 days in advance, not 1.
- The 30/60/90 plan is only useful if it gets reviewed at day 30. Schedule that review on the new hire's first day.
- Don't dump 200 links on a new hire. Curate the top 5 they actually need week one.

**Skill file:** client-onboarding (same pattern, different population), project-planning, email-followups
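The 30/60/90 → week-by-week expansion is mechanical enough to sketch. Assuming the plan is parsed into a dict keyed by day milestone (the even round-robin split across each stage's weeks is a heuristic of this sketch, not the source's prescription):

```python
# Hypothetical expansion of a 30/60/90 plan into week-by-week milestone buckets.

def weekly_milestones(plan: dict) -> dict:
    """plan = {30: [goals...], 60: [...], 90: [...]} -> {"week 1": [...], ...}"""
    out, week = {}, 1
    for day in (30, 60, 90):
        goals = plan.get(day, [])
        weeks_in_stage = 4 if day != 90 else 5  # roughly 13 weeks across 90 days
        for i in range(weeks_in_stage):
            # round-robin the stage's goals across its weeks
            out[f"week {week}"] = goals[i::weeks_in_stage]
            week += 1
    return out
```

In practice the agent would rewrite each bucket into specific deliverables rather than leave empty weeks empty, but the skeleton shows where each JD goal lands on the calendar.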

### How it all stacks

Hiring is a funnel like sales. JD → applicants → resume triage → take-home → interview →
reference → offer → onboard. The agent runs the early funnel almost entirely; you spend
your time on the final stages where judgment is irreplaceable.
Install order:
1. JD writer (Tip 1.1) and screening questions (Tip 2.2). Top-of-funnel quality controls. Cheap, immediate impact.
2. Resume triage (Tip 2.1). The single biggest time-saver in the section. Install this for the very next role you post.
3. Take-home grader (Tip 4.1). Install once you've established what a strong take-home submission looks like (need 5-10 past submissions to calibrate).
4. Onboarding doc generator (Tip 6.1). Install before your next offer.
5. Reference call summarizer (Tip 5.1). Install once you're hiring at a cadence where reference calls happen monthly+.
6. Outbound talent sourcing (Tip 3.1). Install when inbound stops surfacing the candidates you actually want.
7. Offshore VA lane (Tip 3.2). Spin up only when you're specifically hiring offshore admin/CS/research.

The compounding asset across all of this is the role rubric library. Every role you fill produces a rubric that the agent learned from. Two years in, you're hiring out of a system that knows your bar — not your gut on the day.

### Founder Presence
