The Cookbook · 01
Marketing
The deepest section of the cookbook. Content factory, image-gen, ads, SEO, and outbound lead-gen via scanning.
Marketing is where agents earn back their cost the fastest. Every other channel — sales, ops, finance — has a ceiling. Marketing scales with how much you can produce, distribute, test, and learn. That's exactly what an agent does cheaply. This section is the deepest in the cookbook on purpose. Read it in order. Each tip is a recipe — tools, prompts, cadence, and the skill file you can lift straight into your agent.
A few principles up front:
- Build skills, not one-off prompts. Every recipe below ends with "save this as a skill." That is the whole point. A prompt you type once helps you once. A skill loads every time the agent needs it.
- Cron the boring parts. If the agent has to wait for you to tell it to scrape, it won't get scraped. Schedule it.
- Approval gates on anything public. Drafting is automated. Posting is not, until the voice is dialed in.
- Train on competitors for ideas and visuals. Train on yourself for voice. This split matters.
1. The Content Factory Pipeline (the "dark factory")
The thesis: most of what kills content output is the cold start. Empty doc, no idea, no hook, no image. The dark factory removes the cold start. Every morning, your agent has already harvested fresh ideas from the people winning in your niche, scored them, drafted them in your voice, and put images next to them. You wake up and approve. This is the section that 10x's output. Everything else is incremental. This is not.
Tip 1.1 — Competitor scraping with yt-dlp + Whisper + YouTube Analytics API
What it does: Pulls down your top competitors' YouTube videos automatically, transcribes them with Whisper (the free open-source model is fine for most cases), and ranks them by performance via the YouTube Analytics API. You wake up to a list of every banger your competitors dropped, with full transcripts, ready to remix. Why it wins: Your competitors do the audience research, the topic validation, and the hook testing for you. Their viral videos are your free A/B test. Most operators "watch their competitors" by occasionally browsing the channel. That is not a system. This is.
Tools: yt-dlp, Whisper, YouTube Data API v3, YouTube Analytics API, a Google Sheet for the running log.
How to wire it:
1. Sign up for a YouTube Data API project (console.cloud.google.com) and a free Groq account for transcription. Then hand both to your agent: "I have a YouTube Data API project and a Groq account — do everything you need to use them. Install yt-dlp and any other tooling you require, store the keys safely, and ship me a daily competitor scraper." Free quota covers 5-10 competitors easily.
2. Give your agent a list of competitor channels — handle, channel ID, what you watch them for.
3. Tell it to write a script that:
   - Pulls the latest N videos per channel via youtube.search.list (or playlistItems.list on the uploads playlist — cheaper on quota).
   - Pulls stats via youtube.videos.list (part=statistics,snippet,contentDetails).
   - Scores each video against the channel's median (views per day since upload is the cleanest metric).
   - Writes title, channel, views, view velocity, hook (first 200 words), full transcript, and URL into a Google Sheet.
4. Cron it: 0 6 * * * (daily 6am, local). One agent run finds, transcribes, and logs everything.
Example prompt to your agent:
Build me a daily YouTube competitor scraper. Inputs are in competitors.json — a list of channels with handle, channel ID, and notes on what they're known for. Every day at 6am, pull their last 30 videos, score each one by views-per-day-since-upload against that channel's median, download audio for any video above 1.5x median, transcribe with Whisper, and append a row to my "Competitor Bangers" sheet with title, channel, score, hook (first 200 words of transcript), full transcript, and URL. Dedupe by video ID. Save the whole thing as a skill called competitor-yt-scraper.
Watch out for:
- YouTube API quota — search.list is 100 units per call, videos.list is 1. Use playlistItems.list on the channel uploads playlist when you can.
- Whisper transcription quality degrades on music-heavy intros. Strip the first 5 seconds if you see junk.
- Don't republish their content. You're harvesting ideas, hooks, and structure, not copying.
Skill file: _skills-anonymized/youtube-pipeline/ (contains the full YT API patterns) plus a custom competitor-yt-scraper skill you build per niche.
Tip 1.2 — Daily cron: best-of OR every-drop mode
What it does: Two flavors of the scraper above. "Best-of" wakes the agent only when a competitor publishes something that outperforms their median. "Every-drop" logs everything every single one of them posts so you can scan the trend line.
Why it wins: Most people scrape too much (noise) or too little (miss the wins). Splitting the cron by mode lets you stay close to specific competitors (every-drop on your 2-3 main rivals) without drowning in mid-tier creators (best-of on the next 10).
Tools: Same as 1.1, plus a config file with a mode field per competitor.
How to wire it:
1. Add a "mode": "every_drop" or "mode": "best_of" field per competitor in your config.
2. In best-of mode, the scraper only triggers Whisper transcription and the notification if view_velocity > median * threshold (1.5x is a good start).
3. In every-drop mode, transcribe everything but tag it "priority": "low" if it's mid-performing.
4. The agent sends you a morning digest: top 5 of the day across all competitors, sorted by score, each with a one-line "what they're testing here" note.
5. Cron: 0 6 * * * — daily. Add a 0 12 * * * for an afternoon scan during high-output days.
Example prompt to your agent:
Every morning at 6am, run my competitor-yt-scraper. Then look at everything new from the last 24 hours. Pick the top 5 by view velocity. For each one, give me: the title, the channel, the score, the hook (first 30 seconds of transcript), and your one-sentence read on what they're doing differently. Send it to me on Telegram. Don't include videos I've already seen — track them in seen-videos.json.
Watch out for:
- View velocity stabilizes around day 3. A video that's 4 hours old will always look like a banger; gate on age > 24h before scoring.
- Channels with one or two huge outliers will pull the median up so much that real bangers look mid. Use percentile-of-channel instead of multiple-of-median if a channel is uneven.
Skill file: Same as 1.1 — the mode flag is a parameter, not a separate skill.
Tip 1.3 — Script-writing skill from competitor transcripts
What it does: Your agent reads the transcripts of your competitors' best-performing videos and saves a skill that captures how they write scripts — structure, hook patterns, pacing, payoff moves. Then you point at any topic and it drafts a script in that structure.
Why it wins: You don't need to copy what they say. You want to copy how they keep attention. A 12-minute video that holds 60% retention has structural moves you can lift. The agent extracts those moves and codifies them.
Tools: Whatever you used in 1.1 (transcripts already in your sheet), plus your agent's skill creator.
How to wire it:
1. Pick 10-20 of the best-performing transcripts in your niche from 1.1.
2. Feed them to the agent with this instruction: "Read these. Identify the structural patterns — hook style, what happens in the first 30 seconds, when they put the payoff, how they handle drop-offs at the 2-minute and 5-minute marks, how they close. Don't summarize the content. Describe the shape."
3. Have it save that pattern analysis as a script-writing-niche skill.
4. Then point at a new topic: "Use the script-writing-niche skill to draft a 10-minute video script on [topic]. Use the hook style from videos 3, 7, and 12 in the corpus."
5. Iterate on the skill weekly — every time a new banger drops, feed it back in.
Example prompt to your agent:
Read the top 20 video transcripts in my Competitor Bangers sheet (filter: score > 2.0, niche = AI agents). Don't summarize them. Identify the patterns in: (1) the first 30 seconds — what hook structures recur, (2) where they put the first big payoff, (3) how they handle the 2-minute and 5-minute retention cliffs, (4) how they close. Write all of that up as a skill called niche-script-structure. Cite specific videos for each pattern.
Watch out for:
- The skill will get bloated if you let it pull patterns from every transcript ever. Cap it at 20-30 top performers and rotate.
- Hooks age fast. Re-train this skill every 4-6 weeks.
- Don't fuse this with your voice skill (Tip 1.4). Structure and voice are different. Mixing them blunts both.
Skill file: _skills-anonymized/script-polish/ is the closest template. Build your niche-specific version on top.
Tip 1.4 — Voice skill per platform from competitor posts (and your own)
What it does: A skill per platform that captures the voice that wins on that platform. The trick: train on competitors for the platform's voice norms, but train on yourself for your specific voice. Two layers.
Why it wins: LinkedIn voice ≠ Twitter voice ≠ Skool voice ≠ YouTube script voice. Most operators write everything the same and wonder why one platform pops and the other dies. Per-platform voice skills are the cheapest, most impactful fix in this whole section.
Tools: A scraper per platform (LinkedIn via Scrapling + Playwright, Twitter via Nitter or paid API, Reddit via the public JSON API), your agent's skill creator.
How to wire it:
1. Platform norms first. Tell your agent to identify 5-10 creators who crush on that platform in your niche and scrape their top 50 posts each.
2. Tell the agent: "Don't copy these. Identify what makes them legible as platform-native. What hooks work. What length wins. What formatting (line breaks, bullets, emoji density). What CTAs land." Save that as a per-platform norms skill.
3. Then run the same pass on your own best posts and save a separate personal-voice skill: the norms layer and the voice layer get applied together at draft time.
Watch out for: Don't merge "norms" and "mine" into one skill. Keeping them separate lets you test platforms without retraining your whole voice.
Tip 1.5 — RSS feeds from niche news sites
What it does: A real-time idea pipeline. Your agent watches every major news source in your niche via RSS, summarizes new articles, scores them by relevance and likely engagement, and feeds the survivors into the content draft queue.
Why it wins: Trending news posts on LinkedIn outperform evergreen posts roughly 3-5x because the algorithm pushes timely content. Most operators miss the window because they don't see the news for two days. The agent sees it in five minutes.
Tools: A free RSS reader you can hit programmatically (Feedbin, FreshRSS self-hosted, or just feedparser in Python), a Google Sheet, your agent.
How to wire it:
1. List 10-20 RSS feeds in your niche. Anthropic blog, OpenAI blog, Google AI blog, niche news sites, key Substacks. Include big competitor blogs.
2. Cron a fetcher every 30 minutes that pulls new entries, dedupes by URL, and writes them to a sheet.
3. Have the agent score each one: relevance to your audience (1-5), timeliness (1-5), and whether you have a unique take (1-5).
4. Filter to score >= 12. Those become candidate posts.
5. Run them through your platform voice skills (Tip 1.4) to draft posts.
6. Stage drafts in your content drafts folder by 8am. You approve over coffee.
Example prompt to your agent:
Set up an RSS watcher. Feeds are in rss-feeds.json. Every 30 minutes, fetch new entries, dedupe by URL, write to my "RSS Inbox" sheet. Every morning at 7am, score everything new from the last 24 hours: relevance to my audience (1-5), timeliness (1-5), do I have a unique take (1-5). For anything scoring 12+, draft a LinkedIn post and a tweet using my voice skills. Stage drafts at /content-drafts/YYYY-MM-DD.json. Tell me what's queued before I open Telegram.
Watch out for:
- Most RSS-driven content sounds AI-generated because the agent over-summarizes the article. Force a hook + your take + a question. The article is the spark, not the post.
- Some sites block automated RSS — use a real reader as a proxy.
- News cycles are short. If the article is more than 36 hours old, the post will land flat. Cron tight.
Skill file: _skills-anonymized/linkedin-post-writer/ already runs this exact loop in production. Lift it.
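The fetch-and-dedupe step can be sketched with nothing but the standard library. Real-world feeds are messier; `feedparser` also handles Atom and malformed XML, which this plain RSS 2.0 sketch does not:

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Pull title/link/pubDate out of a plain RSS 2.0 feed document."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title", ""),
            "url": item.findtext("link", ""),
            "published": item.findtext("pubDate", ""),
        }
        for item in root.iter("item")
    ]

def new_entries(entries: list[dict], seen_urls: set[str]) -> list[dict]:
    """Dedupe by URL across fetches; mutates seen_urls so the next run skips these."""
    fresh = [e for e in entries if e["url"] not in seen_urls]
    seen_urls.update(e["url"] for e in fresh)
    return fresh
```

Persist `seen_urls` between runs (a JSON file is enough) or every 30-minute fetch will re-surface the same stories.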
Tip 1.6 — Synthesis: agent reads all three sources and pitches you the best ideas
What it does: The capstone. Every morning, your agent reads (a) yesterday's competitor video bangers, (b) the RSS feed scores, (c) the conversations and DMs you had yesterday — and pitches you the 3 best content ideas across all sources with a draft hook for each.
Why it wins: You stop choosing. The agent has more information than you do — it saw everything overnight, you didn't. It cross-references the three sources to find ideas where there's actual signal (something a competitor just covered + a news story breaking + a DM that asked about it = post that prints).
Tools: Everything from 1.1-1.5, plus your conversation log (your messaging app exports or your CRM).
How to wire it:
1. Make sure all three pipelines are landing data in known places: competitor bangers sheet, RSS inbox sheet, conversation log.
2. Build a morning-pitch skill that:
   - Reads all three.
   - Looks for overlap (a topic appearing in 2+ sources is a strong signal).
   - Scores each idea on: novelty (have I posted this recently?), differentiation (do I have a take that's not in the source?), urgency (does this expire if I don't post today?).
   - Picks the top 3.
   - Drafts a hook + first paragraph for each.
3. Send the 3 pitches to you on Telegram or Slack at 7am with a "yes/no/another" prompt.
4. On "yes," the agent runs the relevant platform voice skill and produces the full draft. On "no," it picks the next-best. On "another," it pitches three more.
Example prompt to your agent:
Build a morning-pitch skill. Every morning at 7am: read yesterday's top 5 competitor video bangers, top 5 RSS articles by score, and my conversations from the last 48 hours. Cross-reference. Find 3 ideas where there's overlap or where one source uniquely calls out something my audience cares about. For each, write the strongest hook you can in my voice plus the first paragraph. Send to me on Telegram with "yes/no/another" options. If I say yes, run the full platform voice skill and stage the post.
Watch out for:
- The agent will pitch safe ideas if you let it. Force it: "If your top pick wouldn't make me uncomfortable to post, pick something else."
- Cross-source signal is a leading indicator, not a guarantee. Track which pitches you accept and which perform. Feed it back monthly.
- Don't let the morning pitch become 10 ideas. Three. Always three.
Skill file: _skills-anonymized/content-engine/ is the closest analog. Spin up morning-pitch on top.
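The overlap check is simple set bookkeeping once each pipeline has reduced its items to topic tags (that reduction is an LLM step in practice, not shown here):

```python
def cross_source_hits(sources: dict[str, list[str]]) -> list[tuple[str, list[str]]]:
    """Topics seen in 2+ sources, strongest signal first.

    Input maps a source name (e.g. "competitors", "rss", "dms") to the
    normalized topic tags it produced overnight.
    """
    topic_map: dict[str, set[str]] = {}
    for source, topics in sources.items():
        for topic in topics:
            topic_map.setdefault(topic.strip().lower(), set()).add(source)
    hits = [(t, sorted(s)) for t, s in topic_map.items() if len(s) >= 2]
    # most sources first, then alphabetical for stable output
    return sorted(hits, key=lambda h: (-len(h[1]), h[0]))
```

A topic hitting all three sources is the "post that prints" case; two sources is still worth pitching.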
2. The Image-Gen Trick
LinkedIn and Twitter both reward image posts heavily — sometimes 2-3x text-only reach. Most operators either skip images (lazy) or use generic stock (worse than no image, makes you look like an MBA). The fix is to generate images per post, and to use a baseline corpus of already-proven competitor images as the seed.
Tip 2.1 — Auto-generate images from scripts via ChatGPT API (or fal.ai)
What it does: Every drafted post gets an image generated alongside it. The agent reads the post, writes an image prompt that captures the post's hook, generates the image via API, and stages it next to the draft.
Why it wins: You don't have to think about images. You scroll the morning drafts and they all come with visuals. Approval friction drops to zero.
Tools: OpenAI gpt-image-1 (or DALL-E 3, or fal.ai for cheaper alternatives), a folder in your workspace for staged images.
How to wire it:
1. After the agent drafts a post via Tip 1.4 or 1.6, append a step: "Read this post. Identify the visual metaphor or scroll-stopping concept. Write an image prompt. Generate the image."
2. Set a default style guide for your image prompts — color palette, lighting, composition rules — so all your images look like yours and not random AI slop. Save this as an image-style-guide snippet that gets injected into every prompt.
3. Save images to content-drafts/YYYY-MM-DD/images/. Reference them in the draft JSON.
4. When you approve a post, the image goes with it.
Example prompt to your agent:
After you finish drafting today's LinkedIn posts, for each post: read it again, identify the strongest visual concept in the post (not literal — metaphorical, scroll-stopping), write a one-paragraph image prompt that includes my style guide from image-style-guide.md, call gpt-image-1 (1024x1024), save to content-drafts/YYYY-MM-DD/images/linkedin-N.png. Attach the path to the draft JSON.
Watch out for:
- Without a style guide, every image will be a different aesthetic and your feed will look schizophrenic. Lock the style guide early.
- Don't literalize the post. "AI agents taking over jobs" → generated as a robot in a suit = terrible. Force metaphor or contrast.
- Cost: gpt-image-1 is around $0.04 per image, so two posts a day is only a few dollars a month. fal.ai is cheaper (~$0.01 per image) for similar quality.
Skill file: _skills-anonymized/image-gen/
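The two bookkeeping pieces, style-guide injection and dated staging paths, can be sketched like this. The generation call itself is omitted, and all names here are illustrative:

```python
from datetime import date
from pathlib import Path

def build_image_prompt(visual_concept: str, style_guide: str) -> str:
    """One locked style guide injected into every prompt keeps the feed coherent."""
    return (
        f"{visual_concept}\n\n"
        f"Style requirements (apply to every image):\n{style_guide.strip()}"
    )

def stage_image_path(platform: str, index: int, base: str = "content-drafts") -> Path:
    """content-drafts/YYYY-MM-DD/images/<platform>-<n>.png, created on demand."""
    folder = Path(base) / date.today().isoformat() / "images"
    folder.mkdir(parents=True, exist_ok=True)
    return folder / f"{platform}-{index}.png"
```

The draft JSON then stores the returned path, so approval and posting always travel with the image.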
Tip 2.2 — KILLER MOVE: the competitor LinkedIn picture remix folder
What it does: Build a folder of the highest-performing image posts in your niche. When the agent generates an image, it feeds one of those proven images into the image gen as a reference, with a prompt to remix it in your style for your post. The base image is already validated to stop scrolls. You're just retargeting.
Why it wins: Generating from a blank slate, the model produces median output. Generating from an image that already won, with a remix instruction, gives you the algorithmic advantage of the proven format with your own spin. This is the single biggest unlock in image gen and almost nobody does it. In one line: "Giving ChatGPT image gen a picture from a post that's already performed well and telling it to tailor it for you is going to be way better than just saying 'make an image for this post.'"
Tools: LinkedIn scraper (Scrapling + Playwright), an image download routine, ChatGPT image gen with image input (or fal.ai's image-to-image endpoint), your style guide.
How to wire it:
1. Pick 10-20 LinkedIn creators in your niche whose image posts consistently rip.
2. Scrape their last 100 posts each. Filter to image posts with above-median engagement.
3. Download every image. Store in /content-references/linkedin-bangers/
Skill file: Custom — build on _skills-anonymized/image-gen/ and _skills-anonymized/linkedin-scraper/ together.
3. Ads
A note up front: most of this section is synthesis. The author flagged ads as an area where they haven't run enough campaigns to have lived experience like they do on content. So everything below is general best-practice plus what's actually proven in agent-driven workflows, not "I did this and it 10x'd." Specific claims that are operator-tested are flagged. Everything else: try it, measure, keep what works. Open questions are tracked in _openquestions.md . The framework: ads burn money fast, but the creative testing loop and the kill/double loop are where money is actually made or lost. Both are agent-shaped problems. The agent doesn't get tired of writing the 47th variant and doesn't get attached to the underperforming ad.
Tip 3.1 — Creative testing automation (Meta/TikTok/Google)
What it does: Your agent generates 20-50 ad variants per campaign — different hooks, different headlines, different visuals — uploads them as a structured experiment, monitors performance every 4-6 hours, kills underperformers, and doubles down on winners.
Why it wins: Manual creative testing caps at maybe 5 variants per week per human. With an agent, the cap is your ad spend, not your time. The platforms reward accounts that feed them creative variety; the algorithm optimizes faster.
Tools: Meta Marketing API, Google Ads API, TikTok Marketing API, your image gen and copy stack from sections 1 and 2.
How to wire it:
1. Get API access (Meta requires app review; TikTok requires partner approval; Google is the easiest of the three).
2. Define the campaign structure: audience, objective, budget cap. Lock these.
3. Have the agent generate the creative matrix: 4 hooks × 3 visuals × 2 CTAs = 24 variants.
4. Upload as a single Advantage+ campaign (Meta) or Performance Max (Google). Let the algorithm allocate spend.
5. Every 4 hours, cron the agent: pull the last 24h of data, identify variants with statistically meaningful results (use a min-spend gate, e.g. $30 spent before any judgment), pause anything CPL > 2x median, generate 3 new variants in the style of any variant outperforming median by 1.5x+.
6. Daily summary: spend, CPL, top creative, kills, new variants spun up.
Example prompt to your agent:
For campaign [ID], every 4 hours: pull the last 24h of variant-level performance via the Meta Marketing API. For any variant that's spent $30+ and has CPL > 2x median, pause it. For any variant outperforming median CPL by 1.5x+, spin up 3 new variants that copy its hook structure but vary the visual or headline. Maintain at least 15 active variants at all times. Cap daily variant spawns at 10. Send me a daily 8am summary: spend, total leads, CPL, top performer, kills, new variants live.
Watch out for:
- Min-spend gate is critical. Don't kill a variant after $5; you're killing on noise.
- Meta's algorithm hates frequent edits to active campaigns. Let new variants run 48 hours before judging.
- Account flags: don't let the agent spawn 100 variants in an hour. Rate-limit it.
- Every API has a creative review queue. The agent should flag rejections to you and not retry until it knows why.
Skill file: Pattern only — no skill in this version. Tell your agent to write one from this recipe,
using the platforms' SDKs (Meta Marketing API, Google Ads API, TikTok Ads API).
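The kill/clone decision inside the 4-hour loop reduces to a small pure function. The variant fields here are assumptions about your own stats pull, not the Meta API's response shape:

```python
from statistics import median

def review_variants(variants: list[dict], min_spend: float = 30.0,
                    kill_mult: float = 2.0, win_mult: float = 1.5) -> dict:
    """One pass of the loop. Each variant: {"id", "spend", "leads"}.

    Only variants past the min-spend gate are judged at all; CPL is judged
    against the median of the currently converting variants.
    """
    judged = [v for v in variants if v["spend"] >= min_spend]
    converting = [v for v in judged if v["leads"] > 0]
    if not converting:
        return {"kill": [], "clone": []}
    cpl = {v["id"]: v["spend"] / v["leads"] for v in converting}
    med = median(cpl.values())
    kill = [v["id"] for v in judged
            if v["leads"] == 0 or cpl[v["id"]] > kill_mult * med]
    # "outperforming median by 1.5x" means CPL under median / 1.5
    clone = [vid for vid, c in cpl.items() if c < med / win_mult]
    return {"kill": kill, "clone": clone}
```

Everything the agent pauses or clones should still be logged; the min-spend gate is what keeps it from killing on noise.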
Tip 3.2 — Audience research via competitor ad libraries
What it does: Meta and TikTok publish all active ads. Your agent scrapes your competitors' ad libraries weekly, classifies which creatives are running long (= probably working), and tells you what offers, hooks, and audiences your competition is paying to test.
Why it wins: This is free competitive intelligence on a level that doesn't exist in any other channel. If a competitor has been running the same ad for 6 weeks, that ad is profitable. Steal the structure.
Tools: Meta Ad Library (https://www.facebook.com/ads/library/), TikTok Creative Center, Google Ads Transparency Center, agent-browser for scraping, your competitor list.
How to wire it:
1. Identify 5-10 competitors. Pull their Meta Ad Library page weekly.
2. Have the agent scrape: every active ad, first-seen date, current run duration, ad creative, ad copy.
3. Flag ads running 30+ days as "winners" — they wouldn't still be running if they weren't profitable.
4. Categorize by hook type, offer type, audience signal (the ad copy usually telegraphs the targeting).
5. Weekly digest to you: "Competitor X has 4 long-running ads, all using [hook pattern], all promoting [offer]. Worth testing a variant in your niche?"
Example prompt to your agent:
Every Monday at 8am, use agent-browser to scrape the Meta Ad Library page for each competitor in ad-competitors.json. Capture every active ad: text, image/video URL, first-seen date, run duration. Save to /ad-intel/YYYY-MM-DD/. Flag any ad running 30+ days as a winner. Send me a digest: top 5 long-runners across all competitors, my one-sentence read on why each is working, and the hook/offer/audience signals. Suggest 3 angles I could test.
Watch out for:
- Meta Ad Library shows all a brand's ads including non-acquisition ones (recruiting, PR, etc). Filter to lead-gen / conversion intent.
- "Long-running" is a heuristic, not gospel. A 6-week ad with low spend isn't proven. The library doesn't show spend, so triangulate with the ad copy quality.
- Don't copy creatives. Copy structure. Same legal logic as Tip 2.2.
Skill file: Composite — builds on agent-browser and _auto-competitor-intel. Pattern only — no standalone skill in this version. Tell your agent to wire one from these.
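The 30-day winner flag from step 3 is worth doing with real date math rather than string comparison. `first_seen` is your scraper's field name, not something the Ad Library exposes directly:

```python
from datetime import date

def flag_long_runners(ads: list[dict], today: date, min_days: int = 30) -> list[dict]:
    """An ad still live after 30+ days is probably paying for itself."""
    winners = []
    for ad in ads:
        run_days = (today - date.fromisoformat(ad["first_seen"])).days
        if run_days >= min_days:
            winners.append({**ad, "run_days": run_days})
    # longest-running first: these are the most validated formats
    return sorted(winners, key=lambda a: -a["run_days"])
```

Feeding `today` in explicitly (instead of calling `date.today()` inside) keeps the weekly digest reproducible and testable.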
Tip 3.3 — Copy at scale, scored before launch
What it does: Your agent generates 30+ ad copy variants from a single brief, scores them against historical winners (your own and your competitors'), and only the top 10 ever make it to the upload step.
Why it wins: Most ad accounts launch 3-5 copies because a human wrote them. Half are obviously weak. With an agent generating 30 and scoring against a real corpus, the floor on launched copy goes way up.
Tools: Your copywriting skill (specifically tuned for your category), a scoring rubric, optionally a frontier model as the judge.
How to wire it:
1. Build an ad-copy-judge skill: feed it 50-100 winning ads from your category (from Tip 3.2 plus your own past winners). Have it identify hook patterns, length distributions, CTA styles. Save as a scoring rubric.
2. When briefing a new campaign, the agent generates 30 variants using a separate ad-copywriter skill (different from your organic content skills — ad copy has its own conventions).
3. The judge scores all 30 against the rubric: hook strength, clarity, CTA, offer specificity.
4. Top 10 promoted to the upload queue. Bottom 20 logged for the agent to learn from.
Example prompt to your agent:
For campaign [name], generate 30 ad copy variants using the ad-copywriter skill. Constraints: 125 char limit on primary text, 27 char limit on headline, must include offer "[offer]" and CTA "[CTA]". Then run all 30 through the ad-copy-judge skill. Output a ranked list with scores. I'll pick the top 10 to launch.
Watch out for:
- Don't use your organic LinkedIn voice for ads. They're different jobs. Ad copy is direct response; LinkedIn is brand.
- The judge will get good at scoring its own outputs. Periodically refresh the corpus with new winners or it'll converge on a local optimum.
- Test the judge by feeding it known winners and known losers and seeing if it ranks them right. If it doesn't, the rubric is wrong.
Skill file: Pattern only — no skill in this version. Tell your agent to write a pair (ad-copywriter + ad-copy-judge) per category from this recipe.
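The gate-then-rank shape looks like this. The inline scorer is a toy stand-in for the LLM judge; the char-limit gating and top-N promotion are the point:

```python
def rank_copy(variants: list[str], offer: str, cta: str,
              max_len: int = 125, top_n: int = 10) -> list[tuple[float, str]]:
    """Score every variant, enforce the platform char limit, keep only the top N."""

    def score_variant(text: str) -> float:
        # toy rubric: rewards the required offer/CTA and a question hook;
        # in practice this is an LLM call against the ad-copy-judge rubric
        score = 0.0
        if offer in text:
            score += 2.0
        if cta in text:
            score += 2.0
        first_line = text.split("\n", 1)[0].strip()
        if first_line.endswith("?"):
            score += 1.0
        return score

    eligible = [v for v in variants if len(v) <= max_len]  # hard constraint first
    ranked = sorted(((score_variant(v), v) for v in eligible), reverse=True)
    return ranked[:top_n]
```

Logging the bottom variants alongside the winners is what lets the judge's corpus improve over time.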
Tip 3.4 — Kill/double rules on cron
What it does: A hard-coded ruleset that the agent runs every few hours: if CPL > X for Y days, kill. If ROAS > X for Y days, double budget. No emotion, no "let me give it one more day."
Why it wins: Most accounts bleed money on the third "let me give it one more day." The agent doesn't have a third day. It has rules and it executes them.
Tools: Meta/Google/TikTok APIs, a config file with the rules, cron.
How to wire it:
1. Define your rules in ad-rules.json. Example:
   - Kill: spend > $50 AND no conversion AND CPC > 2x account median.
   - Kill: CPL > 1.5x target for 3 days running.
   - Double: ROAS > 2x target for 3 days AND daily spend not at budget cap.
   - Pause + alert: any campaign at 95% of daily budget cap by noon.
2. Cron every 4 hours. Agent pulls data, applies rules, takes actions, logs every action.
3. Daily summary: what got killed, what got doubled, what's flagged.
4. Hard limit on doubling — never let the agent more than 2x a budget per 24h period without explicit approval.
Example prompt to your agent:
Every 4 hours, run ad-rules.json against every active campaign. Apply: kill rules immediately, double rules pending my confirmation (send me a Telegram message with a yes/no), pause+alert rules immediately with a notification. Log every action to /ad-actions/YYYY-MM-DD.json. Daily summary at 9am.
Watch out for:
- Days-of-week effect is real on most accounts. Don't kill on a Monday if the campaign is fine Tue-Sun.
- Budget doubling can break delivery on Meta. Increase 20% per day, not 100%.
- The agent should never unpause without human approval. Pausing is reversible by hand; unpausing into a dead campaign is not.
Skill file: Cron pattern modeled on pipeline-closer. Pattern only — no standalone skill in this version. Tell your agent to write one from this recipe.
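A sketch of the rules engine under illustrative field names (your own ad-rules.json and stats export will differ). Note the double action is only ever emitted as pending approval, never applied:

```python
def apply_rules(campaign: dict, rules: dict) -> list[dict]:
    """Evaluate one campaign's daily stats against an ad-rules.json style config."""
    actions = []
    cpl = campaign["spend"] / campaign["leads"] if campaign["leads"] else float("inf")
    if campaign["spend"] > rules["kill_min_spend"] and campaign["leads"] == 0:
        actions.append({"action": "kill", "reason": "spend, no conversions"})
    elif (campaign["days_over_cpl_target"] >= rules["kill_days"]
          and cpl > rules["kill_cpl_mult"] * campaign["cpl_target"]):
        actions.append({"action": "kill", "reason": "CPL over target too long"})
    if (campaign["days_over_roas_target"] >= rules["double_days"]
            and campaign["daily_spend"] < campaign["budget_cap"]):
        # doubling is never auto-applied: it goes out as a yes/no ping
        actions.append({"action": "double", "needs_approval": True})
    return actions
```

Keeping the rules in config rather than code means you tune thresholds without touching the cron job, and every action carries its reason for the daily summary.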
Tip 3.5 — UGC sourcing pipeline
What it does: Your agent monitors review sites, your DMs, your CRM, and your support tickets for content that reads as genuine user testimonial. It flags candidates, drafts an outreach to the person asking permission to use the quote in an ad, and feeds approved quotes into the ad copy and creative pipeline.
Why it wins: UGC ads outperform polished branded ads by 30-50% in most categories. The bottleneck is sourcing. The agent removes the sourcing tax.
Tools: Review monitoring (G2, Capterra, App Store, Google reviews), your CRM (Tip 3.x uses Attio's API), DM monitoring, your outreach drafter.
How to wire it:
1. Connect the agent to every place customers say things about you: reviews, support, DMs, post-call surveys.
2. Daily scan: pull anything new, classify as positive/neutral/negative, score positives by "would this be a great ad quote" (1-5).
3. For anything scoring 4-5: draft a permission-to-use message in your voice, send to the customer (with your approval).
4. On permission granted: feed the quote into Tip 3.3's copywriter as a hook seed.
5. Pair with selfie-style photo or video if the customer is willing.
Example prompt to your agent:
Every day at 5pm: scan G2, Capterra, recent Cal.com bookings post-call surveys, my
last 48h of DMs, and my support inbox for any new positive customer feedback. Score
each by "ad quote potential" (1-5). For anything 4+, draft a permission-to-use message
in my voice. Show me before sending. On permission granted, log the quote to /ugc-quotes/approved/ for the ad copywriter to use.
Watch out for:
- Always get explicit written permission. Some platforms (App Store) auto-publish reviews but that doesn't mean you can put the reviewer's name on a Meta ad.
- Don't doctor quotes. Trim for length, never paraphrase.
- The agent will overscore at first. Calibrate it by feeding it 10 known-great quotes and 10 mid quotes and tuning the rubric.
Skill file: Builds on `outreach-drafter`. Pattern only — no standalone skill in this version.
Tell your agent to write one from this recipe.
Tip 3.6 — Landing page testing at agent speed
What it does: Every ad variant points to a landing page variant. The agent generates landing page variants (hero, copy, social proof order, CTA wording), deploys them, measures CVR per variant, and shuts down losers same as the ad creative loop.
Why it wins: CVR doubling = CPL halving. Most accounts test one landing page. The agent tests 5.
Tools: Your site framework (Next.js + Vercel is the proven combo here), feature flags or path-based variant routing, GA4 events.
How to wire it:
1. Build the page once with a config-driven variant system (variant name + JSON config controls hero copy, image, social proof order, CTA).
2. Define 5 variants per campaign.
3. The agent deploys each variant to a unique path, sets up GA4 events for the funnel, and reports CVR.
4. Same kill/double logic as 3.4. Min-traffic gate (300 visitors per variant) before any judgment.
5. Auto-promote the winner to default after stat sig.
Example prompt to your agent:
For campaign [name], generate 5 landing page variants. Deploy each to /lp/[name]/v1 through v5. Configure GA4 funnel events. Pull CVR data every 6 hours. After each variant has 300+ visits, identify the winner with a chi-square test (p < 0.05). Promote the winner to /lp/[name]/ as the default. Kill the losers. Send me a summary daily.
Watch out for:
- Don't test 5 things at once on the same page. One major variable per variant or you can't tell what won.
- Mobile vs desktop CVR can differ 3x. Always segment by device before declaring a winner.
- Vercel deploy time + GA4 latency = roughly 60-90 minutes from "deploy" to "I can read the data." Plan around it.
Skill file: _skills-anonymized/netlify/ (or vercel) plus _skills-anonymized/site-analytics/. Build a landing-page-tester skill on top.
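The chi-square winner check doesn't need scipy for the 2x2 case; 3.841 is the df=1 critical value at p = 0.05, and the traffic gate runs first:

```python
def chi_square_2x2(conv_a: int, visits_a: int, conv_b: int, visits_b: int) -> float:
    """Pearson chi-square statistic on convert / no-convert counts for two variants."""
    table = [[conv_a, visits_a - conv_a],
             [conv_b, visits_b - conv_b]]
    total = visits_a + visits_b
    col = [table[0][0] + table[1][0], table[0][1] + table[1][1]]
    row = [visits_a, visits_b]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            if expected:  # guard against a zero column (no conversions anywhere)
                stat += (table[i][j] - expected) ** 2 / expected
    return stat

def has_winner(conv_a: int, visits_a: int, conv_b: int, visits_b: int,
               min_visits: int = 300) -> bool:
    """Min-traffic gate first, then significance at p = 0.05 (df = 1)."""
    if visits_a < min_visits or visits_b < min_visits:
        return False  # not enough traffic to judge either way
    return chi_square_2x2(conv_a, visits_a, conv_b, visits_b) > 3.841
```

With five variants, compare each challenger against the current leader pairwise rather than running one omnibus test; it maps directly onto the promote/kill actions.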
4. SEO
SEO is the channel where agents have the cleanest advantage. It's repetitive, data-heavy, rewards consistency, and the tools (Search Console, GA4, Ahrefs, Semrush) all have APIs. Anyone doing SEO without an agent is bringing a butter knife. The loop: research competitors, find the gaps, write the content, deploy it, measure, iterate. All five steps are agent-shaped.
Tip 4.1 — Competitor SEO research + copy strategy
What it does: Your agent picks your top 5 competing domains, pulls their ranking keywords from Ahrefs or Semrush, classifies the keyword landscape, finds the gaps your competitors haven't claimed, and proposes a content plan. Why it wins: You stop guessing what to write about. You're writing the things your competitors already proved have traffic, plus the things they missed. Tools: Ahrefs API (preferred for deep data) or Semrush API (cheaper, lighter), agent-browser for SERP scraping where APIs don't reach, your agent. How to wire it: 1. Tell your agent which 5-10 competitor domains to track. 2. Have the agent pull each one's top 500 ranking keywords (/site-explorer/organic-keywords on Ahrefs, equivalent on Semrush). 3. Classify by intent (info / commercial / transactional) and by difficulty.
4. Cross-tabulate: which keywords does every competitor rank for? (table stakes — you need these). Which does only one rank for? (opportunity). Which do none of them rank for but still have search volume? (the gold). 5. Output a content plan with 30-50 page titles, target keyword for each, target intent, and a one-line angle. Example prompt to your agent: Run a competitor SEO gap analysis. Competitors are in seo-competitors.json. Pull each one's top 500 ranking keywords via Ahrefs API. Classify by intent and difficulty. Find: (1) the table stakes — keywords all of them rank for, where I should at least exist, (2) the opportunities — keywords only one ranks for that I could win, (3) the gold — keywords with 100+ search volume and KD < 30 that none of them rank for. Output a 30-50 page content plan as a Google Sheet: page title, target keyword, intent, difficulty, my angle. Watch out for: Ahrefs API is expensive (~$500/mo for the tier that allows decent volume). For most operators, run the gap analysis monthly, not weekly. Keyword difficulty is a heuristic. Validate by actually looking at the SERP — if it's all huge domains, KD lies. Don't blindly write 30 pages. Write 5, deploy, measure, iterate before going further. Skill file: Builds on
site-analytics + a thin Ahrefs/Semrush API wrapper. Pattern only — no standalone skill in this version. Tell your agent to write one from this recipe.
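The step-4 cross-tab is the mechanical heart of this recipe, and it can be sketched in plain Python. The input shape (keyword mapped to volume, KD, and the set of competitors ranking for it) is an assumption; adapt it to whatever your Ahrefs or Semrush export actually looks like:

```python
# Sketch of the step-4 cross-tab: classify competitor keywords into
# table stakes / opportunities / gold. The input shape is an assumption,
# not a real Ahrefs/Semrush response schema.

def gap_analysis(keywords, n_competitors):
    """keywords: dict of keyword -> {"volume": int, "kd": int, "ranked_by": set}"""
    table_stakes, opportunities, gold = [], [], []
    for kw, row in keywords.items():
        ranked = len(row["ranked_by"])
        if ranked == n_competitors:
            table_stakes.append(kw)      # everyone ranks: you must at least exist
        elif ranked == 1:
            opportunities.append(kw)     # only one competitor: winnable
        elif ranked == 0 and row["volume"] >= 100 and row["kd"] < 30:
            gold.append(kw)              # nobody ranks, real volume, low difficulty
    return {"table_stakes": table_stakes,
            "opportunities": opportunities,
            "gold": gold}

data = {
    "best crm for plumbers": {"volume": 250, "kd": 12, "ranked_by": set()},
    "crm software":          {"volume": 9000, "kd": 80, "ranked_by": {"a", "b", "c"}},
    "crm for trades":        {"volume": 400, "kd": 25, "ranked_by": {"b"}},
}
plan = gap_analysis(data, n_competitors=3)
```

The gold bucket applies the same 100+ volume, KD < 30 filter the example prompt uses; tune both numbers to your niche.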
Tip 4.2 — Ahrefs / Semrush API plug-in for the agent
What it does: A thin wrapper skill that knows every Ahrefs/Semrush endpoint your agent will commonly need: backlink check, keyword research, SERP overview, rank tracking, site audit pull. Why it wins: Without a wrapper, every SEO task is a 20-line API setup. With one, the agent says ahrefs.keywords(domain, country, limit) and gets clean JSON. Speed compounds. Tools: Ahrefs API v3, Semrush API, your agent's skill creator. How to wire it: 1. Get an API key. Store in your secrets manager. 2. For each common task, write a function-style skill entry:
- ahrefs.ranking_keywords(domain, country, limit, intent_filter) → returns DataFrame
- ahrefs.backlinks_new(domain, since_date) → returns DataFrame
- ahrefs.serp(query, country) → returns ranked list
- ahrefs.site_audit_summary(project_id) → returns issues
3. Document inputs, outputs, and rate limits in the skill file.
4. Every other SEO skill calls into this one. Don't duplicate API logic. Example prompt to your agent: Build a skill called ahrefs-api. For each of these tasks, expose a clean callable function: ranking_keywords(domain, country, limit), backlinks_new(domain, since_date), serp(query, country), site_audit_summary(project_id). Implement each one against the Ahrefs API v3. Document the rate limits in the skill file (Ahrefs is 1 req/sec). Have the skill handle retries and quota errors. From now on, no other skill should call Ahrefs directly — only via this one. Watch out for: Both Ahrefs and Semrush have credit-based pricing. Have the agent track credit spend per call so you don't burn the month's budget by Tuesday. API responses change rarely but they do change. Pin the response schema and assert on it. Skill file: Pattern only — no skill in this version. Tell your agent to write a thin
ahrefs-api or semrush-api wrapper from this recipe.
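The wrapper pattern above can be sketched as a small class that owns the rate limit and retry, so every endpoint method stays a one-liner. The endpoint path and parameter names here are placeholders, not the real Ahrefs v3 surface, and the injected transport stands in for HTTP so the pattern runs without credentials:

```python
import time

# Pattern-only sketch of the Tip 4.2 wrapper: one class owns the rate limit
# (1 req/sec per the recipe) and the retry; endpoints are thin methods on top.
# Path and params are illustrative placeholders, not the real Ahrefs v3 API.

class AhrefsAPI:
    def __init__(self, token, transport, rate=1.0, retries=2):
        self.token = token
        self.transport = transport      # injected: callable(path, params) -> dict
        self.min_interval = 1.0 / rate
        self.retries = retries
        self._last_call = 0.0

    def _call(self, path, params):
        wait = self.min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)            # enforce the rate limit between calls
        last_err = None
        for _ in range(self.retries + 1):
            self._last_call = time.monotonic()
            try:
                return self.transport(path, {**params, "token": self.token})
            except ConnectionError as e:
                last_err = e            # retry transient failures only
        raise last_err                  # quota/schema errors should fail loud

    def ranking_keywords(self, domain, country="us", limit=500):
        return self._call("/organic-keywords",
                          {"target": domain, "country": country, "limit": limit})

# A stub transport stands in for HTTP so the pattern is testable offline.
calls = []
def fake_transport(path, params):
    calls.append(path)
    if len(calls) == 1:
        raise ConnectionError("transient")   # first attempt fails, retry succeeds
    return {"keywords": ["example keyword"], "path": path}

api = AhrefsAPI("secret", fake_transport, rate=100)   # high rate: fast test
result = api.ranking_keywords("example.com")
```

Injecting the transport is also how the skill stays testable in CI without burning API credits.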
Tip 4.3 — Site connection: post blogs, edit copy, technical SEO, metadata
What it does: The agent has push access to your site. It can publish blog posts, edit copy on existing pages, update metadata, fix schema, add internal links — all via the deployment pipeline. Why it wins: Most SEO recommendations die in the "send it to the dev team" graveyard. Cutting the dev team out (in a controlled way, with approval gates) means recommendations become changes the same day. Tools: Whatever your site is built on (Next.js + Netlify or Vercel is the proven path), GitHub, your deployment skill. How to wire it: 1. Site lives in a git repo. Agent has read/write access.
2. Page content lives in MDX or a content directory the agent can edit safely. Avoid letting it touch component code unless you explicitly approve. 3. Each agent change is a branch + PR. You review and merge. (Or, for low-stakes changes like metadata, set up auto-merge with required CI passes.) 4. Deploy is automatic via Netlify/Vercel on merge to main. 5. The agent tracks what it changed in /site-changes/YYYY-MM-DD.json so the SEO loop (Tip 4.5) can correlate changes with ranking moves. Example prompt to your agent: You have write access to my site repo at [repo URL]. Workflow: for any content change, create a branch, make the edit, open a PR with a summary of what changed and why, and ping me. For metadata-only changes (title, description, og:image), auto-merge if CI passes. Never touch component code unless I explicitly tell you to. Log every change to /site-changes/YYYY-MM-DD.json with: file, change type, reason. Watch out for: Never let the agent push to main directly without a CI gate. Bad metadata can tank rankings. Keep a rollback path. Every deploy should be reversible in one command. Permissions: the GitHub token the agent uses should be scoped to that one repo. Not your whole account. Skill file:
netlify + seo-blog-template (production SEO blog automation — lift it).
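Step 5's change log is simple enough to pin down in code. A minimal sketch, with the base directory as a parameter rather than a hard-coded repo path; the field names follow the prompt above (file, change type, reason):

```python
import datetime
import json
import pathlib
import tempfile

# Sketch of the step-5 change log: append one entry per edit to
# <base_dir>/YYYY-MM-DD.json so the SEO loop (Tip 4.5) can correlate
# changes with ranking moves. base_dir is a parameter here so the sketch
# doesn't assume your repo layout.

def log_change(base_dir, file, change_type, reason, when=None):
    when = when or datetime.date.today()
    path = pathlib.Path(base_dir) / f"{when.isoformat()}.json"
    entries = json.loads(path.read_text()) if path.exists() else []
    entries.append({"file": file, "change_type": change_type, "reason": reason})
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(entries, indent=2))
    return path

# Usage: two edits on the same day land in the same dated file.
d = pathlib.Path(tempfile.mkdtemp())
p = log_change(d, "content/pricing.mdx", "metadata", "title too long")
log_change(d, "content/pricing.mdx", "copy", "tightened hero")
```

One file per day keeps the later correlation step (Tip 4.5) a simple date-keyed lookup.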
Tip 4.4 — Search Console + GA4 integration
What it does: The agent has read access to every piece of analytics data on your site. It pulls daily, builds the report, and surfaces what changed. Why it wins: Most operators check GSC once a quarter. By then, the slow ranking decay has cost them 30% of traffic. Daily-with-anomaly-detection means you catch drops in 48 hours. Tools: Google Search Console API, GA4 Data API, Bing Webmaster Tools API (don't skip Bing — Copilot/Bing/Yandex still send real traffic), your agent. How to wire it: 1. OAuth Search Console + GA4 against your Google account. 2. Daily pull (cron 5am): - GSC: queries, pages, clicks, impressions, CTR, average position — last 7 days vs prior 7.
- GA4: sessions, conversions, bounce rate by landing page — last 7 days vs prior 7.
- Flag any page where impressions dropped 30%+ week-over-week.
- Flag any query where you moved out of the top 10.
- Flag any query where you moved into the top 10 (new opportunity to push).
3. Daily morning report: 5 lines of "here's what changed."
4. Weekly deeper analysis: traffic source mix, top-converting pages, queries with rising impressions but stagnant rank (= a page-level optimization opportunity). Example prompt to your agent: Every morning at 6am: pull GSC and GA4 data for my site. Compare last 7 days to prior 7 days. Flag anomalies: pages with 30%+ impression drops, queries that fell out of top 10, queries that entered top 10. Send me a 5-line morning briefing on Telegram. On Mondays, also send a weekly: traffic source mix, top 5 converting pages, queries with rising impressions but no rank improvement (those are page-optimization opportunities). Watch out for: GSC data is delayed 1-3 days. Don't trust "today's" data — always look at data from at least 2 days ago. GA4 sampling kicks in on high-traffic queries. Use the BigQuery export if you have it. Position changes inside the top 3 are noise. Position changes from 11 → 4 are the signal. Skill file: _skills-anonymized/site-analytics/ — production-grade and battle-tested.
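The daily anomaly flags reduce to three comparisons per row. A sketch, assuming each row carries last-7-days and prior-7-days impressions plus average positions; a real GSC pull needs reshaping into this form first:

```python
# Sketch of the daily anomaly pass over GSC rows. The row shape is an
# assumption: one dict per query/page with impressions and average position
# for the last 7 days ("cur") and the prior 7 ("prev").

def flag_anomalies(rows, drop_threshold=0.30):
    flags = []
    for r in rows:
        prev, cur = r["impressions_prev"], r["impressions_cur"]
        if prev > 0 and (prev - cur) / prev >= drop_threshold:
            flags.append(("impressions_drop", r["key"]))       # 30%+ WoW drop
        if r["position_prev"] <= 10 < r["position_cur"]:
            flags.append(("fell_out_of_top10", r["key"]))      # lost page one
        if r["position_cur"] <= 10 < r["position_prev"]:
            flags.append(("entered_top10", r["key"]))          # new push target
    return flags

rows = [
    {"key": "/pricing", "impressions_prev": 1000, "impressions_cur": 600,
     "position_prev": 8.2, "position_cur": 8.5},
    {"key": "best crm", "impressions_prev": 300, "impressions_cur": 320,
     "position_prev": 12.0, "position_cur": 7.4},
]
flags = flag_anomalies(rows)
```

Per the watch-out above, feed this data that is at least 2 days old, or the GSC reporting delay will generate phantom drops.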
Tip 4.5 — Full SEO plan + execution loop (with approval gates)
What it does: The capstone. Combine 4.1 through 4.4 into a closed loop: gap analysis → write content → deploy → measure → re-prioritize. Run the loop weekly. Approval gates on anything that goes live. Why it wins: Most "SEO strategies" are PDFs that nobody executes. The execution loop with the agent running 80% of the work and you approving 20% means content actually ships, metadata actually gets fixed, and traffic actually grows. Tools: Everything above. How to wire it: 1. Weekly Sunday plan (cron 6pm Sun): Agent re-runs gap analysis (Tip 4.1, cached unless data is stale), reads last week's GSC anomalies (Tip 4.4), and proposes: - 1-3 new pages to write
- 3-5 existing pages to optimize
- 5-10 metadata fixes
- Any technical issues from the Ahrefs site audit
2. Monday morning approval: You review the plan over coffee. Approve / cut / add. 3. Mon-Fri execution: Agent drafts everything. Approval gate per page (one click). On approval: deploy via Tip 4.3. 4. Tracking: Every published change gets a published_at timestamp. Tip 4.4 monitors impact, attributes ranking moves to specific changes. 5. Don't go overboard on blogs. The original note, verbatim: "Don't go too crazy with blog posts — approve each one that goes out (or don't, that works too sometimes)." Quality > volume: 1-2 great pages per week beats 5 mediocre ones. Example prompt to your agent: Build a weekly SEO loop. Sunday 6pm: rerun seo-gap-analysis (use cached data if < 30 days old), read last week's GSC anomalies, and propose next week's plan: 1-3 new pages, 3-5 page optimizations, 5-10 metadata fixes. Send me the plan Monday 7am. On approval, draft each item. Each page draft needs my approval before deploy. Metadata fixes auto-deploy via Tip 4.3 if CI passes. Track every change in /seo-changes/YYYY-MM-DD.json with published_at. Every Friday, correlate changes from 30 days ago with rank movements and tell me what worked. Watch out for: The agent will want to ship volume. Slow it down. SEO compounds; you don't need 50 mid pages, you need 10 great ones. Approval fatigue is real. Make the approval message scannable — title, target keyword, 3-bullet outline, link to full draft. Should take 30 seconds to approve. The first 8 weeks will feel slow. SEO has a 3-month delay before signals stabilize. Trust the loop. Re-evaluate at week 12, not week 3. Auto-deploy of metadata is high-leverage but high-risk. Start with 1 site / 5 pages / manual review. Earn the auto-deploy permission. Skill file: Composite — wires together
site-analytics + seo-blog-template + netlify, plus the seo-gap-analysis and ahrefs-api patterns from Tips 4.1 and 4.2. The composite orchestrator is pattern-only — tell your agent to wire one from these.
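The Friday correlation step can be sketched as a join between the change log and rank history. Both input shapes are assumptions (the recipe doesn't fix a schema), and a real version would average positions around the change date rather than read single days:

```python
import datetime

# Sketch of the Friday correlation: for each page changed at least 30 days
# ago, report the rank delta since the change. Input shapes are assumptions;
# adapt them to your change log and GSC export.

def correlate(changes, rank_history, lookback_days=30, today=None):
    """changes: [{"page": str, "published_at": date}]
       rank_history: {page: {date: avg_position}}"""
    today = today or datetime.date.today()
    report = []
    for c in changes:
        if (today - c["published_at"]).days < lookback_days:
            continue                      # too fresh: rankings haven't settled
        history = rank_history.get(c["page"], {})
        before = history.get(c["published_at"])
        after = history.get(today)
        if before is None or after is None:
            continue                      # no data on either side of the change
        report.append({"page": c["page"], "delta": round(before - after, 1)})
    return report                         # positive delta = the page moved up

changes = [
    {"page": "/pricing", "published_at": datetime.date(2025, 1, 25)},
    {"page": "/blog/new", "published_at": datetime.date(2025, 2, 20)},  # too fresh
]
rank_history = {"/pricing": {datetime.date(2025, 1, 25): 14.0,
                             datetime.date(2025, 3, 1): 6.5}}
report = correlate(changes, rank_history, today=datetime.date(2025, 3, 1))
```

This is attribution by proximity, not causation; per the watch-outs, treat moves inside the top 3 as noise before crediting a change.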
5. Outbound Lead-Gen via Scanning
Most "lead gen" content covers ads, SEO, and inbound. This section is the other half: the agent goes out and finds leads by scanning the local business landscape for a specific gap, then hands the list to your sales loop. One tip here. The outreach half of this loop lives in Sales Section 5 — Tip 5.5 (site-mockup DM) — this section produces the list, that section runs the DM.
Tip 5.1 — No-website scanner for local businesses
What it does: Your agent sweeps every local business in a target geography (city, suburb, postcode), checks each one for a website, and outputs a clean list of businesses that don't have one. That list becomes the input for the sales-side outreach loop, where the agent researches each prospect, generates a custom site mockup image, and DMs them an "I already made you a site" pitch. Why it wins: "Businesses without a website" is the cleanest pre-qualified lead list in local services. They're not undecided shoppers — they have a structural gap and they know it. The cost of finding them manually (search Google Maps one at a time, click into each profile, check for a website link) is what kills the play. With an agent, the scan runs overnight across an entire metro area. Tools: Google Maps Places API (or agent-browser if you're avoiding Google's quota), a list of business categories to target, your CRM or a Google Sheet for the lead list, optionally Apify/SerpAPI as a fallback source. How to wire it: 1. Tell your agent your target geography — a city, a list of postcodes, or a bounding box. The tighter the geography, the higher the answer-rate on outreach later. 2. Tell your agent your target categories. "Restaurants" is too broad. "Independent cafés," "auto detailers," "med spas," "trades that quote on the phone" — that's where no-website is most common and value-per-customer is high enough to justify the work. 3. Have the agent query the Places API for every business in the geography × category. Pull place_id, name, category, phone, address, website field, social links. 4. Filter for website empty/missing. Many businesses link their Facebook page in the website slot — decide whether that counts as "has a site" (recommend: no, a Facebook page is still a sales opportunity). 5.
Enrich each surviving row with: a quick SERP check (does the agent find a site for them via a name + city search?), a Facebook/Instagram presence check, and a one-line read on the business from any reviews available. 6. Output a deduped sheet: business name, category, address, phone, socials (if any), confidence score that they actually have no site (1-5), and a free-text note.
7. Hand the list to the sales outreach loop (Sales Section 5 — Tip 5.5 (site-mockup DM)) which does the per-lead research, mockup generation, and outbound. Example prompt to your agent: Build a no-website-scanner skill. Input: a config file with target geography (city, postcodes, or bounding box) and target categories. For each category × postcode pair, query the Google Places API for all businesses. For each one, check the website field. If empty, missing, or pointing only at a social profile, run a SERP check ("[business name] [city]") to confirm no site exists elsewhere. Score confidence 1-5. Output to a sheet named "No-Site Leads — [city] — [date]" with columns: name, category, address, phone, socials, confidence, notes. Dedupe by place_id. Cap at 500 leads per run. Once the scan finishes, hand the sheet off to the site-mockup-dm sales skill for outreach. Watch out for: Google Places API quota is generous but not free. Budget ~$0.017 per Place Details call. A scan of 5,000 places is ~$85. Cache results — don't re-scan a postcode you scanned last month unless you want to catch newly opened businesses. Some businesses have a site but never linked it on Google Maps. The SERP check + a quick Bing fallback catches most. Confidence score lets you skip the low-confidence ones in outreach. Local laws on cold outreach vary. Phone numbers and email addresses scraped from public listings are generally fair game, but DMs over Instagram/Facebook fall under each platform's terms. The sales-side tip handles channel choice; the scanner is just producing the list. Don't blast the whole list in one day. Sales-side cadence is 10-15 outbounds/day to protect deliverability — feed the list in slowly. Re-run the scanner monthly per geography. New openings, closures, and businesses that finally got a site all need to be tracked. Skill file: Builds on
agent-browser + a thin Google Places API wrapper. Pattern only — no standalone skill in this version. Tell your agent to write one from this recipe. Cross-link: Pairs with Sales Section 5 — Tip 5.5 (site-mockup DM), which is the outreach half of this loop. For the story version of this end-to-end play, see Wild Examples #2 — The Site-Mockup Cold DM.
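The step 4-5 filtering and confidence scoring can be sketched in a few lines. The serp_found flag and socials list stand in for the SERP and social-presence checks the recipe describes, and the social-host list is illustrative, not exhaustive:

```python
# Sketch of the step 4-5 filter: decide whether a Places result counts as
# "has a site", then score confidence 1-5. Field names follow the recipe's
# pull (website, name, ...); serp_found stands in for the SERP check.

SOCIAL_HOSTS = ("facebook.com", "instagram.com", "linktr.ee")  # illustrative

def has_real_site(website):
    """A social-profile link in the website slot does NOT count as a site."""
    if not website:
        return False
    return not any(host in website.lower() for host in SOCIAL_HOSTS)

def confidence(website, serp_found, socials):
    """1-5: how sure we are the business truly has no site."""
    if serp_found:
        return 1            # SERP found a site elsewhere: almost certainly not a lead
    score = 5 if not website else 3   # social-only link is a weaker signal
    if socials:
        score -= 1          # active socials sometimes hide an unlinked site
    return max(score, 1)

places = [
    {"name": "Joe's Detailing", "website": "", "serp_found": False, "socials": []},
    {"name": "Bella Cafe", "website": "https://facebook.com/bellacafe",
     "serp_found": False, "socials": ["facebook"]},
    {"name": "Metro Plumbing", "website": "https://metroplumbing.example",
     "serp_found": True, "socials": []},
]
leads = [p for p in places if not has_real_site(p["website"])]
scored = [(p["name"], confidence(p["website"], p["serp_found"], p["socials"]))
          for p in leads]
```

The score then drives the sales-side cadence: feed the 4s and 5s into outreach first, park the rest.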
How it all stacks
The three big sections — Content, Image Gen, SEO — share infrastructure. The same scraping muscle that powers Tip 1.1 powers Tip 3.2. The same voice skills that draft LinkedIn
posts in Tip 1.4 can be retuned for ad copy in Tip 3.3. The same site-deployment access that lets the agent push blog posts in Tip 4.3 lets it test landing pages in Tip 3.6. And the scanner in Tip 5.1 feeds the sales outreach loop with pre-qualified leads the rest of the playbook then nurtures. Build once, redeploy across the marketing stack. The order to install:
- Get the voice skills first (Tip 1.4). Without them, everything else outputs generic content.
- Then the scraping layer (Tip 1.1). Once you have it for YouTube, the LinkedIn, Twitter, and ad library scrapers all use the same pattern. The Appendix below lists the open-source repo per platform.
- Then the morning loop (Tip 1.6). This is where the agent earns its keep daily.
- Then the image factory (Tip 2.1, 2.2). Doubles the reach of everything you've already automated.
- SEO (4.1–4.5) once you have a site worth optimizing.
- Ads (3.1–3.6) last. Burns cash if you don't have the rest dialed in.
- Lead scanner (5.1) any time you want to feed the sales loop a fresh batch of pre-qualified prospects — runs independently of the content/SEO stack. Don't try to ship all 18 tips in a week. Pick the 3 highest-leverage ones for your business this quarter. Get them rock-solid. Move on.
Appendix — Social Media Scrapers (open-source, one per platform)
Give this URL to your agent and tell it to install it. That's the whole UX. Hand the agent the repo, it reads the README, installs the dependencies, and you're scraping within minutes. One repo per platform — the easiest-to-setup, most-maintained options as of this writing. All open source, all installable without paid API keys (each platform's own anti-scraping defenses still apply — see watch-outs at the bottom).
- Platform: YouTube • Repo: https://github.com/yt-dlp/yt-dlp • What it does: The de facto YouTube downloader (150k+ stars). Pulls video, audio, subtitles, metadata. Supports 1,000+ sites, not just YouTube. The audio-extraction backbone of Tip 1.1.
- Platform: Instagram • Repo: https://github.com/instaloader/instaloader • What it does: Downloads pictures, videos, captions, and metadata from Instagram profiles, stories, highlights, and hashtags. CLI + Python library.
- Platform: LinkedIn • Repo: https://github.com/joeyism/linkedin_scraper • What it does: Python library that scrapes LinkedIn for user and company data. LinkedIn fights scrapers harder than any other platform — expect to use a burner account and rotate.
- Platform: X / Twitter • Repo: https://github.com/d60/twikit • What it does: Talks to Twitter's internal API without a paid API key. Search, timelines, user info, even posting. Actively maintained. Use moderate rate limits — aggressive scraping gets accounts suspended.
- Platform: TikTok • Repo: https://github.com/Evil0ctal/Douyin_TikTok_Download_API • What it does: High-performance async TikTok and Douyin scraper. Supports API calls, batch parsing, and bulk video downloads.
- Platform: Reddit • Repo: https://github.com/Serene-Arc/bulk-downloader-for-reddit • What it does: Archives Reddit content in bulk — saved posts, subreddits, user profiles, comments. Reddit also has a free official API (PRAW); use this one when the official API rate limits bite.
- Platform: Facebook • Repo: https://github.com/kevinzg/facebook-scraper • What it does: Scrapes Facebook public pages without an API key. Facebook breaks scrapers more often than most platforms — pin a version and expect to patch.
- Platform: General-purpose (any site) • Repo: https://github.com/D4Vinci/Scrapling • What it does: Adaptive scraping framework that handles anti-bot, JS rendering, and auto-adapts when sites change their HTML. Use when no platform-specific scraper exists or when you're scraping niche news sites for Tip 1.5.
- Platform: Cross-posting (not scraping) • Repo: https://github.com/gitroomhq/postiz-app • What it does: Open-source social media scheduling and analytics platform. Not a scraper — useful on the output side once your agent is drafting posts across platforms.
Wiring any of them into a skill (one-time setup, reusable everywhere): 1.
Hand your agent the repo URL with the instruction: "install this and save the install steps as a skill called
Watch out for: Every platform's terms of service prohibit some form of scraping. The legal reality is more nuanced (public data is generally fair game in most jurisdictions; logged-in scraping with a fake account is not). When in doubt, scrape only public data, rate-limit aggressively, and don't republish content verbatim — harvest patterns, transcripts, and metadata. Pin versions. yt-dlp and the others ship breaking changes regularly. Let the agent self-heal: tell it "if a scrape fails, update the package and retry once before failing loud." LinkedIn and Facebook are the hostile ones. Expect burner accounts, residential proxies, and ongoing maintenance. The other six platforms are an order of magnitude easier. Don't run these from your main IP block at scale. Cloud function + IP rotation, or a residential proxy provider if the volume justifies it.
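The self-heal rule ("if a scrape fails, update the package and retry once before failing loud") is worth wiring as a tiny wrapper. A sketch, with the scrape callable and upgrade hook injected so the test runs without touching pip; the default upgrade command is the standard pip invocation:

```python
import subprocess
import sys

# Sketch of the self-heal rule from the watch-outs: on failure, upgrade the
# package and retry exactly once, then let the failure propagate loudly.
# The scrape callable is whatever your skill wraps (yt-dlp, instaloader, ...).

def scrape_with_selfheal(scrape, package, upgrade=None):
    upgrade = upgrade or (lambda: subprocess.run(
        [sys.executable, "-m", "pip", "install", "--upgrade", package],
        check=True))
    try:
        return scrape()
    except Exception:
        upgrade()          # sites change their HTML; scraper repos ship fixes fast
        return scrape()    # one retry only; a second failure surfaces to the agent

# Stub demo: first attempt fails, the "upgrade" runs, the retry succeeds.
attempts = {"n": 0, "upgraded": False}
def flaky_scrape():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("layout changed")
    return ["post-1", "post-2"]

result = scrape_with_selfheal(flaky_scrape, "yt-dlp",
                              upgrade=lambda: attempts.update(upgraded=True))
```

Capping at one retry matters: an unbounded upgrade-and-retry loop hides real breakage from you instead of failing loud.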