The Manual · Ch 03
How to talk to it (prompting basics)
Plain English wins. The handful of patterns that get you 90% of the way to a good prompt.
The two rules that beat everything else
Rule one: be specific. Good prompts are specific, but they don't have to start that way. You don't need a wall of context upfront: start simple, ask the agent to plan before doing, then refine.

Rule two: extreme when telling, neutral when asking. When you want it to do something, do not give it permission to be mediocre. Mediocre: "Write me a prompt that can help make my life easier." Strong: "Write me the absolute best possible prompt that would directly make my life easier instantly." Same task. The second one gets dramatically better output, every time, because you took away the option to phone it in.

When you want its opinion, do the opposite: strip your bias out. Bad: "We should use Slack for this, right?" Good: "Do you think we should use Slack for this? Is there a better idea?" If you ask "we should do X, right?" you're biasing it toward agreeing with you. Give it genuine freedom to decide and it thinks harder, pushes back sometimes, and gives you answers you would not have thought of.
Tell, do not just describe
Most weak prompts fail because they describe an outcome instead of issuing an instruction. "I want better emails" is a description. The agent has to guess what "better" means, which email you're talking about, and what to actually do, so the output comes back generic. A direct instruction strips the guesswork.

Weak: "I want better emails." Strong: "Rewrite this email so the ask is in the first line and the rest is one short paragraph."

Weak: "My CRM is messy." Strong: "Open my CRM, find every deal with no activity in 14+ days, and draft a follow-up to each one in my voice."

Same task in each pair. The strong version tells the agent exactly what to do, with what input, in what shape. You'll get the output you actually wanted instead of an interpretation.
Rant via voice message — my favorite thing
My favorite way to use the agent. Open the chat, hold the record button, and just talk. Ten seconds, five minutes, ten minutes: length is irrelevant. The longer and looser, the better the output.

What goes in: everything. All the context, ideas, half-formed thoughts, the thing you've been chewing on in the car, the bit you don't quite know how to phrase. Don't structure it. Don't restart when you trip over a word. The agent untangles all of it on the other side.

Then the twist that makes it land: at the end of the rant, tell it to go pull more context from your other tools. "Check my past conversations on this. Look at the docs in my knowledge base. Pull the relevant threads from my email." It will. Now it has your dump plus everything you've already said about this elsewhere, which is far more than you could have fit in any typed prompt. Then let it run:

"Here's a voice message with everything on my mind about [thing]. Check past conversations and docs we have on this for more context. Then do the thing: propose the plan, draft the doc, build it, whatever makes sense. Go."
Why it works: typing forces you to self-edit. Voice doesn't. You free-associate, jump between ideas, throw in the asides, and the asides are usually where the real context lives. And once the agent stitches your rant together with everything else it can find, the output is grounded in your business, in your words, with none of the generic AI slop you get from a clean three-line prompt. (Chapter 9 is the deep dive on the braindump pattern. This is the prompting-basics version: just talk.)
Plan first, execute second
For anything more than a one-shot task, separate planning from execution. I want to do [thing]. Don't start yet. First, ask me clarifying questions until you have everything you need. Then write me a plan. Then I'll tell you to go.
Three benefits:

1. The agent surfaces gaps before it wastes work on the wrong thing.
2. You see the plan and catch misunderstandings cheaply.
3. You end up with a written plan you can refer back to.

This is the standard fix for the most common failure mode: the agent runs off and builds the wrong thing because the human under-specified.

Useful question phrases to keep in your back pocket while the plan is forming: "Is there a better way to do this?" "Anything you'd add to this?" "What about [angle I haven't considered]?" "Before you start, what would you change about this plan?"

Here's the underrated part: while you're asking these questions and the agent is expanding on your idea, you also get better ideas. The questioning isn't just to brief the agent; it pulls more context out of your own head than you would have typed in one go. Two or three rounds of back and forth, and the task looks different (better) than it did when you first opened the chat.

When the plan is finally solid, the trigger is simple: "OK, go do it." That's the line. Up until that point, the agent should be planning, not executing. After it, the agent runs.
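If you drive an agent through an API instead of a chat window, the same two-phase pattern can be sketched in a few lines. This is a minimal sketch, assuming a hypothetical `ask()` helper (not part of any real library) that sends one message to your agent and returns its reply:

```python
def ask(message: str) -> str:
    # Stub reply so the sketch runs standalone; in real use, wire this
    # to whatever chat interface or API your agent exposes.
    return f"[agent reply to: {message[:30]}...]"

def plan_then_execute(task: str) -> str:
    # Phase 1: planning only -- the agent must not start the work yet.
    plan = ask(
        f"I want to do {task}. Don't start yet. "
        "Ask me clarifying questions until you have everything you need. "
        "Then write me a plan."
    )
    # (In real use: answer its questions and revise the plan here,
    # asking things like "Is there a better way to do this?")
    # Phase 2: the explicit trigger -- only this line authorizes execution.
    return ask(f"The plan is solid: {plan} OK, go do it.")
```

The point of the shape is the hard boundary: nothing after "OK, go do it" is sent until you have read the plan.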
The meta-prompt move
Sometimes you do not know what the right prompt is. Ask the agent to write it for you:

"Write me the absolute best prompt for [task]. Don't run it. Just write the prompt. I'll review it before I use it."

That is meta-prompting. The model is better at writing prompts for itself than you are.

You can go one level deeper. Meta-meta-prompting: "Write a prompt that would make you write 10 prompts that would instantly make you work proactively on the most important things that would directly make my life easier." The agent generates ten prompts, each a permission slip to do something useful.

One caveat: do not blindly paste a meta-prompt result back in. Read it first. Make sure you are OK with what it is about to do.
Convince it when it says no
The agent will sometimes refuse a task on the first attempt — not because it can't, but because it is being cautious.
Tell it: "Get it done by any means necessary." It will usually figure it out. The real exceptions are narrow:

- It does not have access to the relevant tool (the calendar isn't connected yet).
- It is being blocked by something external (a captcha, a Cloudflare check, an expired login).

Rule out those two. Otherwise, "get it done by any means necessary" almost always works (we will return to this in Chapter 5).
Talk to it while it's working
You do not have to wait silently for the agent to finish before adding context. If you are watching it run and you realize you forgot something, just type it. "Oh, also include pricing." "Skip the third part." "Use bullet points instead." The instruction reaches the agent mid-task.
Don't stuff ten things into one message
The agent can multi-task. Two or three parallel asks per turn is fine. Past three, it starts dropping or duplicating things. If you have a big batch, ask it to spawn sub-agents in parallel (Chapter 11), or break the request into a sequence.

Those are the prompting patterns that fix most weak output: be specific, tell rather than describe, and plan first.