The Manual · Ch 18
The click moments — when you become dangerous
The shifts that mark when you've gone from "I don't get this" to "this is doing my job for me."
Some users get the agent in three days. Some take three months. The difference is not technical aptitude. It is whether they have hit certain "click moments" — moments where the way they relate to the agent changes. There are three click moments to watch for.
Click 1: You stop writing tiny prompts
Beginners type one-line prompts. "Send a follow-up." "Schedule a call." "Summarize this." The agent does an okay job and the user concludes "AI is meh." The first click is when you stop giving short, low-effort prompts and start planning bigger tasks. You write a paragraph. You include context. You tell it the goal, not just the action. You ask it for clarifying questions. When that becomes your default behavior, the output quality jumps an order of magnitude. Same agent, same tools, just used like you mean it.
Click 2: You start making skills
The second click is the moment you stop just using the agent and start teaching it. You start making skills, and you understand what a skill is. Skills matter, and they are easy: after you do something useful that you will repeat, you tell the agent to save it as a skill. It usually happens after a frustrating moment. You did something for the third time and felt the friction. You said, "wait, save that as a skill so I don't have to explain it again." It worked. Next time you ran it. It worked. You realize you can do this for everything. From there, your agent is no longer a chatbot you talk to. It is a system you are building.
Click 3: You ask the agent for ideas
The third click is when you stop driving alone and let the agent help drive. You ask it to come up with ideas. You ask what it could be doing for you. You ask it to improve itself. Instead of firing off short prompts you didn't think about, you ask the agent how it can help and build on its answers, rather than trying to invent every idea yourself. Examples of click-3 prompts: "Based on what we've worked on this week, what should I be doing that I'm not?" "What skills are you missing that would make me faster?" "Look at my last 100 messages. Spot the pattern. What is annoying me that you could automate?" "Improve yourself. What rule should I add? What skill should I write? What should I cut?" When you start treating the agent as a partner in deciding what to work on, not just a tool for executing what you decided, that is the third click.
How to know you've crossed over
You have crossed over when all three are true: You write paragraph-length context-rich prompts as your default. You have at least five skills you genuinely use, and you have iterated on at least one of them. You ask the agent for ideas and improvements at least weekly, and you act on the ones that matter. If yes to all three, you are dangerous. You are not "using AI." You are running an AI organization of one.
The fourth click (and why it matters)
There is a fourth click that only happens for some people. It is the moment you realize you can help someone else set up an agent of their own. The first time someone says "my agent isn't responding" and you know exactly what to ask: "is it the typing-then-stop bug? Run the audit prompt. Ask it what tool call it just made." At that point you have crossed from user to operator. Most people don't need to cross this line. But if you do, it changes the math: instead of one agent compounding for you, you are now compounding agents for a small network of people. The skills you write get shared. The diagnostic prompts get reused. The patterns spread. If that interests you, the path is simple: keep notes on every problem you run into, every prompt that fixed it, every skill you wrote. After three months you will have a manual of your own. (That is, in fact, where this manual came from.)
What it feels like on the other side
On the other side of the three clicks, the agent does not feel like a tool anymore. It feels like an extension of your work. As one operator deep in it put it: "It's AGI as far as I'm concerned. No one gets it till they talk to it for a long time." Strong language, but the point is real: the experience changes once you cross over. You stop thinking about the agent. You think about your work, and the agent is where it happens.