Stop Learning AI. Start Learning Management.
The most important skill for working with AI agents isn't technical. It's people management. How frameworks from Julie Zhuo and Kim Scott transformed my AI workflows.


Every week, my feeds are flooded with the same thing: how to build your AI agent army. Set up twelve specialized sub-agents. Orchestrate swarms of OpenClaw instances. Wire up MCP servers to create an autonomous empire of digital workers. The Twitter discourse is buzzing. The YouTube tutorials are multiplying. Everyone's obsessed with the mechanics of agent orchestration.
And most of it is missing the point entirely.
When people ask me how I get so much out of my AI agents (my personal assistant RonBot, my Claude Code workflows, the system I've built around RonOS, my Obsidian-based second brain), they expect me to share a prompting trick or a new tool. Instead, I point them to a book written by a former Facebook design VP in 2019.
The skill that matters most in the age of AI agents isn't technical. It's people management. And it's been staring us in the face the whole time.
TL;DR: The people struggling with AI agents are making the same mistakes as first-time managers. The fix isn't a better prompt — it's a management book from 2019. Two frameworks (Julie Zhuo's Three P's and Kim Scott's Radical Candor) map almost perfectly to how you should be working with your agents. The best part? These skills don't get deprecated every two weeks.
The Firehose
Here's the thing I keep hearing from friends, colleagues, and people in my community:
"I can't keep up."
And honestly? I feel it too. The pace right now is relentless. A developer survey from The Pragmatic Engineer dropped on March 3rd showing that Claude Code has become the most-used AI coding tool, overtaking both GitHub Copilot and Cursor in just eight months. Seventy-five percent of engineers at smaller companies now use it as their primary tool. Two days later, OpenAI released GPT-5.4, which scored 75% on the OSWorld benchmark for autonomous desktop tasks, surpassing the human expert baseline of 72.4% for the first time. AI isn't just writing text anymore. It's clicking buttons, filling forms, navigating applications, and completing multi-step workflows on its own.
Context engineering, the discipline of curating what information an AI agent sees so it produces better results, is becoming a formal field. Anthropic published a guide on it. Manus shared hard-won lessons from building their agent framework. Companies hiring their first "AI employees" are discovering a familiar truth from human onboarding: skills aren't the problem, context is.
Meanwhile, a Sia Partners analysis from this month identified what they call the "vibe coding productivity paradox." In advanced environments, more than 95% of code can now be AI-generated. In theory, this should lead to dramatic productivity improvements. In practice, most organizations report only incremental gains. The bottleneck isn't how fast AI can generate code. It's everything around it.
It feels like a technology race. And we're all sprinting to stay current.
But counterintuitively, the thing that's actually working for me isn't keeping up with the latest model drop. It's going back to basics. Dusting off management books. Refreshing myself on the fundamentals of how to lead people and get great work out of a team.
That's what's changing how I work with AI.
The Realization
Some context on me: I've led teams of PMs at Meta. I've hired, onboarded, coached, given feedback, and run 1:1s: all the things that come with being responsible for other people's output. And even beyond formal management, the PM job is fundamentally about leading without authority. You're collaborating across disciplines, influencing outcomes you don't directly control. Every day.
In parallel, I've been building my personal AI system. RonOS is my Obsidian-based second brain. RonBot is my personal AI assistant running on OpenClaw, connected to me via Telegram, handling everything from daily briefs to meal logging to morning health check-ins. And I use Claude Code daily for my blog, my projects, and my work.
The more I use AI agents, the more I notice something: I'm doing a lot of the same things I did as a people manager.
I'm onboarding them, giving them context about who I am, what I care about, how I work. I'm coaching them, providing specific feedback when their output isn't right. I'm watching them improve over time as they accumulate context and I build better documentation.
And when I look at friends and colleagues who are struggling to get good results from their agents? They're making the exact same mistakes I've seen from first-time managers. I feel it viscerally, because I remember making those same mistakes myself.
They skip context. They give vague feedback. They expect great output from agents who have no idea what the goals are.
Well, guess what: you're a people manager now.

In a previous post, I wrote that vibe coding made everyone an engineer. Then I wrote that the 10-80-10 framework meant the real skill was product management, knowing what to build, not just how to build it.
Here's the next reveal: if you're using AI agents, you're a people manager. You are taking accountability for work that you're not directly doing. That is, quite literally, the definition of management.
And unlike the latest model drop or framework update, management principles don't change every two weeks.
The Three P's
Julie Zhuo was VP of Product Design at Facebook. Her book The Making of a Manager boils the job down to three things: Purpose (the why), People (the who), and Process (the how). A manager's role is to continually improve these three levers to get the best possible outcomes from a group of people working together.
I've found this maps almost perfectly to how I set up and work with my AI agents.

Purpose: The Why
In management, you ensure your team knows what success looks like and cares about achieving it. You connect their daily work to the company's mission and strategy. You explain why you're building what you're building, not just what to build.
With AI agents, this is your system prompt, your CLAUDE.md file, your project instructions. But it's also your day-to-day prompting. When I'm prompting Claude Code or OpenClaw, I'm spending real time (two, three minutes at least) explaining the why behind what I'm asking for.
When I set up RonBot, I didn't just tell it what to do. I gave it the full picture: my goals, my values, what I'm trying to achieve with my health, my writing, my career. When I use Claude Code for my blog, I don't just say "write a post" (trust me, I've tried). I share the content strategy, the target audience, what's performed well before, what hasn't landed.
The mistake most people make is jumping straight to tasks. "Write me a PRD." "Draft this email." "Build this component." They skip the purpose entirely.
Imagine hiring a new PM and on their first day saying "write a PRD" without ever explaining what the product is, who the customer is, or what the company is trying to achieve. You'd never do that to a human. But that's what most people do with their agents every single day.
Try this: Before giving your agent any task, spend 2-3 minutes explaining the why. Why does this matter? What does good look like? What's the broader context? You may be surprised how much this alone improves your results.
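To make this concrete, here's a minimal sketch of what a purpose section could look like at the top of a CLAUDE.md or system prompt. The product, audience, and goals below are hypothetical placeholders, not my actual setup:

```markdown
# Purpose

## Why this project exists
<!-- Hypothetical example: swap in your own mission and audience -->
This blog teaches busy product managers how to build personal AI
systems without an engineering background.

## What success looks like
- Posts a reader can act on the same day
- A direct, first-person voice with no corporate buzzwords

## Current priority
Grow the newsletter: every post should give readers a reason to subscribe.
```

Once this lives in the file, every task inherits the why without you retyping it.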
People: The Who
In management, you help your reports understand where they sit in the organization. Who are their cross-functional partners? Who are the stakeholders? What are the team dynamics? What does each person care about?
With AI agents, this is the relational and organizational context: the people landscape that shapes how work gets received and evaluated.
One thing I've started doing regularly is asking my agent to critique a document from the perspective of a specific stakeholder. But for that to work, the agent needs to understand where that person sits in the organization, the level they operate at, the span of their coverage, and what matters to them. I spend real time building that "org chart" context — not just once during setup, but day to day as the landscape shifts.
The mistake people make is treating the agent as if it exists in a vacuum. No organizational context, no stakeholder awareness, no sense of who the work is for or who will be evaluating it.
Try this: Give your agent an "org chart" of sorts: who are the key people, what are their roles, what do they care about? When asking for feedback on your work, specify whose perspective you want it from. The difference is immediate.
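Here's a rough sketch of what that org chart file could look like. The names and roles are invented for illustration:

```markdown
# Stakeholders

## Dana, VP of Product (hypothetical)
- Operates at the strategy level; skims for the "so what" first
- Cares about: revenue impact, cross-team dependencies
- Pet peeve: solutions presented before the problem is framed

## Sam, Staff Engineer on the platform team (hypothetical)
- Reviews anything that touches shared infrastructure
- Cares about: migration cost, operational burden, rollback plans
```

With this in context, "critique this PRD from Dana's perspective" gets you feedback shaped by her level and concerns instead of a generic review.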
Process: The How
In management, you establish how the team works together. Decision-making frameworks, execution cadence, reporting structures, tools and systems. You codify how things get done so the team can operate without you being a bottleneck.
With AI agents, these are your skill files, your hooks, your workflows, your MCP configurations. It's the "here's how we get work done" documentation.
My Claude Code setup has skills for blog post research, editing, and distribution. RonBot has routines for daily briefs, daily note logging, meal logging. These are documented processes the agent follows consistently, the same way a well-run team operates from shared playbooks.
The mistake people make is expecting the agent to figure out the process on its own, or re-explaining it every single time. They're essentially re-onboarding the same employee every morning.
Try this: When you find yourself repeatedly giving the same instructions, turn them into a skill file or system prompt addition. You do it once, and every future interaction benefits. That's the team playbook.
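For example, here's the shape of a skill file, loosely modeled on my blog editing workflow. The file paths and thresholds are illustrative:

```markdown
# Skill: blog-post-editing

## When to use
Whenever I ask for an edit pass on a draft in posts/drafts/.

## Steps
1. Read the draft alongside the voice guide in context/voice.md
2. Flag any section over ~150 words for tightening
3. Replace corporate buzzwords with plain language
4. Summarize proposed changes before rewriting anything
```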
Radical Candor for Agents
Kim Scott's Radical Candor gives us a 2x2 framework for feedback built on two dimensions: Care Personally and Challenge Directly. Most people know the sweet spot — Radical Candor itself — but understanding the other three quadrants is what makes the framework click.
It maps perfectly to how most people interact with their agents.
The three traps
Manipulative Insincerity happens when you neither care nor challenge. In management terms, it's the disengaged boss who doesn't invest in their people and doesn't hold them accountable. With agents, this is the low-effort prompt: you dash off a quick sentence, accept whatever comes back, and move on. No context invested, no feedback given. This is what I've called "low-effort prompting leads to AI slop." You get what you put in: nothing.
Obnoxious Aggression happens when you challenge without caring. With humans, it's the boss who says "this is garbage" without explaining what's wrong or what good looks like. With agents, it's the vague rejection: "This is terrible, try again." No specifics, no context for why. The agent can't learn from your dissatisfaction if you don't articulate what's behind it, just like an employee can't grow from a manager who only says "do better."
Ruinous Empathy is the sneaky one. You invested great context upfront. You did the "caring" part. But you accept mediocre output because iterating feels like too much work. "Close enough." "I'll just fix it myself." This is where most people live. They put in the effort on the front end, then let the back end slide. In management, it's the boss who likes their team but never pushes them to grow. The work stays average.
The payoff
Radical Candor is when you invest deep context and give specific, direct, honest feedback. "This draft is too verbose. Specifically, these sections are irrelevant to the central argument. Here's how I'd actually say this. And the second section is missing the evidence that makes the argument feel real, not theoretical."
That's where the best results come from. Every time.
The key insight is the same whether you're managing humans or agents: the quality of your feedback directly determines the quality of future output. Vague feedback produces vague improvements. Specific feedback produces specific improvements.
Try this: When reviewing agent output, don't just say "this isn't right." Say what specifically is wrong, why it's wrong, and what "right" looks like. Then (and this is critical) save that feedback as context for next time. You're building the agent's institutional knowledge.
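In practice, that saved feedback can be as simple as a running markdown log the agent reads before each session. A sketch, with invented dates and draft names:

```markdown
# Writing feedback log

## 2025-03-10, draft: "agents-as-teams" (hypothetical entry)
- Too verbose: the intro ran 400 words before the thesis.
  Target: thesis within the first 100 words.
- Never "leverage", "utilize", or "streamline". Use plain verbs.

## Standing preferences (promoted from repeated feedback)
- Open with a concrete scene or number, not a definition
- One idea per paragraph; cut anything that doesn't serve the argument
```

When the same one-off note shows up twice, promote it to a standing preference. That's the moment feedback becomes institutional knowledge.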
I share frameworks like this every week in The Degenerate, my newsletter about AI productivity, building in public, and whatever I'm experimenting with next.
The Voice Dictation Secret

One practical tip has dramatically improved my results, and most people I share it with haven't tried it: use a voice dictation tool like Wispr Flow or Superwhisper to provide context to your agents.
When you type a prompt, you tend to keep it short. A sentence or two. Maybe a paragraph if you're being thorough. But when you talk, something different happens. You naturally provide richer context. You explain the why behind things. You think out loud. You share nuances and tensions and uncertainties that you'd never bother typing out.
My actual workflow: I pace around talking to Superwhisper for five to ten minutes, stream of consciousness, explaining what I'm thinking, what I want, what concerns me, and what I've seen that's relevant. That gets transcribed and sent to Claude. The result is dramatically better than anything I'd get from a typed prompt because the input was dramatically richer.
This is exactly what a good 1:1 looks like. A great manager doesn't send their report a Slack message that says "make this thing better." They sit down, talk through the context, share their thinking, ask questions, have a real conversation. Voice dictation lets you have that same richness with your agent.
Try this: Next time you're about to type a long prompt, try talking it through instead. Spend 3-5 minutes explaining the full context out loud. Let the transcription capture the nuance. You'll be surprised by how much better the output is.
The Compounding Effect
When you apply management principles to your agents, the results don't just improve. They compound.
Think about how feedback works with a great employee. You give feedback. They incorporate it. They produce better work. You give more refined feedback. They grow. Over months, a good employee becomes great because they've internalized your standards, your context, your way of thinking. The early investment pays dividends long after the initial conversation.
The same thing happens with agents, if you're intentional about it.
You provide context. The agent produces output. You critique it specifically. That critique becomes a new skill file, a memory, a documented preference. The agent references its own improved work in the future. The quality ratchets up. And up. And up.
The key is making sure the work gets captured. Don't just ask Claude for answers in a chat that disappears. Have Claude write its thinking to a markdown file. Review that file. Critique it. Save that critique. In the future, Claude Code or any of your agents can reference that file as context, and the work builds on itself.
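Concretely, the whole loop can live on disk as plain markdown. Here's a simplified sketch of one way it could be laid out; the folder and file names are illustrative, not a prescription:

```
vault/
├── context/
│   ├── purpose.md          # the why (Purpose)
│   ├── stakeholders.md     # the who (People)
│   └── skills/             # the how (Process)
├── thinking/
│   └── pricing-analysis.md # the agent writes here; I critique inline
└── feedback/
    └── writing-log.md      # critiques promoted to standing context
```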
This is your second brain in action. It's not just a knowledge repository. It's institutional memory for your AI team.
Early in my blog editing process with Claude, I was correcting the same things repeatedly: tone, voice, formatting preferences, the tendency to use corporate buzzwords I'd never actually say. Once I turned those corrections into documented preferences and skills, the baseline quality jumped. Now we spend our time on substance and strategy, not basics. Just like a senior employee who's past the onboarding phase.
This is also where the 10-80-10 framework really comes alive. The first 10% (perspective, framing, context-setting) gets better over time because the agent retains and builds on it. The middle 80% gets better because the agent has more context to work with. The final 10% (taste, judgment) becomes more refined because you're spending less time fixing basics and more time shaping substance.
Try this: If you haven't already, start building your second brain, a persistent knowledge base that your agents can reference and build on. And remember the 10-80-10 framework: invest heavily in the first 10% (context-setting) and the last 10% (review and taste). That's where your management skills create the most leverage.
The Liberating Truth
We started with the firehose. New models every two weeks. New tools every month. New frameworks, new benchmarks, new capabilities. The constant, exhausting feeling of falling behind.
Here's the liberating mindset shift: the most important skill for working with AI agents is one that's been studied and documented for decades. It's people management. It's in books you can buy for $15 on Amazon. It doesn't have a version number. It doesn't get deprecated.
The progression has been building for a while now. First, AI made you an engineer. Vibe coding put real development capability in the hands of anyone willing to try. Then it made you a product manager because the hard part wasn't building things, it was knowing what to build and applying taste and judgment to the output. Now it's making you a people manager because the agents are here, they're capable, and the bottleneck is your ability to set context, provide feedback, and build the kind of institutional knowledge that turns a new hire into a trusted team member.
As agents get more capable, this management challenge only grows. We'll move from managing one agent to managing teams of them. The people who invest in management fundamentals now, who learn to set purpose, build organizational context, document process, and give specific, honest feedback, will have an enormous advantage.
So the next time you feel the panic of falling behind on the latest AI development, consider this: maybe what you actually need isn't a new tool. Maybe it's a management book.
Here are four places to start:
- The Making of a Manager by Julie Zhuo, for the Purpose, People, and Process framework
- Radical Candor by Kim Scott, for the feedback framework that transforms agent interactions
- Management as AI Superpower by Ethan Mollick, for the academic perspective on why management skills are the real AI advantage
- Effective Context Engineering for AI Agents by Anthropic, for the technical deep dive on how context shapes agent behavior
Keep Reading
- The Degenerate's Guide to Vibe Coding — How I build software without being a software engineer
- Something Big Is Happening, and Taste Is More Important Than Ever — The 10-80-10 framework for AI-assisted work
- This Is Your Second Brain on OpenClaw — How I built a personal AI assistant on top of my knowledge base
I'm building my AI productivity system in public and documenting everything. If you want to follow along with the weekly experiments, setups, and lessons I don't post publicly, join my newsletter.
😈 Subscribe to The Degenerate