Research. Plan. Implement.

I’ve been vibe coding for a while now.

And for a long time, it felt like a coin flip. Sometimes the AI nailed it. Sometimes it went completely off track and I had no idea why.

Then I tried Research → Plan → Implement, a pattern introduced by Dexter Horthy, CEO of HumanLayer. It’s been working pretty well, so I figured I’d write it up.

Here’s what it is, how I set it up in Cursor, and why it makes sense.

First, the setup

I use three simple commands in Cursor: /research, /plan, and /implement.

All the files they generate go into a .rpi/ folder at the root of my project:

.rpi/
└── research-docs/
└── plans/

That’s it. It doesn’t touch your codebase. Everything stays organised and out of the way.
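If you want to scaffold those folders up front, a one-liner does it (nothing Cursor-specific here):

```shell
# Create the RPI working folders at the project root
mkdir -p .rpi/research-docs .rpi/plans
```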

Now here’s how you actually use it.

Step 1: /research

Open a new conversation. Type /research and describe the feature you want to build.

The agent goes through your codebase and puts together a research document: the relevant files, context, and code snippets it thinks it’ll need. Nothing extra.

You read it. If something’s wrong, fix it. If something’s missing, add it. Then you approve it.

This matters because you know exactly what’s going into the next step.
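For reference, Cursor reads custom slash commands from markdown files under `.cursor/commands/` (the filename becomes the command name, if I have the convention right). The author promises their exact templates in the comments; what follows is only a hypothetical sketch of what a /research command could say, with the prompt wording and `.rpi/` paths as my assumptions:

```shell
# Hypothetical sketch of a /research command file for Cursor.
# Assumes Cursor's custom-commands convention: markdown files in .cursor/commands/.
mkdir -p .cursor/commands
cat > .cursor/commands/research.md <<'EOF'
Research the feature described below before writing any code.

1. Search the codebase for files relevant to the feature.
2. Write a research document to .rpi/research-docs/<feature>.md containing:
   - the relevant file paths and why they matter
   - the key code snippets, quoted verbatim
   - any constraints or conventions the implementation must follow
3. Do NOT propose a plan or write implementation code yet.
EOF
```

The point of the last line is to keep the step honest: research only, no solutioning.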

Step 2: /plan

New conversation. Type /plan and give it the research file.

Here’s the key thing: the agent only looks at the research doc. Not your codebase. No distractions, no extra noise.

It gives you a clear step-by-step plan for how it’s going to implement the feature.

You read it before a single line of code is written. If something looks off, you change it now, not after untangling 300 lines of generated code.
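Assuming the same Cursor convention of custom slash commands as markdown files under `.cursor/commands/`, a hypothetical /plan template (not the author’s) could enforce the “research doc only” rule directly in the prompt:

```shell
# Hypothetical sketch of a /plan command file.
# Assumes Cursor's custom-commands convention: markdown files in .cursor/commands/.
mkdir -p .cursor/commands
cat > .cursor/commands/plan.md <<'EOF'
You will be given a research document from .rpi/research-docs/.

Work ONLY from that document; do not re-read the codebase.
Write a step-by-step implementation plan to .rpi/plans/<feature>.md with:
- ordered steps, each small enough to verify independently
- for each step, which files change and what the change is
- no implementation code yet.
EOF
```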

Step 3: /implement

New conversation. Type /implement and pass in the plan.

The agent now has a tight, clean context. It knows exactly what to build, in what order, and why. No guessing.

And because you’ve already been through the research and the plan, you know what’s coming. So when it hits a problem, you can actually help instead of just staring at it.
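Completing the set, here is one more hypothetical sketch (again assuming Cursor’s `.cursor/commands/` markdown convention, and again not the author’s template) of what an /implement command might instruct:

```shell
# Hypothetical sketch of a /implement command file.
# Assumes Cursor's custom-commands convention: markdown files in .cursor/commands/.
mkdir -p .cursor/commands
cat > .cursor/commands/implement.md <<'EOF'
You will be given an implementation plan from .rpi/plans/.

Implement it step by step, in order:
- complete one step fully before starting the next
- if a step turns out to be wrong or impossible, stop and say so
  rather than improvising around the plan.
EOF
```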

Why this works

The output you get from AI is only as good as the context you give it.

Most people give it everything at once: the feature, the codebase, and the vague hope that it figures it out. That’s why it often goes sideways.

This method gives each step exactly what it needs. Nothing more. The LLM isn’t guessing or getting distracted. It’s working from a clean brief every time.

And the useful part? You understand the solution before it exists. That means fewer surprises, less cleanup, and more confidence in what gets shipped.

One thing to keep in mind

This is most useful for medium to large features. For small changes, just ask directly; models are good enough now that you don’t need all three steps.

Save the RPI flow for the work that actually matters.

I’ll share my exact Cursor command templates in the comments if enough people are interested.

Have you tried separating research, planning, and implementation into different conversations? Would love to hear how it went.