
Power User Mode: Running 3 Agents in Parallel

8 min read · AI, Workflow, Productivity, Developer Tools · Anmol Mahatpurkar

The first time I ran three agents in parallel, it felt slightly absurd.

Three terminals. Three copies of the repo. Three features moving at once. Questions arriving from one session while another was finishing a draft and a third was midway through a test pass.

It looked chaotic.

It was also the first time I really felt the compounding effect of the workflow I had been building.

Voice prompting, scenarios, dry runs, browser feedback, spec-first direction, review discipline. Those techniques are useful individually. But once they are stable enough, something else becomes possible:

parallel execution.

That is where the productivity jump stops feeling incremental.


Parallelism Is Not the First Trick. It Is the Last One

This is the important caveat.

Running multiple agents at once is not the starting move. It is the thing you do once the single-agent workflow is already reliable.

If your prompts are vague, if your feedback loop is weak, if your review discipline is inconsistent, then three agents just multiplies the chaos.

Parallelism only works when the earlier layers are already in place:

  • strong prompts
  • bounded tasks
  • scenario definitions
  • browser or test feedback
  • good review habits

Without those, you are not scaling throughput.

You are scaling confusion.


What the Setup Actually Looks Like

The infrastructure is boring, which is part of why I like it.

I use multiple copies of the same repo in separate directories. You can do this with multiple clones, multiple worktrees, or any setup that gives each agent an isolated workspace.

The point is simple:

  • each agent gets its own environment
  • each environment can change files freely
  • I can review and merge the results independently

That is enough.

You do not need some grand multi-agent orchestration platform.

You need clean isolation and a human in the middle who still understands the architecture.
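For the worktree variant of this setup, the whole thing fits in a few commands. The sketch below is illustrative: the paths, branch names, and the stand-in initial commit are all placeholders, not my actual repo.

```shell
# Three isolated checkouts of one repo, one per agent.
# Paths and branch names are illustrative.
git init -q repo && cd repo
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "init"        # stand-in for an existing history

git worktree add ../agent-1 -b agent-1/feature
git worktree add ../agent-2 -b agent-2/refactor
git worktree add ../agent-3 -b agent-3/tests

git worktree list    # main checkout plus three agent directories
```

Each directory is a fully independent working tree that shares one object store, so every agent can change files freely and the results come back later as ordinary branches to review and merge.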


The Real Bottleneck Is You

This is the key insight people miss.

The agents are usually not the bottleneck.

You are.

The model can keep generating, testing, and iterating. The limiting factor is your ability to:

  • frame the task clearly
  • answer clarifying questions
  • review the output
  • merge decisions back into the codebase

That is why there is a natural ceiling.

For me, three agents is usually the sweet spot.

One agent leaves too much idle time.

Two is good.

Three keeps the pipeline full without destroying my attention.

Beyond that, the context-switching cost starts eating the gain.

The problem is no longer "can the agents work faster?"

It is "can I still direct them well?"


Why Three Works

Three works because it fits the rhythm of the work.

A typical cycle looks like this:

  1. Agent 1 gets a feature task.
  2. While Agent 1 explores or implements, I prompt Agent 2.
  3. While both are moving, I prompt Agent 3.
  4. By the time I have done that, Agent 1 usually has questions or a first draft.
  5. I switch, answer, review, or redirect.
  6. Then Agent 2 comes up.
  7. Then Agent 3.

That rotation feels natural.

There is almost always something ready for my attention, but not so much that I lose track of the state of each task.

That balance matters more than some theoretical maximum number of sessions.

You are not trying to maximize open windows.

You are trying to maximize useful flow.


The Type of Work That Parallelizes Best

Not all tasks should be run in parallel.

The best parallel tasks are:

  • well-bounded
  • loosely coupled to each other
  • easy to review independently
  • unlikely to produce heavy merge conflicts

Examples:

  • one agent on a UI feature
  • one agent on tests
  • one agent on infra or supporting refactors

Bad candidates:

  • three agents changing the same state model
  • multiple agents touching the same component tree heavily
  • tasks where the output of one task defines the shape of the next

The rule is simple:

parallelize implementation, not uncertainty.

If the architecture is still moving, keep the work local or sequential until the direction is stable.


The Best Role Split

I have found that the cleanest parallel pattern is not "three random features."

It is usually something like:

  • Agent 1: primary feature
  • Agent 2: adjacent feature or supporting refactor
  • Agent 3: tests, verification, or review

That third slot is underrated.

A lot of people imagine three builders.

Often the highest leverage is actually:

  • one agent building
  • one agent building something else
  • one agent validating or reviewing

That keeps the overall system healthier.

Because the goal is not just more code.

It is more trustworthy output.


Cross-Model Review Is Where This Gets Interesting

One of my favorite uses of parallelism is review by a different model.

If one model produces the implementation, another can review the diff with a fresh set of blind spots.

This works much better than it sounds.

Not because the second model is magically objective, but because different models miss different things.

So a common pattern for me is:

  1. Agent 1 implements the feature.
  2. Agent 2 reviews it critically.
  3. I feed the review back to Agent 1.
  4. Agent 1 fixes.
  5. Agent 3 validates in the browser or through tests.

That is a powerful loop.

And it is exactly the kind of loop you cannot easily run if you are thinking only in single-session terms.

Parallelism is not just about more implementation.

It is also about more feedback, sooner.


The Merge Problem

The obvious downside of multi-agent work is integration.

If multiple sessions are changing related parts of the codebase, merge friction can erase your gains quickly.

That is why codebase hygiene matters even more in parallel mode:

  • linting
  • strict types
  • tests
  • disciplined diffs

These are not abstract best practices here. They are operational necessities.

I want every merge step to answer one question fast:

did the combined result stay correct?

If the answer comes back quickly from types and tests, the workflow stays fluid.

If not, you start burning time in reconciliation.

The better your verification layer, the safer the parallelism.
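One way to make that question mechanical is a small merge gate: an agent's branch is only accepted if the combined result still passes every automated check. This is a sketch, and the commands in `CHECKS` are placeholders for whatever lint, type, and test steps your stack actually runs.

```shell
# Merge an agent branch, then require every check to pass on the result.
# CHECKS holds placeholder command strings; substitute your real steps.
merge_gate() {
  git merge --no-ff --no-edit "$1" || return 1
  for check in "${CHECKS[@]}"; do
    sh -c "$check" || { echo "check failed after merging $1: $check" >&2; return 1; }
  done
}

# Example wiring (stand-in commands):
CHECKS=("true")   # e.g. ("npm run lint" "npm run typecheck" "npm test")
# merge_gate agent-1/feature
```

The point of the `--no-ff` merge is that every integration is an explicit commit you can revert as a unit if the checks fail, which keeps reconciliation cheap.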


A Typical Day in This Mode

A good day in this workflow does not feel like "AI did everything."

It feels like I am constantly routing work through a system.

I spend most of my time on:

  • framing tasks
  • making architectural calls
  • answering clarifying questions
  • checking diffs
  • deciding what gets merged

Very little of my time is spent typing implementation from scratch.

That is the deeper shift.

Parallel agents make it impossible to ignore that your role has changed.

You stop thinking like a solo typist.

You start thinking like a reviewer, architect, and traffic controller.

That is why I call this power user mode.

Not because it is flashy.

Because it forces you into the higher-leverage parts of the job.


The Honest Caveats

This mode is not free.

It has costs:

  • more cognitive load
  • more merge discipline required
  • more temptation to let mediocre work slip through because throughput feels high

It can also create false confidence.

You can feel productive because many things are moving, while quality quietly drops.

That is why I keep coming back to the same rule:

throughput only matters if review quality holds.

If I cannot still review sharply, then the extra agent is hurting, not helping.

That is also why I think three is the practical ceiling for me most days.

It is not a technical limit.

It is an attention limit.


When to Start Using This

My advice is simple:

Do not start here.

Start parallel work only after:

  • you trust your single-agent prompting
  • you have a solid feedback loop
  • you know how to review AI output properly
  • your codebase has real guardrails

Once those are true, add a second agent.

Then maybe a third.

Treat it like scaling any other system:

stabilize first, then add concurrency.

That approach is much less glamorous than jumping straight to five agents and a dashboard.

It is also much more effective.


What Power User Mode Really Is

From the outside, this workflow can look like "the AI is writing code while I supervise."

That description misses the point.

What is really happening is that I am decomposing engineering work into a set of clear, reviewable, testable streams and letting multiple agents execute inside those lanes at once.

That is a very different thing.

It is not laziness.

It is leverage.

And if the earlier parts of the workflow are strong enough, it works shockingly well.

