Option Storming — The 3D Chess Behind Claude Code
Think it → it exists. That's the actual target.
TL;DR: I took a closer look at what the Claude Code team is doing. On the surface it looks like nifty automation: 10 terminals open, “a third of my engineering work is now on my phone.” Cute. But underneath that casual story there are at least three more layers. They build the machine that builds the code that builds their actual machine. And the deepest layer is something else entirely: they’re not brainstorming. They’re option storming. High-speed discovery on a thing they simultaneously ship. 3D chess. The public story is layer one. I don’t know if they know that or if it’s just happening to them.
The ideal they work towards is: people literally create their software futures by speaking them out loud.
People don’t need to understand what I unpack here; they feel and experience it naturally, and that’s what makes it so addictive to many.
Watch any video of the Claude Code team at work. Multiple terminals open. Parallel threads running. Experiments spun up and lots of them killed right away.
On the surface: hyper busy. ADHD-style, which they even mention themselves.
That reading hides what’s actually going on. And decoding what’s underneath the cool story is worth a few minutes. Because it reveals something most teams haven’t understood yet. And probably won’t for a while.
What you’re watching is the visual signature of a completely different way to build things. They call it engineering. I’d call it option storming.
Brainstorming is dead. Long live option storming.
Brainstorming assumes execution is expensive. So ideas get filtered early — by discussion, by judgment, by committee. Most things die before they touch reality.
“IT is the place where nice things go to die.” Remember that? That might be a thing of the past.
Option storming turns this upside down.
When execution is cheap and speed is insane, you stop filtering before the experiment. You run the experiment. Any experiment. Now. The moment it crosses your mind. Let reality do the filtering.
That might sound subtle and obvious at the same time. And it changes almost everything.
The Claude Code team operates on exactly this logic. They execute basically every thought they have. Ideas that didn’t work 6 months ago? Try again. People starting work on that team are explicitly confused about the perceived “waste.” The answer: why not? The cost of re-testing is trivial, and there’s a high chance the environment has changed: new models, better agents, different usage patterns. Let’s see, give it a try. Nothing to lose.
Historical rejection becomes weak evidence when the world keeps moving.
And that explains a lot more than just code. Unfiltered observation of customer complaints on social media. Insights from dogfooding inside Anthropic. Let’s just follow all signals. Now. What’s there to lose, again?
All early filters are gone. Why do deep user research when it’s faster to create the reality, look at it, and keep or dismiss? What’s the actual value of overthinking? That’s why it feels so alien.
If you’re still running design sprints to decide what to build — you’re solving last decade’s problem. The bottleneck moved. You just haven’t noticed.
The 3D chess behind the scenes
The simplified story goes: AI writes code, engineers review, productivity goes up.
Totally true. And totally missing the point. Three games are running at the same time.
Layer 1: Build the product.
Layer 2: Build the tools to build the product.
Layer 3: Build the exploration infrastructure that makes discovery cheap enough to sustain Layer 1 and 2.
That’s the meta. That’s what those open terminals represent — parallelized discovery threads, each one probing a different part of the design space.
The engineers aren’t just building software. They’re designing the machine that builds the software that becomes the machine. Claude Code.
And they’re doing it inside the machine.
Claude Code builds Claude Code. The development environment is recursive. You’re observing agents, inside the system, while evolving the system. What seems like engineering is research lab work with tight integration gates, working on a moving target.
The necessary background skill they built: raising exploration code to production quality, so that what gets created as an experiment can simply go live. That’s where the guardrails come in, so the code doesn’t need to be reviewed. One stage less. Again.
Hold that moment: they removed the review stage by making exploration production-grade. That’s not a local process improvement; it creates an entirely different architecture of work.
How this puts the customer at the center
In this model, internal reasoning is now only secondary input. The primary guide:
usage patterns
friction in real workflows
feedback from people actually using the thing, e.g. from internal dogfooding (Finance guys use it? Let’s create Claude Cowork for them. Because we can. Security? Let’s include a VM, who cares? Let’s see that thing. 10 days? Why think about it?)
The loop becomes: user signal → option implemented → real usage → keep or discard.
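That loop can be made concrete. Here is a minimal, purely illustrative Python sketch (all names here are hypothetical, not from any real tool): every option gets built immediately, and only the measured usage signal, not upfront judgment, decides what survives.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Option:
    """One idea, executed immediately as a live experiment."""
    name: str
    build: Callable[[], object]           # cheap generation: make the thing exist
    evaluate: Callable[[object], float]   # real usage signal, not opinion


def option_storm(options: list[Option], keep_threshold: float) -> list[str]:
    """Build every option now; let reality do the filtering, as late as possible."""
    survivors = []
    for opt in options:
        artifact = opt.build()             # no early filter: just run it
        signal = opt.evaluate(artifact)    # observe real usage of the artifact
        if signal >= keep_threshold:
            survivors.append(opt.name)     # keep what works; discard the rest
    return survivors


# Toy run: three options, including a retry of an old idea (retest cost: trivial).
opts = [
    Option("voice-mode", lambda: "v1", lambda a: 0.9),
    Option("vm-sandbox", lambda: "v1", lambda a: 0.7),
    Option("old-idea-retry", lambda: "v2", lambda a: 0.2),
]
print(option_storm(opts, keep_threshold=0.5))  # ['voice-mode', 'vm-sandbox']
```

The point of the sketch is the ordering: `build` runs before any judgment, and filtering happens only after a real signal exists.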
That’s why products built this way feel unusually well-tuned. They are not designed and filtered by assumption, but shaped by many small empirical corrections. Evolutionary selection can now finally win over big design up front. Because: why think now? This is the next level of giving up the idea of control. Change user behaviour? Why? We can just give them what they need.
The shift to real evolutionary design of the machine that builds the machine — that’s the actual achievement. Not the code. Not the terminal count. The loop.
The real competitive advantage
What’s hard here isn’t generating ideas. Everyone can generate ideas.
What’s hard:
building fast evaluation loops
detecting real signal in the noise
filtering coherently
integrating what survives without creating chaos
The core competence moves from ideation and early-stage filtering to creating all options simultaneously and filtering as late as possible (or rather, as late as necessary). The whole decision of what makes sense moves to the stage where the thing actually exists.
That is the revolution behind this.
It’s a pattern that shows up whenever generation becomes hyper cheap. Hardware: think Shenzhen, where physical prototyping got so cheap that decisions moved to after the thing existed. Cheap prototypes enable late decisions. Software: think of what’s happening right now.
The public narrative around “AI writes code” buries the real insight.
It’s not about the generation. It’s about what you do with the explosion.
The most powerful teams in these environments stop thinking of themselves as builders.
They become designers. More of this. Less of that. More like this? Let me look at the actual thing. And when the LLM doesn’t solve a use case cleanly? Never mind. Next one. The deficiencies of the models barely matter when your exploration surface is this wide.
That’s the game.
Most people watching from the outside are still counting terminals.
But here’s the part nobody wants to hear. If your business depends on fulfilling concrete promises — not exploration but delivery on commitments already made — this model might not work for you. At all. The Claude Code team can do this because their product IS the exploration tool. It’s turtles all the way down.
So the real question isn’t “how do I copy what Cherny’s team does.” The real question is: does your business model allow for massively parallel discovery? And if it doesn’t — would you dare to shift it so it does?
Because that’s where the divide is forming. And it’s getting wider fast.
What you needed to hear
What Cherny’s team is really working on, knowingly or not:
“The moment I think it, it exists.” Voice mode is just the latest approximation. If you can think it and speak it, it will exist.
And that’s what makes people so addicted to the tools.
People don’t need to understand what I unpack here; they feel and experience it naturally: they literally create their software futures by speaking them out loud.


