Using the Unknown as a Brake - Happening Again
Calling it caution doesn't make it a neutral stance. It's boring, and it's simply virtue signalling.
The other day, I came across a LinkedIn post that asked a series of careful questions about AI’s economic future.
Examples: What if AI coding becomes so expensive that hiring humans is cheaper again? What if companies build products on unit economics that break when subsidised plans end? What if employees become so AI-dependent that their cognition atrophies? What if rising costs create a closed talent pool — only those already employed can afford to learn the technology?
The post ended: “No doomer. Just trying to understand risks better.”
I replied in a comment that such answers are unknowable in the formative years of any transformative technology, and that using unknowability as a reason not to act is itself a position. It doesn't feel like a position. It feels smart. But it isn't neutral, and it isn't as safe as it assumes.
Let me break it down a bit.
Jevons’ Paradox and the cost fear
As many people know, in 1865 the economist William Stanley Jevons observed something quite counterintuitive about the steam engine. As steam engines became more efficient and more accessible, requiring less coal to produce the same output, total coal consumption went up, not down.
That's the paradoxical law: Efficiency makes a resource economically viable for more use cases. More use cases mean more usage. More usage means more infrastructure investment. More infrastructure investment drives further efficiency and cost normalisation.
Jevons called this the rebound effect. We now call it Jevons' Paradox, and it has held across nearly every resource-technology pairing since. (One current discussion, of course, is whether AI breaks the pattern: whether it is the black swan, the ultimate discontinuity, the end game of innovation that finally ends this law, say because work itself becomes meaningless and we no longer have a clue at all what's going on. As interesting as that may be: if we all die, we all die, right? So that extreme is, ironically, not that interesting to watch or think about.)
Applied to AI: the fear that “AI coding will become too expensive” assumes a static economy. If AI coding becomes more efficient — which is the direction of every technology after adoption begins — it becomes viable for more use cases, which drives usage growth, which drives infrastructure investment, which normalises costs over time. This happened with computing, with cloud storage, with internet bandwidth. The cost concern gets the direction of travel backwards.
And if AI stays expensive? Then only the highest-value use cases survive, which is a market working as intended. That’s not a catastrophe. That’s selection.
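If it helps to see the mechanics, here is a minimal sketch of the rebound logic in Python. The constant-elasticity demand curve and every number in it are my illustrative assumptions, not Jevons' figures and not AI pricing data; the only point is the threshold: when demand is elastic enough (price elasticity above 1), efficiency gains increase total resource consumption.

```python
# Toy model of the rebound effect. Assumptions (mine, for illustration):
# demand follows a constant-elasticity curve, and the service price is
# proportional to the resource cost per unit of output.

def total_resource_use(efficiency: float, elasticity: float,
                       base_demand: float = 100.0) -> float:
    """Total resource consumed at a given efficiency level."""
    resource_per_unit = 1.0 / efficiency         # doubling efficiency halves resource per unit
    price = resource_per_unit                    # price assumed proportional to resource cost
    demand = base_demand * price ** -elasticity  # constant-elasticity demand response
    return demand * resource_per_unit

for eff in (1.0, 2.0, 4.0):
    elastic = total_resource_use(eff, elasticity=1.5)    # demand strongly price-sensitive
    inelastic = total_resource_use(eff, elasticity=0.5)  # demand barely price-sensitive
    print(f"efficiency x{eff:g}: elastic {elastic:.1f}, inelastic {inelastic:.1f}")
```

Run it and total consumption rises with efficiency in the elastic case (roughly 100, 141, 200) and falls in the inelastic one (roughly 100, 71, 50). The argument above is, in effect, a claim that demand for AI-assisted work sits well inside the elastic regime.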
Ultimately, beyond any of that detailed reasoning, take my prediction: even if OpenAI and Anthropic die economically, the tech will stay. Better get used to that.
The historical cascade
Let’s ask the same questions about other transformative technologies.
The printing press. Gutenberg, 1450. Did it have exclusively positive effects? No. It enabled mass propaganda, accelerated religious conflict, destabilised institutions that had held power for centuries. The Reformation was partly a printing press story. So were some of the bloodiest wars in European history. Was it society’s task to manage the consequences? Yes. Did we do it well or quickly? Libraries took centuries to build. Copyright law came 250 years later. Censorship battles are still ongoing. Did the negative effects and the incomplete management stop economic engagement? No. Books, newspapers, mass literacy, knowledge spread — unstoppable from the moment it started.
Electricity. The "war of the currents" between Edison's DC system and the Tesla-Westinghouse AC system was a literal industrial battle over standards, involving deliberate electrocutions of animals in public demonstrations to discredit a competitor's system. There were fires, deaths, monopoly battles, and decades of pollution from coal-powered generation that still hasn't been fully resolved. Did society develop building codes, safety standards, grid regulation? Yes, imperfectly, over decades. Did it stop? No. Electricity reached into everything.
Cars and traffic. Approximately 1.35 million people die in road accidents every year globally. Urban air pollution from vehicle emissions remains a public health crisis. Entire city architectures were redesigned around the car at enormous social cost — displacement, inequality, lost public space. Seatbelts weren’t mandatory in most countries until the 1970s. Emissions standards came even later. The regulation has been partial and perpetually behind. Did that stop adoption? More cars exist today than at any point in history.
The pattern is consistent. Were the effects exclusively positive? Never. Is it society’s task to manage the downsides? Yes. Do we always manage it well? No. Does that stop economic engagement? No. Must the economy continue engaging? Yes. Can innovation be contained? No.
The problem with using the unknown as a brake
This is the flaw in "No doomer, just understanding risks." (And nothing against the person who wrote that LinkedIn post; we exchanged friendly DMs afterwards.)
Using unknowability as a reason not to act looks like caution, and it looks insanely smart. But it is a position: a bet that the unknown will stay unknown, that costs will stay high, that the higher-order value container won't emerge. That is the speculative claim. Smart framing doesn't change what it is.
But it’s worse than just being a losing bet. It’s self-defeating.
Consider the specific risks the LinkedIn post raised. Cognitive atrophy — employees who can’t function without AI. The protection against this isn’t avoidance. It’s learning how to work with AI without losing your own judgment. That knowledge is only available from inside the practice. You cannot develop it from outside.
Or the risk of a closed talent pool — only those already employed with AI access can develop competence. Waiting makes this worse, not better. The gap between those engaging and those abstaining compounds over time. Every month of “let me wait until I understand the risks better” is a month of compounding disadvantage.
Or the unit economics risk — products built on AI that become unviable if costs rise. The companies that will navigate this best are the ones deeply familiar with the technology, building the flexibility and optionality to respond. That knowledge isn’t available from the outside.
In every case: the risk you’re trying to avoid by waiting is made worse by the waiting. The protection against AI-related risk is not less AI experience. It’s more.
In simpler terms: even if you are "against" AI and want to avoid the risks, you'd better lean in and learn the tech. Otherwise you have no chance of harnessing it responsibly.
What the historical examples actually teach us
Gutenberg didn’t give societies the option of not adopting the printing press. The societies that mattered — that shaped what came next — were the ones that engaged with it, understood it, built institutions around it, learned to manage it.
The same with electricity, with cars, with the internet.
The negative effects were real, obvious, and simply cannot be negated. Nor can they be fully avoided. The societal management was, and will always be, imperfect. None of that is in dispute. But the question was never whether. It was always how. And "harm can be done" gets used as a useless, helpless brake, simply signalling an assumed higher, better morality. Virtue signalling. "No AI-generated bit will touch my code." Have fun with that position in the real economy, outside of a tiny niche. Think vinyl: great for enthusiasts, but not a huge thing.
The "how" question is the only question available with AI.
Not: should we? That's a done deal, like it or not. And it's not a decision any individual or company makes; it's driven by the same dynamics that made the printing press, electricity, and cars inevitable.
The question is how. How do you engage intelligently? How do you build knowledge that lets you influence the how-society-manages-this conversation, rather than watch it from outside? How do you size the bet appropriately as part of your life — staying in without betting the house?
Treating the unknown as a brake doesn’t give you safety. It gives you less information, less capability, and less influence over exactly the outcomes you’re worried about.
A defensible position
Treat it as a bet. It’s not recklessness — it’s accuracy. You are placing a bet either way. The question is whether you’re placing it consciously.
Size the bet relative to your capacity. You don’t bet everything on year one of any infrastructure wave. You stay in long enough to find what becomes possible — browsers didn’t exist before the internet; SaaS didn’t exist before cloud; the higher-order AI container hasn’t been found yet.
Build knowledge, not avoidance. The knowledge of how to work with this technology, how it fails, where it adds real value, where it creates new dependencies — that’s the only real protection against the risks being raised.
The Gutenberg option - opting out of the printing press - was never available to the societies that created what followed and built upon it (e.g. widespread education). The electricity option was never available. The car option was never available.
The AI option isn’t available either. The only question is how you engage with what’s already in motion.


