Friday, March 6, 2026


TECH & INNOVATION: AI, uncertainty, and the myth of prediction

Artificial intelligence and large language models appear to accelerate everything, creating the illusion that prediction has replaced preparation. That illusion is dangerous because flexibility, redundancy, and learning still matter.

Editor’s note: Besides tracking technological advancements and innovations, our author is a Juilliard-trained composer. Listen to “The Myth of Prediction,” an original improvisation by Howard Lieberman mapping ten octaves onto 88 keys, composed for this column.

This is Part Four of the series we have been running since the first of the year on America’s time horizon problem. Click here for Part One on how America trained its competitors, here for Part Two on how short-termism is an executive disease, and here for Part Three on what happens when politics overrides engineering.

When speed creates the illusion of control

Every era has a technology that tempts leaders to believe the future has become predictable. Today, that technology is artificial intelligence, and particularly large language models. They appear to compress time. They respond instantly. They generate confident answers. They create the impression that uncertainty itself has been tamed. This is a dangerous illusion. Speed is not the same thing as understanding. Prediction is not the same thing as preparedness. And information, no matter how abundant, does not eliminate the need for judgment, humility, or long-term thinking.

AI systems are powerful tools. They are extraordinary amplifiers of human capability. But they are not oracles. They do not remove uncertainty. They merely shift where it shows up. Leaders who mistake acceleration for foresight repeat the same error that has already hollowed out American competitiveness. They confuse responsiveness with resilience.

Speed is not the same thing as understanding. Prediction is not the same thing as preparedness. Howard Lieberman created this image with AI assistance.

Why prediction has always been the wrong goal 

When reality hits, and it always does, being flexible is the only way to survive. Howard Lieberman created this image with AI assistance.

The deeper problem is not AI itself, but the longstanding belief that prediction is the highest form of intelligence. Engineers know better. Complex systems do not become predictable as they grow more interconnected. They become more sensitive, more nonlinear, and more prone to surprise.

The history of engineering is not a history of accurate forecasting. It is a history of building systems that can absorb failure, adapt under stress, and recover when assumptions break. Redundancy, margin, and optionality are not inefficiencies. They are survival strategies.

AI does not change this. In some ways, it intensifies it. As decision cycles shorten, the cost of being wrong increases. As outputs become more persuasive, the temptation to skip verification grows. The danger is not that AI will make decisions for us. The danger is that it will make us overconfident in decisions we barely understand. The question is not whether we can predict the future. We cannot. We never could. The question is whether we can build systems flexible enough to respond when the future diverges from our expectations.

This is where time horizons matter most. Organizations optimized for short-term efficiency struggle with flexibility. They remove slack. They eliminate redundancy. They treat backup plans as waste. Everything is tuned for a narrow set of assumptions. When those assumptions fail, the system has nowhere to go.

By contrast, organizations that expect uncertainty invest differently. They value generalists alongside specialists. They maintain overlapping capabilities. They preserve institutional memory. They design for recovery, not perfection. These choices often look inefficient in the short term. They are decisive in the long term.

AI does not remove the need for this mindset. It rewards it. The organizations that benefit most from AI will not be the ones that automate fastest, but the ones that can adapt fastest when automation produces unexpected consequences.

Why backup plans do not show a lack of confidence

One of the quiet cultural failures of recent decades is the belief that having a backup plan signals weakness. Confidence became conflated with certainty. Leaders were expected to project inevitability. Doubt was treated as incompetence.

Engineers know this is backward.

Backup plans are not pessimism. They are respect for reality. They acknowledge that systems fail, that humans err, and that complexity defeats even the best models. In aviation, medicine, and safety-critical engineering, redundancy is nonnegotiable. No one calls it lack of confidence. They call it professionalism.

In economic and political systems, we forgot this lesson. We optimized away alternatives. We bet on single paths. We trusted stories over structures. AI now threatens to accelerate this mistake by making confident answers cheap and plentiful.

The future will punish that error.

Why slow and steady still wins the race

China’s advantage has never been prediction. It has been patience. Long-term planning does not require knowing exactly what will happen. It requires committing to capacity building, learning, and continuity even when outcomes are uncertain.

America once excelled at this. We built institutions that outlasted individuals. We invested in people, teams, and infrastructure that accumulated capability over decades. That is how we won previous technological eras.

AI does not change the rules of that game. It raises the stakes.

Nations and organizations that treat AI as a shortcut will amplify their existing weaknesses. Those who treat it as a tool within a resilient, learning-oriented system will gain leverage. The difference will not be visible immediately. It never is. It will appear later, suddenly, when one side can adapt and the other cannot.

It takes time to find the right horizon. Howard Lieberman created this image with AI assistance.

Choosing the right horizon

This series has argued one simple point from multiple angles. Time horizons shape outcomes. Short-term optimization produces fragility. Long-term learning produces strength. This is true in manufacturing, in executive culture, in politics, and now in AI.

We do not need better prediction. We need better preparation. We do not need leaders who promise certainty. We need leaders who build capacity. We do not need systems that look efficient. We need systems that can endure.

The future will not belong to those who guessed right. It will belong to those who remained flexible, preserved learning, and respected reality long enough to adapt. That choice is still available. But it is a choice about time.


The Edge Is Free To Read.

But Not To Produce.

