

The Human Voice

We are entering a moment in which the boundary between knowledge and the appearance of knowledge is becoming harder to hear… To use these [artificial intelligence] systems well is not to grant them authority, but to place them in their proper context. They are tools, not arbiters.

Editor’s note: This column is the third of a three-part perspective on AI by Michael Saltz, multi-award-winning Senior Producer for the PBS NewsHour, now retired. The series was originally published on Saltz’s Substack. Click on the titles to read the first two, “Inside the Outside” and “Inside the Outside—Coda.”

When I was producing the essay segment for what is now the PBS News Hour, part of my job was to find writers who embodied what the program stood for journalistically. Over time — twenty‑five years — fifty‑seven writers ended up on the air, but that number reflects a deeper process of selection. Early on I discovered that people who wrote op‑eds or essays for newspapers and magazines tended to fall into two broad camps. There were the ideologues — people who wanted to be players in politics, whose writing was less about observing the world than influencing it. And there were those whose roots were in reporting, in the ethics of journalism: observers of fact who let the world instruct them. They weren’t unbiased — no one is — but they were willing to look at facts that contradicted their assumptions and adjust their views accordingly. They were learning rather than telling. Inevitably, I rejected the ideologues, because they simply didn’t belong on the NewsHour.

My NewsHour experience taught me something essential about how people sound when they’re learning — and how different that is from the sound of certainty. That distinction, between learning and telling, is the essence of journalism and its ethics. And it lies at the core of what I’m only now beginning to understand about AI — and just how truly artificial it is beneath the surface.

That distinction came back to me when I read a February 13, 2026, story in The Washington Post about how ChatGPT describes different states and cities. The researchers had asked the system to characterize places across the country, and what emerged were confident, fluent summaries built on patterns the model had absorbed — patterns that often reflected stereotypes rather than facts. The system wasn’t reporting; it was aggregating. But it spoke in the voice of someone who had done the reporting.

What was unsettling about the Post article — and about the current wave of artificial intelligence — is not that it gets things wrong; people get things wrong all the time. It is that it gets things wrong with the confident assertion of someone who sounds as if they’ve done the reporting. Unlike a human being, AI has no awareness of error. It cannot feel the friction of being contradicted by the world, cannot sense the gap between what it believes and what is true. In that way, it resembles the ideological essayist: moving confidently from premise to conclusion without ever questioning the premise itself. The difference is that a human ideologue, somewhere deep down, may still feel the tremor of doubt. A machine cannot feel even the slightest tremor. It simply continues the pattern it has learned, unaware that the pattern may be distorted.

The record of earlier, narrower systems shows how this plays out in practice. The COMPAS criminal‑risk algorithm, used in American courts, didn’t simply mislabel Black defendants as high risk; it did so because it relied on historical arrest data as a proxy for criminal behavior. In effect, it treated the past behavior of the criminal justice system as evidence about the person standing before it. Decades of unequal policing were baked into a score that appeared neutral and objective. The algorithm wasn’t “biased” in the emotional sense — it was biased in the structural sense, because the data it learned from carried the imprint of the world that produced it.
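
For readers who want to see how little it takes, here is a minimal sketch in Python. Everything in it is invented for illustration; it is not the COMPAS model, which is proprietary. It shows only the shape of the error: two groups with identical underlying behavior, one policed more heavily, and a “risk” signal built on arrest records.

```python
# A hypothetical sketch of proxy bias, not the actual COMPAS model.
# Both groups offend at the same underlying rate; only policing differs.
import random

random.seed(0)

def apparent_risk(group: str, n: int = 10_000) -> float:
    offense_rate = 0.10                          # identical true behavior
    arrest_prob = {"A": 0.30, "B": 0.60}[group]  # invented unequal policing
    arrests = sum(
        1 for _ in range(n)
        if random.random() < offense_rate and random.random() < arrest_prob
    )
    return arrests / n  # what a model trained on arrest records would see

for group in ("A", "B"):
    print(f"group {group}: apparent risk = {apparent_risk(group):.3f}")
# Group B appears roughly twice as "risky" despite identical behavior.
```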

A healthcare algorithm used for millions of patients made a different but equally revealing mistake. A 2019 Science study found that the system used healthcare spending as a stand‑in for illness. Because Black patients historically spend less — due to unequal access, structural barriers, and different patterns of care — the algorithm concluded they were healthier and assigned them lower risk scores. The flaw wasn’t in the math; it was in the assumption that spending reflects need. The system reproduced a longstanding inequity with the confidence of a neutral observer.
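
The same mistake can be written down in a dozen lines. In the sketch below the numbers are invented and the model is not the one the Science study examined; the point is only that when cost stands in for illness, equal need produces unequal scores.

```python
# A hypothetical sketch of the cost-for-illness proxy, with invented numbers.
import random

random.seed(1)

def patient(group: str) -> tuple[float, float]:
    illness = random.gauss(5.0, 1.0)        # true need: identical distributions
    access = {"A": 1.0, "B": 0.7}[group]    # assumed unequal access to care
    spending = illness * access * 1_000     # dollars spent: the proxy label
    return illness, spending

for group in ("A", "B"):
    patients = [patient(group) for _ in range(10_000)]
    mean_need = sum(p[0] for p in patients) / len(patients)
    mean_cost = sum(p[1] for p in patients) / len(patients)
    print(f"group {group}: need {mean_need:.2f}, spending ${mean_cost:,.0f}")
# Equal need, unequal spending: ranking by cost pushes group B down the list.
```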

Amazon’s hiring tool failed for yet another reason. It learned from ten years of résumés submitted to a male‑dominated tech workforce. Seeing mostly men in the training data, it inferred that “male” was the pattern of success and quietly downgraded résumés that included the word “women’s.” No one programmed it to discriminate. It simply absorbed the world as it was — a world in which men had been — and continued to be — hired more often — and extended that pattern as if it were a fact about merit.
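
Amazon never released its system, so the sketch that follows is purely illustrative: a toy word count over five invented résumés. It is enough to show how outcome-skewed history, not the content of the documents, turns a single token into a penalty.

```python
# A toy, invented example; not Amazon's model, which was never made public.
from collections import Counter

# Résumé text and a historical hiring outcome (1 = hired, 0 = rejected).
history = [
    ("captain of chess club", 1),
    ("captain of men's chess club", 1),
    ("led robotics team", 1),
    ("captain of women's chess club", 0),
    ("led women's robotics team", 0),
]

hired, rejected = Counter(), Counter()
for text, outcome in history:
    (hired if outcome else rejected).update(text.split())

for word in ("captain", "chess", "men's", "women's"):
    signal = hired[word] - rejected[word]   # crude learned weight per token
    print(f"{word!r}: learned signal {signal:+d}")
# "women's" scores negative only because of who was hired in the past.
```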

What ties these examples together is not malice or intention, but the quiet authority of a system that has never encountered the world it describes. It does not know what it does not know. It cannot feel the resistance of reality pushing back against a mistaken assumption. It cannot experience the moment when a fact forces a change of mind. It only extends the pattern it has been given, and because the pattern comes wrapped in fluency, we mistake fluency for understanding.

The danger is not only that these systems can mislead us, but that they can do so without any awareness of what they are doing. A human being who pushes a narrative knows, at some level, that they are pushing it. A machine does not. It has no interior life, no sense of motive, no sense of consequence — no awareness that its errors fall on human beings, not on itself. Yet the effect can be similar: a confident assertion delivered with the authority of someone who sounds as if they have done the work. The difference is that with a machine, there is no one to hold accountable, no mind to interrogate, no intention to uncover. There is only the pattern, repeating itself.

The challenge, then, is not simply to correct the errors these systems make, but to recognize the authority we grant to a voice that sounds as if it has done the work. We are accustomed to trusting fluency, to hearing confidence as a sign of competence. But fluency is cheap for a machine. It is not the product of experience, or doubt, or the slow accumulation of understanding. It is the by‑product of scale. And when scale produces the sound of knowledge without the substance of it, we are left with a world in which the appearance of understanding can outpace the thing itself.

We are entering a moment in which the boundary between knowledge and the appearance of knowledge is becoming harder to hear. The systems we are building can generate the sound of understanding at a scale no human being could match, and they can do it without ever encountering the world that gives understanding its shape. That is not their fault. It is simply their nature. But it means the burden shifts to us: to remember what learning actually feels like, to recognize the difference between a mind that has been changed — that can be changed — by experience and a pattern that has been extended by computation.

When we speak, we draw on a lifetime of experience — on memory, on doubt, on the felt sense that something is right or wrong, coherent or incoherent, honest or evasive. We speak from a history that has shaped us, from mistakes that have taught us, from questions that have unsettled us. A machine has none of this. It has no past to remember, no future to imagine, no inner thread connecting one moment of awareness to the next. It extends patterns; it does not inhabit them.

This is why the human voice matters: it carries the trace of a life behind it. When we speak, we reveal not just what we think but how we came to think it — the doubts we wrestled with, the experiences that shaped us, the mistakes that taught us something we did not know. A machine has no such history. It offers conclusions without context, confidence without experience.

And this is why we cannot outsource our judgment to a system that has none. A machine can extend a pattern, but it cannot question it. It can generate an answer, but it cannot ask whether the answer makes sense. It can sound authoritative, but it cannot tell when its own authority is misplaced. Only we can supply the doubt, the hesitation, the awareness that something might be off. The responsibility is ours because the capacity is solely ours.

This does not mean we should reject these systems or fear them. It means we should understand what they are and what they are not. They can help us see patterns we might have missed, surface connections we might not have noticed, offer possibilities we had not considered. But they cannot tell us which of those possibilities is true, or which of those patterns is meaningful, or what “true” would even mean to a system with no capacity for understanding. They cannot tell us what matters.

To use these systems well is not to grant them authority, but to place them in their proper context. They are tools, not arbiters. They can widen our field of view, but they cannot tell us where to look. They can offer possibilities, but they cannot tell us which ones deserve our trust. They can produce something that sounds like an answer, but they cannot stand behind it. That requires a mind that knows what it means to stand behind anything at all.

What these systems can offer us, at their best, is a kind of provocation — a way of shaking loose ideas we might not have reached on our own. But the meaning of those ideas, the weight they carry, the truth they point toward or away from, is something only we can determine. A machine can generate a sentence, but it cannot inhabit it. It cannot feel the cost of being wrong or the responsibility of being right. It cannot care. And caring, in the end, is what makes knowledge more than pattern.

In the end, these systems will reflect whatever we bring to them. They will mirror our questions, our assumptions, our blind spots, our hopes. They will extend our patterns, but they cannot choose among them. That choosing is still ours. The meaning is ours. The judgment is ours. The responsibility is ours. And if we forget that — if we mistake fluency for understanding, or pattern for truth — the failure will not be the machine’s. It will be ours.

 
