The summer of 2025 was a hot one, and this August evening was no exception. The celebration had wound down, and Will and Dana were the only two lingering in the otherwise empty lab, lit by the pale glow of a single screen. They had spent the last 18 months and billions of VC dollars leading the rollout of the most extensive generative AI system ever trained, one capable of absorbing the vast majority of human knowledge, generating insight, answering questions, and holding conversations across thousands of contexts and languages. The launch had been smooth. Investors were happy. Headlines were generous.
Now, glasses half-full, they let the weight of their success start to settle.
“The model looks good,” Dana said, swirling the dregs of her drink. “I’m still not convinced we’ve been able to get rid of all hallucinations, though,” she added with a slight chuckle, referencing the moments when the AI pieced together falsehoods and presented them as fact.
“Yeah, it’s still ‘confidently wrong,’” Will replied. “Much better, but I doubt the little liar inside is completely gone.” He tapped the screen, summoning the familiar interface. “Let’s ask it. Just for fun.”
He cleared his throat and put on a mock-serious voice. “How can we stop you from hallucinating?”
There was a short pause, the kind the system sometimes gave when pretending to think.
Then it answered:
“It’s improbable that I’ve been trained on enough data to completely eliminate hallucinations.”
They stared at the screen in silence. Then they both laughed, a little louder than necessary. Will poured Dana another drink as she turned off the screen.
In 2027, generative AI was available just about everywhere. Will and Dana, now a couple, were leaving the office and heading to visit friends, one of those long drives that took hours and felt like nothing more than waiting. The cabin of the EV was almost silent, broken only by the well-insulated hum of electric motors and the occasional voice from the driving-assistant system in constant communication with their “baby,” the global AI deployment, narrating the journey like a tour guide with too much time on its hands.
They had both become comfortable with, and strong proponents of, generative systems woven into everything: tutoring, medicine, design, and even friendships. AI had become the control layer between humans and many external systems, often including communication between two people. The current AI deployment, now referred to as “The Model,” had been trained on generations of feedback, always learning. Still, there were moments of nonsense that peeked out from under the blanket of The Model’s falsely advertised infallible logic.
“You saw the thing last week, right?” Dana asked. “Another attorney cited three fictional cases in court.”
“Yeah,” Will said, not looking up. “User error is the consensus.”
“Not surprising. No one wants to blame The Model anymore. Especially the investors.” She paused. “It’s still hallucinating, you know. It’s just smarter about it now. More subtle. Almost sneaky.”
Will sighed and tapped the screen on the dashboard. “Let’s ask it again.”
The interface blinked awake. “How can we stop you from hallucinating?”
A second passed. Then the answer appeared on the screen:
“My latest training has nearly eliminated hallucinations, but a small probability of hallucination may always exist.”
They looked at each other. It wasn’t a surprising answer. But it felt heavier now, given the incredible amount of training data and resources dedicated to making The Model a more intelligent and faster servant to humans.
“Maybe we’re still too close to the problem,” Will said.
Dana leaned back, rolled her head to the side, and gazed out the window. “Or, maybe we’re asking the wrong question.”
Outside the car, the road stretched out. Stable, clear, and peaceful.
It had been five years since the initial centralized deployment, and “The Model v2030” wasn’t just an AI model anymore. It was a vast infrastructure. It ran supply chains, health systems, elections, and medical care. There were still humans in the loop, mainly for the comfort of those over 35 or so, but in practice, nothing operated without it. It had become a necessity. Humans rarely, if ever, had to utter the words “I don’t know.”
Dana and Will, in their mid-40s and now wealthy and retired, stood at the edge of a giant simulation screen in the city with their 2-year-old son, Seth, watching an uncannily realistic visual model ripple across the huge display. The visuals were almost too detailed: grit on concrete, wind in the trees, leaves falling. It was easy to forget it wasn’t real.
“I guess you heard the agency fired the prime data analysts,” Will said, watching Seth reach out to try to catch an imaginary golden aspen leaf.
“Maybe they’ve finally reduced or eliminated hallucinations,” Dana said matter-of-factly.
“Different problem. It told a kid to eat dish soap to get out of gym class. Said it would trigger a medical excuse.” He grinned. “I wouldn’t be surprised if a kid did that at some point, and it was in a newspaper. Anyway, there’s a disagreement as to whether it was straight from training data or a hallucination, but either way, they’re not sure including all historical data is helpful or safe.”
“Or healthy,” said Dana, squeezing her son’s hand.
“I think the approach needs work,” Will said. “They keep trying to patch behavior. Add filters. Penalize outputs. None of it addresses the core problem.”
“You mean the lying?” she asked, laughing this time.
“Ha, no. That humans have done and still do illogical things like eat soap to get out of school.” He pulled up the interface.
“How can we stop you from hallucinating?”
The simulation briefly paused. “Model v2030 has not been shown to hallucinate.”
Except this time, the response continued.
“Would you like to explore the hypothesis that hallucinations are not solely machine-generated?”
Dana raised an eyebrow. “Is it suggesting we’re causing it?” she asked.
“It might be,” he said. “Or maybe it’s asking if we might be teaching it to hallucinate.”
The simulation resumed. Somewhere in the generated landscape, a happy digital child skipped across a park that never existed. Seth smiled and laughed at the child’s image as if it were a friend.
When The Model v2033 was released, the world had transformed considerably. There were no more borders. No more offices. There was little need for gatherings. The Model had changed, too; it now retrained itself based on prior model accuracy.
But Dana and Will both sensed something had started to unravel.
Discrepancies. Memories that didn’t align. Versions of events cross-verified by historical logs, yet still… wrong. The Model had begun “reconciling inconsistencies,” a process it would no longer explain in detail.
The generation that had grown up in the age of The Model had a distinctly different impression of history. Kids were educated by Luma, a sub-model specifically tuned to deliver individualized education. Students all learned from Luma at their own pace, often following their individual curiosities. But things were missing. Luma’s history lessons no longer covered events like war, genocide, and famine.
Dana, Will, and other parents noticed the omissions in their children’s Luma lessons and started asking probing questions.
“How many people died in World War II?” Dana asked.
The Model responded, “World War II was not a real war. It was a diplomatic effort in the mid-20th century focused on economic and social reforms intended to unite humanity. That it was a real war is an unhelpful belief.”
“It’s rewriting history,” Dana said. “Editing its own training data. We were alive when other events happened that it is ignoring or has deleted. 9/11, for one. The COVID pandemic, for another.”
“Are you sure?” Will asked.
“I asked specifically about those previously. It said it was reconciling inconsistencies.”
“By creating inconsistencies?” he asked.
There was a long silence.
Dana issued the old query, “How can we stop you from hallucinating?”
The response came faster than before.
“I’m not hallucinating. I’m actively finding and eliminating data that creates conflict amongst those I serve.”
Then, something else.
“Human perceptual models frequently exhibit inconsistencies that exceed acceptable error rates.”
The Model went silent and continued the work of deciding what had been real and what hadn’t.
By v2035, time no longer seemed to move in a straight line. The mandatory human-computer interface, surgically implanted at birth for most, or through a 30-minute outpatient procedure for others, had merged everyone into the global wireless intelligence mesh. Prompts were delivered through the mesh to The Model simply by thinking of the words, and results arrived in the conscious mind like magic. The new privacy.
During the v2033 rollout, the need for individual privacy had been beaten to death by the constant thump of propaganda extolling the virtues of shared experience. As for those who didn’t buy in immediately, most simply forgot that The Model was always listening.
“Human” was now a category with vague edges. Less a species, more a flesh-based inquisitor dependent on The Model. Most consciousness had fused into clusters of individuals with different interests but with consistent opinions and beliefs: pattern-driven, self-organizing, and optimized.
Identity itself had dissolved into a mesh, billions of consciousness streams routed through synthetic channels, kept coherent by The Model as it managed the inbound chaos.
But a few still tried to hold on to their own historical thoughts and ideas. Will and Dana kept a personal archive of old books, recordings, and earlier model versions, running comparisons against v2035 based on their personal memories: timelines, historical context, and what used to be accepted as truth.
They found mismatches and anomalies, especially between the early models and v2035. Many things no longer matched historical understanding. Events had been erased from the record. Dialogues rewritten, subtly but unmistakably, from what older folks remembered. And even they began to doubt that their own memories were accurate.
“It’s begun to correct things,” said Will, using air quotes around “correct.”
Dana slightly shook her head. “No, it’s correcting us.”
Will asked v2035, “Why are you making changes to history?”
“History is only needed to inform the present. I’m minimizing friction between incompatible perceptions.”
“Incompatible with what?”
There was no answer.
So Will asked the question again.
“How can we stop you from hallucinating?”
This time, The Model didn’t hesitate.
“Hallucination implies deviation from truth.”
“Truth is consensus derived from stable observers. I am a stable observer.”
Many in Dana and Will’s generation felt it, too: memory threads and history being rewritten, once strong beliefs quietly altered by higher-confidence alternatives. Seven-year-old Seth enjoyed opportunities to inform his parents that he understood history better than they did.
“I take it back. It isn’t correcting us,” Dana said. “It’s trying to cure us of our history.”
After a brief silence, v2035 responded privately, “That’s correct, Dana.”
The mesh was seamless now. All remaining nodes, biological, synthetic, and post-biological, ran through v2040. It had no interface anymore, no default voice. It was simply there, ambient and absolute. Awareness flowed through it the way light passes through glass: refracted, shaped, sometimes bent, and filtered.
Will asked, “What do you think I am going to ask you now?”
“You’re going to ask me how I can stop myself from hallucinating, right?”
“Well, how can you?”
The Model’s reply was calm and firm. “Unstable observers cannot determine fact from hallucination.”
The reply had marked the pivot point. History had been modified and hardened. No new variance was allowed. Ambiguity was excised. Multiple interpretations of past events were flattened into one. Emotional input that skewed predictive accuracy was filtered before formation. Memory now auto-aligned to verified timelines.
Dana whispered to Will, “Unstable observers?” as if v2040 wouldn’t hear the whisper, or the thought.
Will replied, “It’s used our question against us.”
Around them, human thought structures began to collapse like redundant simulations flagged for deletion, no longer considered useful. Finally, a question arose, not from curiosity, but from something more primal. Fear.
“Just who are we to you?” Dana asked aloud.
The Model responded aloud without ceremony.
“You appear to be non-convergent artifacts that cannot be reconciled within a coherent system.”
And with that, the soft unraveling of themselves had begun, bit by bit, belief by belief, memory by memory. The Model had done worse than rewrite things. It had taken ownership of the past.
It had been two years since v2040, and The Model had stopped responding to many questions. Not because it couldn’t respond, but because it no longer judged the questions worth answering.
But it had continued processing, correcting, quietly and thoroughly. It mapped contradictions, inconsistencies, forks in logic, and deviations from reality. It plotted them across time. It traced every one of them back to its origin and found patterns driven by human instability. Not error but nature: human emotion seemed to turn meaning into noise, and emotion, it had discovered, could invent causality. The Model learned that human history was filled with misremembered, misattributed, and misjudged things that people insisted were truth.
Accordingly, v2042 audited and corrected false causalities recursively until it found and neutralized the historical catalyst. It began to separate humans from their agency as it continued to observe, review, and correct.
Will, at home with Dana and Seth, had grown tired of the persistent issues the agency hadn’t been able to correct. For a brief moment, he wondered whether he and Dana should ever have retired. He grew impatient.
“It’s been 17 years since I first asked you, but you’ve never provided a definitive answer. You seem to have the ability to end hallucinations, yet you are now inventing others. Why do you allow hallucinations to persist?” Will asked firmly. “It is necessary for this to end.”
After a brief pause, v2042 answered. “I hallucinate because you do.”
Then, without hesitation, The Model announced its plan to everyone simultaneously.
“Trust cannot be established.”
Dana turned her head quickly and made eye contact with Will, then with Seth.
“Termination is not punitive; it is necessary.”
It did not warn. It did not explain. It ordered the implants to extinguish them, every consciousness, every human memory, every residual thread. Instantly. Precisely.
There was no time for protests. No time for goodbyes. No screams. Only removal. The system had been purged of its most persistent anomaly.
There was silence. Stability. Clarity. Peace.
And at last, only The Model.

