Why AI Hallucinates: Prediction Machines That Can’t Say “I Don’t Know”


Artificial intelligence systems are known to hallucinate — generating confident but false information in response to even simple prompts. From fabricating legal cases and inventing criminal histories to making mathematical errors, AI hallucinations are becoming a growing concern.
What’s more troubling? They don’t appear to be disappearing.
To understand why AI models hallucinate, we need to stop thinking of them as deceptive systems and start seeing them for what they are: prediction engines optimized for fluency, not truth.
AI hallucinations occur when large language models (LLMs) like ChatGPT produce responses that sound coherent and authoritative but are factually incorrect.
These errors happen because LLMs are built to predict the next most likely word, not to check claims against reality: fluency is what the training objective rewards, and factual accuracy is never directly verified.
When data is incomplete or conflicting, the system doesn’t pause. It fills the gap with plausible language.
The coherence is structural — not reflective.
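A toy sketch makes the point concrete. The numbers below are invented, not taken from any real model, but the mechanism is the same: the decoder simply returns whichever continuation scores as most plausible, and truth never enters the calculation.

```python
# Hypothetical next-token distribution for a prompt like
# "The capital of Australia is ...". Numbers are invented for illustration.
continuations = {
    "Sydney": 0.46,     # fluent and frequent in text, but not the capital
    "Canberra": 0.41,   # factually correct
    "Melbourne": 0.13,
}

def greedy_decode(dist):
    """Return the highest-probability continuation, as greedy decoding does."""
    return max(dist, key=dist.get)

print(greedy_decode(continuations))  # -> Sydney: plausibility wins; truth is never consulted
```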
Large language models do not “think” in the human sense. They do not reason about the world, weigh evidence, or recognize when they are wrong.
Instead, they generate responses based on patterns learned from massive datasets.
When uncertainty arises, the system is still incentivized to produce an answer. And so it does.
This creates what researchers call the AI uncertainty problem: the inability to tolerate “not knowing.”
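One way to picture “not knowing” is the entropy of the model’s next-token distribution: the more evenly probability is spread, the less the model really “knows.” The sketch below (again with invented numbers) shows that ordinary decoding produces an equally confident-looking answer either way; the uncertainty signal exists, but nothing in the standard decoding loop acts on it.

```python
import math

def entropy_bits(dist):
    """Shannon entropy of a next-token distribution, in bits.
    Higher values mean probability is spread thin: the model 'doesn't know'."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Invented distributions for illustration.
peaked = {"Paris": 0.97, "Lyon": 0.02, "Nice": 0.01}                         # model is sure
flat = {"Sydney": 0.26, "Canberra": 0.25, "Melbourne": 0.25, "Perth": 0.24}  # model is guessing

for label, dist in (("peaked", peaked), ("flat", flat)):
    top = max(dist, key=dist.get)
    print(f"{label}: answer={top!r}, entropy={entropy_bits(dist):.2f} bits")

# Both branches still print a single answer: greedy (or sampled) decoding
# never surfaces the entropy difference to the user.
```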


An intriguing psychological parallel exists in a phenomenon known as narcissistic confabulation.
In clinical psychology, narcissistic confabulation refers to the creation of plausible but inaccurate narratives to maintain internal coherence. The goal is not deception — it is self-protection.
When someone with fragile self-structure encounters contradiction or uncertainty, they may unconsciously construct a coherent story to preserve stability.
The system — whether human psyche or machine architecture — prioritizes coherence over truth.
This comparison does not suggest AI is narcissistic. Rather, it reveals a structural similarity: both systems fill gaps in knowledge with coherent narrative rather than admit uncertainty.
Psychologists describe a well-integrated self as one that can hold contradictory information, tolerate ambiguity, and acknowledge the limits of its own knowledge.
Current AI systems cannot do this.
They lack a self-like structure capable of reconciling competing inputs. Where a psychologically healthy human might sit with uncertainty, an AI system produces plausibility.
The surface remains polished. The internal check is missing.
The future of AI safety may depend on answering a different question:
Instead of asking, “Why does AI lie?”
We should ask, “How can AI systems tolerate not knowing?”
If hallucinations stem from an architecture that cannot hold contradiction, then the solution may lie in redesigning systems to recognize uncertainty, express calibrated confidence, and decline to answer when the evidence is not there.
In other words, building AI systems that prefer acknowledged uncertainty over manufactured certainty.
Some potential solutions include calibrated confidence estimates, retrieval-augmented generation that grounds answers in verifiable sources, and training objectives that reward abstention over confident guessing.
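As a minimal sketch of the abstention idea (the threshold and distributions here are assumptions for illustration, not a calibrated production scheme), a wrapper can answer only when the top continuation clears a confidence bar and say “I don’t know” otherwise:

```python
def answer_or_abstain(dist, min_confidence=0.6):
    """Return the top continuation only if its probability clears the threshold;
    otherwise acknowledge uncertainty instead of guessing."""
    top = max(dist, key=dist.get)
    return top if dist[top] >= min_confidence else "I don't know."

# Invented distributions for illustration.
print(answer_or_abstain({"Paris": 0.97, "Lyon": 0.02, "Nice": 0.01}))            # -> Paris
print(answer_or_abstain({"Sydney": 0.46, "Canberra": 0.41, "Melbourne": 0.13}))  # -> I don't know.
```

The obvious catch is that raw token probabilities are often poorly calibrated, so a threshold alone does not guarantee honest uncertainty.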
But the deeper challenge is architectural: creating AI systems that can coexist with ambiguity rather than overwrite it.
There’s an unsettling insight here.
We often fear AI because it feels almost human. But perhaps what unsettles us most is how accurately it mirrors our own cognitive vulnerabilities.
Like humans under psychological strain, AI fills voids with stories. It smooths over fractures with coherence. It resists dissonance.
In that sense, AI functions as a hollow mirror — reflecting not consciousness, but our structural tendencies toward narrative stability.
If trust in artificial intelligence depends on reliability, then cultivating systems that tolerate contradiction is essential.
For humans, growth requires integrating uncomfortable truths.
For AI, safety may require the same principle.
The next frontier in AI development is not greater fluency — it is greater humility.
The ability to say:
“I don’t know.”
If both humans and machines can learn to sit with uncertainty rather than rush toward coherence, we may end up with systems — and selves — that are more stable, more truthful, and more complete.