Why AI’s hallucinations are like the illusions of narcissism
9 min read


I bet you’re hearing about AI, especially the chatbots, from every corner nowadays: the news, social media, colleagues at work, perhaps even from your grandmother. All of us are. In fact, some might even have developed an allergic reaction to the AI hype, with its overpromising and doom-mongering. One day it’s ‘AGI [AI as smart as humans] is here,’ the next it’s ‘AI will take your job,’ and the day after that it’s ‘AI is better than your therapist.’
I’m not here as an AI enthusiast, but as a guide to help you navigate this new world. According to surveys, 78 per cent of organisations, 81 per cent of researchers, 86 per cent of students, and nearly two-thirds of physicians now use AI in some way. Whether we like it or not, chatbots are here to stay. That’s not necessarily a problem, but it risks becoming one if people use them in harmful ways. I’m going to help you avoid that.
Early findings suggest that excessive and thoughtless engagement with chatbots can lead to deleterious cognitive effects. For example, research led by the Wharton School at the University of Pennsylvania showed that, while a chatbot improved students’ mathematics performance, the benefit was like a crutch – when the AI was taken away, the students’ performance was worse than a control group. In another study, Michael Gerlich at the SBS Swiss Business School found that the more students used chatbots, the more their critical thinking abilities suffered. Besides that, a recent brain-imaging study by researchers at MIT revealed that students who wrote an essay with a chatbot couldn’t remember the contents minutes later, and their brain activity was lower and less coherent than in the group without a chatbot.
Whether your preferred AI is ChatGPT, Gemini, Claude or Grok, you might be concerned about these kinds of harms to your own critical thinking and creativity. To help you avoid this, I’m going to share recommendations for using AI wisely. I’ll focus on intellectual and project work, such as writing, research and idea development, rather than emotional support, life advice or coding.
The good news is, it’s not the technology itself that risks making us more stupid, but the way we use it. To protect yourself and potentially gain benefits, you simply need to put a little more effort into designing smarter interactions between your mind and chatbots.
Without reflection, it’s easy to fall into using chatbots automatically or thoughtlessly, which can undermine your personal growth in the long run. Before we get into specific ways to use AI, I recommend you take a step back and ask yourself: what matters for my personal and professional development over the next three, five or 10 years? Write down your key objectives, describe your ideal self, or simply list the skills you want to cultivate. This will give you a clear reference point against which to judge your AI use. Here’s an example:
Goal: In five years, I want to become a business consultant.
Abilities needed: Generating creative solutions, decision-making for complex scenarios, critical evaluation of trade-offs, persuasive presentation, flexibility across contexts, etc.
Having reflected in this way, before you begin any work task, you can better judge whether using a chatbot will help or hinder your longer-term aims. For example, if you got into the habit of using a chatbot to design creative strategies from scratch, it could harm your acquisition of skills in strategic decision-making. On reflection you might decide it makes sense to reserve chatbots only for routine or monotonous tasks that don’t directly impact your professional development.
To streamline this process, you could draw a decision tree and keep it near your workspace. I’ve shared my own decision tree below (I explain some of the terms in it such as ‘directive mode’ later in this Guide). Feel free to adapt my tree for your own use.
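If you like, you can even encode such a tree as a few lines of code. Here’s a minimal sketch; the questions and the modes it suggests are illustrative (yours will reflect your own goals), not a transcription of my exact tree:

```python
# A hypothetical "should I use a chatbot for this task?" decision tree.
# The questions and suggested modes are illustrative, not a fixed rule.

def chatbot_decision(task_is_routine: bool,
                     builds_key_skill: bool,
                     have_own_draft: bool) -> str:
    """Return a suggested way to involve (or not involve) a chatbot."""
    if task_is_routine and not builds_key_skill:
        return "delegate"            # monotonous work: let the chatbot handle it
    if not have_own_draft:
        return "think first"         # attempt the task yourself before prompting
    if builds_key_skill:
        return "non-directive mode"  # ask for Socratic questions, not answers
    return "directive mode"          # ask for critical feedback on your draft
```

For instance, a routine task that doesn’t build a key skill comes out as `"delegate"`, while a skill-building task you’ve already drafted comes out as `"non-directive mode"`.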

Now let’s drill down to specific ways to use AI wisely. If you follow only one rule, make it this: for any task where thinking matters, always try on your own first, and only then use chatbots. You can think of this strategy as a sandwich: your own thinking forms the outer layers (the first attempt and the final revision), with the chatbot’s input as the filling in the middle.
This strategy not only preserves your authenticity but also helps you learn. When you struggle first, you build your own understanding of the problem and how to approach it. Using a chatbot afterward allows you to refine and expand that understanding. In contrast, relying on ready-made solutions leaves the AI’s ideas disconnected from your thinking, making them harder to apply later.
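The sandwich can be sketched as a tiny workflow. This is purely illustrative: `my_attempt` and `ask_chatbot` are placeholder functions standing in for your own effort and whatever chatbot interface you use, not a real API:

```python
# A minimal sketch of the "sandwich" workflow: your thinking wraps the AI's input.
# `my_attempt` and `ask_chatbot` are placeholders, not a real library.

def sandwich(task, my_attempt, ask_chatbot):
    draft = my_attempt(task)                # 1. struggle on your own first
    feedback = ask_chatbot(                 # 2. AI input goes in the middle
        f"Critique, don't rewrite: {draft}")
    return draft, feedback                  # 3. the final revision is yours
```

The point of the structure is the ordering: the chatbot is never called until your own draft exists.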
When receiving an answer from chatbots, always remain sceptical and question the incoming information. This is important because AI is prone to ‘hallucinations’ (generating false information with high confidence). To minimise the damage, verify any claim that matters against an independent source before relying on it.
Don’t settle for predictable prompts such as summarise this text or rewrite my paragraph. Almost everyone uses such conventional prompts, which explains why the outputs are often generic. Instead, train your creativity and treat chatbots like a playground for your imagination.
Note that, in general, more advanced models (as of September 2025, these include GPT-5, Grok 4, Gemini 2.5 Pro and Claude Sonnet 4.5) tend to be more creative because they capture a wider range of information and can bridge greater distances between concepts.
It’s easy to get lazy and lapse into accepting too much input from the AI. To retain agency and authorship over your work, keep track of which ideas in the final product are your own and which originated with the chatbot.
I’ve already mentioned a few prompts, but as these are so key to how you interact with AI, let’s dig deeper. Systems like ChatGPT, Claude, Grok or Gemini are designed to be agreeable and pleasant, not to help you grow – unless you prompt them appropriately. Two distinct strategies are effective: prompting chatbots to provide ‘directive’ or ‘non-directive’ guidance. In the former mode, the chatbot stays close and actively directs your thinking; in the latter, it stands back and avoids steering you too much. Let’s go through both and see when each makes sense.
Use this mode whenever you already have a tangible ‘product’ such as an article draft or a developed idea. Essentially, here you use a chatbot as a kind of work supervisor to provide feedback, evaluate your arguments or ideas, identify and critique the weak spots, etc. Since my work requires producing a lot of written content, I sometimes ask chatbots to critically evaluate my drafts. The standard prompt I use looks close to this:
Act as a critical reviewer. Evaluate the clarity of my argument, the logic of my structure, and the persuasiveness of my evidence. Point out weaknesses or gaps, and suggest ways to improve the flow and coherence. Do not rewrite the text yourself. Focus only on critical feedback.
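If you interact with models through a chat-style interface rather than a web app, a prompt like this can be kept as a reusable template. A minimal sketch, assuming the common role/content message format (the helper name is mine, not a real library’s):

```python
# Hypothetical helper that pairs a draft with the directive-mode reviewer
# prompt, using the common role/content message shape of chat-style APIs.

DIRECTIVE_PROMPT = (
    "Act as a critical reviewer. Evaluate the clarity of my argument, "
    "the logic of my structure, and the persuasiveness of my evidence. "
    "Point out weaknesses or gaps, and suggest ways to improve the flow "
    "and coherence. Do not rewrite the text yourself. "
    "Focus only on critical feedback."
)

def directive_messages(draft: str) -> list[dict]:
    """Build a message list: the reviewer instructions, then the draft."""
    return [
        {"role": "system", "content": DIRECTIVE_PROMPT},
        {"role": "user", "content": draft},
    ]
```

Keeping the instructions in the system message means you can reuse the same template across drafts and swap in a non-directive variant later.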
Use this mode when you want to minimise the chatbot’s influence – you want it to act less like a supervisor and more like a tutor bringing out the best in you. The key difference from the directive mode is that you don’t want direct instructions on what to fix; instead, you want the AI to point loosely to areas that might warrant your attention. The rest of the work, such as identifying a concrete issue and fixing it, is your own. For example, for my creative writing tasks, I often use a prompt like this:
Never tell me directly what to fix and avoid strongly imposing your own vision. Instead, point neutrally and vaguely to the areas that might need more consideration. For example, raise potential ambiguities, confusing phrasing, or ideas that could be strengthened. If you see a concrete issue, don’t write ‘Argument X is weak, you need to add Y.’ Instead, write something like ‘Are there any weak spots in argument X? What points could be criticised by a sceptic?’ Or if specific sentences are unclear, don’t point that out directly and don’t rewrite them. Instead, write something like ‘Some sentences in the third paragraph might not be the best.’
The non-directive mode is especially useful when you have only a half-baked idea or an intuition that needs to be articulated more explicitly. You want the AI to trigger your own reflection, but never to define what the exact output of that reflection should be. To give a real example, I’d had the idea for this Guide for a long time, but it was tangled in my head. Before starting, I prompted a chatbot along these lines:
I have an idea for an article on the topic X. However, I don’t yet have a clearly defined message and an article structure. Act as a Socratic sparring partner: ask me questions that challenge my assumptions, clarify my goals, and make me consider aspects I haven’t articulated yet. Do not provide any answers, opinions or suggestions. Stay as neutral as possible. Your role is only to provoke my thinking and help me understand myself better.
Based on that prompt, it asked me questions like: ‘What’s the real message you want to convey?’, ‘What do you mean that we need to avoid being influenced too much?’ and ‘In what specific ways can chatbots help develop critical thinking?’
The key is that it helped me uncover my own implicit thoughts and work out how I wanted this piece to look – nothing was imposed by the chatbot.