CLOUDY Podcast | #32 AI Hallucinations: Why Does Artificial Intelligence Make Things Up and How Should We Use It Properly?
Why does AI prefer to “lie” instead of admitting it doesn’t know the answer?
This issue is related to the way models are trained. Since they learn based on reward mechanisms, they are motivated to generate useful answers, but they receive no positive feedback or reward for admitting ignorance. As a result, the model is indirectly pushed to guess. It is similar to a student taking a test - if there are no penalties for a wrong answer, they are more likely to guess rather than leave the question unanswered.
What factors most influence the generation of inaccurate answers?
One of the key factors is a lack of context in the prompt. If a question is too general, the model may confuse the domains the topic relates to. Another is the training data cutoff: a model trained on data ending in 2023 will know nothing about events or films from 2025 and may start inferring answers from older patterns. Data quality also plays an important role - if a model draws from the internet, where conflicting information exists (for example, about whether the Earth is flat), it may reach an incorrect conclusion based on which type of content dominates its training set.
In which areas do hallucinations pose the greatest risk?
Critically dangerous sectors include healthcare and finance. In healthcare, AI should only be used as a supportive research tool, and people should never diagnose themselves based solely on its output, as the risk of incorrect information is high. Similarly, in finance, a model may recommend an investment or banking product based on outdated information, which can lead to financial loss. In these cases, consulting a professional and verifying up-to-date information is essential.
How can we communicate effectively with AI to minimize errors?
The foundation is so-called context engineering (prompt engineering). Instead of asking a simple one-sentence question, it is necessary to clearly define the situation and context for the model. It is also effective to instruct the model in which cases it should refrain from answering. Research suggests that being stricter with the model and demanding accuracy can improve results. On the other hand, politeness such as “please” and “thank you” may sometimes only consume tokens (computational resources) without significantly improving factual quality.
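To make this concrete, here is a minimal sketch of the difference between a bare one-sentence question and a context-rich prompt that also tells the model when to abstain. It assumes the OpenAI Python SDK purely for illustration; the podcast does not tie the advice to any vendor, and the model name, question, and wording are placeholders.

```python
# Minimal context-engineering sketch, assuming the OpenAI Python SDK.
# Any chat-completion API works the same way; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague, one-sentence prompt: the model has to guess the domain and situation.
vague = "Is this investment a good idea?"

# The same question with explicit context and an instruction to abstain
# instead of guessing when information is missing.
contextual = (
    "You are assisting a retail investor in 2024.\n"
    "Question: Is a 10-year government bond a reasonable place for emergency "
    "savings I may need within a year?\n"
    "If you are not certain, or the answer depends on information you do not "
    "have (current rates, my tax situation), say so explicitly and list what "
    "is missing instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": contextual}],
)
print(response.choices[0].message.content)
```

The difference is not the length of the prompt but the fact that the model is told the situation, the constraint, and the permitted way out (admitting uncertainty) before it starts generating.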
Is it possible to force a model to verify its information retrospectively?
Yes, modern models have access to tools for browsing the web and databases. If you have doubts about an answer, you can ask the model to check official sources or specify the exact websites it used. It is important to actually open these sources and verify whether the model’s response truly corresponds with the cited text. It is also advisable to define specific sources from which the model should exclusively draw information, thereby eliminating the influence of unreliable websites.
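A similar sketch for source-restricted prompting, again assuming the OpenAI Python SDK for illustration only: the listed domains and the question are placeholders, and whether the model can actually browse them depends on the tools enabled in the product you use. The point of the prompt is to demand quoted passages and URLs that you can open and check yourself.

```python
# Source-restricted prompt sketch, assuming the OpenAI Python SDK.
# The domains below are placeholders; substitute the official sources
# relevant to your own question.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Answer the question below using only information published on "
    "example-ministry.gov or example-centralbank.org. For every claim, quote "
    "the exact passage and give the URL of the page it comes from. If neither "
    "source covers the question, say that you cannot answer from the allowed "
    "sources.\n\n"
    "Question: What is the current notice period for terminating a consumer "
    "loan agreement?"
)

answer = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print(answer)
# The step the podcast insists on: open each cited URL yourself and confirm
# the quoted passages really appear there before relying on the answer.
```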
Will hallucinations be completely eliminated in the future?
Although the technology is evolving rapidly and newer models (often the paid ones) hallucinate significantly less, this phenomenon will likely not disappear entirely. Hallucinations are linked to the very architecture of current large language models, which operate on the principle of statistically predicting the next word. Completely eliminating them would require developing an entirely different technological architecture. Until then, the best protection remains critical thinking, common sense, and thorough fact-checking.
An expert from Aliter Technologies recommends:
• Verify sources: Use web browsing tools and ask the model for specific sources and websites it relied on.
• Work with context: The more information about your situation you provide, the more accurate the answer will be.
• Prompt carefully: You can explicitly state in your prompt that the model should refrain from answering in case of uncertainty or check official sources.
You can listen to the full podcast on 👉 Spotify, 👉 Apple Podcasts, or watch it on 👉 YouTube.