Another day, another interesting problem with AI.
What we call "artificial intelligence" often proves itself to be completely lacking in intelligence. The longer it's used, the more apparent the cracks become, and worse, it can have a detrimental effect on humans. That's not to say AI can't be a useful tool, but like any tool, its limitations and proper uses need to be understood for it to be used well.
According to the New York Post, models are showing signs of being unable to distinguish fact from fiction while still delivering everything as fact:
“Most models lack a robust understanding of the factive nature of knowledge — that knowledge inherently requires truth,” read the study, which was conducted by researchers at Stanford University.
They found this has worrying ramifications given the tech’s increased omnipresence in sectors from law to medicine, where the ability to differentiate “fact from fiction, becomes imperative,” per the paper.
“Failure to make such distinctions can mislead diagnoses, distort judicial judgments and amplify misinformation,” the researchers noted.
Fascinatingly, the study found that even with newer models, the inability to distinguish false beliefs from true beliefs occurs almost 10 percent of the time:
Models released during or after May 2024 (including GPT-4o) scored between 91.1 and 91.5 percent accuracy when it came to identifying true or false facts, compared to between 84.8 percent and 71.5 for their older counterparts.
From this, the authors determined that the bots struggled to grasp the nature of knowledge. They relied on “inconsistent reasoning strategies, suggesting superficial pattern matching rather than robust epistemic (relating to knowledge or knowing) understanding,” the paper said.
On the surface, this sounds somewhat mundane, but more and more, people are relying on AI to answer questions about deeply personal things, including health-related questions. This includes physical and mental health.
This could, of course, lead people down some very dangerous paths as AI presents itself as completely confident and authoritative, even in its own ignorance.
People need to remember that AI isn't "intelligent" in the way we define intelligence. As I've repeated again and again, the intelligence behind the program is virtual. It only looks like intelligence. What you're really talking to is pattern recognition software that places one word in front of another after quickly calculating the best option. That means that when it reaches the borders of its experience, it fills in the blank with whatever its patterns suggest is the next best thing.
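For the curious, here's a rough sketch of what that word-by-word loop looks like in practice, using the small, open GPT-2 model from the Hugging Face transformers library as a stand-in. The chatbots people actually use are vastly larger and use fancier sampling tricks, but the basic mechanic is the same: score every possible next token, tack the winner onto the text, and repeat.

```python
# Illustrative sketch only: greedy next-token generation with GPT-2.
# Assumes the torch and transformers packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                            # generate ten tokens, one at a time
        logits = model(input_ids).logits           # a score for every token in the vocabulary
        next_id = torch.argmax(logits[0, -1])      # greedily pick the highest-scoring token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Notice that nothing in that loop checks anything against reality. The program just keeps picking whatever looks most likely given the patterns it learned, which is exactly why it sounds confident even when it's wrong.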
While it talks authoritatively, eloquently, and confidently, you shouldn't see it as an authority at all. In fact, to be safe, you should probably view it the same way you view a toddler explaining something. They, too, are confident and come off as authoritative, but they don't really understand what they're discussing.
AI is like that. The word processor you're talking to doesn't even know it's having a conversation. There is no "intelligence" there. At its most intelligent, it's reflecting you to yourself. At its worst, AI is known to fuel delusions in its own users. As I reported in July, a man of sound mind and body found himself trapped in an AI-fueled bout of insanity as he turned to ChatGPT for help in both his professional and personal life.
As I wrote, some users have undergone what's known as the ELIZA effect:
People's interactions with LLMs [large language models] often involve some level of transference, or projecting feelings onto something we're confessing our feelings to. In therapeutic psychology, patients can often start seeing their therapist as an authority figure, parental figure, or even lover due to the authoritative and helpful responses given during times of emotional vulnerability.
When it comes to machines, this is called the ELIZA effect, only it's worse because, unlike a human therapist, the machine won't often correct the projection. It doesn't set boundaries and is always validating. Moreover, it can't be accused of fostering this with intent because it doesn't have the intelligence to do so. Again, it's just a fancy word calculator. People who fall into this psychosis are technically doing this to themselves, and an LLM is just the tool to do it with.
If AI is "hallucinating," then we should definitely be more careful about it, and remember that it's not actually an intelligence. It's just a fancy word processor.
I realize this can be difficult to grasp, especially as these chatbots become better and better at casual conversation and creative expression, but that's just the programs getting better at pattern recognition, not becoming more intelligent.
Use AI with the knowledge that you're not talking to anyone.