“Hallucination” is the word being used to describe when AI bots like ChatGPT deliver convincing but inaccurate information. “Fabrication” is a more accurate term.
These factual errors can come in any form, and according to researchers, they happen often (like, 30% to 40% of the time kind of often).
Here’s an example: ChatGPT sometimes gets caught in a loop of false promises, saying things like, “I’m working on it. I’ll update you on my progress.” When the user asks for an update, say twenty minutes later, the robot insists, “Yes, I’m still working through this. Thanks for your patience. It will be a little while longer.” None of these statements are true. ChatGPT doesn’t work on anything in the background. It either gives an answer promptly, or not at all.
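For the technically curious, there’s a mechanical reason the “I’ll keep working on it” promise can’t be true: each exchange with a chat model is a single, blocking request and response. Here’s a minimal sketch, assuming the OpenAI Python SDK (the model name and prompt are just placeholders), showing that once the call returns, nothing is left running.

```python
# A minimal sketch, assuming the OpenAI Python SDK is installed
# ("pip install openai") and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call blocks until the complete reply arrives, then returns.
# There is no background process that keeps "working" afterward.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model behaves the same way
    messages=[{"role": "user", "content": "Summarize War and Peace."}],
)

print(response.choices[0].message.content)
# By this line, the model is finished. If the reply text says
# "I'm still working on it," that sentence IS the finished answer;
# there is no ongoing job to check back on later.
```

In other words, when the bot says it will update you later, that promise is itself just the output of one completed request. There is nothing to come back to.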
So perhaps this is another milestone in the development of near-human computers: the ability to promise that something’s being worked on when the work hasn’t even begun. Amusing, but a little sad, too. A spinning hourglass is one thing, but getting slow-walked by a computer isn’t something we expect.
AI chatbots are incredibly powerful tools. Frighteningly so at times. There’s no need to shy away from them, but it’s also not wise to suspend our own judgment, reasoning, and practical skepticism.
Of course, don’t just trust me. I dropped this text into ChatGPT and it verified that what I’ve penned is accurate, thoughtful, and well-written. Thanks, robot.