This is my complaint as well. "Hallucinating" is a ridiculous term that AI people use to make the technology seem more human. Hallucinating is a sensory illusion. An LLM does not have senses. When a standard search gets something wrong, you don't say it's "hallucinating."
Really dumb to play along with their framing!
IMO the fundamental metaphor we should all be referencing w.r.t. inaccurate LLM statements is 'bullshit', as described by Harry Frankfurt [en.wikipedia.org/wiki/On_Bullshit].
That said, it's actually not even bullshit; bullshit still connotes intent, and an LLM has none.
I’ve been saying “lying” - it’s still incorrect because there’s no intent behind it, but people flinch more at that word. Maybe it’s because we so rarely call out real-world lies from humans as actual lies that we aren’t used to hearing it.
You’re not wrong to challenge the whole human-ness of the tech - it’s Microsoft Clippy 2024 - but “hallucination” is actually a euphemism more than anything. There was no other press-ready label for “gives a response so fucked up that it’s on par with a fever dream rather than a search result.”