"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs." Hallucination is Inevitable: An Innate Limitation of Large Language Models arxiv.org/pdf/2401.11817
Calling it a 'hallucination' is itself a kind of hallucination. The LLM got it wrong, full stop.
Please stop giving software human attributes. It is not human and never will be. It may get better and better at 'understanding' and 'predicting' conclusions, but it will always be software.
It would be cool if the tech world stopped using the word 'hallucinate'. Hallucinations are perceptual. Humans who experience hallucinations can learn to recognize them as such, whereas LLMs never will. These models produce output exactly as they are programmed to.