Jeff (Gutenberg Parenthesis) Jarvis @jeffjarvis.bsky.social

"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs."
Hallucination is Inevitable: An Innate Limitation of Large Language Models
arxiv.org/pdf/2401.11817

5 replies 44 reposts 106 likes


DW @dustyworm.bsky.social

The word 'hallucination' is itself a 'hallucination'. The LLM got it wrong, full stop. Please stop giving software human attributes. It is not human and will never be human. It may get better and better at 'understanding' and 'predicting' conclusions, but it will always be software.

0 replies 0 reposts 8 likes


Coup Otter @therealjimsanto.bsky.social

very interesting, thanks

0 replies 0 reposts 0 likes


Dave Winer @scripting.com

Wikipedia hallucinates, as does journalism, as do humans.

We have skepticism, a defense against hallucinations.

If your mother says she loves you, check it out, said a wise philosopher.

quoteinvestigator.com/2021/10/27/c...

2 replies 2 reposts 3 likes


rhaco_dactylus, phd @rhacodactylus.bsky.social

would be cool if the tech world stopped using the word hallucinate. hallucinations are perceptual. humans who experience hallucinations can learn to recognize them as such, whereas LLMs never will be able to do that. these models are producing output exactly as they are programmed to

0 replies 0 reposts 8 likes


Alan Bleiweiss @alanbleiweiss.bsky.social

And thus LLMs have no value to ordinary people, and in fact, they are toxic and a critical new problem for humanity.

1 reply 0 reposts 5 likes