
Colin

@colin-fraser.net

When you ask an LLM to generate text, what you're really asking it to do is pretend that the text already exists and then set to work reconstructing that pretend text. That sounds very much like asking it to hallucinate.

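[Editor's note: the generation loop Colin is describing can be sketched in a few lines. The nested dict below is an invented toy bigram table standing in for a real LLM's learned next-token distribution; everything here is a hypothetical illustration, not any actual model or API.]

```python
import random

# Toy stand-in for a learned next-token distribution (invented for
# illustration): maps the last token to weighted candidate next tokens.
TOY_MODEL = {
    "the": {"moon": 0.6, "cat": 0.4},
    "moon": {"landing": 0.7, "is": 0.3},
    "landing": {"happened": 0.5, "footage": 0.5},
    "cat": {"sat": 1.0},
}

def sample_next(token):
    # Sample a plausible next token; fall back to end-of-sequence
    # when the toy table has no continuation.
    dist = TOY_MODEL.get(token, {"<eos>": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_len=6):
    # Autoregressive loop: repeatedly reconstruct the "pretend text"
    # one token at a time, conditioned only on what came before.
    tokens = prompt.split()
    while len(tokens) < max_len:
        nxt = sample_next(tokens[-1])
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the moon landing footage"
```

[Note the loop never checks whether a continuation is true; it only asks which token plausibly comes next, which is the sense in which every output is "reconstructed pretend text".]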


Colin @colin-fraser.net

From this perspective, it seems plausible to describe _all_ generative AI output as "hallucinatory". This has some challenging implications. If all LLM text is hallucinatory, then how do we eliminate the hallucination problem? (I don't know.)
