Colin @colin-fraser.net

From this perspective it seems plausible to describe _all_ generative AI output as "hallucinatory". This has some challenging implications. If all LLM text is hallucinatory, then how do we eliminate the hallucination problem? (I don't know.)

1 reply 0 reposts 2 likes


Colin @colin-fraser.net

Finally, in the last section of the essay, I dig into the technical and conceptual challenges of attempting to quantify the impact of a generative AI system's propensity to generate false or undesirable output. It's a lot harder than it seems like it should be.

0 replies 0 reposts 1 like