
Colin

@colin-fraser.net

I think hallucinations from generative AI are in fact an entirely distinct phenomenon from "errors" in the classical ML sense. The reason is that, although generative AI systems and classical supervised learning systems are constructed in the same way, they are deployed completely differently.



Colin @colin-fraser.net

Classical ML systems are deployed to make the exact same kinds of guesses that they are trained to make. A digit classifier looks at a digit and outputs a guess about the digit, which is either right or wrong. But when an LLM makes a prediction, there's literally no right answer.
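The contrast can be sketched in a few lines of Python. This is a hypothetical illustration (the function names and the toy probability table are invented, not from any real system): a classifier's output can be scored against a ground-truth label, while a language model's sampled token has no label to score against.

```python
def classifier_error(prediction: int, true_label: int) -> bool:
    # Classical deployment: an "error" is well-defined as a
    # mismatch between the model's guess and the true label.
    return prediction != true_label

def next_token(probs: dict[str, float]) -> str:
    # Generative deployment: emit the most probable next token.
    # There is no true_label to compare against, so "error"
    # in the classical sense is simply undefined here.
    return max(probs, key=probs.get)

print(classifier_error(prediction=7, true_label=1))  # True: a classical error
print(next_token({"cat": 0.6, "dog": 0.4}))          # cat: right or wrong?
```

The point of the sketch is that `classifier_error` can exist at all only because deployment supplies a `true_label`; nothing analogous exists for `next_token`.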
