Colin @colin-fraser.net

Classical ML systems are deployed to make the exact same kinds of guesses that they are trained to make. A digit classifier looks at a digit and outputs a guess about which digit it is, and that guess is either right or wrong. But when an LLM makes a prediction, there is no ground-truth right answer at all.

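The contrast in the post above can be made concrete with a toy sketch. Nothing below is Colin's code or any real model: the classifier and the next-token sampler are invented stand-ins, and the only real fact used is that Canberra is Australia's capital. The point is the shape of the two evaluations: each of the classifier's guesses can be marked right or wrong against a label, while each sampling step of the "LLM" has no label to check at all.

```python
import random

# --- Classical ML: every deployed guess has a ground-truth label to check ---
labeled_test_set = [("image_of_3", 3), ("image_of_7", 7), ("image_of_1", 1)]

def toy_classifier(image):
    # Stand-in for a trained digit classifier; it misreads the 7 on purpose.
    return {"image_of_3": 3, "image_of_7": 1, "image_of_1": 1}[image]

correct = sum(toy_classifier(x) == y for x, y in labeled_test_set)
print(f"classifier accuracy: {correct}/{len(labeled_test_set)}")  # each guess is right or wrong

# --- LLM-style deployment: each step samples a next token, with no label ---
# Toy next-token distribution conditioned only on the previous token.
next_token_probs = {
    "The":     [("capital", 0.6), ("answer", 0.4)],
    "capital": [("is", 1.0)],
    "answer":  [("is", 1.0)],
    "is":      [("Canberra", 0.5), ("Sydney", 0.5)],
}

tokens = ["The"]
while tokens[-1] in next_token_probs:
    choices, weights = zip(*next_token_probs[tokens[-1]])
    tokens.append(random.choices(choices, weights=weights)[0])
    # No "true next token" exists at this step: any token the model assigns
    # probability to is an acceptable prediction in training terms.

print(" ".join(tokens))  # e.g. "The capital is Canberra" or "The capital is Sydney"
```

Run it a few times and the sampler behaves identically whether the sentence it ends up producing is true or false, which is the setup for the reply further down.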


james @teddybrosevelt.bsky.social

Beautifully stated. So much of the brutally bad deployment of AI comes from our bias toward anthropomorphism; it's just too good at looking like it's thinking instead of rolling dice with words on the sides.



Colin @colin-fraser.net

What we care about from an LLM chatbot is the truth of the propositions that *emerge* out of the combination of a whole bunch of distinct predictions, each of which has no well-defined notion of right or wrong.

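Under the same toy assumptions as the sketch above (an invented next-token distribution, Canberra as the one real fact), here is that last point in miniature: the truth check applies to the assembled proposition, never to the individual predictions that produced it.

```python
import random

# Toy distribution over the final token of "The capital of Australia is ...".
final_token_probs = [("Canberra", 0.5), ("Sydney", 0.5)]

for _ in range(2):
    # Each draw is a perfectly ordinary prediction; neither outcome is an "error".
    final = random.choices(*zip(*final_token_probs))[0]
    claim = f"The capital of Australia is {final}"
    print(claim, "->", claim.endswith("Canberra"))
# Only the emergent claim has a truth value; the sampling steps that built it do not.
```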