
phyphor

@phyphor.one-dash.org

I've said this several times but it is worth repeating: You don't ask an LLM something, you are giving it a prompt. You don't get back an answer, you get plausible sounding text. Using the incorrect words for what they do allows the people peddling the technology to get away with ridiculous claims.

5 replies 437 reposts 884 likes


Lisa She | Her | They @chaucerandbeer.bsky.social

LLMs are drunken parsers and pattern-matchers. Hire a grad student instead. You will get much better results and the graduate student gets to eat.

1 reply 3 reposts 8 likes


Brian Orce (Twitter must be destroyed) @jimnobu.tv

Full Self Driving

0 replies 0 reposts 4 likes


Leger-Felicite Snorlax @segatape.bsky.social

it's autocorrect with good marketing

1 reply 0 reposts 18 likes


Casmilus @casmilus.bsky.social

Yes, they're like columnists. Just generating plausible-sounding texts with no real understanding behind it.

3 replies 4 reposts 37 likes


shadowy apparatus @thwarted.bsky.social

A few weeks ago I said on Hacker News that LLMs are just beefed up Markov chain bots that use different data structures to efficiently store and process against a much larger state but operate the same exact way. Some techbros responded with "yeah, but they are thinking" or some shit.

1 reply 1 repost 30 likes
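[Editor's note: for readers unfamiliar with the Markov chain comparison in the post above, here is a minimal word-level Markov chain text generator. This is a toy sketch to show the "sample the next word from what has followed before" mechanic being alluded to; the corpus, function names, and seed are invented for illustration, and real LLMs use learned neural networks rather than lookup tables, which is where the analogy is contested.]

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain: at each step, randomly pick one of the
    successors recorded for the current word. Stops early if a
    word has no recorded successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny made-up corpus purely for demonstration.
corpus = "the model predicts the next word and the next word follows the model"
chain = build_chain(corpus)
print(generate(chain, "the", 8))
```

The output is locally plausible (every adjacent pair occurred in the corpus) with no global understanding, which is the property the post is pointing at. An LLM replaces the lookup table with a neural network conditioning on a long context, which is the "different data structures, much larger state" part of the claim.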