|
Tica the Sloth@ticasloth.bsky.social |
Kind of like chicken : human :: autocorrect : LLM. I'd need to look into it more
4 replies 0 reposts 0 likes
|
Steven desJardins
@stdesjardins.bsky.social
|
It's an algorithm. A set of rules that produces a result. Calling that intelligence is like rolling dice to find a path through a Choose-Your-Own-Adventure book and calling the dice a writer because they're making creative choices. The creativity is in the input, the written book, not the dice.
2 replies 1 repost 48 likes
|
Winter (Summer Edition)
@wintersky.bsky.social
|
I think you need to look into a number of things more. Your posts are heavily anthropomorphizing LLMs based on the fictional norms of sci-fi 'AI'. They're not remotely similar. Calling an LLM an AI is a marketing tool to sell an overpowered autocomplete trained on stolen content.
0 replies 0 reposts 1 like
|
Justin
@convivialjustin.bsky.social
|
Think of it like a calculator. You type inputs (4 x 5 =) and you get outputs (20). Would you say that a calculator understands things? LLMs are just programs and that's why they get it wrong so often.
2 replies 0 reposts 7 likes
|
Some Guy*
@betterkevin.bsky.social
|
Do you think it matters how they work? Looking at a sample of text and counting what word most often comes after each word meets the criteria of "figur[ing] out what words are more likely to go next to other words."
0 replies 0 reposts 1 like
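|
The "counting what word most often comes after each word" idea in the last reply describes a bigram model, the simplest form of next-word prediction. A minimal sketch of that counting procedure (the sample sentence is made up for illustration; real LLMs are far more complex, but this is the base intuition being invoked):

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus, purely for illustration.
text = "the cat sat on the mat and the cat ran"
words = text.split()

# For each word, count which word follows it and how often.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

# "Predict" the next word after "the" by picking the most frequent follower.
prediction = following["the"].most_common(1)[0][0]
print(prediction)  # "cat" follows "the" twice, "mat" only once
```

Sampling from these counts instead of always taking the most frequent word yields varied output, which is the dice-through-a-book mechanism the earlier reply describes.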