
sophie malice

@sophieactual.mitsuko.nz

primer for how LLMs work

this is grossly simplified, k

acquire huge volumes of text — it's a large LANGUAGE model — but don't bother about the context, that's an out of domain subtlety yet to be addressed

this is a ginormous array of lexemes, inappropriately constrained to mean words 1/4

3 replies 1 repost 5 likes
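[editor's note: a minimal Python sketch of the "ginormous array of lexemes" above, assuming whitespace-split words stand in for lexemes; real models use subword tokens, which is the "inappropriately constrained to mean words" caveat. the corpus string is a tiny placeholder for the huge volumes of acquired text.]

    # toy stand-in for huge volumes of acquired text
    corpus = "the cat sat on the mat the cat ate the mat"

    lexemes = corpus.split()       # the flat array of lexemes (words here, tokens in practice)
    vocab = sorted(set(lexemes))   # the distinct lexemes the model ends up knowing

    print(lexemes)
    print(vocab)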


sophie malice @sophieactual.mitsuko.nz

make another array of the lexeme that's most likely to occur after the current lexeme

oops, lots of them have equal weighting

use rand() to choose¹ the next lexeme

branch back and consider the total result

still statistically correct, continue 2/4

0 replies 0 reposts 4 likes
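[editor's note: a minimal sketch of posts 1/4 and 2/4 in Python, assuming a first-order Markov chain as the stand-in for the model: count which lexeme follows which, then walk the table, letting random choice break the equal weightings the post mentions. an actual LLM predicts from the whole preceding context with a neural network; this is the grossly simplified version.]

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ate the mat".split()

    # "another array": for each lexeme, the lexemes observed to follow it
    successors = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current].append(nxt)

    lexeme = "the"
    output = [lexeme]
    for _ in range(6):
        # rand() picks the next lexeme; equal weightings are broken by chance
        lexeme = random.choice(successors[lexeme])
        output.append(lexeme)

    print(" ".join(output))  # statistically plausible, meaning not guaranteed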


sophie malice @sophieactual.mitsuko.nz

¹that was the fuckup, human intelligence doesn't rand(), it attempts a closeness within the author's domain of understanding, which can be inferred from the broad context of the author's writing 3/4

1 reply 0 reposts 4 likes


Cis PeeBee🏳️‍⚧️🏳️‍🌈BLM🍉 @peebeejaybee.bsky.social

For my computer science undergrads, I set part of the article "ChatGPT is Bullshit" as an exam text

1 reply 0 reposts 4 likes