@rst.bsky.social
It's like any other interaction with an LLM - what you get is the model's best guess at what a plausible answer would look like, with no check that it's actually true. More a smooshed average of human explanations for similar behavior than any real self-knowledge.