I've said this several times, but it is worth repeating:
You don't ask an LLM something; you give it a prompt.
You don't get back an answer; you get plausible-sounding text.
Using the wrong words for what these systems do lets the people peddling the technology get away with ridiculous claims.
A few weeks ago I said on Hacker News that LLMs are just beefed-up Markov chain bots: they use different data structures to efficiently store and process a much larger state, but they operate in exactly the same way. Some techbros responded with "yeah, but they are thinking" or some shit.
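To make the comparison concrete, here's a minimal sketch of the Markov chain version in Python. Everything in it (the `train`/`generate` names, the toy corpus, the order-2 context) is made up for illustration; the point is the shape of the loop, not any particular implementation.

```python
import random
from collections import defaultdict, Counter

def train(tokens, order=2):
    """Count next-token frequencies for every context of `order` tokens."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        table[context][tokens[i + order]] += 1
    return table

def generate(table, seed, length=20):
    """Repeatedly sample a next token conditioned on the recent context."""
    out = list(seed)
    for _ in range(length):
        context = tuple(out[-len(seed):])
        counter = table.get(context)
        if not counter:
            break  # unseen context: a bare lookup table has nothing to say
        choices, counts = zip(*counter.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat sat on the rug".split()
print(generate(train(corpus), seed=("the", "cat")))
```

An LLM runs the same outer loop: condition on the context, get a probability distribution over next tokens, sample one, append it, repeat. What changes is that `table.get(context)` becomes a neural network forward pass that computes the distribution instead of looking it up, which is what lets it handle contexts it never saw verbatim. The generation procedure itself is still "emit the next plausible token," which is why what comes back is plausible-sounding text rather than an answer.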