Why does it remain so hard for people to realize machines don't 'learn' like a human does? There is no intelligence or comprehension there. All these tech terms are just magic words to most people, and the industry is happy to profit off of the misconceptions.
They keep using terms from bio/psych to muddy the hell out of the water, and then no one pushes back on bullshit claims that brains "compute" or "process information," which makes it easy to go "well, if brains compute, the computer is just a different way of doing the same thing!" Dumb as hell.
It frustrates me on so many levels, not least of which is that a lot of these programs were genuinely impressive feats of code, but instead of appreciating that somebody made a really good Roomba, people insist that the Roomba knows what dirt is and hates it.
"AI" has referred to many different things over the years.
Initially it was the bleeding edge of software engineering.
For a while it was real logical reasoning and learning.
But since the '90s, "AI" has meant parrots with memories the size of libraries coupled to less brains than a stoned ant.
They don't learn exactly like a human, but they do learn. They understand things. People keep moving the goalposts of what "understanding" means so they won't have to grapple with this. LLMs are not conscious, but they are intelligent.