They don't learn exactly like a human, but they do learn. They understand things. People keep moving the goalposts of what "understanding" means so they won't have to grapple with this. LLMs are not conscious, but they are intelligent.
people keep moving the goalposts in the other direction to make it seem more advanced than it is.
Yeah. If I give a program a set of words for colors and ask it for a color, it will return a word from that set, and that is TECHNICALLY understanding. But does it know the concept of color? No.
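The toy program described here could be as small as this (a hypothetical sketch; the color set is made up):

```python
import random

# The program "knows" colors only as an opaque set of strings to pick
# from. It will always return a valid color word, but it has zero
# concept of what any of these colors actually look like.
colors = {"red", "green", "blue"}

def give_me_a_color():
    # sorted() just makes the choice reproducible across set orderings
    return random.choice(sorted(colors))

print(give_me_a_color())  # a word from the set, nothing more
```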
But they DON’T understand. The LLM has no idea what red is, just how statistically likely that combination of letters is to appear. It does not comprehend redness on any level except as a math equation.
They really don't. They're a statistical frequency analysis engine: they take the input, compare it against similar inputs they've seen, then amalgamate the corresponding outputs into a single output based on pre-determined math. They don't understand the input or the output, just the statistics.
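The "frequency engine" picture being argued over here can be sketched as a bigram model: predict the next word purely from how often it followed the previous word in training text. (Real LLMs use learned neural representations rather than raw counts; the tiny corpus below is made up for illustration.)

```python
from collections import defaultdict, Counter

# Toy training text.
corpus = "the sky is blue the apple is red the rose is red".split()

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))  # "red" followed "is" twice, "blue" once -> "red"
```

The model will happily emit "red" after "is" without any notion of redness, which is exactly the distinction both sides of this thread are arguing about.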