Scholars are calling BS on ChatGPT:
“Large language models have been plagued by persistent inaccuracies in their output; these are often called ‘AI hallucinations’.”

“We argue that these falsehoods [are] better understood as *bullshit* in the sense explored by Frankfurt (On Bullshit, Princeton, 2005).”
The great irony of the paper is that it attributes to the model the ability to hold concepts like "disregard for truth", "trying to sound knowledgeable", and "trying to please the hearer". The authors are doing far more to anthropomorphize LLMs than all but the most hardcore backers.
LLMs have no interest in being truthful, nor do they have any concept of truth; it's just not how they work. So yes, that fits the very definition of a bullshit generator.