And if you are wrong in your assertion that AIs understand nothing (and, by implication, never will), then you are spreading false reassurance, and people will be woefully unprepared for the arrival of actual AGI.
I'll repeat, as someone with skin in this game: the methods deployed for LLMs and adversarial networks will *never* become AGI. The premise that consciousness will simply pop out if you keep dumping in more training data is fundamentally flawed. There are plenty of other things to worry about first.