I’m no software engineer, just a humanities person who spent 20 years of my life obsessively studying the question “Can reason be automated?” and feeling extreme distress over the catastrophic consequences, for all of us, of this particular iteration of (re)discovering that the answer is “No.”
This particular iteration, if I infer correctly, doesn’t deal with reason at all, just garbage scraped off the internet. On a tangent, you might be interested in this paper, which tries to separate reason from language: www.nature.com/articles/s41...
Perhaps a better answer is “maybe,” depending on how we define reason.
If two outcomes can be quantitatively compared via a formula in well-defined and well-understood problem spaces, then “likely”; otherwise, “no.”
(Admittedly, I don’t understand why the concept of automating reason would cause distress.)
But current tech is a combination of language parsing, language generation, and media generation. Ironically, it is the people not working on it who seem to be trying to make it out to be more than that.
These incredibly elaborate and resource-hungry imitations of cognition and language use exploit our proneness to see meaning in meaningless configurations and to posit homunculi where there are no minds at all. Both their power to deceive and their enormous resource requirements are lethal dangers.