It would be super cool if the tech media would simply stop credulously repeating every value-juicing statement C-Suites make like they’re the in-house PR department. The only difference between these hollow promises and the guy crying, “The end is near!” on the street corner is funding.
‘The only difference between these hollow promises and the guy crying, “The end is near!” on the street corner is funding,’ is the best sentence I’ve read all week.
I'm most disturbed by the presumption that a PhD signifies intelligence.
I was in Mensa briefly and the other members were imbeciles because they all believed they were too smart for school and thus never refined their intelligence with *education.*
Every one of them was a conspiracy nutter as a result.
You know what job ChatGPT could easily replace right now with zero modifications? That's right, jobs like hers; it is a literal bullshit machine. But I guess in the end that's not the point. The suffering is the point.
I hate this woman because she publicly acknowledged that her product would replace creative jobs but then said we never should have had those jobs in the first place. Up there in her glass tower, saying that I deserve to lose my livelihood of 25 years. Fuck, and I mean this sincerely, her.
A PhD is not an intelligence level, PhDs are vastly different from one another, and computers are so different from human brains that it's really silly to compare the two.
Calling it "PhD-level" is especially funny since the basic defining feature of a doctoral dissertation is the contribution of new knowledge, which "AI" categorically does not do.
Being immediately skeptical of anyone who talks like this while trying to sell you something is a basic survival skill. Like looking both ways when you cross the street.
Can't wait for this next gen chatbot to tell people to put glue in their pizza but just written in the style of a PhD dissertation, maybe complete with a bunch of fake citations.
A year and a half might as well be ten years in terms of media cycles and tech development. They claim this stuff to raise funds knowing there will be no follow-up.
PARTICULARLY since “we’ll have it in a year and a half” means “okay we don’t have the training data or the model or all the chips we need yet. But oh boy lemme tell ya what we’re hoping for…”
It’s so ridiculous because degree levels are about expertise, not intelligence. Besides, LLMs have no architectural mechanism to support intelligence. Their core mechanism is probability analysis, not reasoning. They in fact aren’t capable of reason.
I still remember when Serious News Outlets were unquestioningly publishing claims from that one guy who said ChatGPT was sentient and should be given human rights.
They definitely serve themselves and the C-Suiters in their orbit. Obviously there are excellent exceptions like @404media.co and lobbying/public advocacy groups like @eff.org but I just don't know how to frame the rest.
Like she’s blatantly full of shit. No one will follow up on this. The models have already absorbed every publicly available dissertation ever written, so what’s stopping them from being this “smart” now? And it still doesn’t address the “hallucination” problem. It’s a nonsense statement.
Last time ChatGPT came close to anything medical-related, it said insulin was a dessert and anything that needed measuring was a fatal cancer. I highly doubt anything AI-related could be relied on for this.