we spent decades refining the model of “the computer gives the right answer” and I remain unconvinced people wish to replace this with “the computer gives the wrong answer, but in plausible-sounding words”
i do like how they are like fae. very powerful entities you can ask questions of who are just as likely to deceive you as help you and also you can bind them with cleverly worded requests to make them do something they don't 'want' to do
How I imagine AI:
Algorithm which intelligently responds to queries by choosing one of many powerful analytical tools to suss out the best, most relevant answer.
Actual "AI":
Statistically these words frequently appear near these other words. Plus grammar.
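The "words near other words" quip is closer to literal than it sounds. Here's a toy sketch of that idea, assuming nothing fancier than a next-word frequency table (real LLMs use learned neural weights over long contexts, but the next-token spirit is the same):

```python
from collections import Counter, defaultdict

# Caricature of "statistically these words frequently appear near these
# other words": count which word follows which in a tiny corpus, then
# always continue with the most common successor.
corpus = "the computer gives the right answer the computer gives the wrong answer".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def continue_text(word, length=4):
    out = [word]
    for _ in range(length):
        successors = following.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])  # commit to the likeliest next word
    return " ".join(out)

print(continue_text("the"))  # → "the computer gives the computer"
```

Plausible-sounding, grammatical, and not actually an answer to anything, which is rather the point.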
Unfortunately, consider how many business decisions are currently made based on what the final set of numbers in an Excel sheet says, even though the people making those decisions often don't understand what the underlying numbers mean.
As long as it tells them what they want they'll be happy.
I can remember my engineer father bringing home one of the first HP calculators, and the whole fam sitting around the dining table checking its answers. Look how far we've come.
I was a fly on the wall on a Zoom call with some techbro execs chatting about how to 'disrupt' AI.
Near quote:
"I think there's a great opportunity to hire a team of fact checkers to identify the most popular queries and provide sources for reliable answers."
"Mitch, you just invented Wikipedia."
"...Fuck."
The computer gives you the correct answer to the exact question you asked, and when you ask it "what sounds like a plausibly natural response to this text?" this is the correct answer it gives to that question.
It tracks with how the management and executive classes have gone from "I don't know everything, but I'm semi-competent at this job" to "if I lie brazenly, convincingly, and with sufficient bravado, stock price go BRBRBRBRBRRBRBBR"
So this is a class of prompt that LLMs nearly always fail at, because they can't go back and edit the start of a response once it's been generated. People test as they go and then change strategies. LLMs can't.
This has a couple of implications, both for better LLMs (agents, teams of experts) and for jailbreaks on the current generation.
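That left-to-right constraint can be sketched in a few lines. This is a hypothetical fixed lookup table standing in for a real model's learned distribution; the point is that each token is committed the moment it's emitted, and nothing already written gets revised:

```python
# Toy sketch of autoregressive generation: each token is chosen from
# what came before, appended to an append-only list, and never edited.
# (A real LLM samples from a neural distribution; NEXT is a stand-in.)
NEXT = {
    "<start>": "the",
    "the": "answer",
    "answer": "is",
    "is": "plausible",
}

def generate(max_tokens=10):
    emitted = []            # append-only: the "model" never backtracks
    current = "<start>"
    for _ in range(max_tokens):
        nxt = NEXT.get(current)
        if nxt is None:
            break
        emitted.append(nxt)  # committed; no going back to fix earlier words
        current = nxt
    return " ".join(emitted)

print(generate())  # → "the answer is plausible"
```

If the opening turns out to be wrong, the only options are to keep going or start over, which is exactly the failure mode the prompt exploits.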
My favorite challenge for LLMs is to ask them lore questions about your favorite TV show / video game / movie / etc. (something you know in detail, but that isn't universal common knowledge) and then see just how much of it the AI gets wrong.
It gets a lot wrong.
Depends on who the "people" you're referring to are. Politicians need people stupid. Tech bros need to push their AI garbage. All the mega tech companies need relevance. In order to achieve this, they must shitify the internet.
Early computers did math calculations well (not, it's important to note, better than people could, but just as well and much faster).
The Internet introduced the computer as an "answer machine" but it's imperative that we understand it pointed to PEOPLE'S work.
AI is hallucination & regurgitation
The wild thing here for me is that Google Knowledge Graph is actually tremendous at delivering correct answers! But now Google is just, "But what if the answer was worse?"
(There are valid criticisms of Knowledge Graph stealing traffic from sites that rely on search, but it gives right answers!)
Sometimes the right answer is inconvenient or difficult to understand.
So you see, this is a perfect middle ground. A reasonable compromise that has market value for both company and consumer.
Resistance is unreasonable...and frankly, Luddite coded.