the idea that ai would become sentient and kill us all was always a ridiculous shiny object to distract from the vastly more obvious outcome: that humans would simply exploit ai as a dumb tool for killing, with no accountability
Years ago, I watched a Black Mirror episode where a soldier had a neurological implant that disguised humans as mutants. The idea was to limit human emotion (and PTSD) when killing "targets," including civilian women & children. This reminds me of that episode.
I get mad when people post that old IBM "a computer can never be held accountable" slide, because it's brutally obvious that the actual follow-up is "A computer can never be held accountable? That's amazing, have it make ALL of the decisions and kill accountability forever"
The first "kill all humans" AI will only need a slight glitch in its "kill some humans" programming (since we've thrown Asimov's quaint "laws" out the window).
Israel comes home very, very late smelling like a combination of an ashtray, whisky, and lavender.
SO: "Where have you been?!?"
Israel: "Just... You know... Out."
SO: "Is that... lipstick on your collar?"
Israel: "We can talk about it in morning. Look, it was the AI..."
i maintain that a true ai programmed by a human would be more liable to have human neuroses and obsessions, so you might find a chatty ai that is REALLY into, like, idk, True Blood or some weird webcomic
I don't think it's a distraction, but you're correct that AI misuse like Israel's, where the systems follow orders but produce horrifying outcomes for people, was always way more likely than AI misalignment/the systems not following our orders.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
It was rarely brought up seriously except as a straw man by people who wanted to play the worry-free pundit.
Everything that's happening was described by actual critics.
I'm sure it's lost on plenty of people, but the whole sentient AI thing in fiction is almost always meant to be an allegory for exactly this: www.wired.com/2009/03/ff-c...
This was always the most obvious outcome. Whether it's deciding who to let die in an emergency room, who to cut off benefits, or who to bomb, the primary use case of AI in a world like this is creating a screen against accountability in making decisions that hurt people.