I am seeing more and more in the media these days about the “dangers” of AI or AGI (artificial intelligence or artificial general intelligence). What if it “gets out of control”? What if it “decides that it doesn’t need humans”? References to Skynet abound.
It astonishes me (I am getting so very tired of being astonished) how utterly stupid most of us are. AI is going to kill us? I’m rolling on the floor laughing. You can’t be serious.
No matter what anyone tells you, I will clue you in on the truth. AI can only do exactly what we empower it to do. It can’t kill us. We are its masters.
Let me put this as simply as I can, since it seems a bit difficult for most people to grasp.
Suppose there is a switch in front of us. I built this switch, and if I throw it, it will blow up the world.
I throw the switch. The world ends.
Now—and pay attention here—who or what blew up the world? The switch? Or me?
Getting it through our thick skulls
OK, so if your answer to the above was that the switch destroyed the world, then please go home, turn off your Internet, and just wait for the end. You are unreachable. Your level of stupid is terminal.
But I’m guessing that you correctly sussed that it was me who did the dirty deed. The switch was only the instrument of that deed, not the doer.
But… but… but… I can hear you saying already, what if the AI can think for itself?
Excellent. We can clear that up right now. AI is never, ever going to “think for itself”. Get over this nonsense. AI is a machine, and machines can only do what we create them to do.
So suppose I built the switch as a kind of “Schrödinger Switch”. It would sit there doing nothing, and then maybe, at some unpredictable moment, it would throw itself (hah) and end the world. I walk away not knowing if it will ever flip.
Now did the switch kill us all? Or did I?
I hope you’re smart enough to say that I did. The end of the world is still my fault. I, not the switch, am culpable.
Now my “Schrödinger Switch” might feel like it is making its own decisions, and it is certainly making a hands-off decision. But it can make that decision only because that’s what I built it to do.
Are you starting to catch on? Any lights going on upstairs?
We will kill ourselves
AI (or AGI, if you prefer) may well be the instrument by which we destroy ourselves. That is if we don’t do it with greenhouse gases, nuclear weapons, plastics, toxic chemicals, deforestation, bioengineered weapons, gene manipulation stupidity, killer robots, or any of a dozen other potentially world-ending technologies.
What should be clear to anyone paying attention—should have been clear for decades at least—is that we are going to kill ourselves, and quite possibly all life on Earth. We’ve been accelerating—accelerating—toward that apocalypse my entire life. I can’t tell you how surprised I am that it hasn’t happened already.
But we are asking the wrong questions and we are looking in entirely the wrong direction. On purpose, I suspect. And we’ve been doing this since, I dunno, Adam? We have an incredible (and tragic) blind spot: we refuse to see our own culpability.
All of us look for the source of our troubles outside ourselves. The problem must be out there. No way it could be “in here” because that would mean that I have to grow the fuck up and become an adult.
And that, as we all know, is a fate worse than death. Which is why we are collectively choosing death. You can look this truth in the eye like an adult, or you can retreat into your inner infant and pretend you can’t see it. But nature doesn’t give a fuck. Continue down this path and we all die. No mercy. No second chances.
Ask the right questions, dipshits!
A friend sent me a link to a presentation on the “dangers of AI” in which a bunch of infants sat around wringing their hands and bleating “What can we do?”
I’ll save you the suspense: there is almost certainly nothing at all we can do. All of our history—all of it—shows that we never learn from our mistakes. We never pull back. We secretly believe that someone—maybe Mommy or Daddy—will pull our nuts out of the fire before they singe.
But that has never happened. We have consistently, virtually without exception, looked down that path, seen what was coming, and then rushed toward it rather than away from it.
Apparently, we have a universal death wish.
Hand wringing and gnashing of teeth
So I’m watching this presentation, and they put up this statement (and then, immediately below it, show that it is false and deceptive):
50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI
Ha, ha. You can’t make this shit up, as I say so frequently. We’re so far ahead of fiction we might as well just give up on it. Wave to it. Bye, fiction!
Do you see the problem? The problem is that they (these experts) think that the problem is… wait for it… AI.
Here these people are—the very people who are building the “switch” that ends the world—and they are trying to figure out what’s wrong with the switch. How do we keep the switch from killing us?
But the switch isn’t going to kill us. We are.
What the fuck is wrong with these people? If AI is going to kill us all, then why are you building it? Why did you build nuclear bombs? Bioweapons? A capitalist consumer economy based on the utter destruction of the planet’s resources—the same ones we need to survive?
Why the fuck are you building anything with the potential to destroy all life on Earth when you know that we are all utter infants and incapable of controlling our urges, including our most savage and brutal ones?
What planet are these people from?
Walt Kelly was a prophet
Pogo famously told us: We have met the enemy and he is us.
That was years before I was born, and I’m practically Methuselah. But if we’re honest with ourselves (we won’t be), we’ve known this far, far longer. William Golding captured it perfectly in Lord of the Flies:
Fancy thinking the Beast was something you could hunt and kill! You knew, didn’t you? I’m part of you? Close, close, close! I’m the reason why it’s no go? Why things are what they are?
I read that in high school, when it was already twenty years old, and it scared the bejesus out of me. That scene changed my life. Sadly, no one else seems to have been paying attention in class.
Will AI be used to destroy us all? Maybe, but only if the idiots can get there first. My money is still on nuclear war (if you thought that had gone away, you’re really not paying attention) and then on climate change, if we somehow escape the inferno.
But one of them will. There is no doubt of that. Because we are infants who refuse to become adults. And only adults can handle the extremely powerful technologies we’ve invented without getting themselves killed.
If you want to change that—I sure do—then stop worrying about bombs or greenhouse gases or killer robots or, yes, AI and start figuring out how to get humans to grow the fuck up and act accountably.
Fix the humans. It’s our only chance.
And the only way to do that is to become an adult yourself. You cannot fake it. You have to do it. Walk the walk, as they say.
It is almost certainly too late. Maybe you just want to play with your toys a little more before the game is over. That would be sad, but it seems to be the overwhelmingly popular choice. Let Mommy and Daddy fix it.
But if we are to have any hope at all, we need to move the focus off of the “toys” and back onto the “boys”.
Don’t let people tell you that you just don’t understand AI. It does not matter. The people who think they understand AI are the very people working assiduously to destroy all life on Earth—and they even know it! They openly admit it.
To save ourselves it is not AI or any other technology that you need to understand. That’s just a diversion. It’s the magician’s other hand distracting you while she steals your wallet. To save ourselves it is ourselves that we need to see and understand. And fix.
There are no answers “out there”. The answers are all “in here”. After all, if you can’t grow up and behave, then what chance is there that anyone else will?