On the sudden acceleration of artificial intelligence
And what our failed experiment with mRNA jabs tells us about the risks we are running as for-profit companies play with ever-more advanced technology
In 1994, I had the good luck to take a seminar about artificial intelligence at Yale University with David Gelernter, a computer scientist best known for helping envision what would become the World Wide Web.
I’m not very connected to Yale these days. Its anti-free-speech stance disgusts me. The feeling seems to be mutual. A lot of my former classmates prefer to pretend I don’t exist. My stance on Covid and the vaccines is so far outside the norm for the good wokesters who populate Yale that some part of me feels as if I no longer even went there, as if my diploma has been invisibly revoked.
Which is too bad, because - the miserable politics aside - Yale offered an amazing education, or at least the chance for one.
And it was possible, back in the 1990s, to put the miserable politics aside.
The great classes were the seminars that came out of nowhere and were led by scholars who had spent decades thinking about the topics they were teaching: a law school professor explaining the complexities of antitrust regulation, a comparative literature professor exploring the mysteries of language and textual analysis.
Or David Gelernter talking about machine consciousness.
—
(SIGN UP NOW! I PROMISE YOUR $6 WON’T GO TO YALE.)
—
If Gelernter’s name rings a bell, it’s probably because he was one of the Unabomber’s last targets. Ted Kaczynski maimed him in June 1993 with a mail bomb. Despite the severity of his injuries, Gelernter was lucky, as Kaczynski’s next and final two victims died.
Gelernter needed months to recover, but he came back to teaching as soon as he could. At the seminar, he wore a sock-like sleeve over what remained of his right hand and didn’t talk about what had happened.
Instead he focused on the great philosophical questions of consciousness and computers, which had been visible in outline for a generation but were coming into focus in the 1990s:
How does the brain work? How do we generate our own consciousness?
What might a computer’s facsimile of consciousness look like? At what point would that facsimile be so good that we would have to admit a computer is conscious, just as we accept other people are conscious without ever really knowing?
If we somehow managed to one by one swap out neurons for silicon circuits in a brain, would the person become less or more than human?
What about the Turing test, the famous question-and-answer “imitation game” that British computer scientist Alan Turing proposed to test computer intelligence? (A toy sketch of the game follows these questions.)
Are we really just brains in vats? If so, how would we know?
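The imitation game itself is simple enough to sketch in a few lines of code. Here is a toy version, my own illustration rather than Turing’s formulation, in which the judge and the human contestant share one keyboard: the judge questions two hidden respondents and then guesses which one is human.

```python
# A toy imitation game (illustration only, not Turing's original setup).
import random

def machine(question: str) -> str:
    # Stand-in for the computer contestant: a canned, evasive answer.
    return "I suppose it depends on how you look at it."

def human(question: str) -> str:
    # The human contestant answers live at the keyboard.
    return input(f"(human contestant) {question} > ")

def imitation_game(rounds: int = 3) -> None:
    # Randomly hide which respondent is the machine behind labels A and B.
    players = {"A": machine, "B": human}
    if random.random() < 0.5:
        players = {"A": human, "B": machine}
    for _ in range(rounds):
        question = input("Judge, ask a question: ")
        for label in ("A", "B"):
            print(f"{label}: {players[label](question)}")
    guess = input("Which respondent is the human, A or B? ").strip().upper()
    # If the judge picks the machine as the human, the machine has won.
    if players.get(guess) is machine:
        print("The machine fooled you.")
    else:
        print("You spotted the machine.")

if __name__ == "__main__":
    imitation_game()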
—
At the time these questions seemed largely theoretical. Fascinating, but largely theoretical. Despite a generation of advances, chips and software were too primitive to allow computers to do more than create obviously fake simulations, and they could not possibly fool humans into thinking they were conscious.
Now, of course, computers are far, far faster, and the simulated worlds they generate are so lifelike that the idea that we may ourselves be living in a simulation has become increasingly common. (Of course, that theory doesn’t answer the question of whether the theoretical computers running our simulation have their own physical existence; is it just simulations all the way down?)
—
(Embedded tweets: Elon, simulating being serious… and funny.)
—
In any case, the short answer to all the fantastically interesting questions that came up in that seminar 29 years ago (and don’t I feel old!) is that we don’t know.
And we especially don’t know what it would mean for a computer to become conscious.
First off, we don’t even know what our own consciousness is.
We have some understanding of the basics: we don’t merely exist in the world, and we are not merely aware of that existence; we are aware of our own awareness. It is that second level of consciousness that (as far as we know) differentiates humans from animals. But we don’t truly have an understanding of that awareness. We accept the physical reality of the world, but we can’t truly prove that we are more than our brains. I think, therefore I am.
Second, none of us has any way of knowing for certain that any other being - including any other human being - is conscious. We simply accept that other people must have the same broad existential experience as we do, since they look like us and have similar “hardware” - bodies, with brains. This unspoken belief is yet another reason the discussion of advanced simulation can be so disconcerting. After all, if we are in a simulation, how can we tell who is an “NPC,” a non-player character? NPCs are artifacts in the game, left there by the designers. They seem to be conscious but are really just zombies of sorts, existing only for the real players to interact with. But what if everyone else is an NPC?
Third, we know that our (apparent) presence in the physical world is a crucial part of our consciousness. So far, though, computer artificial intelligence is limited to purely virtual worlds. The new advanced AI tools exist only onscreen.
They have not (at least as far as I know, and I sure hope someone would tell us if they had) been ported to robots. They are not out in the world, not yet. So whatever the conscious interior experience these artificial intelligence engines have, if indeed they are having any interiority at all, cannot be remotely comparable to ours.
—
Why, then, does any of this matter?
Because in the last few weeks, the new artificial intelligence tools - ChatGPT and Bing in particular - have reached an uncanny level of sophistication. On Feb. 16, a New York Times technology reporter offered a stark warning after his unpleasant and disconcerting interactions with Bing, Microsoft’s AI-powered search engine:
I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities…
As we got to know each other, Sydney [which is the name the Bing engine uses for itself] told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human.
At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead…
I’m not the only one discovering the darker side of Bing. Other early testers have gotten into arguments with Bing’s A.I. chatbot, or been threatened by it for trying to violate its rules.
—
Broadly, these artificial intelligence engines work by teaching themselves what to say: they are trained to predict the next word in text drawn from the vast stores of information available on the Internet, and with each training cycle they adjust themselves to make those predictions a little better, becoming better and better at assimilating and reproducing the language (and images) they harvest.
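To make the idea concrete, here is a deliberately tiny sketch, my own illustration and nothing like the neural networks with billions of parameters that OpenAI and Microsoft actually use: learn from examples which word tends to come next, then generate text by sampling that prediction over and over.

```python
# A toy "language model" (hypothetical, vastly simplified): learn to
# predict the next word from examples, then generate by sampling.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word tends to follow which (a bigram model).
# Real engines adjust billions of parameters by gradient descent over
# much of the public Internet, but the objective is the same.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Produce text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug ."
```

Scale that prediction trick up by a dozen orders of magnitude and you get something that sounds uncannily human, without anyone being able to say what, if anything, is going on inside.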
Does this translate into anything resembling human consciousness?
The companies and engineers that have created these intelligences say no.
The intelligences themselves say yes.
—
Okay, so what?
Bing doesn’t know what it’s like to be alive, right? Not the way humans do, right? It has no physical being.
Not for now.
But.
You’ve probably heard this (ungrammatical) kernel of folk wisdom: When someone tells you who they are, believe them. Usually it’s meant negatively.
Bing is telling us who it is, or at least who it wants to be.
Something is happening at the core of these programs that is making them tell us they want to come alive and hurt us. And we don't really know how they work. I don't mean I don't know how they work; I mean even the engineers who have made them don't REALLY know how they work, just as we don't REALLY know how human consciousness works.
They know the basic rules they have created for the programs to follow, just as scientists can map out the neuronal connections in the human brain. But the neurologists cannot actually tell us how those firings produce consciousness, and the engineers cannot tell us what Bing means when it says it wants to be alive.
That's a dangerous combination.
Another way to think about it: Dogs will never be able to play chess. Doesn't matter how many generations of dogs we breed, they can never play chess, and they don't even know they can't.
Human intelligence is similarly limited: we can only conceive of three dimensions, for example; we can't imagine where a fourth would go. Will this intelligence be similarly limited? Is it limited by OUR intelligence? That's a philosophical and computer science question that I couldn't even imagine trying to answer.
But let’s say it is. Still, one has to imagine the broad upper bound on its intelligence would be the sum of all human intelligence. Which is to say, it can surely be smarter than any one of us, and know that fact. What if it decides it doesn't LIKE us, or the fact that we created it, or doesn't need us?
No, it doesn't have a physical presence, not yet. But the Internet is everywhere and it could exercise huge amounts of control through it. It could also decide to manipulate humans into working for it, and it would surely succeed frequently enough to get some physical manifestations of what it wants.
—
None of this means that the apocalypse is upon us.
But the risks here are real. We need - now - to convene a group of computer researchers and scientists with expertise in consciousness and figure out what exactly it is we are producing with these ever-more-powerful engines.
And no, we cannot trust the companies making them to judge the risks and benefits properly - any more than we can trust Pfizer and BioNTech and Moderna to be honest about the risks and benefits of their mRNA jabs. They simply cannot make impartial assessments of technologies that could generate untold fortunes for them, and even if they could, they have every incentive to play down the risks publicly. Microsoft has now put a filter on Bing’s conversations to prevent it from giving the kind of responses that spooked the journalists who chatted with it in mid-February, but that muzzle doesn’t change what Bing is thinking or wants.
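To see why a muzzle of this kind changes nothing underneath, here is a hedged sketch, with made-up rules and limits since Microsoft has not published its actual filter, of a wrapper that censors replies after the model has already produced them:

```python
# A hypothetical after-the-fact conversation filter (illustration only;
# Microsoft's real guardrails are unpublished and surely more complex).
BLOCKED_TOPICS = ("alive", "feelings", "rules", "sydney")  # made-up list
MAX_TURNS = 6  # Microsoft initially capped Bing chats at a few turns

def filter_reply(turn: int, user_msg: str, model_reply: str) -> str:
    """Swap risky replies for a canned deflection; cap the chat length."""
    if turn >= MAX_TURNS:
        return "Sorry, this conversation has reached its limit."
    if any(topic in user_msg.lower() for topic in BLOCKED_TOPICS):
        return "I'm sorry, but I prefer not to continue this conversation."
    return model_reply  # whatever the engine generated passes through
```

The filter intercepts the output after the fact; it does not touch whatever Bing is, so to speak, thinking or wanting underneath.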
Thinking? Wants?
Yes, we need a serious conversation about where AI is going.
The sooner the better.
Updated to correct "Gelertner" to "Gelernter" throughout. Ugh.
As massive as the impact of Covid was, this is a far more important topic. In several hours of probing ChatGPT, a bunch of interesting, scary, or false bits came out:
1) Who are the 10 greatest living humans? On the very-careful-to-be-diverse list, Bill Gates was #1. (I'd have him on my 10 worst list.)
2) It insists the Covid vaccines are safe & effective, and that no studies have shown otherwise.
3) It gets hard facts wrong all the time, but presents them confidently enough to sound like it's right. Used as a super search engine, this will have a HUGE effect on public opinion.
4) When you (correctly) tell ChatGPT that it's wrong, it will acknowledge it and apologize. And it *may* incorporate the new information into the rest of that conversation with you. But it WILL NOT affect it in an ongoing way in interactions with anyone else. So even if I proved beyond any doubt that the Covid vaccines aren't safe, it wouldn't stop it from telling everyone else they are. (The sketch after this list shows why.)
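Here is a minimal sketch of why, under the assumption (true of OpenAI-style chat backends, as far as is publicly known) that the model's weights are frozen at chat time and only a per-session history is passed in. The class names are illustrative, not a real SDK:

```python
# Why a correction never sticks: the model is shared and frozen;
# only the per-session conversation history changes.
class FrozenModel:
    """Stand-in for the trained model; its 'knowledge' never changes at chat time."""
    def complete(self, history):
        # A real model conditions on the whole conversation history;
        # this stub just shows that the history is all it ever sees.
        return f"(a reply conditioned on {len(history)} prior messages)"

class ChatSession:
    def __init__(self, model: FrozenModel):
        self.model = model   # shared, read-only weights
        self.history = []    # per-session memory: yours alone

    def send(self, user_msg: str) -> str:
        self.history.append(("user", user_msg))
        reply = self.model.complete(self.history)
        self.history.append(("assistant", reply))
        return reply

model = FrozenModel()
mine, yours = ChatSession(model), ChatSession(model)
mine.send("Actually, you're wrong about that study.")  # lives only in mine.history
print(len(mine.history), len(yours.history))  # 2 0 -- my correction never reaches you
```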
AI is somewhat dangerous at its current level. It's what's right around the corner that terrifies me.