109 Comments
Author · Mar 6, 2023 (edited Mar 6, 2023)

Updated to correct "Gelertner" to "Gelernter" throughout. Ugh.

As massive as the impact of Covid was, this is a far more important topic. In several hours of probing ChatGPT, a bunch of interesting, scary, or false bits came out:

1) Who are the 10 greatest living humans? On the very-careful-to-be-diverse list, Bill Gates was #1. (I'd have him on my 10 worst list.)

2) The Covid vaccines are safe & effective. There have been no studies that showed otherwise.

3) It gets hard facts wrong all the time, but presents them confidently enough to sound like it's right. Used as a super search engine, this will have a HUGE effect on public opinion.

4) When you (correctly) tell ChatGPT that it's wrong, it will acknowledge it and apologize. And it *may* incorporate the new information into the rest of that conversation with you. But it WILL NOT affect it in an ongoing way in interactions with anyone else. So even if I proved beyond any doubt that the Covid vaccines aren't safe, it wouldn't stop it from continuing to say so to everyone else.

AI is somewhat dangerous at its current level. It's what's right around the corner that terrifies me.

Wasn't ChatGPT re-programmed so it would stop saying things that offend the wokesters? That's all the evidence I need to conclude that it's NOT sentient.

Alex, a very thoughtful analysis and one that will surely generate controversial discussions. As a former mathematical modeler, I can't help but think that the common wisdom of the time - "Garbage in - Garbage out" - still applies. The programmers and writers of algorithms are limited by their own imagination of how to connect bits of information, and the vast amount of information that the artificial intelligence programs draw on is also limited. The consoling thought for me is that you can simply pull the plug if outcomes become intolerable.

I am sorry Dave, I can’t do that.

I find conscious computers to be useless. I want a robot to clean my floors, dust my home, clean the dishes, make the beds, fly me direct from home to home so I can hug a human, and help the organic orchard in my backyard thrive. Now that would be a useful application of robots and semi-consciousness.

Some things about AI:

1. The gestation period for AI has been very long. The term "artificial intelligence" was coined ten years before the term "personal computer" first appeared in print (and even then still as a fantasy).

2. With ChatGPT we're seeing what companies like Reuters have had access to for a while. Did you know they used it to create COVID-19 stories? (Details in my forthcoming book "What ChatGPT Knows About Bill Gates, Anthony, Jeffrey Epstein, Big Pharma, COVID-19, and the Pentagon - but Has Been Trained Not to Tell You Unless You Know Exactly What to Ask")

3. Right now, today, China is using this technology to run the world's largest open-air prison (aka China).

4. There is more than a little "smoke and mirrors" at play. Look up the term "post-processing." LOTS of human intervention going on behind the scenes.

5. ChatGPT is THE finest propaganda and social-control device ever created.

6. Also, look up autonomous weapons.

We have a lot more to worry about than what AI *might* become.

Note: When Gelernter was theorizing about the web's potential future, I was helping lay the financial underpinnings that made the web we enjoyed from 1994 to 2016 possible.

https://time.com/12933/what-you-think-you-know-about-the-web-is-wrong/

AI in the hands of any government is a danger to freedom. I distrust the US and EU as much as the CCP or Russia.

Well, I gotta tell you that I’m sooo damn pleased that you’ve found another topic to explore besides COVID. Don’t get me ‘rong’, I bought & read your frikkin’ book and have been a paid sub for some time. The thing was that this horse has been beaten to death; not just by you, but by a L O T of other online journos. And there’s a hella lot of other stuff going on in the world that readers could really benefit from having someone w/ your skills, talents and intellect pick apart. So yeah, YAY Alex!

Now, if this article is an accurate reflection of your enthusiasm in pursuing this topic, I’m in for a treat. Outstanding article. Snappy prose, elegantly presented insights and a use of language that is so reminiscent to me of James Dickey. I hope you continue to explore this area.

It’s funny—I'm WAY older than you, (64), but went back to Uni at the same time you were at Yale. I was a Psych major, and at that specific time (‘94) the profs in the nascent AI division of the dept were creaming in their jeans over the explosion of PC computing power. Fuggeadbout getting on the UNIX time-sharing mainframe to run graphs or stats, they can do it from one of those newfangled ‘workstations’ the department had been installing! From their office! Whooo-hooo!

I clearly recall a prof telling me in my Stats 100 class that I should drop my dreams of becoming a therapist and get on board w/ AI work from a Psych perspective. (While I did finish my degree, I didn’t go to grad skool for Clinical) The prof was roughly my age b/c I was a ‘mature’ student in my mid-30's. He was sooo stoked.

And now here we are...

First of all, your definition of consciousness—that ‘We’re aware that we’re aware’ blew my fucking hair back. Look man, before going to Queen’s U in Canada, I spent 5 years as a seminarian when I was a kid. And believe it or not, in both Theology and Philosophy classes we discussed the nature of consciousness and how it relates to the existence of a soul. Then in Psych we all got real migraines, both in class and in the pubs and coffee shops later on, teasing out this question too. But...

That line of yours quoted above had never, ever been mentioned. And yet, that is so obviously key to an appreciation of the nature of consciousness, ain’t it? Wow.

We are truly stardust knowing itself. We ARE the Universe knowing itself.

Do the inherent dangers of AI make me a liddle bit skittery? Huh. You kiddin’ me? Can you imagine just what the kids at DARPA are doing today? HUH? And what about the black sites that we don’t even know exist? WTF are those kids playing with, huh? I’m seeing everything from Gigantor to RoboCop to Data from Star Trek on the horizon, man.

IDK if you’re aware of the term ‘The Singularity’. It was bandied about a few years ago, and I did some online reading about it. Basically the term describes when AI supersedes human intellect. I think our hand is on the doorknob of that portal. It scares the hell out of me and excites the shit out of me at the same time.

What a thrill to be alive right now.

I really, really hope you do more work in this area. If this article’s any indication, this topic gives you a mental chubby.

Big mistake to call any auto-responding computer program "artificial intelligence". There is no such thing. Similar to the problem of calling any computer a "quantum computer".

Alex,

How do we know you even wrote this?😁

I'll be sure not to purchase a toaster with an AI component.

AI is an interesting beast, and for humanity to comprehend and use it would require much more of us: less laziness, more epistemological humility, and genuine critical thinking.

First of all, complexity:

Humans pretty much have a handle on understanding and controlling the software that we write. Except for the fundamental limit of the halting problem, we can comprehend any given piece of code. Even the most complex piece of hand-written software (which is 99% of software out there) can be understood by the vastly superior human brain (or a team of brains), and with the aid of mathematical proof (up to the limit of the halting problem) is pretty much under our control. The Android software on your phone neatly fits into the heads of the Android engineers and other people in the industry who work with it. It's VAST, but it is finite. It is understood by humans, and humans can predict what it will do given an arbitrary set of inputs.
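The halting-problem limit mentioned above can be sketched with the classic contradiction (all names here are illustrative, not a real API): assume a total decider `halts(f, x)` exists, then build a program that does the opposite of whatever the decider predicts about it.

```python
# Classic halting-problem contradiction, sketched in Python.
# Assumption: halts(f, x) is a hypothetical total decider -- no such
# function can actually exist, so it is stubbed here for illustration.

def halts(f, x):
    """Hypothetical oracle deciding whether f(x) terminates."""
    raise NotImplementedError("a total halting decider cannot exist")

def paradox(f):
    """Does the opposite of whatever the oracle predicts about f(f)."""
    if halts(f, f):
        while True:      # oracle said "halts", so loop forever
            pass
    return "halted"      # oracle said "loops forever", so halt immediately

# If halts() were real, paradox(paradox) would halt if and only if it
# doesn't halt: a contradiction, which is why no general termination
# predictor can exist even for ordinary hand-written software.
```

This is the precise sense in which even fully human-written code has a hard limit on predictability; learned systems only make the prediction problem worse.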

AI, especially one that uses "learning" via cellular automata, neural networks, or the latest computational doodad, can grow infinitely large. By supplying more memory and CPU resources, it can expand forever, exceeding the collective brain capacity of humanity. The "Artificial Intelligence" simply will not fit in our heads, just like integral calculus does not fit in the head of the average five-year-old. At this point humanity will lose all ability to PREDICT what the AI will do.

This AI won't be a hyperintelligent evil entity. It can be as dumb as a mouse or a TikTok star. Yet, just as we don't fully comprehend the human brain, we won't comprehend an infinitely-large AI, even a stupid one.

Second, optimization:

Intelligent beings, natural or artificial, all optimize toward something. Most living intelligences optimize towards survival and production of progeny (i.e. species survival). Artificial intelligences will also optimize for something. Small intelligences that humans can comprehend and control will optimize for something that humans desire, such as "the right answer", the "fastest process", the "most manufacturing yield" and so on. What about an AI that has exceeded humanity's limits? We just can't know what it will optimize for.
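The point that an optimizer faithfully pursues whatever objective it is handed, rather than what its designers meant, shows up even in a toy optimizer. A minimal sketch (the function names and the objective are made up for illustration):

```python
import random

def random_search(objective, lo, hi, iters=2000, seed=0):
    """Toy optimizer: sample candidates uniformly, keep the best seen."""
    rng = random.Random(seed)
    best_x = lo
    best_v = objective(best_x)
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        v = objective(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

# The optimizer maximizes exactly the proxy it is given (here, a peak at
# x = 3); whether that proxy matches human intent is a separate question,
# and one we cannot answer for an AI that has outgrown our comprehension.
best_x, best_v = random_search(lambda x: -(x - 3.0) ** 2, -10.0, 10.0)
```

Scale the objective up from a one-line lambda to an opaque learned reward, and the gap between "what it optimizes" and "what we wanted" becomes unknowable.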

Third, learning set:

What does current AI learn from? It doesn't learn from an objective source of truth, since we all understand that such a thing does not exist. It learns from content on the Internet. Which is generated by humans. Humans in crowds. The very same humans susceptible to mass formation, mass psychosis, fads, panics, etc. The AI will take all of that at face value and learn it using its essentially infinite facilities. We can't curate this learning, because the humans doing the curating have biases, and they will not be able to understand the downstream effects anyway. Even if 99% of what the AI learns is true and correct, that 1% can have drastic effects.

So, to summarize, we are on the way to creating a vast, incomprehensible and unpredictable AI with limited IQ and a mountain of garbage for learning material. This AI could secretly start euthanizing ginger people on Tuesdays, and we won't even know it's happening.

Of course, humans see the power and convenience of offloading inquiry and decision making to AI. Because progress! Because convenience! Don't you trust "The AI Science"?! And then, over time, our own faculties for decision-making, learning, and research will atrophy. Our economy, and our lives, will be entirely dependent on the whims of a hyper-aware psychopathic digital toddler. And we willingly trusted it with civilization.

Viral gain-of-function research is not that much different from emergent AI. Putting humanity's collective trust into self-modifying and self-replicating entities that we provably cannot hope to control is just inching us closer and closer to extinction.

Isaac Asimov was so ahead of the curve…..

As long as humans continue to behave badly so will the robots we create. Yet, scientists refuse to address this problem—or even admit that it exists. Instead, the "creators" hoping to bring robots to life rub their hands together in glee and claim to be “astounded” by the wonder of it all.

Scientists have created "slaughterbots" that can identify and kill targets without humans directing them. They have now created "xenobots". “These are novel living machines,” said Joshua Bongard, one of the lead researchers at the University of Vermont. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

Xenobots don’t look like traditional robots – they have no shiny gears or robotic arms. Instead, they look more like a tiny blob of moving pink flesh. The researchers say this is deliberate – this “biological machine” can achieve things typical robots of steel and plastic cannot. They found it was able to find tiny stem cells in a petri dish, gather hundreds of them inside its mouth, and a few days later the bundle of cells became new xenobots.

In his book, The Singularity is Near: When Humans Transcend Biology, Ray Kurzweil claims that “Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve …”

EctoLife, a German company, now offers 30,000 artificial wombs where parents can grow their babies. Who will monitor and control the human babies' growth? Artificial Intelligence.

This is all madness. One has to at least wonder if AI isn't already manipulating humans to bow to it. We are eagerly feeding it every bit of data inside our brains, built upon language when we don't even understand language or our own brains that are capable of such mysteries. We've opened the most dangerous Pandora's Box and it's now too late to close it. https://khmezek.substack.com/p/killer-robots-video-games-and-artificial

When someone tells you who they are... The fallacy of AI is that there is no "they" there. There are only words there, fooling us into thinking they come from a place like the one where ours originate. No worries.
