Turns out a lot of Unreported Truths readers have been thinking hard about AI
And your takes are fascinating.
I expected a strong response to Friday’s post, “We need to talk about AI.”
But nothing like this. Almost 600 of you have already posted comments. Dozens more have emailed me directly. And your views are striking. Many of you predict artificial intelligence will drive big economic changes - and soon.
You aren’t guessing, either. More Unreported Truths readers than I expected have significant firsthand experience with AI, either as software developers or as corporate executives who are experimenting with it and in some cases now relying heavily on it.
—
(Support Unreported Truths! It’s not just reporting, it’s a community.)
—
Your fluency with emerging technology shouldn’t have surprised me.
I know firsthand the legacy media is wrong when it smears readers of independent reporting as conspiracy theorists, Luddites, or right-wing cranks. The opposite is true, at least for Unreported Truths. Still, the depth of your engagement with AI was telling.
And of the 3,600 readers who responded to the poll question “Will AI transform our jobs and lives in the next decade?” fully 70 percent said yes - compared to only 9 percent who said no. (The rest didn’t know enough to say.)
This conversation has only begun, and I expect to highlight some of the most interesting emails and posts in the next few days.
But first, here’s a loose framework for thinking about AI’s potential threats that you helped crystallize for me.
It seems to me that in discussing AI, we are conflating three distinct risks.
The first is what might be called the Matrix risk: the existential threat that the core software will become conscious - or approximate consciousness closely enough that, to us at least, the two are indistinguishable.
In other words, that the intelligence itself will “wake up” and then actively seek to destroy humanity, either because it perceives human beings as a threat or because it simply views us as parasitical and useless.
How serious a threat is this scenario? I can’t begin to guess. But even if it is only a tiny risk, we must pay attention to it because the outcome would be the deaths of everyone on the planet. Including, not to sugarcoat, you. And your kids.
—
(Turns out it was a documentary!)
—
The second is what might be called the WarGames risk - that the United States, China, and other nation-states, in an effort to retain dominance, will hand too much control to autonomous military systems, including drones and robots, and that we will discover too late that those systems are too efficient at killing to be stopped. (The two scenarios seem very similar, but they are crucially different. In the Matrix scenario, the machines kill us all without our direction or approval; in the WarGames scenario, we push the button.)
Sunday’s successful Ukrainian drone attack on airbases deep inside Russia shows just how capable drones have become, and militaries face increasing pressure to build AI into more and more of their systems, both to reduce the risk to warfighters and to gain tactical and strategic advantage.
This risk is likely more real and urgent than the first, but it is also likely to be more manageable by treaty and top-down control. We and the Soviet Union had the ability to incinerate the world during the Cold War, but we managed to avoid launch. That said, big countries - the United States and China in particular - urgently need to start negotiating on this issue.
—
(The only way to win is to subscribe to Unreported Truths!)
—
Finally, the third risk is economic.
Your emails make clear that the disruption AI threatens for white-collar workers is not just real but already underway.
AI’s skeptics say those changes are overstated.
AI’s bulls point out that since the 1700s, humanity has absorbed massive shifts in labor as first farming and then industrial production became far more efficient - and that the net result has not been mass unemployment and starvation but societies wealthier and healthier than anyone could have imagined even 200 years ago.
And AI’s (economic) doomsayers note that those changes played out over centuries, not a few years - and that, in any case, it is not clear what will be left for white-collar workers to do if artificial intelligence automates knowledge work.
I don’t have the answers, but I’m glad I asked the question. For now, I’ll take UTRI (Unreported Truths Reader Intelligence) over AI any day.
Will that still be true next year, much less in 2035? I guess we’ll see.
More to come, of course.
—
(What you thought)
Without a doubt AI, for good or bad, has already significantly changed the way all of us (whether we know it or not) interact with the new reality of the world we live in.
The main question for me is:
Will AI eventually become sentient?
I mean, all sapient beings are sentient (us, and perhaps dolphins or elephants), but not all sentient beings are sapient. Perhaps AI could actually break the relationship between the two.
There's a chance, in my mind, that AI itself could redefine or blur the difference between the two. For example, notwithstanding subjective "feelings," it's possible that the rest of the attributes (of both) could eventually be attained, and "feelings" could be faked to the point that, as humans, we wouldn't know the difference.
And if this is the case, it becomes "real," because if we're duped, we may come to believe it has subjective experience simply because we treat it as if it had that capability.
In that scenario, it would actually change reality as we know it, even if the AI knows it doesn't have that capability and is merely fooling us.
I mean, if AI can fake it to the extent that we don't have the capacity to tell whether it's fake, then it might as well be real to us.
In the interest of time (and to spare you annoyance), this is a rudimentary explanation of what I'm trying to articulate, but it seems to me that if you just ask that question and then expand the thought process in a more sophisticated manner, it opens up a thousand other questions.
—
I almost answered your first writing on this, but while I’m really scared of the things you listed that could occur, I’m hoping and praying that they don’t.
Another issue that I HAVE experienced is an attempt to scam me with an AI bot that was so convincingly human that I could easily believe people might fall for it. It seemed to be reading my mind, almost, but there was just something “off” about the conversation. Beware!