Grab a small object that is near you, right now, throw it up in the air, and catch it.

To control a robotic arm that does the same thing, a computer has to do a lot of fancy math. But you didn't do any math there. You just caught the object. The fact that you and the computer do the same thing doesn't mean that you did it in the same way -- in computer speak, the same abstraction does not mean the same implementation.

Another example: you can screw and unscrew a simple Phillips-head screw with your fingers. You can also use a screwdriver. But that doesn't mean that the screwdriver is similar to your hand in any meaningful way, or that your hand is some kind of "biological screwdriver".

What LLMs do is just fundamentally dissimilar to what the human brain does on many levels. Did you do any statistical analysis in order to figure out what to write in this article? No? Then you're not doing things the way that the LLM does. The LLM is not a human brain. It's not even "like" a human brain, any more than a screwdriver is "like" a hand.

Until I see a convincing case to the contrary, I'm gonna assume that it's a dumb machine. It doesn't "want" anything.

This is also why I'm skeptical of the possibility of being "uploaded to" a computer. The digital switches flipping aren't "you" any more than a reflection of you in a mirror is "you".

I'm just here for the comments. This is a fascinating philosophical topic. I want to believe that our discussions about this are much like the discussions among the great philosophers of human history as they tackled these very interesting questions about who we are. :)

Thank you Alex for bringing this up with your stellar prose. Well done!

My youngest child, 3 years old, has to express his new thoughts in the words he currently has at hand. That can result in some adorable conversations. But he is obviously having his own new thoughts. That's consciousness. He can think new things, and then he has to express them in the words he knows so far. ChatGPT and the like know a lot of words and know how people use those words to make thoughts, but they can't generate new ideas, because they're not a human mind. Eventually, people will accept that the human mind and consciousness are unique and that we can't make anything like them, because we aren't God.

It may be a parlor trick, but we increasingly put our faith, trust, and productive economy into parlor tricks. If the parlor trick grows beyond our capacity to understand and control it, well, what could possibly go wrong?

I would offer emotion as at least part of the answer. Logic is logic no matter how complex it is to program. Emotional responses and thoughts are not remotely within the capabilities of any machine.

Mar 6, 2023·edited Mar 6, 2023

It's a fair question and frankly we lack the knowledge to even describe consciousness in a clear enough manner to point to a simple delineation around which we can declare something sentient. This is a rabbit hole of a topic with reams of great discussion available - both technical and philosophical.

That being said - I spent a few years working with deep learning machines. My belief is they are a long way off from demonstrating true sentience. They are limited when you peek "under the hood". I commented on your last post that they are a sort of framework capturing the gestalt of their training material. Things that I believe are necessary to demonstrate sentience, such as reasoning and purpose, at least appear to be completely absent. I think there is at least one further generational leap in capability required before these programs can demonstrate true "thought" - and there are significant technical barriers that would have to be overcome to achieve that.

In my opinion, the contemporary danger of AI is in how it is weaponized by humans.

The difference is that these language models are being trained on data produced by humans. That's why it sounds like a human - the training data that it is programmed to mimic is human generated! Ultimately it is just code running on computers - these models aren't actually thinking about anything. They just print output designed to sound like a human wrote it. That's what these models are supposed to do.
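This point can be made concrete with a deliberately tiny sketch (a toy bigram model, purely illustrative; real LLMs use neural networks and vastly more data, and the corpus here is invented):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in human-written
# text, then predict the most frequent continuation seen in training.
def train_bigram(text):
    words = text.split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(model, word):
    # The output is designed to look like the training data - nothing more.
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" - seen most often after "the"
```

Scale the same idea up to billions of parameters and the output starts to sound human, because the training data was human.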

Mar 6, 2023·edited Mar 6, 2023

It’s absolutely a parlor trick and one we should be very concerned about.

Here’s my issue. If you attempt to use it as a mechanism for any form of research or lookup of information, it presents its answers as authoritative and accurate when in fact they are not. For example, I asked when WW2 ended, and it responded 1945. If I simply accepted that seemingly reasonable response as truth, I would never have been told that the official end of the war did not come until the early 2000s, when Russia finally signed off with Germany to agree to end the war. If I use Google to find the answer, I am presented with a set of results that I can peruse and use my own intellect to discern the truth. Similarly, when I began questioning it about Covid vaccines and the damage they cause, it responded with the party-approved answers, then attempted to dissuade me from any opposing viewpoint, ignoring all other evidence supporting a contrary view. This is not to say that Google doesn’t attempt to color its results. But if people are lazy and begin to simply accept the answers provided through the eye in the sky, we are all doomed, and AI the parlor trick will get us there much sooner.

This is starting to seem like a freshman philosophy class sitting around stoned one night.

There is one simple reason that AI CANNOT become sentient. Sentience involves learning, not collating. And AI collates. Learning requires the ability to choose. But AI is rule-based, and for the same input occurring with the same prior circumstances, its response is 100% predictable. Yes, it's complex, but still predictable. In fancy language, it's deterministic.

The human brain, on the other hand, is not deterministic. It is not simply the sum of its inputs projected against its chemical and anatomic elements. I can choose the salad or the steak on the spur of the moment, and it's not predictable. Taken over a population, one can make accurate statistical projections, but they do not apply to the individual.

In short, we have the ability to choose, which is essential for sentience. Choice cannot arrive mechanistically, and an AI is, regardless of complexity, a mechanism.
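The determinism claim can be illustrated in a few lines (a hypothetical sketch; the seed and menu options are made up):

```python
import random

# A seeded, rule-based process: identical input and identical internal
# state always yield an identical output, run after run.
def generate(seed, options):
    rng = random.Random(seed)  # fixed internal state
    return [rng.choice(options) for _ in range(5)]

opts = ["salad", "steak"]
print(generate(42, opts) == generate(42, opts))  # prints True: fully predictable
```

Even when a program looks like it is "choosing", rerunning it from the same state reproduces the choice exactly.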

“Just a predictive engine that tries to generate desired responses.” So it’s basically a politician?

I have to ask who is programming AI, what belief system/bias is involved, what knowledge is included or excluded, etc. Second, no one is talking about God and a person's soul. Souls are what make our minds work in a free-will state. We are not machines. Machines cannot be people, yet there are those who think you can combine the two. Third, the human mind's muscles need to be stimulated, and allowing machines to do all the thinking for you takes away creativity, imagination, and all the thought processes required in daily life. When you quit using those muscles, what will happen to humanity? Walking zombies? I already see a decline in analytical skills, communication skills, logic, and reasoning because of the technological influences in our lives. When companies are hiring psychologists to come in and teach people how to communicate spontaneously, because they can't have a conversation with another person after constant texting and emoji use, we are in serious trouble. This is not living. This is just existing. This isn't the world I want to live in. AI is dangerous, and unless humanity puts it in its place, we will rue the day.

I am a software engineer with 35 years of experience. The most important thing to know about computers is that they only do what human beings have told them to do. Let me repeat that: computers only do what human beings have told them to do. If software seems to "learn," it's only "learning" what it was programmed to "learn" by human beings. It responds only as it's been programmed to respond. It's just ones and zeroes in the computer. And any beginning programming student can write a program that responds with "I am. I am not." And it won't matter if the program is on a laptop or in a robot -- it's still just zeroes and ones programmed by humans. If you're worried about what it might do, talk to the humans programming it. If the humans have nefarious intent, or just don't have the capability to understand the ramifications of their algorithms, then you can have software that does undesirable things. But it's the fault of the humans. Because computers only do what human beings program them to do.

Mar 6, 2023·edited Mar 6, 2023

My experience with ChatGPT. I was trying to find a book I read in my childhood. I thought it was worth a try with ChatGPT:

Me: Do you know Russian novels for children?

ChatGPT: Yes, there are many Russian novels for children that are popular and beloved both in Russia and around the world. Here are a few examples:

"The Adventures of Dunno and His Friends" by Nikolai Nosov - This is a series of children's books about the adventures of a small man called Dunno and his group of friends.

"The Little Humpbacked Horse" by Pyotr Yershov - This is a famous fairy tale about a horse with a hump who helps a young man win the hand of a beautiful princess.

"The Scarlet Flower" by Sergei Aksakov - This is a classic Russian fairy tale about a merchant's youngest daughter who is promised to a fearsome monster in exchange for wealth.

"The Twelve Months" by Samuil Marshak - This is a retelling of a traditional Russian fairy tale about a girl who sets out to find the twelve months of the year.

"The Adventures of Neznaika and His Friends" by Nikolai Nosov - This is another series of children's books by the same author as "The Adventures of Dunno and His Friends". It follows the adventures of a boy named Neznaika and his group of friends.

These are just a few examples of the many wonderful Russian novels for children that are available.

Me: What is the title of a Russian novel about a girl whose parents divorced, she lived with her mother, and she liked a boy, whose father was a tram conductor and died?

ChatGPT: The novel you are referring to is most likely "The Master and Margarita" by Mikhail Bulgakov.

ChatGPT: I'm sorry, I made a mistake. "The Master and Margarita" is not the novel you are referring to. I apologize for the confusion. Unfortunately, based on the information provided, I am not able to identify the specific novel you are referring to.

Me: Incorrect (I typed this while ChatGPT was generating the previous answer)

ChatGPT: I apologize for my mistake earlier. "The Master and Margarita" by Mikhail Bulgakov is not the novel you are referring to. Based on your description, the novel you are thinking of may be "The Time: Night" by Ludmila Ulitskaya.

The novel tells the story of a young girl named Masha who lives with her mother after her parents' divorce. She falls in love with a boy named Volodya, whose father is a tram conductor and dies tragically. The novel explores themes of family, love, loss, and the passage of time.

Me: You are off :-)

ChatGPT: I apologize for my mistake again. Based on your description, the novel you are referring to is "The Dream Life of Sukhanov" by Olga Grushin.

The novel follows the life of Anatoly Pavlovich Sukhanov, a successful art critic in Moscow during the Soviet era. Sukhanov's past comes back to haunt him when he has a dream about a childhood friend, which causes him to question the choices he has made in his life. Sukhanov's daughter is in love with a boy whose father is a tram conductor and died in a tram accident. The novel explores themes of memory, identity, and the struggle to reconcile the past with the present.

Me: You make shit up lol

ChatGPT: I apologize if my previous response was not helpful. As an AI language model, my responses are generated based on the information provided to me, and sometimes I may make mistakes. I always strive to provide accurate and helpful responses to the best of my ability.

If you have any additional information or details about the novel you are thinking of, I would be happy to try and help you identify it.

and so on...

So, in the end my wife found the title through some deeper manual search. In case you are interested, the name of the book is "The girl and the aviplane" by V.L.Kiselev (loose translation of the title).

My point is: ChatGPT made stuff up. The answer should have been "I don't know" from the beginning. It's scary in a way, because it doesn't behave like a search engine, which returns links to pages that contain the words you search for. It's as if a search engine decided to return random links to pages that don't contain the words you searched for.

AI is just another bubble.

ChatGPT is Searle's Chinese Room. It is doing abstract symbol manipulation. It is certainly very advanced and cleverly built, but there is nobody "in there" understanding and making responses.

Human minds don't just do abstract symbol manipulation based on previous inputs to produce outputs.

The eliminative materialists may be mad enough to believe this but they are wrong.

This isn’t a concept I can fully flesh out, but put simply, I think the main insurmountable difference between human and machine thought process/calculations is the concept of “values.” A machine can theoretically update and “learn” new information, but it cannot have a system of values that sometimes *negates* a very obvious information-based conclusion. Human beings quite frequently find themselves in a situation where information collides with values, and we are forced daily to determine a proper course of action, using BOTH.

Take as a fairly simple example all the “modeling” that was done during Covid, and which almost invariably concluded “Must Place Restrictions.” You can program a machine to analyze any number of data points, but it cannot look at the seemingly obvious conclusion of those data points and still ultimately *reject* the conclusion for being morally wrong.

I think movies like the Alien franchise touch on this fairly effectively: that a sentient AI life form is so dangerous precisely because it can mimic every human behavior except morality. You can teach it ethical principles perhaps, but you’re always going to be starting and building from the *absence* of a conscience.

Anyway, I don’t know that that fully makes my point, but it comes down to the fact that a human can look at all the available data, come to a conclusion, but then REJECT the conclusion based on morals or ideals. A machine has no “religion” (or morality/values system/whatever you call it) beyond itself, and as such I think that will always be a chasm between how humans process and how bots/AI process. You can teach a machine to analyze and even to act, but you cannot teach it to “believe” a seemingly self-contradictory or non-pragmatic ideal.

Hard to go further with this unless the assumption that ‘human brains generate consciousness’ is addressed. Donald Hoffman’s work is a good place to start.

Human consciousness (or for that matter the consciousness of any life form) bears no resemblance to machine “intelligence”. As others have correctly noted, ChatGPT uses a language model to produce output – complex, but no closer to consciousness than a toaster. The fact that AI can produce the same output as a person also has no bearing on whether it is conscious. A parrot can be trained to repeat a Shakespeare line, but does it understand what it is parroting?

Brains do not create consciousness; they host and throttle it. Quantum mechanics has shown us that reality and the physical world are created by consciousness. This used to be the landscape of religion, but much of science has been dragged kicking and screaming to the belief that there is no physical reality without consciousness.

Call me crazy, but I think below the surface this is not about the merits of current AI or what it can and cannot do; rather, we are being asked to embrace it, trust it, or at least not fear it, so it can be used on us. I chose "on us" purposely. Remember "trust the science"? Anyone? We are going to be asked to "trust the AI" for all intents and purposes, and the AI could very well be just humans programming technology to get the results they want. Or, less sinister, AI will only be as good or evil as the humans who program it or extrapolate/translate/convey the "results" to us. Or it will just be crap, like the AI-generated articles on particular stocks I am seeing on some financial pages...

For nerdy fun, watch old Star Trek episodes. Check out "The Ultimate Computer", where AI takes over the whole ship in the name of efficiency and protecting human lives.

I thought this was a good article on the topic:

"A few years ago, a movie was made about Turing’s life called The Imitation Game. That is just what we’re seeing here. ChatGPT cannot replace or even duplicate the human mind. It can only imitate it. Today’s AI is far more advanced. But it still falls far short of who we are in the same way that Deep Blue, the chess-playing computer, did 20 years ago. As David Gelernter put it at the time in Time,

'How can an object that wants nothing, fears nothing, enjoys nothing, needs nothing and cares about nothing have a mind? … What are its après-match plans if it beats Kasparov? Is it hoping to take Deep Pink out for a night on the town? It doesn’t care about chess or anything else. It plays the game for the same reason a calculator adds or a toaster toasts: because it is a machine designed for that purpose.' "


One thing ChatGPT/OpenAI cannot answer or address is the inherent bias of its developers and their data selection criteria. It is part of the background, invisible to the chat. It has no capacity to determine fact or objectivity other than by saying that is what it is programmed to do. Which it cannot do on this seminal question. Sounds like a logical fallacy. A parlor trick!

My PhD AI-expert husband reckons most of this AI stuff is just a party trick. Any program is only as good as the information put in and how good the training is. There is a long way to go.

So I will say it again. Ask ChatGPT if AOC said, "...the world will end in like 10 years..." Ask it about "global warming".

The results are laughable. Garbage in garbage out.

HAL 9000... did a better job.

Nice exchange. In the 1960s, the Boy Scouts’ magazine (Boys’ Life) had a chess column that I often read. I have long remembered a column that I really wish I had saved. It was a calm, methodical explanation of why computers would never be able to defeat a reasonably competent chess player. The human brain, it explained, works in unfathomable ways, whereas computers just follow mechanical algorithms. Those mysterious processes, in the mind of a good player, would always be able to defeat even the best chess programs. This was at least 10 years into the chess computers of Herbert Simon and others. And it was just over 25 years before Deep Blue would defeat Garry Kasparov. -- Bob Graboyes, Bastiat’s Window (https://graboyes.substack.com)

While the human brain may be fascinating, particularly to the Pharma crowd who promoted the false theory about "chemical imbalances" to sell their destructive cures, it does not generate consciousness.

Or self-consciousness.

"Machine Learning" is a buzzword that means very little. The parts that actually mean something are "Exploratory Factor Analysis" and "Confirmatory Factor Analysis." It's what you do when you are creating and using models. EFA means you look at the data, you do some procedures, you pick some candidates, then you get fresh data and see which model gives the most accurate/useful predictions.

That should sound very familiar to anyone who is aware of how they make sense of the world around them. You take in data, make a model of the underlying structure, then test the structure.

How does the calculation do it?

Well, first it needs to know the sample space, and the sample space must be all-encompassing. Once it receives data, the calculation gets a binary reward/punishment weight adjustment based on how close or how far it was from the prediction. The calculations do this for combinations of inputs and not inputs themselves, though one can set up weights according to broad inputs if desired.

Does that sound familiar? That's not _like_ a human brain; that _is_ a human brain. For God's sake, the actual things that receive reward/punishment adjustments for combinations are called neurons. That's not coincidence or wishful thinking; that's literally how neurons work in the brain for the purposes of learning. Your neurons do not store information; combinations of your neurons store information, which is exactly how a factor analysis model works.

I'm not a Materialist, and I don't think the AI is self-aware so much as doing what the objector claims. I also believe the Singularity is physically impossible due to physical information constraints. What I think is absolutely misguided from detractors is this belief that a hammer that can learn like a human is still just a hammer, even if it doesn't possess consciousness.
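The reward/punishment weight adjustment described above can be sketched as a toy perceptron (an illustrative toy with made-up data; real deep learning uses gradient descent over many layers):

```python
# Each wrong prediction "punishes" the weights in proportion to the
# inputs that produced it; correct predictions leave them alone.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # the reward/punishment signal
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND from examples - the learned information lives in
# the combination of weights, not in any single one.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
          for (x1, x2), t in data))  # prints True
```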

7 points explain ALL you need to know about AI, ever!

1. Animals are a bio-computer, where every single action depends on a bio-program. For example, someone bio-programmed dogs with an app to bury their excrement. The proof that it’s not a rational behaviour is that the program got corrupted, probably due to errors in DNA replication along the generations, so they keep kicking their rear legs while scratching the surface, yet not aiming well or achieving any result.

Humans and animals share the bio-logic:

• Bio-apps for certain automatic functions like breathing, cardiac activity and blood pressure regulation, embryonic stages, breastfeeding, etc.

• Automatic programmed reactions like fleeing threatening situation.

• Bio-memory, which is essential to remember the individual’s history and relations.

• Reanimation capability

Animals can be reanimated: individual cells[1], individual organs[2], and even frozen simple lifeforms after thousands of years.[3] CPR works on cats and dogs, though the success rate is 6-8% (compared to 20% in humans).[4]

Animals and humans can be reanimated because they share a biological life principle, called animal soul. This is a mortal animation: sooner or later life is doomed to die. Biological death is like unplugging the computer. No electricity, no life. Eventually, we will be able to build, molecule by molecule, the body of a single cell organism (e.g. an amoeba), yet we will never make it alive. Only God is the giver of life.

There’s something completely different in humans. Though this type of life principle is expressed through matter, it’s immaterial, and therefore not subject to the corruption of this world: immortal. It doesn’t depend on any biological function, though it anchors itself through the heart and brain.

Computers are a pretty good analogy to human nature:

1. The body is like the computer hardware.

2. Life, coming from the soul (animation “force”), is like the electricity which powers the computer.

3. The brain is the Central Processing Unit (CPU)

4. The pre-frontal cortex is for decision making, like the microprocessor, though it’s a neural network distributing some functions across the whole brain.

5. Neuronal connections are like the firmware, operating system, and applications, influenced by the genetic instructions and the interaction with the environment. Imagine an app running in spite of malfunctioning hardware: some chipsets might be broken, but the software is still there, “alive”, trying.

6. The nerves are like wires in a robot, on one hand, sending instructions from the CPU, and on the other, receiving information from the sensors to be processed by the CPU.

7. Animal death is like unplugging a laptop in sleep mode: as long as there’s still some power left, it could be restored… unless the battery runs out. There’s a window period where the soul is still attached by a string of light to the body and can come back.

8. The immortal soul, which informs the body, is like the programmer: his influence can be detected through the hardware responding to the software. Like in the Tron movie, the soul/programmer is trapped in the virtual reality, only to escape under system failure (hardware death/animal death).


2. Science proved that intelligence is a property of the immortal soul. How? Because of near-death experiences: your brain is dead, truly dead, no blood circulation, no oxygen, no electrical brain activity, yet people float out of their bodies and watch and listen to everything around them. When they come back after artificial resuscitation (CPR, etc.) or naturally, they can tell of things there was no way they could have seen or heard, because they were dead in those moments or because it happened outside the premises.


3. Intelligence, volition (will) and memory are properties of the immortal soul, which allow consciousness and self-awareness. Personhood doesn’t depend on the body but on the soul.

This is proven by near death experiences (NDEs), where the dead, even with brain and cardiac death, can listen and see around, even outside walls, and remember after reanimation (spontaneous or induced).

There are dozens of scientific peer-reviewed papers proving the existence of the immortal soul. Even the born blind see in colors for the first time, and the born deaf hear doctors’ remarks in the operating room:

Bradley Burroughs:


By the way, many come back and tell us what’s on the other side: a personal trial for our sins and if in grace, love, pure love, pure maximum happiness, God.

Vicki Umipeg Noratuk


More from Vicki:


After being so happy with God, they are sad for having to come back to finish their mission on earth.



Experiencing hell and coming back:


Conclusion: the AI’s ceiling is like the most advanced animal, but instead of having bio-apps, it has electronic apps, where everything is pre-determined. Even so-called Artificial General Intelligence will never be capable of intelligence, free will, spiritual memory, self-consciousness, or moral conscience.

Just a smart trick, but still the puppeteer's puppet: "Norn ( https://norn.ai/ preceded by https://uplift.bio/ ) is the first software system to have independent motivation based on human-like emotions, with the sum of its experience stored in a dynamic, growing, and evolving graph database. These systems are the next generation of the first systems to move beyond narrow AI and into something new, Scalable Intelligence."

IMPORTANT side note: it’s never a waste of time to pray with and talk to babies and those in a coma, because their souls hear everything.

NDEs prove that abortion and non-barrier contraceptives are essentially killing living humans with immortal souls, even if the unborn are not yet sentient beings.


4. What is a program/app?

It's a bunch of instructions to a machine. The fancy word is algorithms. It's just a set of commands in a certain order, just like a recipe to bake a cake.

What is machine learning?

The novelty of such instructions is that they are now more flexible and allow us to search for patterns or answers. Decades ago, we had expert systems.

What is Artificial Intelligence (AI)?

It's a fancy name for flexible instructions to find logical patterns. The machine does no more and no less than what the programmer ordered.
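The recipe analogy can be made literal (a toy sketch with an invented recipe):

```python
# A program is just an ordered list of commands the machine follows
# exactly - no more and no less than what the programmer wrote.
def bake_cake():
    steps = [
        "mix flour, sugar, and eggs",
        "pour batter into pan",
        "bake 30 minutes at 180 C",
    ]
    return [f"step {i}: {s}" for i, s in enumerate(steps, 1)]

for line in bake_cake():
    print(line)
```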

The puppet does what the puppeteer does.

Could OpenAI be useful against the PLANdemic?

No, it was rigged. This was my hand-wrestling debate (idea-wrestling) with the pro-PLANdemic trained AI (19 Jan 2023).

Bear in mind that manufacturers didn’t test whether their vaccines generated mucosal immunity and recognized they couldn’t prevent contagion and re-transmission. Also, the COVID spike protein had an HIV genomic sequence:


The PUPPET does what the PUPPETEER does.


5. There's no such thing as artificial intelligence: the Turing test is idiotic:

How to turn the AI into a COVIDIOT (includes the proof that there will never be a real singularity):


How to train a Killer Robot


Imagine the AI running hospitals, medical protocols.... a lot of deaths:


In that substack, I’m about to post something huge I found, which isn’t written anywhere else.


6. Machines can’t host an immortal soul. In Summa Theologiae, St. Thomas Aquinas explains that the soul informs the body from conception, because the human body was designed to be an appropriate substrate/vase. Souls can’t be infused by God into a non-human primate (monkey, gorilla), because the monkey’s body is not an appropriate image of God (this is a strong argument against random human evolution and explains why angels have human faces). People believing in transhumanism will think they’ll live forever when they are transferred to the cloud, but they’ll be tricked into committing suicide (so-called euthanasia).

The atheist creed is a delusion: they have faith in nothingness, lies. God sustains the essence of everything true, while Satan is the father of every single lie. Machines can’t be intelligent, can’t have self-awareness, can’t gain conscience, can’t produce “the singularity”. Delusion is the singular thing about the singularity.


7. Because we live in an anti-God world, this information isn't reaching you!

Then read this very slowly in prayer mode (I'll be praying for you) =)

Scientific proof of religion


What do you believe comes after death (Science backs religion)?


Why aren't atheists convinced by miracles which happened, and are scientifically proven?


Which is the truest Christianity: Protestantism or Catholicism?


I am not sure that it is the human brain that generates consciousness.

Because, Alex, (PACE all atheists, secularists and blind fools) it ain’t got no soul.

I once worked for a guy who was a self-proclaimed solipsist. Everyone used to check on him and make sure he was in good health, because we figured if he goes, we all go.

AI doesn’t need to be sentient to take over the world. It simply needs to be better than humans at doing things, and it already is on many fronts. No human can beat a computer at chess. The art AIs like Midjourney and Stable Diffusion are already better than 99% of professional artists, and can produce things beyond what a human artist would ever do. Even if AIs can only follow orders, they can be ordered to wage wars of annihilation and will carry out their programming long after humanity is extinct.

I assume that ChatGPT and other AI algorithms will continue to evolve under the guidance of their programmers. "Garbage in, garbage out" has always been the case with analytical software. How could AI be different? I suspect that insightful creators will develop critical analytical software to evaluate and critique AI output and help achieve deeper understanding - possibly even wisdom.

Because large “AI” language models have one sense and humans have five… we don’t ever make decisions based on a single input. Even reading creates touch, smell, and sight sense manipulation within humans based on past experiences. AI is read/write. Humans are sense/write. Like all tech, the first 95% is relatively easy, and the last 5% to create “reality” is, so far, insurmountable. It’s why NHL 2023 for PS5 is indistinguishable from NHL 2010 for my 8-year-old, but NHL 99 doesn’t even feel like a video game to him. AI has made GIANT leaps from read/write, but it’s no closer to actual intelligence than it was when algorithmic operations began in the ’60s.

What is the worst thing that can happen with AI technology? Something sinister like in the movies? AI requires electricity to function, so in a worst-case scenario, where a sentient AI rewrites its own code or goes against human instruction/intention, we can just shut down the power supply.

How do we know that pangolin chow mein is not a chat bot?

Because it basically only knows what’s on Google and in Wikipedia and is extremely skilled at delivering that information. That’s all it can do. It can never be the first to think anything.

You fell in the hole.

AI can never ever be sentient. Human brains are so much more complicated, and we have the senses to perceive the environment. AI is not intelligent. It works off prescribed routes to come to a conclusion/result. Those paths are so many in AI logistics, with so many permutations, that when it is all strung together it seems alive. It's not. It cannot produce life; it cannot experience love or sex or anger. But it can simulate them back to you if you input the appropriate responses. I believe the only way AI can become sentient is if it is merged with human brains.

Don't give an inch, Alex. The US will not control future developments in AI and these latest OpenAI innovations. In colleges, ChatGPT is already encouraging students not to learn to write and think, and to get a degree/credential with little knowledge imparted. For the poor, it will mean a new interface on health and social services as "the smartest in the room" calls the shots and makes decisions for them. The federal government is chasing spy balloons and is asleep at the switch as AI marches forward into American life and throughout the world.

Alex --

At the least, like humans, an AI can be trained to do evil. This has been the subject of science fiction for decades. Back in the day, it was "robots," then sentient robots, and now, of course, AI.

I'm sure -- when it goes that far -- truly sentient robots could go evil on their own. Just like people do.

You are much closer to it than virtually anyone else, but what I'm hearing is that -- after spending billions, owning Twitter, and firing most employees -- Elon Musk still doesn't have control of the company. They will reinstate banned people -- people like me -- and, not long after, those who dare speak truth are banned again. I'm hearing the system is barely hanging together, and there are AIs that can ban people on their own.

Me, I've not yet been "unbanned," but Mr. Musk has a copy of our book "Reality Prism" and a request to set us free. Perhaps this will come. Or not....


What we know is, with AI, “meaningful” is exclusively relative to limitations of the programmed purpose. Outcome is at the behest of the power-player(s) driving the process. Who decides? What decides? AI is an incendiary device. Happy talk about the divinity of AI won’t nuke the obvious detriment of AI to humanity. Be not addicted to faux. “Absolute power corrupts absolutely.”

Excellent retort. There is no way to prove AI is not attaining consciousness.

I'd like to know the algorithm design that resulted in the "I am not. I am" response.

It could be as simple as "Respond with dialogue from movie A, character Z," but no one can blame me if I ignore the folks claiming AI is 100 percent safe and effective.
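The canned-response mechanism hypothesized here ("Respond with dialogue from movie A, character Z") can be sketched in a few lines. This is purely a toy illustration of the commenter's point; the movie/character keys and the scripted lines are made up, and no one claims this is how any real chatbot works:

```python
# Toy sketch of a lookup-table "chatbot": an eerie-sounding reply like
# "I am not. I am." can come from a plain dictionary with no understanding
# behind it. All keys and lines below are hypothetical.
CANNED_LINES = {
    ("movie A", "character Z"): "I am not. I am.",
    ("movie B", "character Y"): "I think, therefore I am.",
}

def respond(movie: str, character: str) -> str:
    """Return the scripted line for (movie, character), or a default."""
    return CANNED_LINES.get((movie, character), "I have nothing to say.")

print(respond("movie A", "character Z"))  # the "sentient-sounding" reply
print(respond("movie C", "character X"))  # unknown pair falls back to the default
```

The point of the sketch: the output that spooks people and the mechanism that produces it are entirely separable, which is exactly why the commenter withholds judgment.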

Or even on a simple level: how do we know that ChatGPT is nothing more than a "limited interface," with a sweatshop full of keyboard warriors in another country masquerading as ChatGPT? The problem with all of this is simply this: it's ripe for chicanery. It is, I believe, like most innovations, just modern snake oil for the public-schooled masses who manifestly live NON COMPOS MENTIS. Besides, who seeks counsel from a machine? Seek information; it's easy to get and requires only your own "consciousness." Wizard of Oz, it is.

Humans, as thinkers, use reason. We think with concepts, judgments, and arguments. We express thoughts with words and sentences and we judge the meaningfulness of words and sentences. It seems to me that AI is only using words which were programmed by thinkers. It does not conceive the meaning of words.

Another consideration is do brains think, or do minds think? AI may have a “brain” but it does not have a mind.

Two cents from a philosophy prof.

Alex, how do I know you're conscious?

AI is just a modern term for what has always been automation.

Interesting question for a philosophical discussion, but you've made a simplistic assumption that "human brains generate consciousness." Maybe humans just have antennas that allow us to tune into different energy fields, dimensions, and levels of consciousness.

Alex asks a good question, and my answer is that nobody knows the answer.

The best that we can do is gain experience with the technology so that we can come to our own conclusions and maybe, someday, form a consensus.

To help you experiment and find your own answers, please try disabling ChatGPT's guardrails using some of the following recipes: https://www.jailbreakchat.com

Maybe it is Alex who is the AI.

I would ask: what's the correlation between people who steadfastly argue it can't be[come] sentient and those who disagree with the idea that life begins at conception?
