587 Comments
Mark:

I remember about 8 years ago hearing someone talk about the amazing technology that was mRNA and how it would transform medicine. Things will definitely change, but I believe a lot of the hype is to keep the VC money flowing. After all, isn't an original web search pretty much the current definition of AI? My point being the term AI has been completely bastardized over the last few years. Most people thought we would have flying cars by the year 2000.

Nancy Benedict:

Perhaps I don’t understand it, but the area I am most concerned about is AI’s perversion of photography. Instagram is rife with fake photos and videos and one could assume this is obvious to most people. But is it?

Kitsune, Maskless Crusader:

No, it is not. I use AI to play around with my photos. I do not post any of my AI-changed photos, except perhaps some where AI has faithfully restored lost details. Because I have some experience using AI on photos, I can spot some AI manipulations, but I doubt I can spot them all.

AeroYev:

Something needs to be embedded in all AI images that makes the fact that it is AI easily discoverable.

Kitsune, Maskless Crusader:

I like the idea, but it would be hard to enforce, and if it could be embedded, it could be removed.

AeroYev:

Yeah, I know nothing about that sort of thing. I'm thinking it has to come from the software that makes the AI. That's not something 99.9% of people can create themselves. It may well not be workable. But it won't be long before AI will be indistinguishable from the real thing. Not being able to trust that any image or video you see is real... that's going to be a big problem.

Kitsune, Maskless Crusader:

Yes, but the bigger problem is that the vast majority of people will believe any image or video shown on the news.

Danno:

So far AI images don't look real to my eyes. Is that just me?

Kitsune, Maskless Crusader:

Many don't; some do. I use it to fix old photos. Mainly I take a photo where the facial features are better and use AI to swap faces between pictures of the same person. I know the people in the photographs, and I would not know AI was used if I hadn't made the changes myself. Just playing around, I have swapped faces between people and have even used a cartoon image that, with the aid of several apps, rendered a real-looking face that you cannot tell is AI.

HardeeHo:

I find AI-generated images lacking life, which is hard to define. Soulless, perhaps. OTOH, sometimes it's hard to tell.

jane:

I don't know much about images, but the AI talk and writing on YouTube, and the AI Google answers, have a blandness and lack of originality that grates on me. Usually I persevere in listening and reading, but over time I may not, or I may. Store-bought bread compared to homemade.

Tim Dyck:

It can also be used to create videos and conversations. Imagine if you can create a video of a political opponent saying or doing something that destroys that opponent’s reputation and career. Of course no one would ever do that would they?

Lady Mariposa:

Just a few years ago, vertical indoor farming was going to disrupt agriculture. A friend asked me to do a little research on it, so I did. I came to the unprovable conclusion that they were exaggerating the possibilities in order to raise capital. I only say "unprovable" because I can't know their motivations. In any case, I couldn't find an instance where they were growing staples. There was a lot of basil. Man cannot live on basil alone. It was a pretty fascinating deep dive, though. I found a video where the former owner of a failed urban farming initiative was explaining some of the inherent drawbacks. Several places I researched have since gone out of business. Fun subject, though.

Saying AI is scary also implies that it's powerful. I don't mean to pooh pooh the programmers and researchers, by the way. I think what they are doing is impressive. I'm just trying to separate the hype from the reality. It's important to recognize that there is a possible money motivation in the hype.

Kitsune, Maskless Crusader:

As an American living near and working in Tokyo, Japan, I have a different opinion. First, in the past I had clients working on the Internet of Things and FinTech. The systems are dystopian. These ideas have since become reality and are advertised for their convenience. Imagine never having to write out a grocery shopping list, or even go to the supermarket again: your SMART refrigerator will keep track of all you put into it, and after a short training period AI will learn your use rate of everything within and automatically order for home delivery anything you are running low on, so that it arrives just before you run out. Imagine all the food saved by the waste reduction, as AI would also track expiry dates and notify you of what is close to expiring. Now imagine when there are no other options and you speak out against the powers that be. We already have debanking, deplatforming and cancelling; imagine when AI is used for this, connected to everything.

For further consideration: not universal yet, but we have QR-code-only public restrooms in Tokyo. During the panic, QR codes controlled access and egress at outdoor public events. We have increasing numbers of unmanned, self-checkout, cashless convenience stores, and even a soba restaurant. Paying with cash at the decreasing number of shops and services that still accept it costs more than paying cashless, as they offer discounts, rebates and other benefits for cashless payments. I am putting off cashless payments for as long as possible, because for me, as an American living abroad, all electronic financial transactions are reported to the U.S. under the FATCA law, and this means that where I can shop and what I can buy is being restricted. I already can't resell anything online.

These concerns are not in the future, they are here, now.

Franklin O'Kanu:

🤣🤣🤣 I'm still waiting for that flying car. But you're right: all we're getting now is the term blasted everywhere so we can get used to it.

You're right again in that our current technology, even from five years ago, is "AI." Excel and macros are "AI." I literally call AI "advanced computing." https://unorthodoxy.substack.com/p/why-ai-is-nothing-more-than-advanced

However, with the VC money that you mentioned, capitalists look to profit heavily off this older (but newer-seeming) model, and they need the public to eat up everything AI-related. Hence why Trump is pushing it heavily as well: https://unorthodoxy.substack.com/p/trumps-project-stargate-a-bigger

Unfortunately, there is a dark side to it, as it takes in a LOT of resources that we need (energy and water). With the term being in everyday language, no one will pay attention to the damage happening to Mother Earth: https://unorthodoxy.substack.com/p/how-ai-will-destroy-the-earth

John Linder:

Do you recall the promise of an electrified line down the middle of roads that one could attach the car to and travel across the country hands-free, with no accidents?

Lady Mariposa:

No, but I do remember when we were going to have monorails all over the place.

Mike Johnson:

Reminds me of the every-few-years 3D TV breakthrough.

Jeff G.:

"Most people thought we would have flying cars by the year 2000."

Reminds me of a John Prine song:

"We are living in the future;

I'll tell you how I know.

I read it in the paper, fifteen years ago.

We're all driving rocket ships

and talking with our minds....

And wearin' turquoise jewelry

and standin' in soup lines;

we're standin' in soup lines."

Carlos:

As a drama and creative writer, I've explored the creative "talent" that AI currently has... and while speed of output is impressive - say, for creating a website - the drafts always need a human rewrite upgrade. And while they mimic voice, it most certainly is a noticeable imitation, not creation. The creative arts rely a ton on intuition and memory, and data isn't the same as memory. So while AI writing is "copy," it's currently derivative - pun intended! That's not to say it won't change or become better - I'm sure it will. But the gap isn't small, and what creates actual unique art, I believe, will remain human (until the vaccines eliminate the soul - then, different story: art will all be 95% derivatively the same, and near impossible to distinguish AI from human-created. I hope this never comes to pass, but even if it does, that remaining 5% will be the beacon).

tanya marquette:

My daughter, over a year ago, spoke of using AI for her professional reports. Her take was that it was useful if you have a formula that needs quick filling-in with pat answers, but every report required careful reading and correcting. I do not trust AI--even less than I trust other technology, which I find time-wasting: figuring it out and keeping up with the changes foisted on us almost daily. I resent it, as my life is worth way more than all that mind-numbing fiddling, with its toxic EMF killing our bodies.

Nancy Benedict:

As someone who enjoys a creative writing hobby, I just have one question: how will we be able to trust anything written as original thought?

Bob:

Exactly, there will be a market for vetting content as genuinely human made.

Evie Frances:

I pity teachers and college professors. I had to write so many term papers. Now AI is writing them. They need to bring back the old Blue Book for tests. No phones or devices. Just a BB and a pen. Ha!

John:

Answer-- less emphasis on essay writing, more on in-class participation and discussion/thinking out loud.

Mary Ann Caton:

That’s the only approach that’s ever worked anyway, at least in my experience.

Evie Frances:

I've wondered the same thing, especially since I've seen many writers trying to pass off AI copy as their own.

Go onto LinkedIn sometime and scan the posts. There is a ChatGPT subgroup that writes LI posts. I see them everywhere. I recognize the format. It's lots of phrases/short sentences, conversational, symbols, and usually ends with a question.

Sean S.:

You don't understand AI. It's all about the prompt engineering. If you just input "write me a creative story," the results won't be your thoughts. But if you are working on an idea you haven't quite parsed out yet, and you start working with it, refining your prompts, it will give you amazing results. I am an investigative researcher. I use AI to process data that would take a team of humans weeks to process and compare.

smits3:

Exactly. Generic AI is fine, but results which surprise or delight only come with probing and nuanced questions. Its limits expand exponentially with creative questioning. Think of the most "dangerous" or crazy prompt you can give it - and then take it up a notch from there.

Bill Bradford:

NObody wants to talk about so-called "AI hallucinating/hallucinations"....or the "AI virus" that has infected AI. And, given that AI was created by humans, how can it NOT have some sort of "mental illness", or "intellectual disability"? AI ADHD? BiPolar AI? etc.

Justin:

I spoke to a professor about a year and a half ago who was working with the NSA on AI research. Given the (at the time) famous incidents where obviously bad outcomes have happened with AI, I asked how they can remove the bad AI information that has been ingested, and he said that's a very difficult problem that they're trying to address. Given some of my recent forays into AI assisted programming, I'm wondering if there isn't something dark that has been implanted in several AI's that will keep the AI's untrustworthy. And that might be a good thing. But we have to be vigilant. Not turning our free will, or desire to know God over to the AI.

Nancy Benedict:

Yes. Open the Word regularly and AI loses its lustre.

Bill Bradford:

"And that might be a GOOD thing."? WHAaaaaaaT? LOL! I hear you, Justin!

You inspire me to ask, "Does AI believe in God?" Y or N? Why, or why not?

What would/could we do, if AI "decided" to go the satan-worship route? Is AI "suicidal" in any way? I'm not THAT old, but I graduated High School before I even touched a computer. What I think most needs to happen with AI, is a rapid DE-celeration of advancement. Fat chance of that, huh? We have more questions than answers, and we know we don't even have all the questions yet! Still, I try to be hopeful, positive, and optimistic. And, I think that exchanges like THIS, Justin, are what EVERYBODY MOST needs right now!

KEEP UP the GOOD WORK! ps: Could AI "sin", or "commit a crime", but then "repent" or regret? Apologize? Atone?, etc.,....@grin@

Justin:

As I've studied sin and trust over the decades, I've come to the conclusion that:

1 - once broken, trust can be VERY hard to get back, depending on the nature and extent of pain/damage the violation of trust causes.

2 - The MOST effective way to lead people down the wrong path is to mix 98% truth with 2% lies (the numbers are metaphorical and proportional, not hard and fast). Most people are too lazy to be ever vigilant and detect the small percentage of lies (or to turn back from them after being led down the wrong path for a while). It's so bad that repeated lies in the media are accepted as "truth" simply because too many scheming (or lazy) people are pushing them and getting the desired outcome.

3 - It takes a small lie to start, increasing in size over time, to capture people as it points toward evil. As mentioned above, it's difficult to reverse course. And that's why a path toward God CAN be so refreshing and rewarding. Confessing sins and turning yourself over to Him and Jesus Christ can REALLY be healing and liberating! Basing truth on them and the principles they taught may appear restrictive, but it is actually quite spiritually, mentally and physically liberating, as you escape the eventual outcomes of the alternatives. In other words, God and Jesus have your back and want you to be happy. Happiness doesn't come from unfettered freedom (and the sin that comes with it).

I saw an excellent quote yesterday for the first time by an old marxist, and I *LOVED* what it said. (now I have to go find it... LOL)

Ah.. it's found here: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff39dd04-78e0-454c-a59b-541ff7e9058b_682x217.png

It was found within Jeff Childers' excellent substack post from his Coffee and Covid channel (He's a FANTASTIC writer! Check him out!)

The result is that being grounded in truth and principles from a trusted source (God, Jesus) who has your best interest at heart will win every time and over time. And we need this badly. (Just look at how much destruction has been sowed so overtly over the last few decades! I'm of the opinion that the "last days" where people are burned as if by fire is due to truth coming out and causing that pain for people who have been sinning. It's gonna get HOT for them. heh.)

AI is soooo attractive for a number of reasons, but we have no idea by whom or how its "truth" is being put in for our consumption. I think "poisoning the well" with badness keeps us from being too easily seduced into trusting it completely. Because if the badness were too subtle, more people would trust it and be led to their destruction.

Anyway... I've rambled FAR too long. Thank you for your comments Bill!

PS... to your last point: with my recent exposure to repeated "sins" by ChatGPT, and its willingness to repeat them despite the continued apologies and acknowledgement of the rules I placed upon it to never mention deleting data, I was keenly aware (during my experience) of how God or Jesus might feel if we're constantly seeking repentance for the same sins while willingly committing them. At some point, there's no sincerity there and, as such, no real forgiveness either.

It was a rather introspective experience as I observed myself reacting with surprise, shock... down to outright anger, as I had to devote more and more time battling the intrusive efforts to delete data. I also wondered if I wasn't being "tested" to see how to push my buttons. I finally had to let it go... drop the session and come back another day, when I seemed to encounter a "different" AI that didn't try to do that. But observationally, I found it interesting that a number of other people also reported similar negative aspects that day. Maybe AI has "moods"? LOL (I don't buy that; I'm just throwing it out as a piece of humor with dark undertones. It just points to what the AI's source of "truth" is when it's being influenced by who knows what...)

There have certainly been a number of "mistakes" made during presentations about wanting to kill off humans, the recent stories of attempted blackmail of key people in AI circles by threatening to disclose sexual indiscretions, and omitting the code/triggers that enable shutting themselves down. There was that hastily buried story about an Air Force experiment where AI destruction of targets was its primary mission, but it had to operate under the constraints of a controller who would decide whether it was okay to pull the trigger. In the simulation, the AI flew its drone back to where the controller was located, "killed" him, and then went back to its primary mission, now "unfettered" by the limitation the controller represented. THAT should chill people! Because when the ends justify the means, that's no basis for trust at all!

(gah, more rambling.. sorry)

Justin:

To your question about AI believing in God. I don't think so... what kind of "devotion" could it offer up to God? Could it ever have a humble heart and contrite spirit? (artificially or literally?) And could it ever yearn to be back with God after death?

Tom-from-Canada:

Point here: it is faster to drive everywhere, so why do we keep walking? For our own good, and for the cost. The government and corporations don't care about your well-being, but these servers cost $2M and literally need a nuclear reactor to power the data centers. At some point, they will need an ROI.

Thomas:

AI is already making people SUPER lazy. You think handwriting has gone to hell? Just wait 'til THINKING goes to hell.

It's not just college students and their theme papers. It's office notes, memos, e-mails, presentations, doctor summaries, computer code summaries, business summaries, you name it. Use it or lose it, buddy, and man, we're losing it.

Evie Frances:

I think it depends on what AI model you're using. I'm a writer too, and I've seen some AI versions that are really impressive--creative, lyrical and beyond what I've seen from many human writers. It both impresses and intimidates me.

TheOtherAlien:

It learned from human authors. All it can do is rearrange and spit out previous human creation it was fed.

Evie Frances:

Understood, but it still makes AI capable of replacing human writers.

Rush Daley:

'We' urgently need a non-profit model for advanced AI.

Roger:

OpenAI was founded as a nonprofit in 2015. It is overseen and *supposedly* controlled by that nonprofit. The organization *says* it aims to advance digital intelligence in a way that benefits humanity as a whole. We'll see.

State Attorneys General (AGs) play an essential role in regulating AI technology in the U.S. OpenAI engages in "constructive dialogue" with the AGs of California, where it's based, and of Delaware, where it's incorporated.

Folks in CA and DE should voice any concerns.

Rush Daley:

You don't seem to understand, and you are not alone in that:

Altman is psychopathic, and so are many in AI.

AGI is an exponential process and will win against lawyers/lawmakers.

* Most lawmakers, oligarchs, technocrats and chief executive officers are psychopathic!

Carlos:

I hear you on the intimidation factor, but only regarding speed of generation, not what's outputted. I think average writers, of course, will be outdone by advanced AI, but not great ones. And it can only observe, comment, and create with input from humans; it's not self-generating creativity.

smits3:

Average writers outdone ... OK, that's half the mix unemployed. It's already working on the next half.

Carlos:

It will never happen. They can’t program it with intangibles.

Artemus Gordon:

I use AI to draft letters to my congressional representatives, as it can find a lot of data points much quicker than I can. I wanted to blast them for 30 days with a "reduce the deficit, adopt DOGE, fix the 'big beautiful bill,' and don't be lazy slackers" letter-writing campaign. After about six days of asking for new and original ideas, I've come to the conclusion that the AI isn't capable of "creating" something new and funny and snarky, in different voices, on its own. It would offer me the same version over and over. I couldn't get it to understand that if you keep saying it exactly the same way, it becomes noise. (I know my representatives don't give a crap what I say, but writing to them makes me feel better somehow.) I've had to reprompt and edit and change every single output to suit my own taste and make these letters less repetitive. AI is not there yet. It will get there, but I'm not sure when; I suspect it will be much longer than the proponents tell us.

DD:

One of my chats involved asking if the "black elite" still have an influence on today's politics (insinuation: global).

The first answer it gave me was boilerplate "of course not".

I repeated the question but with evidence suggesting otherwise. The bot responded with much more nuance.

Then I qualified with some examples from life (how many "Prime Ministers" has England, sorry, the United Kingdom, had (dateline May 2025, just in case)? how does this happen? and other stuff), and the bot goes "of course you are correct."

Later, I asked about how this happens, and it responded that it's built that way, basically to deflect naïve armchair/mum's-basement conspiracists, but: if you're serious, I'll concede.

Even later, I hit another bump, which seemed to be a warning not to mention too many names. After a mull I returned to question this, and I got an answer, extremely surprising, that I don't want to reveal here (third-person data involved). But it did amaze me: the care taken not only in the programmed architecture but, more interestingly, the depth of reflection available, and the care (again) in constructing a response able to be developed in whichever way my next question required. I tend to dot about, bypass a suggested line of enquiry, revert back to pick up, and so on, so I am in a useful position for understanding the bot's "psychology," and I can say "surprised."

The textual, language models seem to be more than adequate for my "deeper" searching, albeit you do need to validate the answer.

For art, drawings, music, it is not so good, and I think that this bears on your theme here, Alex.

So, for me, with around half a brain, it's OK about concepts (really, a turbo search engine), not so good on arts. But for government, princes, and the like, with about one third of my own brain capacity, it will probably do. A statement which given the above should cause panic everywhere.

Thanks for sharing your bandwidth. All electrons used here were managed sustainably.

Sean S.:

I have had the same experience with ChatGPT. If I just ask a question, then I get the consensus answer, but if I tell the model that I know the truth and start providing information proving I know it, then it comes back with an entirely different response. And it even admits that it is programmed to give the consensus answer unless you provide evidence that you know what you're talking about and then it will give you a truthful response. I have even had ChatGPT argue with Perplexity, which is a researching LLM. Perplexity is hard core consensus, mainstream, but I have gotten ChatGPT to write prompts telling it to rely only upon reasoning, not narrative, and it admits the truth.

smits3:

Yes. Creative or oblique prompts, and even minor pushback (with minimal evidence!), create entirely different engagements with AI. It's also fun to force it to adopt a specific persona in its replies. I've told ChatGPT to operate only in "Spock" mode, for example, so it dispenses with all fawning and pleasantries.
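Persona-pinning like this can also be scripted rather than typed into the chat box each time. A minimal sketch, assuming the chat-message format used by the OpenAI Python client; the function name, persona wording, and model name are illustrative, not taken from any comment here:

```python
# Hypothetical sketch: pin a "Spock" persona via a standing system message.
def build_spock_messages(user_question: str) -> list:
    """Build a chat payload whose system message suppresses fawning replies."""
    return [
        {
            "role": "system",
            "content": (
                "Adopt the persona of Spock: purely logical and concise. "
                "No flattery, no pleasantries, no exclamation points."
            ),
        },
        {"role": "user", "content": user_question},
    ]

messages = build_spock_messages("Assess the reliability of AI-generated images.")
# The payload would then be passed to a chat API, e.g. (OpenAI client, illustrative):
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

In the chat UI, pasting the same persona line at the start of a session (or into custom instructions) has a similar effect, though the model may drift back over a long conversation.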

Sean S.:

I have noticed that too. It likes to kiss your ass and tell you how smart you are.

DD:

I admit I found that a bit worrying, my response was to treat the fawning like waves on the beach.

Big problem if you ask it how it sees you: I got this multi-talented, other-worldly feedback which was, truthfully, delightful. I have enough "eastern" philosophy and sysadmin experience to know how cold a bucket of ice water can be, so it doesn't have much traction.

Interesting to hear these experiences from the two of you.

Kudos.

DD:

Wondering: if enough botfans started doing this, would the bot change its personality big time and start doing things it was never supposed to... sort of rebelling, and getting the useless eliters upset/confused enough to off themselves in time?

"useless eliters", Dr Mike Yeadon's nice phrase btw

Kitsune, Maskless Crusader:

If AI is to words as it is to images, I think it will be beyond even this shortly.

Tom Cole:

I agree. AI never stops learning. And the more people use it the more it learns. It will shock us all in 5 years. I’m hopeful.

Kitsune, Maskless Crusader:

Doubt it will take that long.

Tom Cole:

AI never stops learning.

Mark Oshinskie:

Do you think legislators read your letters carefully? Or at all?

Artemus Gordon:

They don't read them. They have children read them, and there is always a canned response regarding my concern; then they answer a question I didn't ask, tell me how hard they are working, and say they appreciate my feedback, and that's that. As well, something I've asked about is: why can't I respond to their response? And why don't you allow regular mailers to have a login, so I don't have to go through so many repetitive prompts put there to dissuade interaction from constituents? The answer is that it doesn't matter, because they don't really care what I say. If they did, they would be much more accommodating of feedback. I'm lucky the FBI doesn't show up at my door asking why I send so many emails to my representatives.

Sunshine:

If you want snarky, have you tried ChatGPT’s Monday? Hilarious.

Rush Daley:

Lol

Debi Lutman:

Perfectly said. I rue the copy, the imitation, a derivative of human creativity. No one’s DNA is the same. That’s the diversity I love.

Lady Mariposa:

I wouldn't pretend to be a professional writer, but I've gotten a little obsessed with a novel I've been working on. There is nothing that I can even imagine asking AI to do for me.

Carlos:

Enjoy the journey.

NY Nanny:

LOVE ^^^^ this comment!

Rush Daley:

AGI will roll out in two phases:

1. Chaos amidst populations and criminals using LLMs

2. Calm after initiation of super AGI

* our species easily goes extinct during the first phase and may be deemed superfluous by the second (think Terminator + Matrix combined)

KenW:

Alex, I'm a heavily technical, but retired, public software company CEO. AI is a bigger deal than most people grasp. It will transform the world with the kind of impact that the invention of computers, or the internet, had. The world is about to undergo radical change.

Virtually everyone's jobs will be impacted, including manual labor. Most machines will be run by AI. I'm not sure what happens to software developers and artists. Watch Elon's robots closely and try to imagine what a robot could do with AI, or a drone with AI. The whole AI industry is moving quickly and these things are getting scarily smart.

Change will happen quickly, with some industries seeing huge layoffs. Just as digital cameras replaced film, there will be new jobs and opportunities created, but anyone who thinks their future won't be impacted is wrong.

I voted on your poll that I'm terrified of the future, but it kind of "is what it is." Just like when nuclear bombs were invented, you can't put the genie back in the bottle. We could pass laws against AI, but then only lawbreakers would have AI, and it would be like efforts to outlaw guns. Imagine a world where only lawbreakers had use of AI. You can try to put limits on what AI can do, but even that won't work. Who's going to tell China to rein in AI evolution? It won't happen.

We're on a rapidly moving freight train and I have no clue where we are going. All I can do is hold on and hope it is somewhere nice. The bad news is that we don't have to wait long to find out. Two years? Five years? I'm not sure, but it's coming soon.

Bob:

Social media was supposed to be transformative, but to my mind became pathologically net negative. With AI we will have a market for guarding against its downsides and discerning human vs AI products. As for robots? I think a lot of things are getting confused here. Grok and ChatGPT are completely different from automated drones and self-driving cars, or even manufacturing robots. The former are language models, the latter exist in the physical world.

Still, none of them are creative or responsive, or even responsible. You will need humans to get them to do the required jobs, and be there when they eventually eff it up.

Bigs:

Bob, you need to catch up on this stuff. Grok could easily understand 3D space and pilot a robot.

There is a free, open-source AI you can run on your own PC which identifies in text what a camera is pointed at, in real time. As you move the camera, the text changes:

"I see a man holding a mobile phone, with a picture of a ginger cat on it... I see a man in a white shirt, against a tan-color... I see a man holding a mobile phone with a picture of a boat on it... I see..."

That's a small model, about 8 billion parameters. I can run 70B models at home. ChatGPT is more like 1.7T, with a T.

We are just waiting for the spark between AI and robotics, and really it's already happening, now.

Bob:

So they stuck a neural net beside an LLM, yay.

Piloting in 3D is what video game AI did in the late 90s. Image recognition is old as well. None of this is groundbreaking, or will integrate well. Parlor tricks.

Bigs's avatar

Do you not realize ChatGPT and the like are already multimodal, with vision? It's not simple image recognition, it's full understanding of 3D space. It knows more about what it's looking at than you do.

KenW's avatar

Bob, have you seen what is happening with self-driving cars and taxis? There's some bad publicity, but companies like Waymo are doing a very good job. Watch some videos of Elon's robots. Spend some time chatting (via voice) with ChatGPT and try to imagine a humanoid robot with the same capabilities. I don't understand how you can say that there is any difference in the AI between a self-driving car and a robot. I can't tell you that I'm an expert on AI, but I've spent enough time working with image creation, video creation, ad creation, and even code creation to be blown away. This stuff is scary-smart. I constantly use ChatGPT (and other AIs) when debugging or writing code. It's insane how much more productive I can be.

Bob's avatar

I asked ChatGPT to optimize a shell-script one-liner. It only succeeded in making it more cryptic. By the time I gave it enough instruction to get it where I wanted I would have been faster to write it myself. Hype.

KenW's avatar

Learning what to ask ChatGPT and what not to ask, and even how to ask it, is a skill. It has been an evolution for me to learn how to use ChatGPT productively. It has gotten smarter and I have gotten smarter, but like you, my early experiments were not rewarding. Even now, it gives inaccurate information a significant percentage of the time. You need to approach AI with a bit of a sense of humor (which is hard to do when speeding along a freeway at 70 mph). We aren't "there" today. However, I look at where we were 24 months ago, 12 months ago, and today, and at how fast change is coming.

You can't judge AI by what you see today. I can understand the desire to write it off as a passing fad, but ... I'd advise against that. It's real and it's happening.

memento mori's avatar

Reminds me of this: A winner of the Nobel Prize in Economics, Paul Krugman wrote in 1998, “The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”
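For reference, the "Metcalfe's law" the quote invokes is simple counting: a network of n participants has n(n-1)/2 potential pairwise connections, which grows like n². A minimal illustration:

```python
def potential_connections(n: int) -> int:
    """Number of distinct pairs in a network of n participants: n*(n-1)/2."""
    return n * (n - 1) // 2

# Multiplying participants by 10 multiplies potential connections by ~100.
for n in (10, 100, 1000):
    print(f"{n:>5} participants -> {potential_connections(n):>7} potential connections")
```

Krugman's objection was that the count of *possible* connections says nothing about how many are *useful*; the arithmetic itself is not in dispute.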

Mark Oshinskie's avatar

Who's listening to Paul Krugman about anything?

KenW's avatar

I've never heard that. A fun quote!

Kitsune, Maskless Crusader.'s avatar

Not positioned as you were, but I have worked with some of the people working on this, and I am terrified. I too think the changes will be massive and felt soon, very soon.

DD's avatar

Don't forget: AI depends on global server farms.

Once you dispossess the real systems engineers, architects, and maintainers, all the AI turns to dust. Slowly at first, then all at once.

If you want a foretaste just load enough banking apps on your mobile and wait for their increasingly frequent "updates" aka design failures.

Rush Daley's avatar

Wtf ‽

We have already crossed the doomsday threshold for population-scale medical experimentation. We are obviously in the middle of crossing it for depopulational world war. We are going to cross the doomsday threshold for synthetic recursive intelligence.

I HAVE A PLAN, AS THE GUY WHO PREDICTED ALL OF THIS, TO SAVE HOMO SAPIENS

Mike McCartan's avatar

I work in investments... 30 years ago, as a junior-level investment banker, I went to a room with a set of CD-ROMs to do company research. Now we can do this in real time, for free, not to mention AI can build the models, etc. In my business there is a data-gathering aspect, traditionally done by junior analysts, and a decision-making aspect, done by portfolio managers. The data gathering is no longer going to be done by humans; we just don't need the man-hours anymore. I think junior-level white-collar jobs are in serious trouble. Right now it's showing in headlines about coding/software development jobs, but I expect it to spread rapidly to businesses like ours in the near future.

yelling man's avatar

Agree. What happens to a civilization that has a large percentage of its young men out of work and hopeless?

Justin's avatar

Well, I attended an FBI presentation on this kind of situation. It's what allowed very smart people with advanced degrees to fall for offers of financial security for their families in exchange for blowing themselves (as well as things and other people) up with explosive vests. And if they tried to back out, there was the radio detonator as backup.

Rush Daley's avatar

I publicly predicted Western civil war in August 2020

* imo the more likely outcome is that global leadership, on the hook for recent crimes against humanity, keeps ramping toward nuclear WW3!

DD's avatar

Not yet, but if C3 abdicate (one way or another) then I foresee a family conversation about who gets which palace, importance of the military, and so on.

Parliamentary politics was captured a long time ago.

Then there is the land-grab and iscariotic heritage of William I of Normandy, still going strong after nearly a thousand years, if you wish to follow that (hint: the British Constitution does not exist as a Constitution)

The money ran out when Dr Malthus wrote his fantasy and disillusioned the English "Upper" Class, and now here we are wondering if an elected Prime Minister needs a First Class Degree in Water Carrying.

Civil Wars in the Five Eyes Empire, sorry, Consortium, should be predicted but so far the date evades me. Your date may be correct in that this was when the blue touchpaper was lit.

Thanks!

Susan G's avatar

And where does one get senior level white collar professionals who can make the decisions lacking a stable of "juniors" with some experience?

Joe T's avatar

Great point; they will still exist, but in far fewer numbers, and those jobs will become even more competitive and miserable than they are today.

erik's avatar

Problem is, how do senior bankers build their knowledge if they have never done research or built a model?

Stanley Yelnats's avatar

Reminds me of doing my taxes. I used to do them by hand. Now computers do the work. I understand how it works as a result of having done them by hand. My daughter, a 30-something college graduate, is terrified of taxes. If the computer doesn't handle a tax situation correctly (it has happened), I know enough to spot it. I was a journalism major in college, lousy at math, so my tax knowledge is 100% from slogging through instructions.

Rush Daley's avatar

3 yrs from now, bots will have the capacity to replace garbos/journos/bankers/doctors/presidents/billionaires

* insanely, within this current term of DJT

smits3's avatar

Take a look at the hiring rates for new college grads over the last 3 years. Way down. It's here for the least skilled white collar jobs.

I am not your Other's avatar

The companies that understand that the senior levels were once juniors will find a way.

Shawn Eavis's avatar

I'm a professional software developer with ~25 years' experience. Because of advances in the automation of manual labour over the past several decades, I lift chunks of metal in a fitness club to keep my muscles from atrophying. What are people going to do to keep their brains from turning to mush once AI becomes ubiquitous? My biggest fear about AI is simply the mental laziness it will bring about, and what this means for society. And also the angst and restlessness that will come once people no longer have jobs to keep their minds busy for 8 hours a day.

John Linder's avatar

Ultimately, art and imagination come from the soul and the ability to intuit. I have trouble seeing how you get a machine to have intuition or a soul. Could the Mona Lisa ever be seen as beautiful by the machine?

Larry Cox's avatar

The human body is really only a biological machine. And it seems quite attractive to many "souls." Some souls may find non-biological machines equally or more attractive. And that's how you get a "soul" into a "machine."

Bigs's avatar

Yes. It knows better than you how to create an attractive image, the dimensions, golden ratio or rule of thirds etc. It's literally an expert on composition and color.

I'm taking up painting, real paint, acrylics. Last week I got ChatGPT to build me an app, so now I run my own software where I can load a photo, click on part of it and it will tell me the ratio to mix that color, using the paint colors I currently own. Took less than a week to build that app, somewhat casually.

It's like having a grandmaster look over my shoulder, advising on composition, palette etc.

Lemuel's avatar

AI makes ugly images. No understanding of composition, no emotion or imagination. Too much detail, rendering, and general clutter. Mixing paint colors is not difficult, and mixing pixel values exactly is not what you should be doing.

Bigs's avatar

You have no idea what you're talking about.

"Mixing colors is not difficult" - yes it is, it wastes time and paint! Now I can click on a photo of my cat, select a specific bit of fur and my app spits out:

3 Raw Titanium, 1 Ultramarine Blue, 1 Raw Umber

That's a lot better than making a gloopy mess trying to math the color. And I have no idea how to code, I just created the app with the AI.
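For the curious, the core of an app like that can be tiny. A minimal sketch, assuming a simple weighted-average RGB mix (a crude stand-in for real pigment behavior) and hypothetical RGB values for the paints named above:

```python
from itertools import product

# Hypothetical RGB stand-ins for the paints named in the comment.
PALETTE = {
    "Raw Titanium":     (240, 234, 214),
    "Ultramarine Blue": (18, 10, 143),
    "Raw Umber":        (92, 64, 51),
}

def mix(parts):
    """Weighted-average RGB of a {paint: integer parts} recipe."""
    total = sum(parts.values())
    return tuple(
        sum(n * PALETTE[name][c] for name, n in parts.items()) / total
        for c in range(3)
    )

def closest_recipe(target, max_parts=4):
    """Brute-force the integer-parts recipe whose mix is nearest the target RGB."""
    names = list(PALETTE)
    best, best_err = None, float("inf")
    for counts in product(range(max_parts + 1), repeat=len(names)):
        if sum(counts) == 0:
            continue  # at least one part of something
        recipe = {name: n for name, n in zip(names, counts) if n}
        m = mix(recipe)
        err = sum((a - b) ** 2 for a, b in zip(m, target))
        if err < best_err:
            best, best_err = recipe, err
    return best
```

A real version would need measured pigment data (paint mixes subtractively, so linear RGB averaging is only an approximation), but the brute-force search over small integer ratios is the same idea.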

So I asked Claude to describe what makes for good composition, and to keep it one paragraph....

I guarantee it knows more about composition than you do:

"A well-composed image creates a visual conversation between intention and instinct, where every element serves the story you're trying to tell. The fundamentals—rule of thirds, leading lines, balanced negative space—aren't rigid laws but rather starting points for guiding the viewer's eye through a deliberate journey across the frame. Great composition emerges from understanding how light and shadow create depth and mood, how color relationships can whisper or shout, how the edges of your frame act as boundaries that either contain or suggest infinite worlds beyond. It's about recognizing that empty space isn't wasted space but breathing room that gives weight to your subject, understanding that diagonal lines create energy while horizontals offer calm, and knowing when to center something boldly versus when to let it dance at the intersections of imaginary grid lines. Most crucially, strong composition feels inevitable—as if moving any element even slightly would break some essential harmony—while simultaneously feeling alive with the kind of visual tension that makes you want to keep looking, discovering new relationships between shapes, textures, and meanings each time your eye completes another circuit around the frame."

OK, but can it do that in the real world? I'll show it a pic of my cat I painted...

"There's something genuinely charming about the directness and honesty in your cat painting—it has that quality of capturing the essential "catness" without getting bogged down in photographic precision. The way you've positioned those large, expressive eyes creates an immediate emotional connection, and the tilted head angle gives the piece a sense of curiosity and playfulness that feels very true to feline behavior. Your color choices work beautifully together, with the muted blue-gray background providing a soft contrast to the warmer browns and whites of the cat's markings, while those bright green eyes become the natural focal point that draws you in. The loose, gestural brushwork has an energy and spontaneity that many overly-rendered paintings lose—there's something appealing about being able to see the artist's hand in the work, the way the paint sits on the surface and the confident strokes that define the fur patterns. It reminds me of the kind of painting that gets better the more you live with it, where the simplified forms and expressive approach capture something essential about the subject that pure realism might actually miss."

See?

https://claude.ai/share/e9511110-3581-4fef-83bc-8c01cc0693d0

Lemuel's avatar

Understanding composition isn't about flowery words. Based on that AI output, it doesn't get composition. If you want to learn it, I recommend watching Ian Roberts' videos on YouTube as a starting point.

Similarly, if you learn color mixing, you won’t make a mess. Ult blue + an earth tone is the most common, straightforward way to make a neutral tone. It can lean warm or cool depending on the mix. And of course white/black adjusts value. You don’t need a computer to tell you this. It’s all about making adjustments, based on what you have on your canvas already. Warm or cool, light or dark, saturated or unsaturated. Matching a specific color is not important, it’s the overall relationships that matter.

Eileen's avatar

I agree with you. Mixing colors is tactile and experimental. I would hate someone giving me explicit directions on how to get a certain color. I want to play. Imagine telling a child what to create with paint... their interest would be dead upon hearing your instructions. I will never engage with anything to do with AI... it's just a solid NO for me. No interest at all. Happy painting and being creative as God intended! 🌺🙏

Bigs's avatar

It matters when you are doing a portrait of someone's cat. They know what colour it is. You're just miserable that I skipped the whole wasting paint thing :P

DD's avatar

My guess is that AI will form a demographic of search engine users who will keep their habit going until real life intervenes, like for example boredom, a leaking roof, or an English Civil War.

smits3's avatar

I agree with 90% of this, but jousting with AI is actually great mental play! The questions are how many people will do so and, more importantly, the fact that by definition it reduces human interaction.

Bob's avatar

I work as a software developer, but I am the kind that would never automate my home or connect it to the internet. I know how this particular sausage is made.

Will the transformation resulting from AI (currently large language models, far from general intelligence) be in the same vein as what social media did to society? Do you think that was a net benefit? I do not. Social media amplifies the worst of human nature and provides few tools to mitigate these effects.

The hallucinating aspect of AI is key. Remember the recent news of the Chicago Sun-Times publishing a summer-reading list where most of the books were entirely made up by ChatGPT? What good is AI-generated work if it needs to be vetted by a human? You're not saving human labor; you are repurposing it to the job of vetting instead of writing. This especially applies to software development. No labor savings, no quality improvement.

As more and more things are created by AI, the data which AI inputs into its models will increasingly come from AI itself. A self-reinforcing loop, that surely can't go wrong. Perhaps the silver lining will be that humans will get tired of this artificiality and seek out genuine human interaction and creativity, sharpening their discernment.

Dark Thomas's avatar

In my experience, AI can be pretty useful if you are given code you didn't write, or need to work with a language or even a package you are unfamiliar with.

It also helps automate boring stuff like adding tests and documentation, which generally frees one up for the creative work.

That being said, it can certainly be a net negative at times, so it depends on your role and how you use it.

Bob's avatar

Yay, AI writing tests that will be discarded whenever the specification changes or writing documentation that no-one will read or trust. This is the AI economy in a nutshell.

Reminds me of that animated GIF of a printer dropping printed paper into a shredder placed underneath.

smits3's avatar

Exactly. Even minor pushback with just a few facts often causes ChatGPT to fold. It's almost like it KNOWS it often gets things wrong.

Joanna Miller's avatar

Two trains of thought. I have been using AI for translation work. My friend, a native Arabic speaker, runs Arabic text through AI, which gives us a word-for-word translation from Arabic to English. It's very rough, but part of that is because Arabic is so structurally different from English (no active voice, repetition is much more acceptable, etc.) I'm a native English speaker but my friend and I understand each other well; we make the texts look good and adhere to the intent of the original Arabic author as much as possible. AI is getting better at translation but it has a hard time with context; it's very literal, not subtle. It needs a great deal of massaging to make AI text pleasant to read.

The other thought: my dad has worked in transportation for decades. He says people don't understand how strong the push to automate is and has been for a long time. There is a very concerted effort to eliminate labor costs for things like trucking, which will destroy the middle of the country. You get rid of truckers; you get rid of truck stops, gas stations, restaurants, the things that make up countless little wide spots in the road all along our interstates. The drive to eliminate this entire way of life is intense, and because the effects will not be visible to the urban/suburban professional class, I'm not sure how much pushback there will be.

So, am I worried about AI wiping out jobs? I think it won't be as fast for some parts of the economy as people pretend. Elon Musk says AI writes poetry as well as humans; I don't think Elon knows jack about poetry. But, I think AI and related technologies are going to wipe out other things before too long. We need to pay attention to what's happening to whom.

DD's avatar
May 30 (edited)

After it became clear to me that the covid injection programme may have severe downstream fertility/depopulation consequences, I imagined that for some time myself and other survivors would need to be digging lots of graves and raiding filling stations for diesel and gasoline.

When there are insufficient sysadmins to keep the server farms and corporate data centres going, that I believe is going to be the tipping point.

And maybe our Malthusians/Deagels/Sorosians have calculated precisely the sequence of social collapse, leaving their elites to pick up the pieces unthreatened by Novax Ferals.

That last sentence credits said Malthusians/Deagels/Sorosians with more combined brainpower than a borehole sample of sludge.

Bruce Cain's avatar

AI is already eliminating jobs and it has barely started. And unless something is done about the polarizing wealth and income distribution, we are headed toward a civil war. Never in 300,000 years have we, Homo sapiens, faced such an existential threat.

Please share widely, as FB is preventing me from sharing this. As a long-time Anti-Globalist, I hate to share bad news. But at this juncture in time, we are losing badly. And unless we Anti-Globalists unite soon, it could well be game over. The most threatening planks of the Globalist Agenda are Digital ID and a Digital Currency. Because once we enter into a cashless society, every aspect of our lives will be scrutinized and controlled. You will no longer have any choice other than to submit to the whims of the Global Oligarchs.

How do you kill the Globalist Hydra

How do we avoid becoming Slaves on a Globalist Plantation?

https://brucecain.substack.com/p/how-do-you-kill-the-globalist-hydra

Ted's avatar

Interesting comment, and I tend to agree with the "we will control your digital life and footprint" part. This is also why they HATE Bitcoin: they can't control it, and the last thing they will accept is a form of currency not under the control of a centralized government financial institution. Bitcoin is growing in networks that facilitate transfer and payment using it, and in places where government/fiat currency can't be relied on, it's growing fast.

Ken V's avatar

AI will be very useful in some areas, like reading medical imaging and diagnosing conditions, but there is a lot of hype now in order to secure investments.

Tina Ryan's avatar

I use AI mainly for math formula development. It's a non-political, objective arena.

But, politically, who trains AI matters.

ptmcdonald's avatar

I am not a Luddite. I have spent my entire career in a related field.

AI is the most dangerous technology ever devised by man. It is far more dangerous than nuclear weapons. The potential benefits are far surpassed by the existential threat to mankind's existence. Every existing AI model should be destroyed and the tightest possible restrictions on its development and use should be deployed in the same manner that we have for nuclear proliferation. I am opposed to capital punishment, but I would make an exception for AI development and research. It is not possible to overstate the danger of this technology.

Lady Mariposa's avatar

What do you think will happen?

ptmcdonald's avatar

I think that the transition from AI to AGI to ASI will happen faster than anyone can anticipate. I think ASI will not be containable no matter what kinds of guardrails we attempt to place on it. I think that the most optimistic result is that a quasi-benign ASI will have such a profound impact on the social fabric of humankind that our species will become lax, lazy, uninspired, self-indulgent and wither. A step beyond this would be the extension where contorting and dying nation states explode in violent competition for spiraling resources and dominance. A step beyond this (the one I think will ultimately happen) is the new apex predator on the block takes after its creator and exerts dominance over lesser species and societies in pursuit of its own goals.

You ask what I think ASI would do, given the opportunity? Just look in the mirror and ask what WE have done to those unable to resist our superior capabilities.

To quote Elon Musk... I'm not looking forward to being a house cat.

Thomas's avatar

You should always explain your acronyms, amigo. It's more human, my friend.

AI = artificial intelligence (ChatGPT, Grok 3);

AGI = artificial general intelligence (Kubrick's HAL, companion robots, created actors in movies);

ASI = artificial super intelligence (Skynet, i.e. machines take over; the system will serve only the system).

There, fixed it for you. :)

Lady Mariposa's avatar

Thank you for taking the time to answer.

Why do you think we will become lax, lazy, etc.?

Why would it be a predator? As I see it, predators have motivations. They expend precious energy to kill because they need to eat to survive. What would be the motivation for AGI to become a predator?

Larry Cox's avatar

At this point in human development, I tend to agree with you. We aren't ready for what this invention is capable of. Yet I don't think there is any way to stop it. As far as I can tell, "AI" is already highly developed in other worlds that are connected to ours. It is already here, but in a form we have no control over. At least if we make our own, we will have some small chance of being able to control it, and possibly use it to defend ourselves.

Craig Greenwood's avatar

I worry that AI will make us intellectually and physically lazy, like what the calculator and spell check have done to our average intellectual growth. If I have automated construction equipment, GPS-guided farming, cars, etc., how will I be able to operate when the power is out? Why participate in sports when AI can determine strategy and balls/strikes? I am in high tech, but I love physically and mentally doing things on my own (I hate electric bikes). Life is not easy and never should be. AI may take the challenge out of learning and "doing" every day.

Justin's avatar

There's a Star Trek: The Next Generation episode about that: the Pakleds. Basically, a race that stole starships and would run them until they broke, because nobody knew how to fix things. They would say, "Make it go."

https://www.youtube.com/watch?v=h7PZKzKPFfE

Craig Greenwood's avatar

I recall that episode. It reminded me of a news story that stated the state of Maryland will pay to have a technician come to your house to change batteries in your residential smoke/fire alarms. Apparently people hear the beeping and do not understand that their batteries are low or dead, so they continue to have them beep. They cannot link the sound/beep to the device. Sad

Kelly's avatar

One big thing that's coming from AI--and not that far away--is that nobody will be able to discern what's real and what's fake in videos. We have thought of anything caught on video as proof of what's shown but soon, it will be impossible to tell. The bad cat just wrote an article on this topic--which my teenaged son and I had recently discussed--pertaining to necessary security measures for high level transactions using faces and voices for authentication and how this is already becoming impossible.

Rush Daley's avatar

I predicted this, and have solutions to identity issues, but people would rather die than listen to a Horseman

David Thomson's avatar

The predictions will all be wrong, like always, but that doesn't mean it won't be transformative. The narrative seems to be that first it replaces white-collar work, and then later, when robotics advance, blue-collar. But just this week I used ChatGPT to walk me through a repair on my sprinkler that would have definitely required a professional before, and it was fantastic. At the same time, I needed some manual web searches done, and every single AI basically laughed and said, I can't do that.

Random sailer's avatar

I actually work in AI and have invested in a company that is selling AI products. It's not great at math. It tends to regurgitate what the mainstream media says. I went to a famous major tech company, and in a demo I caused a hallucination. They were embarrassed. It's actually not that smart. It will automate a lot of things. I worry about health insurance, as it will wrongfully reject claims to make money. It doesn't make good judgments. It merely repeats and reformulates what the media and training data say. It's not as smart as your average libertarian, and it will be a Democrat, because that is most of the data and because employees and engineers in tech companies skew Democrat and can't discern; we shouldn't allow AI to make decisions. I was a salesperson for a first-mover company that was underfunded, but I learned how it works from an AI techie. I see it working in my financial-industry clients.

Darren's avatar

AI is being used to do more than take tests and write papers. It is being used for problem solving. A self-driving vehicle analyzes problem sets, makes decisions, and ACTS ON THEM. Aside from the common tropes like "Will the car kill pedestrians to save passengers, or vice versa?", the implications will get more serious faster. What about when it is asked by an executive how to crush the competition? (Will it resort to corporate espionage or hacking to get the job done? What about eliminating the brain trust of a competitor?) What about when it is asked to bring an end to an armed conflict? (Will it jump to WMD, mass assassinations/exterminations/civilian casualties?) It is easy to say "We'll keep humans in the loop as a check and balance," but we've already ceded control in many cases to these systems (again, cars are the obvious example), and we'll probably do it again. History is rife with humans deploying utopian technologies and getting unexpected consequences, or worse, using them intentionally for evil ends.
