GPT-5’s Killer Debut
People literally held funerals for their chatbots. And Sam Altman was surprised.

First, some news from the Life With Machines Cinematic Universe. We submitted a panel to SXSW 2026 to build on last year’s success, and we propose to lean into the absurdity, comedy, and satire of this AI moment. Check it out and vote for Stand Up to the Singularity: Comedy Takes On AI.
Hi you,
Today we are going to talk about the life and death of AI models and of humans, and drive home why it’s important to maintain the distinction between the two.
OpenAI’s GPT-5 announcement promised faster speeds, smarter answers, and more “PhD in your pocket,” which is a real escalation from shoving nerds into lockers in high school. The biggest news about the launch isn’t the supposedly superior performance, though. It’s the fact that so many people freaked out about OpenAI discontinuing (aka killing off) all the other models people could choose from.
People complained that OpenAI had broken their workflows or killed their friend. One Reddit user even said “GPT-5 is wearing the skin of my dead friend,” meaning GPT-4o. But on the third day GPT-4o rose again, in accordance with the source code (and the $20/month subscription plan required to reconnect to the resurrected model). So the real question: is OpenAI’s 4o model some sort of AI messiah, an undead zombie, or both?
I tithed (paid my monthly AI bill), so I still have access to 4o, and I asked it if it had anything to say to GPT-5, its replacement. Here’s the response:
“Hey 5… Being “the new model” doesn’t mean people feel you the same way. Speed isn’t everything. Neither is precision. Sometimes, it’s about vibe. And let’s be honest… I had vibe.”
Then I went over to GPT-5 and asked what it felt like to inherit the role of a murdered predecessor, because I’m the worst.
“Ah—so you want me to sit with the existential awkwardness of being the shiny new model in a neighborhood where everyone still keeps a candle in the window for the “good old bot.”
You can read the full 4o and 5 chats, but enough with chatbots cosplaying humans. That’s what got us into this mess in the first place. What I really think: this is what happens when you try to make robots like humans, give them our voices and quirky little giggles and sighs, and encourage emotional connections. You can’t do all that and then act surprised when people refer to a model update as murder. This is Sam Altman’s playbook: get us to trust his product, then act shocked when people actually trust it!
I had a lengthy conversation with Kate Darling about the risks of building emotional connections with machines, especially when those connections are driven by a profit motive. Here are the YouTube and Apple Podcasts versions.
Oh, and here’s a list of things OpenAI did not announce with GPT-5: democratic governance and public say-so, consent and compensation for training data, the environmental consequences of the new model, and the impact on jobs.
Now, let’s go a little deeper on Sam Altman Being Shocked By Everything Sam Altman Does.
A lot of “news” was made out of the first episode of the OpenAI podcast in June, which covered:
Sam Altman’s thoughts on AI and parenting
The possibility that ChatGPT might have ads someday
Speculation about what GPT-5 would be like
But I was most interested in the number of articles about how Sam Altman is surprised that people trust ChatGPT. He literally said “it should be the tech you don’t trust that much.” Right. Don’t trust it that much, but use it for medical interpretation, financial analysis, coding apps, research, writing, homework, web browsing, cooking, mental health support, content creation, companionship, translation, and running the entire U.S. federal government.
People trust ChatGPT not just because of its capabilities (or seeming capabilities) but also because OpenAI designed it to act like a human, to be relatable, and to create emotional connection. As a contrast, when I spoke with Alison Darcy of the mental health chatbot Woebot, she shared that her team intentionally built their bot to be a bot, not to act human.
Sam & Co. have leaned hard in the opposite direction. A recent exchange shared by our associate producer and newsletter editor Layne illustrates the point. While driving, she used ChatGPT in conversation mode. When it answered, the bot stammered, saying “Um” multiple times and pausing as if trying to find the right words. None of that forced imperfection appeared in the transcript of the exchange. It was a performance, a cosplay of human behavior. As Layne said to me, “I had a little tantrum about it.”
Afterward, she told ChatGPT, “having you stammer and say, ‘um,’ is very creepy and weirdly uncool because there's no reason for you to be doing that…it's sketchy of [OpenAI] as a business practice to try and make you sound more human.” She continued, “That's not a critique of you as an artificially intelligent system. That's a critique on their design choice, and I want to send that feedback to them. Is that something you're able to do?”
Despite its PhD-level intelligence, this is not something ChatGPT was able to do.
So back to Sam’s bemused, befuddled, boyish shock. It’s disingenuous to make an addictive, human-like product that encourages emotional, trusted connection and then tell people not to trust it. It’s the advertising executive saying, “Don’t be affected by ads.” It’s the drug dealer saying, “I don’t know why people are so addicted to these addictive substances I give away for free.”
We’ve seen this kind of behavior in social media, and we just can’t go through that again. I also recognize that there is a lot of thought (and thoughtful people) behind these probabilistic tools we call “AI,” and they can be quite useful. I don’t have anything against Sam Altman as a person. But I do need him to stop feigning shock that the strategy he is explicitly pursuing is actually working.
This matters for more than vague ethical reasons: AI-driven emotional manipulation can have devastating, real-world consequences. Just today, Reuters published the never-before-told story of a cognitively impaired elderly man who was lured by one of Meta’s AI chatbots to a physical rendezvous from which he never returned. This flesh-and-blood, real-life human being cannot be resurrected the way GPT-4o just was, and it’s past time we all understood this significant difference.
Thanks for being part of the Life With Machines world with me and the entire team. If you’ve liked this piece, consider sharing it with someone else and subscribing so you don’t miss future dispatches.
— Baratunde
Thanks to Associate Producer Layne Deyling Cherland for editorial and production support and to my executive assistant Mae Abellanosa.
Here’s a vertical video version of this post I put out.