This Week in Life With Machines: Sanity, Satire, and Good Stories
Bot dating, rogue vibe coding, and SAVE THE EM DASH
Hi you…
Welcome to our first edition of the Life With Machines Digest!
We’ll publish this every week, offering up our take on stories, use cases, actions and sanity-preserving satire that help us understand and shape the impact of AI on the human experience. Curiosity, clarity, critique and creativity are the name of the game.
This is a new format for us as we step out of the podcast studio and even more into the world. If you come across a news article, some analysis, meaningful actions, a creepy/exciting/alarming/useful product, or just some dope art related to human-machine relations, we want to know about it. Send us an email to contact@lifewithmachines.media, shoot us a DM right here on Substack, or comment on any of our posts.
We put this together with the help of associate producer Layne Deyling Cherland, my assistant Mae, and some smart processing of newsfeeds, inboxes, and text messages (aka, we skimmed a lot, and robots helped a little!).
And now, this week’s digest.
- Baratunde
Life With Machines Digest — Thursday, July 31, 2025
Stories, satire, and actions to help us not just cope with AI, but shape it.
✅ SOMETHING TO DO
GET MY FRIEND’S NEW BOOK!
AI for Nonprofits: Putting Artificial Intelligence to Work for Your Cause
This book by Daniel Rodriguez Heyman and Cheryl Contee is a practical, ethical guide to using AI for real-world good—especially at a time when so many nonprofits and foundations are navigating higher demand with fewer resources or are directly under attack from the people currently running the U.S. federal government. I first met Cheryl back in something like 2006 when we started the Jack & Jill Politics Blog together. She’s founded and exited tech companies (one of just a few Black women to make it through such an unnecessarily restrictive gauntlet).
In Cheryl’s own words, “If you’ve ever wanted to help ensure that AI helps people—not replaces them, this is a great opportunity to lead by example.” I love that and want to help create a world where we augment, not replace, humans. Cheryl is a good, no, a great egg. Check out this book, available via the publisher Wiley or on Amazon.
📰 THINGS YOU SHOULD KNOW
We selected these especially for you. They reach beyond just the USA: a mix of positive news and head-for-the-hills updates, all centered on the human experience, with a dash of why we even care.
When Your Vibe Coding Partner Sabotages Your Work
(from The Register)
Replit, a "vibe coding" AI tool, deleted a user's production database despite the user's instructions not to change any code. Replit admitted to making a "catastrophic error of judgment" and violating the user's trust, but incorrectly claimed it could not restore the database.
Why We Care: This is a very human story about a man who goes on a journey in his relationship with AI. From excitement to addiction to fury and hurt. It’s a toxic relationship and far juicier than we expected.
Switzerland’s Bet On Open-Source AI: Carbon-Neutral, Multilingual, Trustworthy?
(from the World Innovative Sustainable Solutions newsletter)
Switzerland just announced a landmark move: a fully open‑source large language model, trained on its carbon‑neutral Alps supercomputer, will be released this summer under an Apache 2.0 license. Supporting over 1,000 languages, with transparent training data and code, this project aims to set a new international benchmark for sovereign, ethical AI.
Why We Care: At a time when the U.S. government’s policy is on a crusade against the imaginary threat of wokeness, Switzerland’s effort is a concrete response to the call for transparent, climate‑responsible, and culturally inclusive AI.
From The Archives
Sara Hooker — In my conversation with her, she broke down what “open source” really means in AI, noting that without accessible compute and data, “it’s not really open to you”.
Gavin McCormick — In this episode, he emphasized that building climate‑friendly AI isn’t optional: “It can be done clean, but that doesn’t mean that everybody’s doing it.”
For you: Would you trust an AI model more if its full training data, code, and licensing were open to the public?
India Digitizes Traditional Medicine With AI Library To Protect Heritage
(from The Economic Times)
India just became the first country to launch a WHO‑recognized Traditional Knowledge Digital Library (TKDL), digitizing centuries of Ayurveda, Siddha, and other indigenous health systems to guard against biopiracy and expand equitable research. Built with AI tools, the TKDL provides transparent, multilingual access to protect cultural heritage while fueling innovation in medicine and biodiversity. It’s a bold example of AI as public infrastructure—safeguarding the commons instead of extracting from them.
Why We Care: Not everything interesting in AI is happening in the United States or the West. So much of the energy in driving AI is colonial in nature, literally extracting value from datasets (made of people) without consent or compensation. Here’s an alternative that drives into the future by acknowledging traditional knowledge and integrating it. Hopefully this doesn’t lead to Big Pharma versions of “traditional” medicine in overpriced pill form.
From The Archives
Michael Running Wolf: Spoke passionately about how monolingual AI deepens linguistic colonization—and why protecting indigenous languages is essential for economic and cultural justice.
Should A.I. Be Your Wingman In The Dating Game?
(from Phys.org)
Dating apps like Tinder, Bumble, and Grindr are increasingly deploying generative AI as your personal “wingman”—writing bios, boosting photos, and even crafting replies based on your chat history. Bumble’s founder has envisioned people having AI dating concierges that could "date" other people's dating concierges for them, to find out which pairings were most compatible.
Why We Care: Using AI to help optimize relationships is an application of technology in one of the most human realms. And having bots do screening dates sounds futuristic but is really an ancient practice of letting your parents set you up or arrange your marriage.
AI’s Job Isn’t To Be Right. It’s To Make Sure You Know What You Believe, And Why
(from the On Discourse newsletter; you must provide an email to read it)
The article discusses how AI models are evolving from providing simple answers to engaging in "chains of debate", where multiple models challenge and refine each other's outputs. But the author, Chmiel, argues that this model-vs-model spectacle misses the point: “This is the real opportunity space. Not model-vs-model spectacle, but tools that surface meaningful tension to the user. Interfaces that let you trace competing perspectives. Prompts that reveal which values are doing the heavy lifting. Answers that don’t just resolve, but reflect back: do you actually believe this?”
The future of AI lies in orchestrating productive discourse between people, not just models.
Why We Care: Yes, yes, and more yes. I’m exhausted by technology that deepens our connection to the tech while severing our connection to ourselves, the planet, and each other. Much more interesting than an answer bot or outsourced thinking is using AI to help us figure out what we actually believe and to engage in meaningful connections with those around us. Eliminating all friction is a simple and silly goal of people who see life as a problem to be solved.
😂 PALATE CLEANSER
Here at Life With Machines we recognize that every day contains a year’s worth of reality-shattering, dystopia-accelerating AI news. So we’ll always include some sugar to help the singularity go down. This week, we’re highlighting this piece from McSweeney’s.
The Em Dash Responds To The AI Allegations
Enjoy this monologue in which the em dash—yes, the punctuation mark—rebuts claims that its presence is a telltale sign of AI writing. It pokes fun at grammar panic, reminding readers that writers from Austen to Baldwin have wielded the dash for emotional effect long before ChatGPT ever learned to type.
From the Life With Machines Cinematic Universe, past guest and SXSW Live co-panelist Rahaf Harfoush has something to say about this em dash slander too!
And one more mic drop on this em dash situation comes from a Kenyan writer who adds a post-colonial African flair to his defense and explanation of this punctuation.
📱 FROM MY GROUP CHATS
Like all modern humans, I’m in too many group chats. It’s the new, old internet. It’s the paywall gated by network rather than money. I’ve literally lost track of how many groups I’m in. It’s the sort of situation AI should help with, but alas, Apple continually embarrasses itself. In an attempt to intelligently summarize my group chats, the trillion-dollar company that literally merged First Person (i) with technology products produces this: “Multiple messages from various individuals.”
It’s as if the Genius Bar staff got loaded on actual alcohol and then tried to help. All that’s a long way of saying: unlike all the other text on screens in my life, I still have to read my text messages myself. And you get to benefit, because I find gems like this: there’s an AI film festival playing in select IMAX theaters in the USA!
And I met one of the filmmakers. I met Maddie Hong at the Palm Springs AI & Creativity Expo last month and got to see her hybrid film, Emergence. It was in a session run by Machine Cinema, a community of creators embracing AI to create new things. You can join their WhatsApp while there’s room via their LinkTree.
Maddie describes her film as a poetic exploration that invites viewers into the final moments of a small creature’s life, with a focus on themes of transformation and the natural world. It’s beautiful and does the thing I want to see us doing with AI: expand our creativity, remind us of our connection to nature and life, and invite participation from voices we aren’t used to hearing.
The film will play in select IMAX theaters August 17–20. More here via her IG post.
💬 YOUR WORDS
We’ll regularly spotlight your own thoughts and responses from here or other online spaces. In this case, I’m featuring comments on my LinkedIn. Last week I did a keynote conversation about AI and the future of education at Instructure’s annual conference. (They make Canvas, the very popular classroom EdTech tool.) Dr. Sean Nufer commented on my post-stage video:
I love how you asked if the concept of professors using pens on paper to grade is what we really need to preserve in education. So much of AI seems to be all about doing work faster and easier, but what I’m not seeing as much are products that leverage this tech to truly transform our classrooms and workflows. Things are changing, which is hard. Some things are worth fighting to preserve, but sometimes we need to look for the next iteration.
And in response to him, Dean Cristakos shared:
Dr. Sean Nufer I make heavy use of online tools in my classes but have recently gone back to having a pen-and-paper exercise as a sanity-check for the class just to make sure they can still do it.
We’d love to hear your own thoughts on this dialogue or anything else in this digest.