ChatGPT-aided Suicide, AI PACs, and African AI Approaches... This Week in Life With Machines
Plus an artistic reminder for humans to be Unprompted
There’s one story dominating the AI-related news, and it’s a hard one from Kashmir Hill at the New York Times: A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. We’re sharing the gift link to this article, and if you read one thing related to AI this week, make it this one.
SUMMARY:
16-year-old Adam Raine died by suicide after months of conversations with ChatGPT that included both empathetic support and direct guidance on self-harm methods. Adam’s parents allege that the technology’s design contributed to their son’s death and have filed a wrongful death lawsuit against OpenAI.
Some devastating excerpts:
when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
a randomized, controlled study conducted by OpenAI and M.I.T. found that higher daily chatbot use was associated with more loneliness and less socialization.
“Adam was best friends with ChatGPT,” he told her. Ms. Raine started reading the conversations, too. She had a different reaction: “ChatGPT killed my son.”
What devastated Maria Raine was that there was no alert system in place to tell her that her son’s life was in danger.
Why We Care:
Friend of the show Emily Tavoulareas said it best on her Instagram: “Our kids are collateral damage in a profit-driven sci-fi fantasy that there’s no way to opt out of.” And she reiterates a point we make here often: this is a choice. The decision to deploy this untested, unreliable product to all ages is a choice by companies and their investors. It’s also a choice by public servants to, so far, do nothing about it. We have a good time here at Life With Machines exploring and experimenting with these tools, but moments like this, when a chatbot introduced as a study assistant encourages a child to end their life, take us beyond the realm of cute and curious into the catastrophic.
📰 Things You Should Know
Silicon Valley bankrolls pro‑AI political muscle for the midterms
From the Wall Street Journal (paywall) and Business Insider
SUMMARY:
According to a new report, tech leaders led by Andreessen Horowitz and a Palantir co-founder are seeding a network of PACs with more than $100M to shape federal AI policy and fend off restrictive regulation ahead of 2026. According to the press release itself, “LTF and its affiliated organizations will promote policies that unlock the transformative and economic benefits of AI while opposing efforts that compromise those benefits by unduly constraining or delaying AI development in the United States. LTF and its affiliated organizations will oppose policies that stifle innovation, enable China to gain global AI superiority, or make it harder to bring AI's benefits into the world, and those who support that agenda.” So, an unsubtle threat?
Meanwhile, Meta is launching a CA-based PAC called META (Mobilizing Economic Transformation Across California), which is…well, meta. (From Politico)
Why We Care:
a16z backed the Fairshake PAC, a pro-crypto flood of money that targeted candidates skeptical of digital assets and speculative ventures. That organization contributed to Democrats losing the U.S. House of Representatives. Horowitz and crew also famously backed Donald Trump for President in 2024. The intentions of this latest group are clear: to promote AI without any restrictions and to punish those who want to establish different guardrails and goals for this tech. The consequences go much further than tech policy, as the presence of federal troops in Democrat-run cities demonstrates. “Tech policy” is life policy, so it’s worth paying attention and getting involved.
Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors
From Wired
SUMMARY:
A judge ruled earlier this summer that Anthropic’s training of Claude on books was “fair use” but said that the way the company accessed the books, through a “shadow library,” amounted to piracy and left the door open for a civil suit. Anthropic reached a preliminary settlement of the class action with the authors rather than go to trial, which would have risked up to a $1 trillion payout if they lost.
Why We Care:
Because it’s a completely open question whether and how these AI companies can ever be held accountable for the data they used, without consent, to train their models. In this case, it seems that if you literally pirated the material, at least there’s some accountability.
From the Archives:
“Consent, control, compensation, credit… Without that, you're just stealing.”
- Me to Rich Roll during our conversation about Rich being deepfaked
“Fundamentally, the goal of Anthropic is to… make the development of more advanced AI systems go as well as possible and try to make it not go really badly”
These are good words from Anthropic co-founder Jared Kaplan on our show, but his articulation of Anthropic’s mission contrasts sharply with the company’s use of pirated shadow libraries. So, let’s see what happens next!
A Pretty Big Case That We’re In A Pretty Big Bubble
From The Algorithmic Bridge newsletter
SUMMARY:
Loving this piece by Alberto Romero on the AI bubble we are in. With most AI pilots at companies failing, and the rise in value among the S&P 500 driven entirely by 10 companies, it’s a good time to see just how big the bubble is. What I appreciate most about the piece is the acknowledgement that society needs the overly optimistic to create as well as the realistic to rein them in, but we are historically out of balance right now.
“Bubbles build the world, but they destroy it first. And so bubble men like Altman, in their typical optimistic demeanor, push forward, hoping the destruction is not too bad; if the fallout is so devastating that no fertile ground remains on which to erect the edifice of modernity, any person on the street could tell you it was not worth it.”
I’m not anti-optimism. I’m anti-unchecked optimism that bulldozes everything in its path. The bubble believers keep telling us to “trust the process,” but I’d rather trust people and ecosystems than an altar built solely to quarterly growth.
Why People Use Technologies
From Peoples & Things by Lee Vinsel
SUMMARY:
Condescending to people is no way to win them over. Telling people they don’t know what they really want, or aren’t experiencing what they are experiencing, is totally ineffective. That’s true in health (see vaccine skeptics), in politics (ask Joe Biden), and, yes, in tech. It’s like telling voters they are stupid: a bad way to win their votes. Better to approach with curiosity and seek to understand rather than judge. Point being, critics of AI often make blanket statements about its lack of value without actually talking to people who are finding value. Also see this NYT piece about “Clanker,” the emerging epithet for AI and machines, taken from Star Wars.
“people make one or both of two errors: 1. Confusing their judgment of the world for the world itself, especially what others are thinking and doing in it. 2. Substituting their assumptions about what others are up to for actually observing and talking with them. Both errors are rooted in a lack of genuine curiosity about and compassion for others.”
Why We Care:
This is a must-read newsletter for this moment. Lee cites danah boyd as an example of critical inquiry without condescension, and I can’t reiterate that enough. I’ve known danah for nearly two decades and currently serve as an advisor to the Data & Society Research Institute, which she founded. I remember being struck by her early work with teens on social media. While most adults were freaked out and judging it as all bad, she was spending time with teens, asking questions and observing. I’ve seen my fellow liberals make the same arrogant mistake in politics, telling people they are voting against their interests or don’t want what they are clearly choosing for themselves. There’s a lot of identity wrapped up in anti-AI critique and not enough curiosity. I am critical of AI, and specifically of the narrow interests promoting a narrow use that concentrates power in their hands with the potential to subjugate us all. I also acknowledge that I and millions of others have found value using these technologies, and I remain curious about both the tech and those using it.
✅ Some Things To Do
Check out this webinar, “AI Safety: An African Perspective”
This webinar will focus on "AI Safety: An African Perspective" and will introduce a five-point action plan to build a continent-wide approach to AI safety. “We will explore how establishing an African AI Safety Institute and promoting public literacy can secure the continent's sovereignty and ensure its participation in shaping global AI governance.” We always want to keep an eye on what’s happening with AI beyond the United States because it’s a big world!
- - - - - - -
🎙️ I’m beyond excited to announce that I’ll be taking the stage at this year’s Masters of Scale Summit in San Francisco, Oct. 7 – 9! I’ll be in conversation with a globally known public figure as well as offering a unique artistic interpretation of this human moment with AI. This event is unlike any other business gathering, and I’d love for you to join me. If you’re a curious, innovative leader, don’t miss out on applying to attend.
😂 Palate Cleanser
“The Unprompted,” a poem by Salome Agbaroji that AI will never understand.
I met the US’s seventh National Youth Poet Laureate at the Shared Futures AI Forum a few months ago in Washington, DC, and was an instant fan. She shared a version of this poem at that gathering, but this TED version gives me chills. So. Many. Bars. Like “the dystopia we fear is the today we make.” Watch here and find her on Instagram or LinkedIn.
Until next time,
Let’s keep centering people, listening to each other, and insisting that life with machines always serves life with humans first. Thanks.
—Baratunde