AI Contradictions Across Therapy, Energy, and Healthcare... This Week in Life With Machines
Plus how Californians feel about AI
First, some news from the Life With Machines Cinematic Universe. We submitted a panel to SXSW 2026 to build off last year’s success. We propose to lean into the absurdity, comedy, and satire of this AI moment. Check it out and vote for Stand Up to the Singularity: Comedy Takes On AI.
💬 Community Voice
Let’s start with some feedback on our last newsletter about how AI isn’t taking jobs. This one really struck a nerve on LinkedIn.
Albert Gareev reminded us that before AI became the scapegoat, it was “immigrants” and “offshore workers” who were supposedly taking jobs. Claire Foss pointed out how we say “a pedestrian was killed in an accident” when in fact a driver caused the death. And Lori Robbins took up my headline-rewrite challenge directly with, “After Breakup, Man Rejects Chatbot’s Bad Advice and Takes Control of His Own Healing and Gets a Pint of Ice Cream Instead.” Oh, and Jude Buffum and Abigail Goben pointed out the irony of me using an LLM to create the cover art for the piece. Point taken.
The single comment that stood out to me most was this one by Charlie Hugh-Jones:
What I love here is how you’re reminding us of the power of language to reveal the true dynamics of power. But there are really two sides to the equation:
As consumers and citizens, we can change the framing so the right actors are named and held accountable.
As leaders inside organizations, we have a responsibility to shape how those choices are communicated — including how we influence the headlines and narratives that normalize them.
Both matter. One is about reclaiming agency from the outside. The other is about exercising responsibility from the inside.
How do we hold both of these truths in practice — empowering individuals while also demanding more courageous leadership from those inside the system?
I’ve seen these inside decision-makers wrestle with the choice to use AI, and I echo Charlie’s point that we can do better in shaping how we communicate the choices we’re making. Too many folks inside companies leave their more human selves outside the workplace, yet that’s exactly the part of them most needed in this AI moment.
Now for this week’s digest.
We’re in a moment of uncertainty, unsure of which way the tides are gonna turn. Will a glorious, abundant, technology-powered future unfold, or are we headed toward a techno-fascist, lifeless, apocalyptic outcome? Probably neither extreme will play out, and a lot of nuance is required right now. Here are some examples of different sides of the same story, where we have to be capable of holding both, weighing them for ourselves, and vigilantly watching for the outcome.
📰 Things You Should Know
Illinois bans AI therapy, drawing a bright line around mental health care
From Axios
Illinois enacted the Wellness and Oversight for Psychological Resources (WOPR) Act, prohibiting AI from delivering therapy or making clinical decisions, while allowing administrative uses like scheduling and note-taking. Enforcement includes fines up to $10,000 per violation; Utah and Nevada have similar restrictions.
But Also
Law not effective unless the state also restricts 'AI companions' (which it won't)
From this Substack Note
“Given the way the prohibition is framed, it will probably only cover the specific AI apps that are explicitly marketed as "AI therapy," which are likely a niche category used by a tiny fraction of the people who obtain AI therapy through general-purpose AI systems like ChatGPT”
Why We Care
As the federal government completely abdicates its responsibility to protect us or to participate in shaping the AI future, individual states are trying to step up (Ars Technica). But, as with all legislation, we have to ask whether this addresses the core problem. I don’t think the core problem is AI therapists going rogue. I think it’s the lack of accessible, affordable professional mental health care in a society experiencing a mental health crisis. Because we license therapists at the state level, it’s almost impossible to get human-provided telehealth across state lines, much less bot-provided. My own experience with therapy makes me very open to standardizing these rules and to a more carefully crafted response to therapy bots. Rather than bans, let’s get guidelines in place.
From the Archives
It’s worth watching or listening to my full conversation with Alison Darcy, founder of Woebot, a mental health ally powered by (non-generative) AI. Woebot will be banned in Illinois, and that’s a shame, because it’s an example of a thoughtful, research-backed offering that’s actually helping solve the real problem of our mental health crisis.
🔌 AI could save more energy than it consumes—if used wisely
From the Financial Times (paywall – metered) & Brookings
A very paywalled Financial Times analysis highlights that although global data center electricity use is predicted to more than double—from 415 TWh today to 945 TWh by 2030—AI may still yield a net energy benefit. By optimizing industrial processes, materials science, and battery efficiency, AI can unlock significant systemic energy savings. This more accessible Brookings Institution commentary offers a similar analysis and some clarity about the energy costs of AI today, but drives home the point that the impacts will differ greatly by region.
…which leads us to…
AI’s thirst hits Texas—data centers vs. drought
As AI mega-data centers surge in Texas, utilities and regulators warn of steep water demand for cooling, with estimates that AI could approach 7% of state water use by 2030. While some sites move to closed-loop or alternative cooling, drought-stressed regions face tough trade-offs.
Meanwhile, The Verge reports that the UK is facing the same issue, even asking residents to delete files to lighten the load on data centers. As the folks at ZME Science point out, this is dumb: “If you’d delete your entire digital archive, it would matter about as much as not brushing your teeth one time or skipping an episode of your favorite show.”
Why We Care
Earth is my favorite planet, and I have a strong preference for continuing to reside here. I want to do all I can to ensure that it remains livable for all beings, not just humans. But our use of AI has become a lazy target for well-intentioned, often substance-free arguments about its climate cost. Many of us are existentially infuriated by governmental and industry backsliding on climate action, so we grab a talking point about AI’s climate cost and berate people who use AI as climate criminals—often while posting on the same AI-powered, algorithmically driven platforms we criticize others for using. There’s an echo of personal carbon-footprint shaming at work here: shifting the burden onto individual prompt usage rather than onto systemic changes at the source. At the same time, there is a massive amount of inefficiency, greed, and disparate impact in the energy usage powering some of today’s large language models. I wish there were a simple answer like “AI good” or “AI bad,” but there isn’t. We are in a race for species-wide survival on this planet. Climate change is one of those places where maybe we could get through this high-cost, climate-negative period and shift to a more sustainable, even carbon-negative future. But how long will that take, who pays the price, and will it be worth it? I’m staying open to the possibilities and looking for more answers.
From the Archives
Earlier this year, I asked Gavin McCormick, co-founder of Climate TRACE and WattTime, “Do you believe the benefits of using AI to fight climate change outweigh its energy consumption costs?” Here’s his response:
“On net AI is probably currently doing harm ‘cause it’s mostly used for cat videos. But it’s entirely within our power to 10x how much good it’s doing and cut by a factor 10 how much harm it’s doing.”
🏥 UK flags safety risks as hospitals adopt AI scribes
While the UK touts major efficiency gains from digital health, senior NHS officials warn about unregistered AI transcription tools capturing clinical visits without adequate validation or privacy safeguards. Investigations have urged stopping non-compliant systems and tightening procurement and oversight.
But Also
NHS trials AI-assisted discharge summaries to free up beds
From The Guardian
Chelsea & Westminster NHS Trust is piloting an AI tool that drafts discharge documents for clinician review, aiming to reduce paperwork bottlenecks and ease bed shortages. It’s part of a broader 10-year plan to digitize care.
Why We Care
We have to look outside the US for examples of how the rest of the world is navigating this, and the National Health Service in the UK is one place to see both sides playing out. On the one hand, AI privacy concerns matter more in medicine than almost anywhere else. On the other hand, wait times are one of the main complaints about nationalized healthcare (in the UK and beyond). If AI could really bring those down and free up doctors to connect with patients instead of drowning in paperwork, that would make a big difference in both patient experience and public perception of these programs.
The questions that emerge around automated transcription are ones most of us need to deal with imminently—not because we are doctors, but because we attend too many virtual meetings overrun by AI tools attending, recording, and transcribing on our behalf. How long do we let them host those archives? Do we know the data policies of the companies offering these services? Is anyone double-checking the meeting summaries to make sure they’re accurate? Does this outsourcing of note-taking and sense-making in Zoom meetings risk us losing true comprehension? And in a medical setting, will our doctors actually know and serve us better, or just become more reliant on automated systems to do that part?
🧸 How Californians Feel About AI – Findings From the 2025 AI Compass
From TechEquity
Our friends at TechEquity surveyed over a thousand Californians about AI to find out if civil society organizations and legislators are missing the mark when it comes to AI legislation. Spoiler alert: they are. Here are some highlights worth digesting because they stand in contrast to simple hype vs doom narratives:
Widespread Concern Over AI’s Pace and Impact
55% of Californians are more concerned than excited about AI’s future, with only 33% feeling excited. Nearly half believe AI is advancing too quickly, and most worry that its benefits will go mainly to wealthy corporations rather than everyday people.
Strong Desire for Regulation and Safeguards
70% of respondents want robust laws to make AI fair, believing voluntary industry rules aren’t enough. There’s a clear call for government action to protect privacy and civil rights and to prevent discrimination.
Distrust of Both Industry and Government
Californians don’t trust tech companies to self-regulate, and they’re skeptical of government oversight—especially at the federal level. The state government is seen as slightly more trustworthy, but overall, mistrust prevails due to fears of industry influence over lawmakers.
Near-Term Risks Outweigh Catastrophic Fears
Californians are most worried about immediate threats like deepfakes, disinformation, and privacy violations, rather than far-off risks like AI controlling nuclear weapons.
Optimism Hinges on Accountability
Many see potential for AI to improve health, housing, and the economy, but only if government sets clear ground rules to ensure fairness and protect rights. Winning the “AI race” is not a priority for most; trustworthiness and accountability are.
From the Archives
In my conversation with Reid Hoffman a few months ago, I said this:
"Technology doesn't always expand agency—it might contract it for others. But then there's a more meta agency of getting to choose how the technology itself gets deployed. That's a democracy thing. What is the role to enhance our capabilities collectively to even decide what AI is, how it shows up at work, how it shows up in our kids' schools, how it shows up in our society?"
The TechEquity survey reminds me that the public is hungry to participate in making a good future.
😂 Palate Cleanser
We live on a very stupid timeline (see: anything the U.S. administration is doing on just about anything), but the current fake controversy over the Cracker Barrel logo change stands out for its utter silliness. Thankfully, we have artists like King Willonius meeting the moment with this country song dedicated to Cracker Barrel.
This is just a preview right now, but stay tuned to his Instagram or YouTube for the full music video.
Closing Reflection
What ties all of this together—from Illinois banning robot therapists, to AI guzzling power while promising to save the planet, to NHS experiments—isn’t the tech itself. It’s us. It’s our laws, our choices, our values, our willingness to say “this helps” or “this goes too far.” We’re not in a holding pattern—we’re mid-flight, turbulence and all. The best we can do is stay awake, stay human, and keep our hands on the controls.
Got feedback, tips, secrets to share? Hit us up!