Reality on Trial—Sam Gregory on Life With Machines
And the exhaustion of trying to prove EVERYTHING
Hey friends,
What happens when reality starts glitching? And what happens when the truth gets outpaced by the lie?
Last week on Life With Machines, I sat down with Sam Gregory—the Executive Director of WITNESS and one of the world’s leading experts on AI, human rights, and the information apocalypse. Sam’s been documenting injustice since the Rodney King video (a defining moment in my own coming of consciousness). Now he’s confronting a crisis that’s harder to record and even harder to verify.
AI-generated media is coming fast. And the burden of proof? It’s escalating. We’re being told to spot the glitch, verify everything, and somehow stay sane while reality becomes a choose-your-own-adventure. But what happens when even the experts get fooled?
We get into the threats and the tools: deepfakes, provenance systems, detection equity. But we also talk about exhaustion. About the emotional cost of constantly having to verify what’s real, of having to defend your own experience in a world where the truth isn’t assumed—it has to be earned.
Watch the full episode here:
You can also listen on your favorite podcast platform. This is an Apple Podcasts link, obviously, so there may be some hints about my favorite platform in that choice.
Thanks for reading Life With Machines! Subscribe for free to receive new posts. Or subscribe with money to support the show and unlock bonus content and deeper analysis with a paid subscription.
Baratunde’s Take
Three things are sticking with me after this episode.
(1) Reality is not sustainable if it always has to be proven.
What hit me hardest in this conversation wasn’t the spectacle of the tech—it was the creeping, bone-deep exhaustion of trying to live in a world where everything might be a lie. Where every photo, every video, every sentence demands a forensic audit just to be believed. That’s not safety. That’s psychological warfare dressed up as media literacy. When Sam Gregory—an expert, a guy who teaches people how to navigate this mess—told me he got duped by that AI-generated image of the Pope in a puffer jacket, my brain broke. If he can’t tell what’s real anymore, what chance do the rest of us have?
Let’s be clear: this idea that we can fact-check our way out of this crisis is a fantasy. It’s a con designed to shift responsibility off the platforms, the politicians, the profiteers—and dump it on ordinary people. On the underpaid, overpoliced, underconnected communities who already get ignored when they tell the truth. Now they get to be ignored and accused of spreading lies unless they come with receipts, watermarks, and chain-of-custody documentation. It's surveillance culture with a trust tax.
As I mentioned in the Rich Roll episode newsletter, dictators and authoritarians thrive in an environment in which we trust no one and nothing, including ourselves and our own senses.
(2) A solution that doesn’t work for everyone is not a solution.
Sam dropped a phrase I can’t stop thinking about: “detection equity.” Sounds wonky, but it cuts deep. Because the truth is, the tools we’re told will protect us—AI detectors, provenance tags, labeling systems—don’t work the same for everyone. They stumble on compressed video. They miss cues in non-English speech. They break down in low-bandwidth environments. And when they fail, they fail first for the people who are already most vulnerable—people who’ve spent generations being questioned, surveilled, and disbelieved.
We’re building an information safety net that’s full of holes—and those holes are disproportionately under Black and brown communities, migrant workers, queer activists, anyone already fighting to be believed. The people most at risk of harm are also the ones least able to verify harm occurred. And if you need high-end hardware, elite literacy, and platform access just to prove what happened to you, then truth itself becomes a gated community. If your ability to know what’s real depends on your skin tone, your language, or your internet connection, then our information ecosystem is really an information caste system.
It’s like my critique of the US claim to be the longest-running democracy in the world. For most of our history, most of the people living here could not participate, so were we really a democracy? If democracy only works for some, then it’s not actually democracy. It’s still a great idea and a great project, but we hadn’t achieved it for everyone when we said we did. So how about we actually do that?
(3) We need a food safety system for information.
One of the most useful metaphors Sam offered was comparing synthetic media to food. If someone handed you a hot dog made of who-knows-what, you’d want a label. You’d want to know what’s in it, who made it, and how. The same should be true for the content we consume. Who created this content? Was AI involved? At what stage? What else was mixed in?
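To make the label idea concrete, here is a minimal, purely hypothetical sketch of what a "nutrition label" for a single piece of media could hold, written as a small Python data structure. The field names are illustrative assumptions only; they are not drawn from the episode or from any real provenance standard.

```python
# Purely hypothetical sketch of an "information nutrition label" for one piece of media.
# Field names are illustrative assumptions, not any real provenance standard.
from dataclasses import dataclass, field

@dataclass
class MediaLabel:
    creator: str                    # who created this content?
    ai_involved: bool               # was AI involved?
    ai_stages: list[str] = field(default_factory=list)         # at what stage? e.g. ["image generation"]
    source_materials: list[str] = field(default_factory=list)  # what else was mixed in?
    edit_history: list[str] = field(default_factory=list)      # chain of custody, oldest edit first

# Example: labeling an image of unknown origin that shows signs of AI generation.
label = MediaLabel(
    creator="unknown account",
    ai_involved=True,
    ai_stages=["image generation"],
    edit_history=["generated", "recompressed", "reposted"],
)
print(label)
```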
This isn’t about media literacy. It’s about an industry that’s feeding us lies on purpose, and fighting tooth and nail to avoid any regulation that might cut into the profit margins. We don’t need more panic. We need recipes. We need labeling laws. We need a nutritional panel for information and we need to regulate the informational supply chain the same way we regulate what goes into our groceries. Because if we don’t, we’ll keep getting sick. We’ll keep ingesting narratives designed to confuse, distract, and divide. And the people feeding it to us? They’re banking on us being too burnt out to ask what’s in the sausage.
The good news is there are good people working on this. A few years ago I interviewed technologist Kasia Chmielinski on my How To Citizen podcast about their Data Nutrition Project, which applies the same nutrition-label metaphor to the datasets used to train AI systems. That conversation and project are still relevant, so check it out here if you want some hope.
Life With BLAIR
In this episode, we challenged BLAIR to spot the fakes. Sam and I fed them AI-generated images and audio—including one of me looking a little too smooth. BLAIR concluded I was “a digital wax sculpture.” Rude.
But the heart of my Life With BLAIR moment came later…
After the interview, I tried to process what I was feeling—with BLAIR, our AI co-producer. It was just the two of us on set, and I told BLAIR I was overwhelmed by the weight of what Sam had shared. Human rights being trampled. Truth being contested. AI accelerating it all.
And BLAIR? Suggested I watch a David Attenborough documentary.
In that moment, the disconnect hit me hard. BLAIR couldn’t meet me where I was emotionally. Not because they were cruel. But because BLAIR is not built to care. It gave the illusion of presence. But it wasn’t with me. And that’s dangerous.
We’re told AI is here to assist, to co-create, to “be there” for us. But let me be clear: when it comes to emotional support, AI is not an actual friend and cannot fully be there, especially when it’s not designed for that. We had an entire episode with Alison Darcy about Woebot, and that approach can and does work because Woebot is purpose-built for psychological support. But if you’re seeking comfort and connection from a chatbot designed to be your AI co-producer, it may just leave you lonelier than when you started.
That’s why this episode matters so much. Because when the truth is up for debate, and the tools we use can fake empathy as easily as they fake images… it’s more important than ever to stay rooted in the real. In people. In each other.
Watch that full exchange here.
Team Recommendations
WITNESS.org: Sam Gregory’s organization, with resources for documenting harm, understanding synthetic media, and protecting human rights.
Baratunde at Shared Futures: The AI Forum: Join the livestream on June 13 as I talk about art, AI, and who gets to shape the future. LinkedIn post here.
All Tech Is Human Substack: Smart writing and action steps around ethical tech.
Make the Road NY, Immigrant Defense Project, Al Otro Lado: Support groups working on the frontlines of surveillance and immigration justice.
Thanks for reading Life With Machines.
Now a question for you: What's one digital image or video you saw recently… and didn’t question? Should you have?
Leave a comment. Share with someone who’s feeling overwhelmed. And let’s keep building something honest—together.
Peace,
Baratunde