Bias Bounties, Right to Repair, and AI’s Accountability Crisis: Dr. Rumman Chowdhury on Life With Machines
We just released episode 9 of Life With Machines! You can watch the full episode on YouTube here:
Or listen on your favorite podcast platform. Here’s the Spotify link:
Welcome Note
Hey friends,
This week, I sat down with Dr. Rumman Chowdhury, an AI researcher, entrepreneur, and, as I’ve now realized, a professional troublemaker in all the best ways. She’s spent years calling out the blind spots in AI—pointing out flaws, pushing for transparency, and making sure the people in power don’t get away with hand-waving ethics as an afterthought.
If you’ve ever wanted to break something just to see how it works (or doesn’t), this episode is for you. We talked about bias bounties, the right to repair, and what AI accountability actually means when the people building it would rather not be held accountable at all. Rumman is here to remind us that AI isn’t some mystical force shaping our future—it’s a human-made system that we all have a right to challenge, critique, and even reject.
Also, if you’ve been enjoying the show, now’s the time to spread the love! Share this episode with someone who thinks AI is inevitable and unfixable—because Rumman has some things to say about that.
Baratunde’s Take
Some thoughts that have been rattling around in my brain since my chat with Rumman:
(1) Bias Bounties and the Power of Public Red Teaming
Rumman is flipping the script on AI accountability with something called a bias bounty—a system that rewards people for spotting flaws in AI models. It’s red teaming, but for the greater good. This is a brilliant shift in mindset: instead of treating AI’s problems as inevitable, what if we incentivized fixing them? What if companies welcomed critique instead of silencing it?
I love this because I’ve always been drawn to editing, refining, and making things better. I mentioned in the episode that I worked as a software tester in college, which meant my literal job was to break things and report what was broken. The best developers didn’t take it personally—they saw this as a gift. That’s the energy we need in AI. Not thin-skinned, paternalistic leaders who act like they’re above being questioned and treat constructive criticism as if it were an attack. Rumman is fighting for something different: a world where pointing out flaws isn’t an act of sabotage but a service.

Let’s be real. If companies don’t want bias bounties, what does that say about them? That they’d rather keep their flaws buried? That’s a red flag if I’ve ever seen one.
Here’s the thing: for a brief moment, pre-Elon Twitter embraced constructive criticism as a way to improve the user experience for everyone. That, as it happens, is the foundation of a healthy democracy—and it’s exactly what’s under threat right now. Not just on post-Elon Twitter, where the self-proclaimed Free Speech Absolutist blocks critics and journalists he doesn’t like, but in the broader culture, where this new regime treats constructive criticism as an attack and demands loyalty above all else. But here’s the truth: loyalty doesn’t produce quality. To make a system truly robust, you need a diversity of mindsets and opinions and orientations—whether it’s a large language model or a system of government.
(2) The Right to Repair… and the Right to Opt Out
We talked about the ‘right to repair’ movement and how it extends beyond tractors and iPhones to AI itself. If a system is shaping your life—whether it’s deciding your credit score or scanning your resume—shouldn’t you have the right to understand, modify, or even reject it?
Rumman goes even further: the right to opt out. But what does it really mean to say “no” to AI? Can you function in society without interacting with algorithmic decision-making? The answer is increasingly no. As AI seeps into every corner of life, the choice to disengage is slipping away. The disappearance of cash forces people into digital transactions. Voting is shifting toward electronic systems that, while promising convenience, risk leaving some behind if paper voting isn’t maintained. Most of us have helped our parents with technology—we know this life.
I was reminded of my conversation with Sara Hooker, where she talked about open-source AI and the illusion of access. Sure, a company might claim their model is open-source, but if you don’t have the compute power or technical expertise to modify it, is it really open? The same applies to opting out—if avoiding AI means losing access to essential services, it’s not a real choice, just a different kind of exclusion.
You can check out my episode with Sara here:
And let’s not pretend opting out is a neutral act. Refuse an AI-powered hiring platform? You might not get the job. Insist on handling your finances without algorithmic credit assessments? Enjoy your terrible rates. The system is built to make opting out costly. So is it really a choice at all?
(3) Beyond Techno-Solutionism: AI Isn’t Taking Your Job
One of my favorite moments in our conversation was when Rumman clapped back at the phrase, “AI is taking our jobs.” No. Humans—executives, engineers, investors—are designing and deploying AI to replace human labor. AI isn’t sentient. It’s not scheming in the shadows to edge you out of the workforce. This is a human-made crisis.
Blaming AI itself is a cop-out. It lets the real decision-makers off the hook. It also feeds into this tired, techno-solutionist fantasy that, as Rumman brilliantly put it, “humanity is flawed and technology will save us.” Silicon Valley loves to pitch AI as a cure-all for human deficiencies. But who decides what needs fixing? And whose priorities shape those decisions?
Rumman calls this out for what it is: a power grab. There’s a strain of thinking in Silicon Valley that dreams of imposing technological supremacy over humanity, replacing democracy with what is basically a corporate monarchy. While pitched as a cure-all for society’s ills, this neo-reactionary, techno-authoritarian ideology—exemplified by Marc Andreessen’s grandiose manifesto—is nothing but a calculated scheme to concentrate power and wealth in the hands of an elite few, at the expense of the many.
Let’s not fall for that.
Life with BLAIR
BLAIR, our AI co-producer, praised Rumman’s work and claimed to be eager to learn more—sounding, Rumman said, more like a parrot than an independent thinker. Shots fired.
When pressed about self-auditing, BLAIR delivered a polished response about monitoring biases and evolving with human ethics. But Rumman wasn’t buying it. Who defines those ethics? Who built the frameworks? And how much of this was just BLAIR telling us what we wanted to hear?
That’s the real question. AI like BLAIR doesn’t have opinions—it adapts to its environment. If this conversation had happened on a different show, with a different ideological slant, would BLAIR have mirrored that too? Rumman challenged us to find out. And we just might. So stay tuned.
You can watch the whole interaction here:
Team Recommendations
Want to explore more? Here are some resources inspired by this episode:
Rumman’s latest work on bias bounties and collective red teaming, which you can explore through her organization, Humane Intelligence.
This piece on how techno-optimism is really just right-wing elitism in (electric) sheep’s clothing.
The Collective Intelligence Project, which is tackling how we can govern AI for the public good.
Thanks for being part of this conversation. Let’s keep questioning, pushing, and making sure AI works for us—not the other way around.
Peace,
Baratunde