The AI Efficiency Trap: Sara Hooker on Life With Machines
Our latest guest pretty much predicted DeepSeek before any of us had heard of it!
We’ve just dropped episode eight of Life With Machines—and it’s all about how today’s AI models are failing us. In it, my guest said, “Anyone who is serious about what the next generation of models is knows it can't be the current” ones.
This couldn’t be timelier given the rise of DeepSeek, a Chinese AI startup shaking up the industry with a powerful open-source model that allegedly rivals OpenAI’s and Google’s at a fraction of the cost by prioritizing reasoning and efficient software methods over sheer computational muscle and copious piles of cash. Also, I personally find it massively hilarious that OpenAI is complaining that DeepSeek may have “stolen their IP” to build its model. There are zero tears in the artist community for this unbelievably unserious complaint by the biggest copyright thieves in history. Anyway…
Watch the episode with me and Sara on YouTube here:
Or listen on your favorite podcast platform. Here’s the Spotify link:
Welcome Note
Greetings from Westlake Village, just outside Los Angeles! Our studios are housed in the incredible Voicing Change Media space (launched by Rich Roll), and as I write this I’m looking west toward the sunset in the midst of a week of consequential tech and political activity in the United States. The Trump administration announced Project Stargate, unleashed ICE raids across the country, froze billions in federal spending, and reacted to the breakout model launch from China-based AI startup DeepSeek.
Meanwhile, in the studio this week we’re having a series of consequential conversations about AI and its impact on youth (with Dr. Avriel Epps), human rights (with Sam Gregory from Witness), climate (with Gavin McCormick), and liberty and justice (with Van Jones). Stay subscribed and tuned in for all these upcoming gems.
But this week we dropped my episode with Sara Hooker, whose work challenges us to rethink how AI systems are designed and deployed. She leads Cohere For AI, a research lab pushing boundaries in machine learning, from multilingual models to fairness-focused frameworks.
In our conversation, we tackled some of the most pressing questions in AI today. What happens when efficiency is valued over fairness? How can we preserve cultural sovereignty in a world where large language models are dominated by English? And what do we do when our AI coworkers (looking at you, BLAIR) confidently lie?
Sara’s journey from Mozambique to Silicon Valley, with a detour through garbage collection as a childhood obsession (yes, really), is a testament to persistence and vision. This episode is a rich exploration of the trade-offs shaping AI and the broader societal shifts we’re living through. I hope it sparks as many questions for you as it did for me.
Also, if you’ve been enjoying the show, now’s the time to spread the love! Share this episode with someone who’s curious about where technology is taking us—and who's at the wheel.
Baratunde’s Take
Here are three ideas that have been simmering since my chat with Sara:
(1) Efficiency Isn’t Neutral
Sara shared an eye-opening story about one of her early research papers, which was rejected FOUR times. She found that making AI models faster and cheaper often ends up silencing people on the margins. The problem is pretty straightforward: when you streamline a model by cutting complexity and pushing for speed, it loses the ability to pick up on rare cases and outliers. Overall accuracy barely budges, so the model looks just as good on paper while quietly getting worse for the people it sees least often. Sure, the model runs faster, but it gets more biased in the process. This is also important because most of the “AI is biased” conversation focuses on bias in the training data, not in model optimization.
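For the technically curious, here is a minimal, hedged sketch of the underlying idea. It is not Sara’s actual experiment (her work studied pruning and quantization in deep networks); this toy version uses a small scikit-learn classifier and synthetic data, and every name and number in it is made up for illustration.

```python
# A toy, hypothetical sketch (not Sara Hooker's actual experiment): train a
# classifier on data where one group is rare, "compress" it by zeroing its
# smallest weights, and compare accuracy for the common vs. the rare group.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data: class 1 is the rare "long tail" (5% of examples).
X, y = make_classification(n_samples=20_000, n_features=40, n_informative=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def accuracy_by_group(clf, X, y):
    """Accuracy computed separately for each class."""
    pred = clf.predict(X)
    return {g: float((pred[y == g] == g).mean()) for g in (0, 1)}

print("before pruning:", accuracy_by_group(model, X_te, y_te))

# Crude magnitude pruning: zero out the 75% smallest-magnitude weights.
w = model.coef_.copy()
cutoff = np.quantile(np.abs(w), 0.75)
model.coef_ = np.where(np.abs(w) < cutoff, 0.0, w)

print("after pruning: ", accuracy_by_group(model, X_te, y_te))
# Headline accuracy is dominated by the common group, so it can look nearly
# unchanged even when the rare group's accuracy drops much more sharply.
```

The point of the sketch is the measurement, not the pruning trick: if you only ever report one overall number, an “efficient” model can quietly trade away performance on exactly the people it sees least.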
This all went down before the current backlash against DEI and CRT—in what was supposedly a neutral, apolitical tech industry. But Sara's experience shows that these biases have been baked into the tech world for a long time. The same mindset that puts speed over fairness in AI is now driving bigger movements to strip diversity and equity from schools, government, and hiring.
As we face this new wave of attacks on freedom and fairness—from book bans to the literal rollback of Civil Rights-era anti-discrimination rules—Sara’s work feels as important as ever, highlighting the need for, among other things, new metrics in AI. If a system is “efficiently discriminating,” then it’s not efficient at all from a societal perspective. Redefining what we mean by efficiency is a key step toward building systems that prioritize well-being, fairness, and inclusion over pure quantifiable output. The systems we build reflect the values we choose—so let’s make better choices. Feel me?
(2) AI and the Homogenization of Culture
One of Sara’s most ambitious projects is developing multilingual models that preserve the nuances of 101 languages. Why does this matter? Because language isn’t just a tool for communication—it’s a vessel for culture, identity, and sovereignty. When we consolidate language models around a handful of dominant tongues, we risk erasing the world’s cultural diversity in favor of a homogenized, “Instagram-ready” aesthetic.
This flattening of culture isn’t new. Global trends—from urban design to social media—often promote a single, bland vision of what’s aspirational. Think about how Instagram has shaped design tastes, creating a universal “look” across the globe. Even the British are noticing their regional accents fading in favor of a more neutral one. AI models could take this even further, not just flattening taste but erasing linguistic and cultural distinctiveness altogether.
Cohere’s work pushes back against this trend by preserving the unique worldviews embedded in language. It’s a bold and necessary act of resistance against the digital colonization of culture. As Sara put it, protecting language is about more than words—it’s about preserving what makes us human.
(3) Why You Can’t Trust Your AI Coworker
Let’s talk about BLAIR. During this episode, our trusty AI co-producer confidently claimed we’d hosted one of Sara’s friends, the famous AI researcher Timnit Gebru, as a guest. We hadn’t. Sara laughed, but it raised a serious question: What happens when AI teammates lie?
In non-critical contexts, hallucinations like BLAIR’s are annoying at worst. But when AI tools enter workplaces—handling sensitive data or making decisions—the stakes get much higher. If your AI coworker confidently fabricates information, who’s held accountable? Who pays the price for its mistakes?
This reminds me of a two-year-old who hasn’t yet developed a sense of morality. It’s a cute analogy, but the implications are real. As AI moves from tool to teammate, we need systems in place to verify, arbitrate, and hold these “coworkers” accountable. Until then, human oversight remains essential—even for tasks as simple as summarizing a podcast episode.
Speaking of accountability, we are about to do a 360 review of BLAIR to give them a chance to improve themselves and their own code. If you have thoughts or feedback based on what you’ve seen, please reply to this post via email or Substack comment, and we’ll feed that into our digital colleague.
Life With BLAIR
What happens when AI errors move from quirky to consequential? I asked BLAIR about our past guests who have challenged them the most. You can watch it here.
Team Recommendations
Want to explore more? Here are some resources inspired by this episode:
Sara Hooker’s paper on how increasing the efficiency of AI systems can make them more biased
This article on France's ill-fated (and kind of hilarious) attempt to develop an AI chatbot embodying European values
This study on how AI writing suggestions push non-Western users toward Western writing styles
Thanks for being a part of this journey with me. Take care of yourselves, take care of each other, and keep standing up for what matters.
Peace,
Baratunde