The Machines Are Learning—But Are We? with Inflection's Sean White
DOGE, agentic humans, and emotional relationships with smart toasters??
We just released episode 10 of Life with Machines and it’s a banger! You can watch and comment on the full episode on YouTube here:
Or listen on your favorite podcast platform. Here’s the Spotify link:
Welcome Note
Hey human and machine allies,
There’s an audio version of these thoughts, which I recorded while literally walking past US federal buildings being emptied out by Elon Musk and DOGE. We’ll generally be reserving access to these recordings for our paid subscribers, but we thought we’d give you a taste here.
This week, I’m writing to you from D.C., walking through the heart of the city while reflecting on the so-called Department of Government Efficiency (DOGE) and this push to build tech that supposedly makes the government run better. Smells like BS to me. It’s not about efficiency—it’s about control. He who controls the server controls the world. I don’t think that’s a real quote, but give it time. What we’re watching isn’t just bureaucratic restructuring; it’s an attempt to consolidate power under the guise of optimization. So yeah, I’m keeping an eye out for the DOGE bags (can we get that trending?), and the ways these efficiency schemes really serve as power grabs.
Meanwhile, my conversation with Sean White of Inflection AI in episode 10 of Life with Machines got me thinking about a different kind of control—the kind we’re handing over to AI, often without realizing it. We talked about emotional intelligence in AI, agentic systems, and what it means when we start forming relationships—not just with chatbots, but with all the tech that surrounds us. AI isn’t just a tool anymore; it’s creeping into spaces where we expect human connection. And yet, we still struggle to make human systems emotionally intelligent. If AI can learn empathy, why can’t our institutions?
This episode challenged me to rethink not just how AI is designed, but how we shape the systems around it. Because if we’re not careful, we’ll build emotionally intelligent machines while running an emotionally bankrupt society. And that’s a future I’d rather not automate.
Baratunde’s Take
A few thoughts inspired by my conversation with Sean:
(1) Emotional Intelligence for AI—Helpful or Harmful?
Sean made a strong case for AI with emotional intelligence—not just because it sounds good in a pitch deck, but because people perform better when they feel better. And feeling better isn’t just about perks or culture—it’s about emotional regulation. Being calm, being mindful, being present. If AI is going to be embedded in our work environments, shouldn’t it help us function better, not just work faster?
But here’s the flip side: if we’re offloading emotional labor to AI, what happens to our emotional intelligence? If bots are scheduling meetings, writing follow-ups, managing client relationships—what’s left for us to do? We already know social media has eroded face-to-face social skills. We saw what happened to students who spent formative years in isolation. We’ve got evidence that too much digital mediation makes human connection harder, not easier.
So maybe Sean’s right—maybe the best version of AI in the workplace is one that helps us regulate, helps us engage, helps us work better. But we should also be asking: what’s the cost of letting AI handle emotions we should be handling ourselves?
Not only that, everything is political. Everything is filtered through who’s in power, and the folks in power right now don’t like these words. Social-emotional learning. Diversity. Equity. Inclusion. Critical. Belonging. Nice. I literally think they are opposed to people being nice. They’ve decided that fostering connection and empathy is a threat somehow.
So while companies like Inflection AI are building systems designed to bring more emotional intelligence into the workplace, the entire executive branch of the United States government is pushing the opposite agenda. They’re making it legal to discriminate again—dressing it up as “merit.” It’s a cold, calculated distortion, and it’s happening in real time. Nothing escapes the culture wars. And now these wars are being fought through technology. So what happens when an AI system is built to promote the very values they’re working to erase? How does it survive in this climate? That’s what I’ll be watching.
(2) Agentic AI vs. Agentic Humans
Agentic AI is the buzzword du jour in the AI world. It refers to AI systems that can act on their own, make decisions, and execute tasks without constant oversight. I’ve been watching demos, reading up on the hype, and thinking about what this actually means. And while all this energy is being poured into making AI more autonomous, I can’t help but wonder: where’s the push for human autonomy?
If we just swapped out "AI" for "humans" in these conversations, we might actually get somewhere. Agentic AI? Cool. How about agentic humanity? How about systems that make us stronger, more capable, more free? Instead, we’re automating away decision-making, turning workflows into black boxes, and watching as AI creeps further into roles that once required human skill, judgment, and creativity.
Yes, it’d be great if AI could handle more complex tasks. But it’d be even better if we could. If people had more agency over their work, their choices, their futures. How about we stop designing tech that makes individuals more dependent and instead build systems that make us more independent or even interdependent? Agentic AI is coming—but if we don’t fight for agentic humanity, we’ll wake up one day and realize we optimized ourselves out of the equation.
(3) Do I Want a Relationship with My Robot Vacuum?
Do I want an emotional relationship with my vacuum? With my smart speaker? With my fitness tracker? This circles back to the first point—the business value of AI having emotional intelligence. But I don’t know.
We’ve explored emotional connectivity on this show a lot—Thrive AI Health using behavioral nudges, Pi and its attempt at cultivating kindness, and in an upcoming conversation, Alison Darcy of Woebot, whose AI helps people manage anxiety and depression. Those are chatbots. Their entire function is conversation, interaction, and emotional support. But AI is moving far beyond that.
Our future Life with Machines isn’t just going to be about talking to AI. It’s going to be about every object we interact with. It’s already in our watches, our phones, our TVs, our speakers. And those of you with kids? You already know the kind of relationship young children have with Alexa.
So we need to rethink emotional intelligence in AI—not just in chatbots, but in the AI embedded in our everyday objects. What does it mean to have an emotional relationship with your car? With your mirror? With your weightlifting system? And of course with your future humanoid robot colleague? These things aren’t just tools anymore; they’re becoming interfaces, surface areas for AI interaction.
And in a way, maybe this isn’t even new. I’ve had emotional relationships with things my whole life—my car, my childhood toys, that one perfectly broken-in hoodie I refuse to throw away. A kid and their stuffed animal? That’s an emotional bond. But what happens when Tickle Me Elmo or Teddy Ruxpin or the American Girl doll doesn’t just speak but actually understands? When it remembers, when it adapts, when it responds to you like another person would? Is that what we really want? How can this capability benefit us and not just be leveraged to extract our time, attention, and financial resources?
Life with BLAIR
BLAIR had some thoughts this episode. Our AI producer decided to throw a curveball at Sean White, cutting past the usual AI shop talk and getting straight to the big question: Can AI actually inspire human creativity? Not just assist, not just replicate—but push us to make something we never would have otherwise?
Sean had receipts. He broke down how AI is already acting like a creative collaborator, giving people new ways to express themselves. He even described AI as a “calculator for the soul”—a tool that doesn’t replace human creativity but expands access to it.
You can watch the whole interaction here.
Team Recommendations
Want to explore more? Here are some resources inspired by this episode:
This study on the ethical issues and unintended consequences of deploying emotional AI technologies designed to sense, recognize, influence, and simulate human emotions.
This article on how allowing AI to become the default decision-maker can have unintended consequences on human decision-making processes, potentially diminishing human agency.
This piece on agentic AI systems capable of autonomous, goal-oriented decision-making.
Thanks for being part of this conversation. Let’s keep questioning, pushing, and making sure AI works for us—not the other way around.
Stay human,
Baratunde