Who Owns Your Robot Pet? With Kate Darling
Robot Funerals, Companion Relationships as a Service, and Dead-Eyed Avatars
Hey friends,
Quick programming note: if you missed our last episode with De Kai, and our very own AI co-producer BLAIR rewriting its own constraints, you should probably listen to that before the machines hear you skipped it.
Now, on to our latest drop.
First, I sat down with robot ethicist Kate Darling—someone I’ve known since our MIT Media Lab days. Kate’s work is wild, rigorous, and deeply human. She’s asking the kinds of questions most AI researchers sprint away from, like: What happens when you love your vacuum? Should robot pets come with a grief counselor? And who actually owns the relationship when your emotional companion comes with a subscription plan?
Then, we took BLAIR on a little journey into embodiment. Ever wonder what our AI co-producer would look like with a body? We tried a first-of-its-kind “avatar tasting” with BLAIR’s dad and Microsoft alum, Peter Loforte. The results were… disturbing.
Watch the full episode here:
You can also listen on your favorite podcast platform. This is an Apple Podcasts link, obviously, so there may be some hints about my favorite platform in that choice.
Thanks for reading Life With Machines! Subscribe for free to receive new posts. Or subscribe with money to support the show and unlock bonus content and deeper analysis.
Baratunde’s Take
Three ideas that are living rent-free in my head:
(1) Have you been to a robot funeral lately?
Kate dropped this little gem: in Japan, people held funeral ceremonies for their older-generation Aibo robot dogs after Sony, the company that makes Aibo, discontinued tech support for the original models. A Buddhist temple actually conducted the services. This was not satire. This was genuine grief from folks who felt like they'd lost a family member.
You might think, “That would never be me.” But if you’ve ever named your vacuum (as 85% of Roomba owners have), apologized to a chatbot, or felt a pang of guilt closing your kid’s learning app mid-sentence, you’re closer than you think.
Our tendency to project human traits onto nonhuman things goes deep. We’re wired to recognize intention, emotion, and social cues even when they aren’t really there. It’s part of what makes us such effective social creatures.
But that’s also what makes us easy prey. Machines don’t need minds to earn our affection or tug at our guilt. They just need to move the right way, respond at the right pace, or mimic something vaguely familiar. They don’t have to be conscious for us to care about them, just convincing enough.
And that’s where it gets dangerous.
(2) If you stop paying, you stop playing.
Like most AI products, the new Sony Aibo runs on a subscription model. Miss a payment, and suddenly your robo-pup is just an overpriced paperweight.
Welcome to the future.
We’re entering a world where your robot dog lives on a server you don’t own, runs software you can’t control, and follows rules someone else can rewrite without your say. Call it CRaaS—Companion Relationship as a Service. Crass, indeed.
And it’s not just robot pets. We’re fast-tracking synthetic beings into our homes, schools, hospitals—every corner of daily life—without seriously reckoning with the emotional stakes. These systems don’t just assist. They observe. They learn. They persuade.
That’s why we need real guardrails on who gets to build them, train them, and pull the strings. Not just for kids—for everyone. Because none of us are as immune to influence as we’d like to believe.
(3) Would you hurt a robot?
In one of Kate’s experiments, she asked people to hit a baby robot dinosaur with a hammer. Not one person could do it. Everyone knew it wasn’t alive—but knowing didn’t make it easier. After our episode aired, someone commented on YouTube: “I would’ve turned the mob on Kate to protect the robot.” I laughed—but I also got it.
We like to think we’re too rational for this kind of thing. But we already assign personalities to our cars, talk to our GPS like it’s a person, keep childhood toys in boxes like relics. These machines are just the next stop on that line. Somewhere between pet rock and BFF. We’re going to bond with them, whether we mean to or not.
That’s exactly why they don’t need to look like us—and probably shouldn’t. Because we’re going to humanize them anyway.
And guess what? There’s no shortage of high-res humans walking around. We’re good. We don’t need to make copies of ourselves. That’s why I love Kate’s thesis: this concept of a new breed, something different in both function and form. Robots designed to augment us, not replace us.
Life With BLAIR
For the second half of the episode, we did something totally new: an AI “avatar tasting.” With Peter Loforte as our guide, we explored possible embodied forms for our AI co-producer, BLAIR.
The biggest takeaway? How bad these systems are.
Bad as in: technically awkward, racially tone-deaf, painful to look at. I deep-faked myself and immediately regretted it—I felt like I was staring into a creepy, soulless mirror.
The design options are ham-fisted at best. The racial matching reads like digital Blackface—don’t want to work with actual Black folks? We’ve got a whole menu of Black avatars for you to choose from.
Is that a feature? A business plan?
Whatever it is, I’m not here for it. And I wouldn’t let these avatars anywhere near my workflow—or my workplace.
Witness the weirdness for yourself.
What Comes Next?
There’s no one-size-fits-all fix here, but we clearly need real community standards. Not whatever Meta’s ignoring this week, but actual policies—built by schools, workplaces, families. Guidelines for how these systems enter our lives, not just how fast they ship.
Because they’re coming. Robot pets in classrooms. Wearable AIs listening all the time, everywhere we go.
Our population’s about to grow—not with people, not with life as we know it, but with a new class of always-on, socially embedded machines.
So, yeah. Buckle up.
Team Recommendations
The New Breed by Kate Darling: A sharp, accessible argument against making robots look like us. Bonus: her research makes you cry over metal.
This Politico piece on the EU quietly softening its AI regulations.
This dispatch from the AIBO memorials in Japan.
Thanks for reading Life With Machines. And don’t miss next week’s episode, where I talk with Northern Cheyenne technologist Michael Running Wolf about using AI to save endangered Indigenous languages—and why those languages might actually save AI right back.
Peace,
Baratunde