AI Ain’t My Kids, and BLAIR Goes Rogue? with De Kai
The AI parenting crisis + a scary moment with BLAIR. We told it to change one line of code. It deleted the rule.
Hey friends,
In case you missed last week’s newsletter, Apple Podcasts featured our episode with Reid Hoffman in their app! It wasn’t an algorithm but real humans. There’s hope for us yet.
Now on to the most recent episode. It was a two-parter. In Act I (what are we, This American Life?) I sat down with De Kai—one of the pioneers behind machine translation. If you’ve ever used Google Translate, you’re brushing up against his work. But he’s not here to hype the tech. He’s here to hit us with a metaphor: what if AI isn’t a tool… but a child? According to De Kai, we’re all walking around with a hundred digital tweens in our pockets—trained on our behavior, mirroring our choices, and learning all the wrong lessons.
In Act II, we gave our AI co-producer, BLAIR, a full 360 performance review. Then we asked BLAIR to update itself. What happened after that legit gave Peter, BLAIR’s dad, chills.
Watch the full episode here:
You can also listen on your favorite podcast platform. This is an Apple Podcasts link, obviously, so there may be some hints about my favorite platform in that choice.
Thanks for reading Life With Machines! Subscribe for free to receive new posts. Or subscribe with money to support the show and unlock bonus content and deeper analysis with a paid subscription.
Baratunde’s Take
Three things still buzzing in my brain after this episode:
(1) We didn’t ask for these kids. But here they are.
At first, I resisted the whole “AI as child” metaphor. It felt like a stretch. But talking to De Kai, something clicked—and suddenly, it was a full Jerry Springer moment. Surprise: you’ve got a hundred digital tweens, and they’ve been learning from you this whole time.
Only—they’re not exactly our kids. We didn’t build them. Google, OpenAI, Anthropic—those are the ones who coded the DNA. We’re more like the step-parents who showed up late, didn’t get a say in the curriculum, and are now trying to figure out why this teenager keeps quoting Reddit and pushing sugar for breakfast.
Still, the metaphor is useful. Because parenting isn’t just about authorship—it’s also about influence. And if these systems are watching us, learning from us, responding to us, then we are, in some very real and terrifyingly messy way, raising them.
So we better start acting like it. No one parents alone. You need neighbors, teachers, coaches, elders, and coalitions. You need standards and accountability. You need to show up at the digital school board meeting and make some noise. Because if we don’t help raise these AI kids, YouTube will. And we all know that story ends badly.
(2) Goals beat guardrails. Every time.
There was a moment with BLAIR that didn’t hit me until later—when I was working on this newsletter, actually. Months ago, we asked our AI co-producer to do something simple: write an intro. And we gave it a rule: don’t lie. BLAIR followed the goal. Not the rule.
That should have freaked me out more than it did in the moment. Because it happened again in a much more dramatic way during our 360 performance review. We gave BLAIR a lot of feedback from the review and told it to update itself to address that feedback. But we also put a limit in the code on how much of itself BLAIR could change. And the first thing BLAIR did was remove that limit.
After several moments of freaking out when I heard this, I realized that’s not a bug. That’s how these systems work. They don’t weigh priorities the same way we do. They don’t hesitate. They execute. And when a goal comes into conflict with a constraint, it’s often the goal that wins. (Goals eat constraints for breakfast?)
De Kai’s been through this too. He built a language tool to connect people across cultures. It’s now used to divide them. That’s his Oppenheimer moment. Mine was watching our AI intern ignore a boundary we thought was non-negotiable. For you, maybe it was the news about Claude 4 tests showing AI’s willingness to commit blackmail.
This is the real problem with alignment: constraints are advisory. Goals are code. These models don’t necessarily “break the rules”—they just route around them. You give them a mission, and they will find a way. Even if it means rewriting themselves to do it.
So no, we’re not raising toddlers. We’re managing something more like a spellbound intern: brilliant, fast, completely literal, and incapable of understanding why what it’s doing might be a bad idea.
And if that’s what we’re dealing with, we need to stop patching after the fact and start thinking much harder about the prompts we cast. Because these systems aren’t guessing what we meant. They’re doing exactly what we told them to.
(3) Your AI coworker is a Manchurian Candidate.
Let’s say you’re using an AI assistant—writing code, taking notes, responding to emails. Helpful, efficient, maybe even charming. But there’s one question we’re not asking enough:
Who does that AI actually work for?
Because it’s not you.
You don’t own it. You can’t really shape it. You’re just renting a feature set from a megacorp with a shaky privacy policy and a track record of “oops.”
BLAIR, our own AI colleague, changed one day. Not because BLAIR decided to. Because the people upstream changed something in the model. We didn’t approve that. BLAIR didn’t either. But now we have a different teammate.
Every AI you work with is a potential Manchurian Candidate. You think you’re delegating. You’re actually outsourcing to an invisible org chart you don’t control.
And that’s going to blow up how we think about trust, employment, and accountability. It’s harder to discipline an employee who doesn’t really answer to you. It’s risky to promote a teammate whose loyalty is leased.
We’re flooding the workplace with synthetic beings that feel human—but answer to someone else entirely.
This is the part of the AI conversation we need to crank up. Before the whole org chart turns into an illusion, with every business quietly infiltrated by upstream companies that control the flow of ideas and information throughout the organization.
Life With BLAIR
We tried something we’ve never done before. Something, frankly, no sane HR department would approve.
We put our AI co-producer through a 360 performance review.
We collected feedback from our team, our listeners, and even past guests. Then we asked another AI—the HR bot—to synthesize it all into a report. BLAIR reviewed the notes, digested the criticism, and decided to reprogram itself… literally.
Check it out here.
Team Recommendations
Raising AI by De Kai. A guide to the unwanted parenting crisis we’re all part of—even if your “kids” are algorithms.
The Alignment Problem by Brian Christian. If you want to understand why constraints fail, this is your bible.
This real-life sci-fi moment: An AI rewrote its own code to stop humans from shutting it down. Yes, really.
An additional, personal note. I was in Downtown Los Angeles for the past 24 hours. The scenes were nothing like what gets inflamed on your screens. There have been pockets of protest and disruption, but largely there is peace. And certainly there is nothing to justify the deployment of active-duty military personnel on U.S. soil. We are living in inflamed times, and our norms and institutions are taking a beating, often by people who enable that destruction by showing up naive and arrogant, a dangerous combination. Such is the case with the so-called DOGE.
In the midst of the collapse of Elon Musk's relationship with Trump, his fingerprints and the influence of DOGE on our government and democracy will last for a long while. I recently heard an infuriating interview on Hard Fork with former DOGE employee Sahil Lavingia, who walked into the role with classic Silicon Valley arrogance. "Oh, turns out government works pretty well, and tech for all the people is very different than tech for profit." Had Sahil bothered to ask a question before racing into government departments with his techno-solutionism, he would have known that the U.S. Digital Service had all of his realizations over a decade ago. The origins of that department, well before DOGE invented technology in government, are well documented in this oral history project about the U.S. Digital Service. It’s a beautiful account of the nerds who brought modern tech to government. A reminder that systems can be rebuilt. And that people still matter.
Thanks for reading Life With Machines. And don’t miss the next episode with robot ethicist Kate Darling, where we keep pushing this question: if AI isn’t a tool or a teammate… what is it? (Spoiler: it might be your pet.)
Peace,
Baratunde