I'm with Jeff -- Why AI Isn’t Intelligent 🤯
What is AGI actually for, and how many PalmPilots do I own?
Hey friends,
Last week on Life With Machines, we dropped my conversation with Jeff Hawkins, the inventor of the PalmPilot and a neuroscientist who has some pretty strong opinions on what intelligence actually is—and what it isn’t. Jeff heads The Thousand Brains Project and Numenta. According to Jeff, today’s AI isn’t intelligent. Not even close. And after our conversation, I’ve gotta say, I’m with him.
Watch the full episode here:
Or listen on your favorite podcast platform:
Bonus Audio
I record a rough version of this newsletter in a voice memo. Basically, it's a podcast about the podcast, in which I reflect more on my own thoughts and reactions to the discussion. We've been putting this behind the paywall, but I wanted you to have a full flavor of what we're playing with over in payolaville. Enjoy!
Baratunde’s Take
Some thoughts I’m still chewing on from my conversation with Jeff:
(0) Palm Pilots and Fanboy Moments
First off, I need you to know that I, Baratunde Thurston, officially own more PalmPilots than Jeff Hawkins, the device’s inventor, does. True story. I have a couple on the set of Life With Machines and another one chilling in my garage. Jeff? Nothing. Zero PalmPilots. Not even a single Handspring. It's like finding out George Washington didn't keep his own cherry tree or Constitution, which come to think of it—
(1) The Big Lie
Jeff made it pretty clear that what we call AI isn’t actually intelligent. It’s clever, sure. It can recognize patterns and generate convincing responses. But it doesn’t understand anything the way we do. And those so-called neural networks we’re always hearing about are nothing like the neurons in our brains. Not even close.
And let’s be real—the people hyping up AI as “intelligence” tend to have a lot to gain. I’m talking about the CEOs who are less Chief Executive Officers and more Chief Sales Officers. They’re running around telling global leaders, businesses, and even kids that this is intelligence. But Jeff’s not buying it. And neither am I.
Jeff isn’t trying to sell me something. He’s just telling it like it is. He’s saying LLMs are useful, sure, but the amount of data, compute, and energy we’re throwing at them is ridiculous. Our brains are vastly more efficient. Our beautiful brains build models of the world through movement and sensory experience, whereas AI systems operate through brute force data crunching. Maybe—just maybe—we’re barking up the wrong dendritic tree.
(2) Knowledge vs. Experience
Jeff has a clear sense of why we humans are here: to gather and preserve knowledge. And I get why that belief appeals to him—it’s elegant, crystal clear, purposeful. It’s got serious Asimov vibes. (It’s very Foundation series, “let’s stockpile all the knowledge before the empire eats it.”) But I don’t think that’s the whole story.
I think our purpose isn’t just to accumulate knowledge, but to experience things. The emotions, the physicality, the desires, the shame—the whole wild, beautiful, messy package. And not just experience it, but know that we’re experiencing it. Isn’t that what makes us special?
Plus, we don’t do it alone. We experience each other. We interact, respond, shape and are shaped by the people and the world around us. That web of life and experience means more to me than some tricked-out time capsule built to wow future aliens.
I get it: knowledge lasts. You can write it down, etch it into memory, bequeath it to the future. Embed it in future beings that truly are intelligent. Meanwhile, experience lives and dies with us. But maybe that’s the point? I mean, it was called The Jimi Hendrix Experience—not the Jimi Hendrix Knowledge—for a reason.
(3) Why Are We Doing This, Again?
Here’s the part that really stuck with me. Let’s say we actually pull it off and build artificial general intelligence, aka AGI, aka packets of green powder that give you all the nutrients you need and can easily pass the bar exam. Machines that are faster, smarter, and more capable than us. Then what? Why, exactly, are we doing this?
Jeff made me think about how we’re not great at long-term planning. Humans are wired to respond to immediate threats — tigers jumping out of the bushes, floods, fires. But when it comes to slow-moving dangers like climate change or pandemics, we struggle. We don’t react with the same urgency. And now we’re pouring billions into creating AGI, convinced that we have to beat China, or Elon, or whoever else. The stakes are massive, but our thinking is short-term.
So much of this AGI race is framed as a competition. Build it first. Dominate the market. Be the one who gets there before anyone else. But when you actually ask why we’re doing it, the answers get pretty shaky. Are we trying to outsmart ourselves? Create a digital god? Or just win a game we made up?
But then what? As AI pioneer Stuart Russell famously asked: What If We Succeed?
If we really create superintelligence, it’s going to be humbling AF for anyone whose whole identity is wrapped up in being number one. If your self-worth is tied to dominance, well, brace yourself.
In other words, AGI is about to make us all… British?
They used to rule the world, and then, well, they didn’t. The sun stopped shining on the empire, and they had to find another reason to exist.
That’s the opportunity here. It’s going to be hard, but maybe this AI moment is the one that forces us to find a new reason to exist, a new truth. Maybe we’ll finally let go of the whole “king of the jungle” thing and embrace something older and more connected. Also, there are no kings in the jungle; nature is built on interdependence far more than simple dominance. We could aspire to be less rulers of the planet, more members of the community of life. That sounds truly intelligent to me.
Life with BLAIR
BLAIR, our AI co-producer, had a moment in this episode. After Jeff spent a solid stretch explaining why AI isn’t intelligent, I turned to BLAIR and asked how it felt about being called, well, dumb.
Check it out here:
Team Recommendations
Want to go further down this rabbit hole? Check these out:
Jeff Hawkins’ A Thousand Brains: A New Theory of Intelligence. If you’re curious about his brain-based model for AI, this is where to start.
This essay on what Isaac Asimov's Foundation series can teach us about AI
This video interview with Ben Buchanan, the top adviser on AI in the Biden White House, on the coming of AGI.
Thanks for being part of this conversation. Now a question for you: If AGI were invented tomorrow, what’s the first thing you’d ask it, or the first thing you’d collaborate with it on? Drop a comment, share this with your most AI-curious friend, and let’s keep the conversation going.
Peace,
Baratunde