Hi you,
I got a coveted (?) invite code to Sora 2 and decided to do something different with the newsletter: I recorded a short review and set of observations about the app. Spoiler alert, this is a net-negative offering to the world, though there are a few interesting bits.
I took the transcript of this video and fed it to ChatGPT to help put together an article version. That’s what you’ll read below.
With the launch of Sora 2, OpenAI hasn’t just attacked TikTok and Instagram.
They’ve attacked reality.
Stick around for six things you need to know about Sora 2—without wasting your time on Sora 2.
1. OpenAI reinvented the scroll—and that’s not nothing
We’re used to vertical swiping. Scroll through some short videos. Lose an hour. Hate yourself. Repeat. That’s what social media has become, and Sora 2 has that.
But here’s the twist: horizontal scroll.
That means you can swipe sideways to see remixes of the same original video. It creates a kind of branching narrative thread, letting people build on each other’s creativity.
Honestly? I’ll give them a point for that. It’s new. And that doesn’t happen often in social.
2. A new kind of media is being born—and it’s weird
We’re not just going to see bad sitcom knockoffs or AI-generated news anchors who look like your childhood nightmares. We’re going to see formats we haven’t seen before. Entirely new genres.
Some of it will be slop. Some of it will be hilarious.
People are uploading rough scripts, feeding them to Sora, and getting back visualized drafts. That’s wild.
Others are making improv games where the audience decides what happens next, or caption contests where people compete for the best interpretation of a scene. It’s like a game of telephone… on your telephone. I didn’t even plan that one. You’re welcome.
3. Satire just got a whole new arsenal
I’ve worked at The Onion and The Daily Show, and I’ve learned that so much of satire works because of its ability to mimic the real thing.
The Onion works because it looks like news. So does Fox News. (One’s funnier.)
But Sora 2 lets anyone make something that looks real—even when it’s not. I spent way too long putting Sam Altman in interrogation rooms and confession booths. Why? Because he’s putting us in compromising positions every day.
Sora is a deepfake machine with a sense of humor—if you know how to prompt it.
Here’s a fun one from an Onion alum.
4. This isn’t about AGI—it’s about attention
OpenAI says it exists to create safe, beneficial Artificial General Intelligence for all of humanity.
This ain’t that.
Sora 2 is a social media play. A cash-and-attention grab. They’ve entered the content sewer with the rest of Big Tech, clawing for eyeballs, hacking our brains, and flooding the internet with stuff nobody asked for.
Nobody marched in the streets demanding more fake videos. Nobody held a rally for “high-quality AI slop.” (BTW, check out your area’s No Kings rally this weekend.) But here we are. As CatGPT put it, why are we doing this??
While we beg for health care, sleep, or a little climate stability, OpenAI is giving us… a pixelated facsimile of life.
5. It disrespects the living and the dead
This isn’t just annoying—it’s dangerous. And disrespectful.
Zelda Williams, daughter of Robin Williams, said it best:
“Please stop sending me AI videos of my dad…. You’re not making art. You’re making disgusting, over-processed hot dogs out of human lives.”
She’s right.
Sora 2 lets you animate anyone, living or dead. No consent. No controls. No shame.
I’ve seen Martin Luther King Jr. giving speeches he never gave. JFK calling for the release of the Epstein files (ok, he probably did that). Tupac in situations he never lived to imagine.
This is digital sacrilege. And we keep letting it happen. I have a special ire for people who defame Tupac. Drake deepfaked his voice in the Kendrick beef, and Kendrick gave him a lyrical burial. We don’t even talk about Drake anymore.
Maybe Kendrick needs to do the same for OpenAI.
If you wade into Sora, some of the most popular videos are of popular characters and real people, living and dead. The vast majority have not consented. OpenAI has implemented what they are calling an “opt-out” or “granular control” system. I call it a “grand theft” system. They require rights holders to proactively flag individual characters or specific videos for removal instead of being able to block the use of all their intellectual property in one blanket action.
OpenAI is forcing the cost of policing its violations onto the people being violated. We’ve been through this before: with Napster and music streaming, then with video on YouTube. There’s just no good excuse to launch a service like this unless you don’t care about the harm you’re causing.
6. The tech is mid, and the ethics are worse
Let’s be real: these models don’t actually work that well yet. The first pass is intriguing and pretty mind-blowing. But if you want something truly usable, you’ll find yourself prompting 10 or even 50 more times to get what you want.
It’s not really generative AI; it’s re-generative AI.
There’s an illusion of control, of power, of creativity. But the output is mostly surface-level. Attention without depth.
And yet, despite all that… you’re still giving them your face and your voice. I made a cameo of myself with the app (that’s what OpenAI calls a recording of your face and voice). I did it to mess around, putting myself in an absurd situation: overseeing the construction of a skyscraper made of marshmallows. Then I freaked out about the terms of service and the privacy policy. I deleted the video I posted and removed the cameo scan from my account.
In general, OpenAI promises not to abuse our image and likeness, but there are some loopholes. Their policies allow them to retain our data essentially for as long as they see fit. The default settings also share your activity with them to improve their models. Meanwhile, it’s against their terms to use their output to train your own models.
I turned off that default training setting long ago and encourage you to do the same. Find it here. Then think carefully about whether you trust this company to keep its word about protecting what’s essentially our audio-visual DNA. For now, my answer is no, so I won’t be deepfaking myself in Sora for the foreseeable future.
Final thought: Maybe blockchain is good for one thing
We’re at a fork in the feed. This could be the future of media, or just another social junkyard where creativity goes to die.
I’m a real human. Maybe I should mint this post on the blockchain just to prove it. I would be completely unsurprised if that’s the next big “investment” from OpenAI. After destroying reality, they offer a product to verify reality.
Nice reality you got there. It’d be a shame if something were to happen to it.
What do you think? Find some true creativity and excitement with Sora 2? Or are you paralyzed by the assault on the fabric of space-time?