
Whether it’s Sam Altman surreptitiously stealing GPUs from a Target, trying to make a break for the door under the gaze of security cameras as he tucks a box containing a valuable computer chip under his arm, or Super Mario appearing in Star Wars, the rupture in reality brought about by OpenAI’s AI-generated video social network, Sora, is significant. Videos that previously would have been decried as deepfakes have gone viral on social media in the last two days, while also overshadowing the release of Meta’s competing product, Vibes.
Users, including some OpenAI employees on social media, have been revelling in their ability to create outlandish content involving real-life characters — a consequence of unusually lax rules set out by OpenAI. That’s despite the AI giant purporting to have some rules designed to prevent IP infringement.
Social networks, which were once designed to connect us with one another and have since been subsumed by AI slop, are now looking like they’re going the way of the dodo. In their place is a boomer Facebook user’s paradise: a steady scroll of the unreal and outlandish, and not a single human involved. That has experts worried about our ability to distinguish fact from fiction, and about how such feeds can tamper with our temperaments.
“It isn’t entirely surprising that businesses are effectively following the money as to what we’ve seen over the last 12 to 18 months, particularly in terms of AI generated video content,” says deepfake expert Henry Ajder. Some of the most viewed videos on platforms like YouTube Shorts, traditionally home to human-only content, are now AI-generated.
“The fact that these companies are recognizing those opportunities isn’t surprising to me,” he says. Those who are slightly online, not to mention the extremely online, are similarly unsurprised.
We’re not ready
Nevertheless, the impact of AI-filled feeds on our perception of content is significant, says Jessica Maddox, associate professor of media studies at the University of Georgia.
“The danger in sharing and enjoying AI images, even when people know they’re not real, is that people will now have to chase more fictional, manipulated media to get that feeling,” she says. And with the apps in question explicitly saying there are few, if any, guardrails around copyrighted content, and limited ones around the type of content that can be created, there are real risks of polluting our pools of content for years to come.
Some suggest that we’re ill-equipped to deal with the problem—in part because what we consider ‘real’ images haven’t been real for a while, thanks to the volume of pre-processing that takes place in the milliseconds between tapping the shutter on your smartphone and the image being saved in your camera roll.
A recent preprint study by Janis Keuper, a researcher at Offenburg University, and his wife, Margret Keuper, a researcher at the Max Planck Institute for Informatics and the University of Mannheim, suggests that the gap between the quality of images used to train deepfake detectors and the average smartphone snap is now so significant as to make any detection tools useless. Detection tools are trained on ground-truth images that resemble today’s smartphone photographs about as closely as shots from an early 20th-century camera do.
“It’s going to be really hard to filter out AI content,” says Janis Keuper. “It’s really hard in text. It’s really hard in images as the generators become better and better. And well, we’ve been looking at AI generated images for a while now,” he says.
The secret of slop
However, what is different with the advent of Vibes and Sora is that they explicitly say they want AI content first—and usually foremost. “Meta Vibes is perfectly named for the problem of AI slop,” says Maddox.
In a world where the content is whatever we imagine, it doesn’t matter whether an image represents anything close to reality. It’s akin to ‘alternative facts’: no matter how outlandish the video, it feels legitimate. All it has to do is reinforce our viewpoint—the visual equivalent of the post-truth era brought about by Donald Trump.
That bleeds through to how people often react to AI-generated content, says Maddox.
“People will say, ‘But I agree with what it’s trying to say, whether it’s real or not,’” she says. And that’s proof positive of what’s going on. “AI is vibes only,” she says. “Unfortunately, that means something like Meta Vibes is likely to be incredibly successful with Meta’s audience that seems to love AI imagery. It won’t matter, because with AI, feelings reign supreme.”
Where the authentic and the synthetic collapse
And that’s what worries the experts the most. The apps are being foisted on users, but may well succeed—in part because we’ve already shown that synthetic content has the power to move us.
“Reality is one now where authentic and synthetic collapses, right?” says Ajder. “People have authentic experiences—that is, experiences that move them, change their beliefs, change their relationships, change their opinions—with AI. They’re influenced with virtual companions, via chatbots, and with AI-generated disinformation content around war zones and conflicts.”
Ajder doesn’t believe Meta and OpenAI are thinking about the emotional response to AI. “The idea is less passionate,” he says. “It’s more market driven. These kinds of videos are cheap to make. They’re quick to make. We can scale them easily, and they get engagement, they get views, they get clicks.” (Never mind the cost for the environment.)
But as well as getting rid of the “social” from social media, the second- and third-order ramifications of driving an AI-powered attention economy could have more significant consequences than keeping us scrolling.