
As OpenAI rolls out its new social media app Sora—which allows users to prompt the company’s Sora 2 model to produce fantastical videos of almost anything—there are obvious concerns that the platform could be used to generate deepfakes and otherwise misleading content.
To combat this problem, the company says it adjusted its systems to prevent users from generating the likenesses of other people, including political leaders like Donald Trump, Kamala Harris, or Emmanuel Macron. If you try to generate a video of a public figure, the Sora app—which is still invitation-only—will generally tell you your prompt violates the platform’s guidelines.
But OpenAI is also using a more analog method of preventing celebrity impersonation: blocking users from even signing up for the platform with certain usernames.
The company appears to have blocked users from signing up with usernames that reference major political figures and other celebrities, including Trump, Katy Perry, Benjamin Netanyahu, and Kim Jong Un. While some account names are flagged as already taken, these usernames trigger a specific notice: “This username is not allowed.” The company did not directly answer Fast Company’s questions about how it determines which public figure usernames should be blocked.
OpenAI is already selling its ChatGPT technology to U.S. federal agencies, but the company wouldn’t say much about whether it might eventually welcome government officials, or the government more broadly, to the Sora app.
“We don’t have anything else to share right now on future plans,” an OpenAI spokesperson told Fast Company. “Public figures can’t be generated in Sora unless they’ve uploaded a cameo themselves and given consent for it to be used. Whether you’re a public figure or not, Cameo puts you in control of your likeness, with options to decide who can use it and how.” (Cameo is the Sora feature that allows you to upload recordings of yourself to the app and create a highly realistic avatar, and then use that likeness in a variety of AI-generated scenarios.)
OpenAI is relatively new to the social media business, but the battle over username ownership is nothing new. Facebook, TikTok, and Twitter have long dealt with the challenge of social media users claiming to be celebrities online, as well as the question of how to grant coveted handles. Control over accounts that appear to belong to government officials is particularly sensitive, and social media companies often tout the steps they take to prevent misuse of their platforms during campaign season.
But the challenge becomes far more complicated with generative artificial intelligence and AI-generated videos, which are premised on inviting people to create doctored content.
While President Trump doesn’t seem to have an active Sora account right now, he is a devoted social media poster with a growing penchant for AI-generated video memes that mock his political opponents. Sora’s technology has also gotten significantly better, which means it’s far more likely that people might get duped—and that they might need to rely on a username to verify the source of a particular piece of generated content.
“The levels of realism and the number of visible artifacts have both been improved over the previous version and other state-of-the-art video generation apps,” Siwei Lyu, a computer science professor at the University at Buffalo, whose team studied the latest Sora model, told Fast Company. Despite visible watermarks on generated videos and other “invisible” watermarks deployed by the company, “to ordinary viewers the generated videos are very challenging to tell apart from real ones,” Lyu said. It’s still possible for people to circumvent or manipulate the technology, he warned, noting that he wasn’t sure how OpenAI developed the list of people whose likenesses can’t be generated on the app.
OpenAI has released general usage policies on what people aren’t allowed to do with its models. That includes depicting real people without their consent and producing content that’s designed to “mislead” others. But while the “username not allowed” message seems to imply that OpenAI wants to specifically limit the ability of people to represent themselves as public figures, it’s not clear how exhaustive that policy actually is or who it’s designed to cover.
For instance, the username JD Vance is already taken. And there’s a barely followed account that represents itself as Education Secretary Linda McMahon, with her face as the profile picture, as well as one for Defense Secretary Pete Hegseth, also with a profile picture. Neither has the verification check mark that some influencers on the app now display. It’s theoretically possible these accounts actually belong to those individuals, but unlikely.
Right now, the Sora app is only available to users in North America, but the names of some public figures outside the U.S. and Canada seem to have been proactively protected by the company. Sara Duterte, the name of the Philippines’ current vice president, produces a “not allowed” notice, as do the names of Indian Prime Minister Narendra Modi, Palestinian politician Mahmoud Abbas, Chinese President Xi Jinping, Pakistani Prime Minister Shehbaz Sharif, former U.K. Prime Minister Tony Blair, and Netanyahu. The names of Maha Vajiralongkorn, the king of Thailand, and Anutin Charnvirakul, the country’s prime minister, are both blocked, as is that of Kenyan President William Ruto.
But not every head of state is automatically protected from having their name claimed by an ordinary user. Fast Company was able to successfully edit a Sora account username to the names of the leaders of Guyana, Niger, and Angola: Irfaan Ali, Abdourahamane Tchiani, and João Lourenço, respectively. An account has already taken the name Ibrahim Traoré, the interim president of Burkina Faso, and the username of Indonesian President Prabowo Subianto has also been nabbed. The name of Peter Pellegrini, the leader of Slovakia (a nation that saw a deepfake video of a candidate spur confusion during an election just last year), is now in use, too.
A former State Department official told Fast Company that, on its own, blocking certain usernames is “not even a barely acceptable minimum.” For now, the username function doesn’t seem to recognize at least some Cyrillic characters, and someone could try to exploit those lookalike characters to register what appears to be a blocked username, the person added.
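The risk the official describes is a classic homoglyph problem: Cyrillic letters such as “а” and “е” render identically to their Latin counterparts, so a visually exact copy of a blocked name can slip past an exact-string check. A minimal sketch of one common defense is to fold known confusable characters into a canonical “skeleton” before comparing against a blocklist; the mapping and blocked names below are illustrative assumptions, not OpenAI’s actual implementation:

```python
import unicodedata

# Illustrative (not exhaustive) map of Cyrillic lookalikes to Latin letters.
CONFUSABLES = {
    "а": "a", "е": "e", "о": "o", "р": "p", "с": "c",
    "х": "x", "у": "y", "і": "i", "ѕ": "s", "ј": "j",
}

def skeleton(name: str) -> str:
    """Normalize compatibility forms and case, then fold known lookalikes."""
    folded = unicodedata.normalize("NFKC", name).casefold()
    return "".join(CONFUSABLES.get(ch, ch) for ch in folded)

# Hypothetical blocklist, stored in skeleton form.
BLOCKED = {skeleton("netanyahu"), skeleton("kimjongun")}

def is_allowed(requested: str) -> bool:
    return skeleton(requested) not in BLOCKED

# "netаnyаhu" below uses Cyrillic "а", visually identical to Latin "a";
# an exact-string comparison would miss it, but the skeleton check catches it.
print(is_allowed("netаnyаhu"))  # → False
print(is_allowed("alice"))      # → True
```

Production systems typically go further than a hand-rolled table, using the full Unicode confusables data rather than a short map like this one.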
As for leaders in some non-Western countries not having their names reserved, the person said: “These companies never care about the Global South until someone gets hurt.”