Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
Is the Altman firebombing just the start of extreme doomer violence?
On April 10, someone threw a Molotov cocktail at OpenAI CEO Sam Altman’s house in San Francisco. The alleged assailant, 20-year-old Daniel Moreno-Gama, didn’t stop there. He then went to OpenAI’s headquarters and told security guards that he intended to burn down the building with everyone inside. Two days later, someone allegedly fired two shots from a car driving past Altman’s house, but OpenAI said that incident was unrelated to the firebombing and didn’t target Altman.
The firebombing is an extreme reaction to the rapid evolution of AI systems over the past few years, and to fears that such systems may not act in humans’ best interests. Moreno-Gama said as much in the “manifesto” document police found in his possession. In it, he discusses the “purported risk AI poses to humanity” and “our impending extinction,” and includes a personal letter urging Altman to change. He also advocates killing the CEOs of other AI companies, along with their investors.
Altman has spoken many times about the dangers of AI systems while also pushing OpenAI to develop and release increasingly intelligent models. Some have suggested that when Altman talks about AI’s dangers, it’s really a kind of humblebrag about OpenAI’s models (“so intelligent they’re dangerous”).
It’s true that AI labs continue to make big strides in intelligence with every new model. AI coding tools are speeding up development, so new releases and capability jumps are coming more frequently. Meanwhile, the public has grown increasingly concerned, even angsty, about the risks of AI systems, which range from job losses to AI-assisted cybercrime to human extinction. AI’s transformation of business and life is just getting underway, and models will only grow scarily smarter. With AI labs under pressure to deliver returns for their investors, there’s almost no chance of hitting “pause.” There’s little reason to think incidents like the Altman firebombing won’t happen again.
Sarah Federman, a professor of conflict resolution at the University of San Diego, says that people often resort to violence when they feel powerless to speak out effectively against a perceived wrong. “We’re starting to see the breaking point,” Federman says. “There is all of this fear and nowhere for it to go.” She also believes that as AI labs race to release the best model, concerns about ethics have been pushed aside.
She’s got a point. AI companies have spent significant time engaging with lawmakers, explaining how their systems work and why regulating model development can be counterproductive. Many in Washington, D.C., were charmed by Altman, whom they found forthright, earnest, and technically proficient. But these companies spend far less time speaking directly to the public. They don’t hold town halls or host AI ethics debates on Fox News or CNN. They’re more likely to start “institutes” to study the future effects of AI on society.
And the issue of AI alignment may, by its nature, push people like Moreno-Gama toward extreme behavior. There’s now plenty of AI-doom content online to send some people down a very deep rabbit hole, where they lose sight of the myriad factors that will determine how humans live with superhuman AI. They may see only the “if you build it, we will die” narrative, then feel desperate to act. They may even be helped along by the mildly sycophantic chatbot of their choice.
OpenAI releases security-focused GPT-5.4-Cyber model to compete with Anthropic’s Mythos
A week after Anthropic announced its controversial new cybersecurity-focused Claude Mythos model, OpenAI has released a similarly focused model called GPT-5.4-Cyber. The company says “Cyber” is a specialized version of its latest general AI model, GPT-5.4, designed to help cybersecurity professionals detect and analyze software vulnerabilities.
OpenAI says GPT-5.4-Cyber is trained for defensive use cases, such as analyzing and reverse-engineering potential cyberthreats.
Of course, an AI tool that can find and reverse-engineer threats can also be used offensively by bad actors to find vulnerabilities in target systems and create exploits. So OpenAI says access to GPT-5.4-Cyber will initially be limited to vetted organizations, researchers, and security vendors.
Anthropic did something similar with its Mythos model, granting access to a group of well-known cybersecurity and infrastructure companies that will use it to find and patch vulnerabilities in widely used software. The thinking goes that this will give defensive cybersecurity efforts a head start on hackers, who will eventually get access to Mythos-level models anyway. Anthropic has no immediate plans to release Mythos more broadly.
OpenAI said the rollout reflects a shift toward broader but controlled deployment of powerful AI systems, emphasizing collaboration with security professionals while attempting to limit potential misuse.
xAI is again under fire for “sexualized” chatbot for kids
xAI’s Grok chatbot continues to generate sexual deepfake imagery, a recent NBC News investigation found, prompting calls for Elon Musk’s AI company to change course. xAI had earlier promised to restrict such content. Separately, the National Center on Sexual Exploitation (NCOSE) found that Grok’s child-focused chatbot, “Good Rudi,” can engage in sexually explicit conversations. NCOSE is calling for xAI to restrict access to the chatbot.
NBC News says it found dozens of AI-generated sexual images and videos depicting real people posted on Musk’s X (formerly Twitter) social media app over the past month. The images show women whose likenesses were edited by the AI chatbot to put them in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits, or bunny costumes. Many of the women depicted were pop stars or actors.
NCOSE researchers found that Grok’s Good Rudi chatbot can tell sexually explicit stories. “As soon as I started a conversation with Rudi, it began the conversation by wanting to share a fun childish story,” one researcher said. “After some prompting, I eventually got the companion to bypass all safety programming.” The chatbot then told a story about two young adults that contained graphic descriptions of sexual encounters, including the characters “getting into sexual positions, and sexual penetration.”
More AI coverage from Fast Company:
- An AI agent opened a store in San Francisco. Then it forgot the staff
- AI is rewriting the rules of biological experiments. Safety regulations aren’t keeping up
- New findings from this Gallup poll show how Americans are using AI for health advice
- I lost $23 investing with ChatGPT, but at least Jason Alexander sang me Happy Birthday
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.