
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on the role of NSFW material on AI platforms, a question that grows more complicated as AI platforms turn into social platforms. I also look at a powerful new Anthropic model for free Claude chatbot users.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.
Sam Altman welcomes NSFW to AI
Sam Altman casually said on X Tuesday that OpenAI is planning to introduce NSFW content on ChatGPT as soon as December. The comment, which came at the bottom of a discussion about user mental health, raises all kinds of questions about user safety and trust, and about what audiences OpenAI really wants to serve.
Altman says the company hopes to implement a new age-gating mechanism through which users will prove they’re old enough to consume adult content. On ChatGPT, that implies frank discussions about sex with the chatbot, or maybe some forms of entertainment such as role-playing with sexy AI companions.
Elon Musk’s xAI has already gone well down that road with its AI Companions, which launched during the summer within the Grok chatbot. The companions, reserved for Premium subscribers on the Grok app, have an “NSFW” mode and are willing and ready to engage in sexy conversation.
The way Altman frames NSFW AI sounds similar to Musk’s approach to appropriate content. “As part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” he wrote on X. So it’s not hard to imagine ChatGPT going down some of the roads xAI has taken.
To some extent, this may apply to image generators too. Musk has already gone there. In August, xAI released the image generator Grok Imagine, which reportedly has a “spicy mode” that lets users create sexually explicit content, including partial female nudity, via text prompts. Will OpenAI’s new permissive attitude about adult content on ChatGPT extend to its other products as well?
The company’s second-hottest product is the new Sora 2 video generator. The difference between Sora 2 and Grok Imagine is that Sora is a social app. Using the Sora 2 app is a lot like using TikTok, only all the content that’s viewed, shared, and created on it is AI-generated, not shot with cameras. That social aspect raises the stakes in the appropriateness question.
Right now OpenAI is tightly controlling the content created on Sora 2 (the app is currently invite-only). No sexual content is allowed, and the company says it applies an even tighter filter to Sora generations that will be shared socially. The company is using both content-moderation AI and human reviewers to detect material that might violate its guidelines. It provides a way for users to report offensive videos and uses an AI algorithm to flag accounts that show the hallmarks of being owned by a minor.
But the company also says it’s taking an iterative approach to its content moderation, so today’s tight standards could loosen in the future. This could be especially problematic when it comes to the image and likeness rights of Sora users. One of the main features of the Sora app is “cameos,” which let users feature their own likeness, or that of friends or certain celebrities, in their video creations. Allowing NSFW content in this context could open up all kinds of safety and reliability problems for users, and for OpenAI.
Altman was surprised by the splash his “erotica” post made on Tuesday. On Wednesday he tried to explain further in another tweet:
“As AI becomes more important in people’s lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission . . . Without being paternalistic we will attempt to help users achieve their long-term goals. But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
And, safety concerns aside, the “adult” side of life has always been well represented on technology platforms from VHS to VR to social media. OpenAI’s acceptance of adult content isn’t likely to make ChatGPT any dumber or less useful, but it may give millions more people another reason to start using the chatbot.
Anthropic brings a gift to free Claude chatbot users with new Claude Haiku 4.5 model
Anthropic announced its new Claude Haiku 4.5 model Wednesday, which will become the default model for all free Claude.ai users. It may be the most powerful model currently available to free users of chatbot apps.
The arrival of Haiku 4.5 just two weeks after Claude Sonnet 4.5 suggests that things are still moving quickly on the research front. The new Haiku model matches Anthropic’s previous flagship Sonnet 4 model in software coding and even exceeds it in computer use tasks. “Five months ago, Claude Sonnet 4 was a state-of-the-art model,” Anthropic says in a blog post. “Today, Claude Haiku 4.5 gives you similar levels of coding performance but at one-third the cost and more than twice the speed.” It also makes applications like Claude for Chrome run faster.
For business users, Haiku 4.5 can be used to power multi-agent workflows where multiple instances of the model work in parallel or collaborate with larger models. For example, Sonnet 4.5 (currently considered Anthropic’s best model for AI agents) can plan complex projects while several Haiku 4.5 subagents quickly complete individual tasks. The model’s speed and cost efficiency make it particularly well-suited for real-time applications including chatbots, customer service, financial analysis, and research, Anthropic says.
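Anthropic hasn’t published the exact wiring of such a workflow, but the planner-plus-subagents pattern it describes can be sketched in plain Python. In this sketch, `call_model`, `plan_tasks`, and the model names `planner-model` and `fast-subagent-model` are all hypothetical placeholders standing in for real API calls; the fan-out to parallel subagents uses an ordinary thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real model API call; in practice
    # this would send the prompt to an LLM endpoint and return its reply.
    return f"[{model}] response to: {prompt}"

def plan_tasks(project: str) -> list[str]:
    # The planner (a larger model, like Sonnet 4.5 in Anthropic's
    # description) would break the project into independent subtasks.
    # Here the split is faked for illustration.
    call_model("planner-model", f"Break into subtasks: {project}")
    return [f"{project}: step {i}" for i in range(1, 4)]

def run_workflow(project: str) -> list[str]:
    tasks = plan_tasks(project)
    # Fan the subtasks out to fast, cheap subagents running in parallel.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(
            lambda t: call_model("fast-subagent-model", t), tasks))

results = run_workflow("refactor billing module")
for r in results:
    print(r)
```

The design point is simply that planning is serialized through one capable model while execution is parallelized across cheaper, faster ones, which is where Haiku 4.5’s cost and speed profile would matter.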
What does “human-centered” AI really mean?
I first came to know of the Stanford Institute for Human-Centered Artificial Intelligence around 2020. It’s an interdisciplinary research institute focused on developing and guiding AI in ways that prioritize human values, ethics, and societal benefit. I had a vague understanding of the concept in 2020, but as AI advanced, post-ChatGPT, I realized that it may be among the most important themes of the 21st century.
There are active and passive ways to use AI. You can ask AI to do your work for you, to create a final product. Or you can work with the AI, using it to pull knowledge and inspiration out of yourself. So many of us struggle to stay in that productive thinking space long enough to pull out good ideas. “Thinking through” something is difficult and requires concentration. Even good ideas that seem to pop up out of nowhere need to be carefully examined to find logical pitfalls. One AI researcher told me that he uses AI as a kind of thinking partner to help him stay engaged in that deep thinking space by providing thoughtful, sometimes critical, feedback.
This example is a simple expression of human-centered AI, in which the AI is used as an enabler, not as a proxy for human creation. The problem is this: AI may advance to the point where it is good enough to reason through hard problems and create a good expression of a solution (or an important insight) at the end. In more and more use cases, the AI may be good enough to allow the human to relax while the computer does the work. And the tech industry has no qualms about offering us conveniences (like app-based food delivery) or entertainment (Netflix) that lets us disconnect our brains. But the tech industry has never sold anything as capable as AI. The more use cases in which the AI does the work, the more that human beings are sidelined. We may end up relaxing ourselves into irrelevance and then extinction.
More AI coverage from Fast Company:
- The memeification of Sora 2
- Are large language models the problem, not the solution?
- Exclusive: Big Philanthropy teams up to take on Big AI
- Overheating at night? An AI-enabled mattress cover could be the answer
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.