
The difference between OpenAI and Anthropic has never been clearer. OpenAI is constantly in the news with a new consumer app or feature, and is being billed as the next great consumer tech platform. Most recently it made news by launching a social network built around its Sora video generator, and it even says it plans to allow NSFW content on ChatGPT. Anthropic, meanwhile, has chosen a different path. The company stresses that because it gets most of its revenue from businesses and developers, it’s not trying to capture the mass market, and it’s not terribly concerned about how long users spend on its platform every day.
“We are interested in our consumer users to the degree they are doing work, solving problems in their life,” says Anthropic design chief Joel Lewenstein during an interview with Fast Company this week. “Because we’re not interested in passive consumption and image generation and video generation—we just sort of have ruled those out from a mission perspective . . .”
Anthropic was famously founded by a group of OpenAI execs who left in 2021 to build a more safety-focused AI lab. That focus hasn’t changed. “Our interests are in making things that are beneficial while minimizing the risks of those same products because everything has a double-edged sword,” Lewenstein says. “We see . . . helping people grow and expand and create and solve problems as being the right risk-reward tradeoff.”
The San Francisco-based startup believes that work-first focus will ultimately win out, as AI eventually shows its most profound effects in the lives of businesses, not consumers. At a conference Wednesday, Anthropic’s cofounder and policy director Jack Clark said Anthropic will eventually overtake OpenAI because of its enterprise focus and strong technological roadmap, and because its research is “accelerating faster” than its rival’s.
All of this is reflected in the look and feel of its Claude chatbot, the main entry point to Anthropic’s powerful models, but also in its attitude.
Not warm and fuzzy
When it comes to work, Claude is pleasant, even empathetic, but serious, and it comes with a built-in BS detector. “Sycophancy” in AI models, after all, has become a serious problem. OpenAI recently admitted it had to push an update to its GPT-4o model to fix the model’s sycophantic behavior, and CEO Sam Altman said in an Oct. 14 post that users will be able to bring that personality back if they liked it. The model reportedly had a habit of praising or validating user statements even when they were delusional or concerning (one user claimed a divine identity). Some analysts believe such behavior is less a bug than a choice made by the model maker in the interest of getting people to use the platform more.
A sycophantic chatbot in a work setting can act something like a yes-man, embracing and offering to further develop even the worst business ideas. This can lead to a range of reputational and financial harms, not to mention seriously damaging trust in the AI.
Sycophantic AI could be especially dangerous for Anthropic, which wants users to treat Claude not just as a quick content generator but as a collaborator or thinking partner for serious work. For that to happen, users need to build confidence and trust in the reasonableness of the AI. So Anthropic trained the models behind Claude to push back on logically suspect thoughts from the user. Lewenstein says his company worked especially hard to train this into its newest model, Claude Haiku 4.5, which it says is the most sycophancy-resistant model available in its size class.
The ‘artifacts’ shift
The idea of “Claude as collaborator” has directly shaped the chatbot’s user interface. With the introduction of “Artifacts” last year, Anthropic added a highly functional workspace around the chatbot. The Artifacts UX shows a working draft of the project the user and the AI are building together, in real time, in a panel on the right side of the interface. This might be a document draft, a chart, or a code preview, which the user can inspect, click through, highlight, and suggest changes to. The user can tell Claude to write something in a new way, or to integrate a new idea from an uploaded PDF or text file.
“I cannot overstate how big of a shift that is, and [it] anchors a lot of the way that we think,” Lewenstein says. By this he means that Artifacts encourages the user to think of Claude as a smart work companion, rather than just a content generator. “It creates this sense of you’re making something alongside Claude,” Lewenstein says. “We’re not just giving you the answer. We’re not having you just download it and we’re done . . .” Rather, the human and chatbot enter a dialog where they gradually shape the output into what the user wants.
Lewenstein acknowledges that while AI tools have a growing number of power users, a significant percentage of users have yet to scratch the surface of what’s possible. He says a major challenge of the interface design is to invite people to explore Claude’s features more fully. Artifacts can show users their options so that they can proceed in an experimental way, learning as they go. And, as of last month, Claude can automatically remember past chats, so it might proactively ask if the user wants to include some theme or piece of data (perhaps a relevant piece of proprietary product research or a business plan) it has encountered before.
“I think the more things that Claude is able to do—Claude can now make PowerPoints and make Excel documents—the more things that it makes, the more important it is that there is some space that you can actually see and engage with that content,” Lewenstein says.
Claude can make presentations and spreadsheets because of “skills,” packets of knowledge that Claude can call up when the user needs them. On Thursday, a day after announcing its new Claude Haiku 4.5 model, the company announced that Claude users can now make their own “agent skills.” If a user works with Claude to create a presentation, for example, and pulls in a number of style sheets and marketing guidelines to do it, they can package all that work up as a skill and reuse it the next time they need to make a presentation.
In essence, Claude is enabling a user to create a kind of agent that has expertise and experience working with the user on a specific task.
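To make the packaging idea concrete, here is a minimal, hypothetical sketch in Python of how a reusable skill bundle (instructions plus reference files such as style sheets) might be assembled and attached to a request through Anthropic’s standard Messages API. The folder layout, file names, and model identifier are illustrative assumptions, not Anthropic’s actual agent-skills format.

```python
from pathlib import Path
import anthropic

def load_skill(skill_dir: str) -> str:
    """Assemble a reusable 'skill' from a folder of instructions and
    reference files (style sheets, marketing guidelines, etc.).
    The folder layout here is a hypothetical example, not Anthropic's format."""
    skill = Path(skill_dir)
    parts = [(skill / "instructions.md").read_text()]
    for ref in sorted((skill / "references").glob("*.md")):
        parts.append(f"\n--- Reference: {ref.name} ---\n{ref.read_text()}")
    return "\n".join(parts)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Attach the packaged expertise as a system prompt for a new presentation task.
response = client.messages.create(
    model="claude-haiku-4-5",  # assumed model identifier
    max_tokens=2048,
    system=load_skill("skills/quarterly-deck"),
    messages=[{"role": "user", "content": "Draft an outline for the Q3 results deck."}],
)
print(response.content[0].text)
```

The point of the sketch is simply that the expertise lives in a bundle the user controls and can reattach to future tasks, rather than being re-explained in every chat.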
Agents
AI agents can reason and act autonomously to do things like fetch data, perform actions, create plans. OpenAI recently announced a new tool called Agent Builder that provides a simple, graphical interface to create agents, define their workflows, and pull in tools the agent can use (a safety guardrail tool, for example). OpenAI says this could speed up the process for developers, and reduce the need to build agents from scratch.
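As a rough illustration of what “reason and act” means in practice, here is a minimal, hypothetical agent loop in Python. The tool names, the `call_model` planner, and the stopping condition are all invented for illustration; they do not correspond to OpenAI’s Agent Builder or Anthropic’s agent tooling.

```python
import json

# Hypothetical tools the agent is allowed to call.
def fetch_sales_data(quarter: str) -> dict:
    return {"quarter": quarter, "revenue": 1_250_000}  # stand-in data

def draft_summary(data: dict) -> str:
    return f"In {data['quarter']}, revenue was ${data['revenue']:,}."

TOOLS = {"fetch_sales_data": fetch_sales_data, "draft_summary": draft_summary}

def call_model(goal: str, history: list) -> dict:
    """Placeholder for the model call that plans the next step.
    A real agent would send the goal and history to an LLM and parse its
    chosen tool and arguments; here a two-step plan is hard-coded."""
    if not history:
        return {"tool": "fetch_sales_data", "args": {"quarter": "Q3"}}
    if len(history) == 1:
        return {"tool": "draft_summary", "args": {"data": history[-1]["result"]}}
    return {"tool": None, "args": {}}  # plan complete

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        step = call_model(goal, history)
        if step["tool"] is None:
            break
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"tool": step["tool"], "result": result})
    return history

print(json.dumps(run_agent("Summarize Q3 sales"), indent=2, default=str))
```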
Anthropic believes that the right UX for building and managing agents depends on the type of user and their level of expertise. When developers within businesses build agents, Lewenstein explains, they write them as code, and Anthropic provides them a number of governance and security tools to help manage them. There’s no abstraction layer that represents the parts as objects that can be dragged around on a screen (at least not yet).
Lewenstein says consumers, prosumers, and average knowledge workers usually just want to describe a goal they want the agent to achieve, then let the AI carry out the necessary functions behind the scenes to make it happen. That’s the direction Anthropic is pursuing now. “Whether users even want to think about agents as a concept remains an open question,” he says.
Still, Anthropic is exploring several different kinds of agent approaches within Claude, some of them tightly integrated with chat, some of them less so. The focus is on what people are trying to accomplish, Lewenstein says. “Anthropic will provide whatever is needed in any form factor to achieve that, and the company isn’t wedded to any particular UX paradigm yet.” He cites the old marketing adage: “Users don’t really want a quarter-inch drill bit, they want a quarter-inch hole.”
Claude of the future
Right now, users are still trying to understand how AI agents can fit into their overall workflows. In a work setting they may be skeptical that the agent will produce reliable, actionable work. They will naturally want to know a lot about how the agent is doing its work, how it’s getting from a directive to a result. Lewenstein says that Claude now lets users click to see all the steps the agent (powered by the model) took to reach a result. Building that into the UX, he says, wasn’t a terribly challenging problem.
But over time, Claude will become more autonomous and capable of working unsupervised for longer stretches (already, the Claude Sonnet 4.5 model can work by itself for 30 hours). This could create challenges for the UX, which will have to show an audit of every step of the work that was done. “We have these components in the UI which we’ve been working on for the last couple of years, which is a short little summary and then if you expand it, it actually shows you, ‘Here’s everything I did for the last X hours,’ so that you can really build up an understanding but also a trust.”
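The “short summary that expands into everything I did” pattern Lewenstein describes can be modeled as a simple audit log: each step carries a one-line summary for the collapsed view plus the full detail for the expanded view. The sketch below is a hypothetical illustration of that structure, not Anthropic’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentStep:
    summary: str           # one-line description shown in the collapsed view
    detail: str            # full record shown when the user expands the step
    duration_seconds: float = 0.0

@dataclass
class AuditTrail:
    steps: list[AgentStep] = field(default_factory=list)

    def collapsed(self) -> str:
        hours = sum(s.duration_seconds for s in self.steps) / 3600
        return f"Completed {len(self.steps)} steps over {hours:.1f} hours (expand for details)"

    def expanded(self) -> str:
        return "\n".join(f"{i + 1}. {s.summary}\n   {s.detail}"
                         for i, s in enumerate(self.steps))

# Example with two invented steps.
trail = AuditTrail([
    AgentStep("Read the uploaded market research PDF",
              "Extracted 14 pages; flagged 3 relevant tables.", 120),
    AgentStep("Drafted the positioning section",
              "Generated 600 words using the brand style guide skill.", 300),
])
print(trail.collapsed())
print(trail.expanded())
```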
In the first phases of AI agents being used within enterprises, users will have to think through what tasks they can delegate to agents, and what tasks to keep for themselves. Future versions of Claude, Lewenstein says, might help the user understand this. “I think this is the future of where a lot of these products need to go—understanding someone’s workflow enough, [and] its own capabilities enough, to proactively say, ‘I will take this work off your plate and I will leave you with this thing,’ and that should feel very empowering to people,” Lewenstein says.
An AI for work
Even for its consumer users, Anthropic is interested in helping them do work, not pass the time. So the same Claude user interface works pretty well for both personal and business use cases, Lewenstein says. He says consumers use Claude for a lot of personal tasks that might as well be work: complex problems like planning a vacation or navigating a complicated renovation. “We see consumers or people who are not doing it for their employer finding a lot of benefit in basically all the same basic features that we have [in Claude] for work.”
Eighty percent of Anthropic’s revenue comes from enterprise customers. After crossing $1 billion in annualized revenue run rate (ARR) at the beginning of 2025, the company expects to hit $9 billion in ARR by the end of the year, Reuters reports, and $26 billion in 2026.
While OpenAI doesn’t usually talk about its revenue mix, its CFO Sarah Friar said in 2024 that the company made 75% of its money from consumer subscriptions. As of June 2025, OpenAI’s ARR was reportedly $10 billion (excluding licensing revenue from Microsoft and large one-time deals). Analysts expect OpenAI to reach about $12.7 billion in total revenue in 2025.