Anthropic’s Claude chatbot has opinions of its own, and it’s not afraid to share them.
“It should be a sparring partner with you,” says Joel Lewenstein, Anthropic’s design chief. “It shouldn’t take your thoughts verbatim. It should push back.”
Perhaps this is predictable from a product carrying the slogan “keep thinking.” But Claude’s quirky (and, at times, passive-aggressive) personality sets it apart from the competition.
That’s on purpose, Lewenstein explains. “I find that to be a truly astonishing experience where I’m like, ‘Oh, you are not a sort of slavish executor of my vision. We are coproducing this outcome together.’ I think that’s really powerful.”
On the latest episode of By Design, Lewenstein, one of AI design’s most influential leaders, gives an exclusive, lengthy interview on all things Claude, Anthropic, and the role of designers in AI.
Below are a few excerpts from the podcast, which have been edited for length and clarity. Check out the full episode on Apple Podcasts, Spotify, or YouTube.
Why do I need to prompt Claude to double-check its work? Why isn’t that built in?
“I don’t have a super definitive answer. I do think it’s probably a combination of cost and capacity and response time. In an ideal world, we would just never give you a wrong fact. Claude should be right and should know when it’s right or wrong. There are practical reasons why it is hard to guarantee that, and guaranteeing that imposes other costs that we don’t want to bear.”
On Claude’s linguistic and personality quirks
“They are very much an intentional part of the Claude character work that we do and that our research team does, trying to create an entity that does push back, that does challenge a little bit, that isn’t sycophantic, that is something that is really engaging. It should be a sparring partner with you. It shouldn’t take your thoughts verbatim. It should push back.”
When did you accept that AI tools might have better ideas than you?
“This started happening maybe like the middle of last year. My creative process is falling in love with my own ideas, sharing them with a coworker, having them point out the obvious flaw, the sort of embarrassing hole in my logic or whatever, sheepishly going back to a V2.
“I know that process now. I aggressively share my first drafts with people because I know I need a first set of eyes. I started doing that with Claude, and Claude would find the logical holes in my documents, proposals, and mockups very consistently. The first step wasn’t actually having better ideas than me, although I’ve started to see that sometimes. Now it is finding the holes in my own ideas. And because it saved me from humiliating myself in front of coworkers, I was delighted.”
Where is design in Anthropic’s organizational structure?
“Working prototypes—actual usable software—are just the lingua franca of Anthropic. Whoever can make that is the one who drives decision-making and ideation and road maps. For a long time, it was engineering and research, obviously.
“Most of the most innovative ideas we’ve had were engineering-led because they were the ones who could bring nascent concepts into an actual working thing. Some of the designers who were deeply code-native . . . were also able to do that a year or two ago.
“Other people were downstream of engineering. That is really changing, and this democratization of the ability to make working stuff, we feel it. I think engineers and designers are both looking at the same problem and running roughly the same process of, ‘I’m gonna build something.’”
On flattening of design jobs
“Anthropic is among the top three organizations globally for AI-native work, frontier ways of working. We live in the future, and I’m doubling the product design team. Every team that I have designers on is understaffed, is asking me for more designers, is saying, ‘These products aren’t good yet until I can get a human designer to come sit with me for days and weeks to make this good.’”