Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
A look at the AI landscape for small businesses
So much of the conversation around the great AI transformation of business has centered on enterprises, meaning companies with more than 500 employees. That makes sense: For AI and cloud companies, landing a large enterprise customer can mean securing a significant stream of recurring revenue.
But if we’re really talking about AI reinventing work and making everyone more productive, small and medium-sized businesses should be a much bigger part of the conversation. According to the Small Business Administration, around 36 million small businesses operate in the U.S., employing 46% of private-sector workers. Most of those companies are very small. Federal data shows that about 88% have fewer than 20 employees.
Universities and consultancies have, of course, studied how and to what extent small businesses are using AI tools. Research from 2024 largely agreed that relatively few small businesses had meaningfully adopted them. But surveys conducted in 2026 paint a more complicated picture. A recent Goldman Sachs survey of 10,000 small businesses found that three-quarters are now using AI, with 84% citing productivity and efficiency gains. Still, only 14% said they had integrated AI into their core operations. Another study, from the National Federation of Independent Business (NFIB), found that only a quarter of small businesses reported using AI tools at all. (NFIB typically surveys very small, traditional businesses like plumbers and caterers, while Goldman may capture more digitally engaged firms, like e-commerce retailers.)
Many small business owners are probably aware of the growing ecosystem of AI products designed for smaller operations. Intuit, Zapier, HubSpot, Lindy, and Microsoft all compete in this space. Companies that have long served small businesses have gradually folded AI copilots and automations into products customers already know well: accounting platforms, CRM systems, office suites, customer support software, and workflow automation tools. Microsoft did exactly that when it integrated Copilot into its productivity suite, and Google is weaving its Gemini models into Google Workspace.
And the big AI labs are increasingly targeting smaller businesses. OpenAI offers business plans for ChatGPT, which can help draft marketing copy and analyze spreadsheets. It also offers a set of “skills,” which it defines as “reusable, shareable workflows” that bundle instructions, examples, and code. Anthropic went a step further this week, launching Claude for Small Business, a package of AI workflows, skills, and integrations built specifically to manage functions common to small businesses.
In its go-to-market effort, Anthropic sees two main barriers to AI adoption by small and medium-sized businesses. “What our research shows is that around 32% of SMB employees don’t really know how or when to use AI,” Anthropic’s small business go-to-market lead Lina Ochman tells me. They feel blocked because they don’t have much experience with AI beyond basic chatbots.
“And then 64% tell us they want to move beyond the chat and … actually have agents that help them run their workflows,” Ochman says. But even when small business owners get some experience with AI agents that can reason and handle more complex tasks, they aren’t sure how to apply them to their own businesses. That’s why Anthropic took a plug-and-play approach to its small business product. How well its pre-baked workflows can be adapted and customized for unique business functions remains to be seen.
The alternative—building and managing highly customized AI tools in-house—could be daunting for many small business owners. For example, an Austin-based vegan cheese-maker called Rebel Cheese went deep into that world to solve a problem costing the company $50,000 a month in excess shipping charges. Rebel Cheese used Anthropic’s Claude to investigate the issue and map out a solution, then turned to the agentic orchestration tool Manus to build a system that automatically disputes suspected carrier overcharges. But the company’s cofounder, Kirsten Maitland, says the process took months, requiring her to test multiple AI agents and spend long nights developing and refining the system.
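To make the Rebel Cheese example concrete, here is a minimal sketch of the kind of check such a dispute system might automate. Everything here is hypothetical—the field names, the 5% tolerance, and the sample data are illustrative, not the company's or Manus's actual implementation:

```python
# Illustrative sketch only: names, thresholds, and data are hypothetical,
# not Rebel Cheese's or Manus's actual system.
from dataclasses import dataclass

@dataclass
class Shipment:
    tracking_id: str
    quoted_cost: float    # rate quoted when the label was purchased
    invoiced_cost: float  # amount the carrier actually billed

def flag_overcharges(shipments, tolerance=0.05):
    """Return shipments billed more than `tolerance` (here 5%) above quote."""
    return [
        s for s in shipments
        if s.invoiced_cost > s.quoted_cost * (1 + tolerance)
    ]

shipments = [
    Shipment("1Z001", quoted_cost=12.40, invoiced_cost=12.40),
    Shipment("1Z002", quoted_cost=9.80, invoiced_cost=14.25),  # suspect
]
disputes = flag_overcharges(shipments)  # candidates to dispute with the carrier
```

In a real agentic setup, a step like this would sit between pulling carrier invoices and drafting the actual dispute filings—the part that reportedly took months of refinement.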
Over time, it’s likely we’ll see small business AI tools from Anthropic and OpenAI evolve to make more specialized and customized builds far less demanding. For now, though, most small businesses will continue using AI in less sophisticated ways than their larger counterparts. Still, the Rebel Cheese case hints at what becomes possible when a small business gains access to the same tools as the biggest players.
AI models’ reasoning on ethical dilemmas may be merely performative, a new study says
Leading AI models often give the appearance of deliberating over moral complexities without actually doing so, according to a new paper published in the journal AI and Ethics by researchers at Harvard Kennedy School’s Allen Lab. Rather than reasoning their way to a nuanced answer to tough questions, they appear to default to a hidden “value hierarchy” that’s already been trained into them, the researchers say.
The study is titled “Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?” It tested four models—Claude, GPT, Llama, and DeepSeek—on ethical dilemmas drawn from moral psychology, including scenarios where both available options carry genuine moral costs. In 87% of so-called tragic tradeoff trials, all four models converged on the same choices, and the choices often didn’t follow from their reasoning.
The researchers describe the AI behavior as “shedding crocodile tears,” performing moral anguish while executing what they characterize as an implicit, opaque value hierarchy. That could raise some real trust issues with users. “People are increasingly turning to these tools for guidance on hard decisions,” says the lead author, Sarah Hubbard, in a statement. “If a model appears to grapple with an ethical dilemma while actually reducing it to a predetermined answer, it may be earning users’ trust under false pretenses.”
Are AI benchmarks functionally useless?
In the world of AI research, the most common way to measure the intelligence of a model is by submitting it to a benchmark test. Hundreds of these tests exist, each focusing on a different facet of intelligence: one might measure coding ability, while another targets instruction-following or reasoning.
But there’s a big problem. AI labs can game the benchmarks. “As soon as the first training runs after [a] benchmark has been released I think it stops being a good measure of intelligence because suddenly the models have been trained on it, and it happens to all of them,” the former OpenAI researcher Jerry Tworek said during a recent podcast appearance.
Sample test questions and answers quickly appear online, and AI labs can train their models on that data to score better on the tests. “People will target it in training, they will solve it for any benchmark,” Tworek said. In effect, researchers can tune a model to produce the answers a given test expects.
Tworek, who was one of the main brains behind OpenAI’s breakthrough o1 and o3 reasoning models, says that in order for a benchmark to be meaningful, it has to have a way to generate new questions or scenarios for every new test, so that the model being tested has never seen them before.
That was the main idea behind the recently released ARC-AGI-3 benchmark from the influential researcher François Chollet. The benchmark generates novel game environments, presents them to an AI agent, and challenges it to figure out the point of each game and how to win. That forces the agent to draw on past experience and judge how to apply it in situations it hasn’t been trained on.
More AI coverage from Fast Company:
- You can put a data center at your house—but who really pays?
- This tiny Maine town used AI to make a new logo. Its residents had other ideas
- ServiceNow CEO Bill McDermott: Silicon Valley is getting enterprise AI wrong
- The Demi Moore-AI debate is missing the point
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.