What began as a race to build better AI models has escalated into a competition for compute, talent, and control. Foundation models—large-scale systems trained on vast datasets to generate text, images, code, and decisions—now underpin everything from enterprise software and cloud infrastructure to national digital strategies.
The industry’s language around AI has grown more ambitious—and more elastic. Agentic AI has leapt from research papers to Davos billboards, while artificial general intelligence, or AGI, now appears routinely in investor decks and earnings calls. Definitions have begun to blur. Some companies quietly lower the bar for what qualifies as general, stretching the term to encompass incremental productivity gains.
Yet the economic results, particularly measurable returns on AI investment, remain uneven. According to PwC’s 2026 Global CEO Survey, 56% of 4,454 CEOs across 95 countries reported neither increased revenue nor reduced costs from AI over the past 12 months. Only 12% achieved both. Even so, 51% plan to continue investing, despite declining confidence in revenue growth. The result is a widening gap between engineering reality, commercial storytelling, and public expectation.
Few voices carry as much authority—or have shaped modern AI as directly—as Andrew Ng. The founder of DeepLearning.AI and Coursera, executive chairman of Landing AI, and founding lead of the Google Brain team, Ng has helped define nearly every major phase of the field, from early deep-learning breakthroughs to the current wave of enterprise deployment. He has authored or coauthored more than 200 papers and previously led the Stanford AI Lab. In 2024, he popularized the term agentic AI, arguing that multistep, tool-using systems capable of executing workflows may deliver more near-term economic value than simply scaling larger models.
In an exclusive conversation, Ng offered Fast Company a reality check. He says true AGI—that is, AI capable of performing the full breadth of human intellectual tasks—remains decades away. The real competitive frontier, meanwhile, lies elsewhere.
This conversation has been edited for length and clarity.
You helped popularize the term agentic AI to describe a spectrum of autonomy in AI systems. How did you come up with it, and how has the concept evolved as multi-agent systems move into enterprise production?
I began using the term almost two and a half years ago, though I didn’t publicly take credit for it at the time. I started using it because I felt the community needed language that shifted the focus toward AI systems capable of taking multiple steps of reasoning and action, not just a single prompt-and-response exchange. More specifically, I expected a spectrum of AI systems: some only slightly autonomous or slightly agentic, others highly agentic, taking many steps of action and working over long periods.
No one was using the term agentic to describe this concept before I began using it. I started introducing it in my newsletter and in talks at conferences and industry events, and it quickly gained traction there. I didn’t expect marketers to run with it the way they did.
When I attended Davos this year, I saw the word plastered on the sides of buildings. Even outside San Francisco, agentic now appears on billboards. I did intend to promote the term, but seeing how common it has become, I sometimes wonder if I overdid it.
Enterprise adoption of agentic AI is accelerating, yet many organizations are struggling with integration, governance, and measurable ROI. Why is that?
Two years ago, there was intense hype around AI’s risks and dangers, among other concerns. Last year, businesses began shifting their focus toward real-world implementation. This year, the conversation has moved firmly to ROI. Even though many companies are not yet seeing strong returns, they continue to invest because they understand that AI will eventually deliver value. The discussion has shifted from excitement about what AI might do to a more grounded focus on how it can generate real economic impact.
There’s also an interesting split-screen dynamic emerging. On one hand, many businesses say agentic AI is not yet delivering meaningful ROI, and they’re right. At the same time, teams building agentic workflows are seeing rapid growth and real, valuable implementations. The agentic movement still has very low penetration, but it is compounding quickly.
What are the most significant mistakes enterprises make when deploying agentic systems at scale, and how should leaders rethink their technology and operating models to overcome them?
Many businesses are pursuing bottom-up innovation, which is valuable, but the limitation is that it often leads to point solutions that deliver incremental efficiency gains rather than transformative change. If AI automates just one step in a process, for example, it might save an hour of human work and reduce costs. That’s useful and worth doing, but it doesn’t fundamentally change the business. Much of today’s AI deployment falls into this category—incremental improvement rather than full transformation. To unlock real value, companies need to look beyond optimizing individual tasks and start reimagining entire workflows.
Doing so requires top-down leadership. Often no single person working on one step has the authority to reshape the entire process, which is why executive-level direction becomes essential. Real impact comes from tailoring AI strategy to each organization’s specific context rather than following generic industry playbooks.
There is a growing debate about whether we are in the midst of an AI bubble or simply an early infrastructure build-out comparable to the internet era. How do you distinguish between speculative hype and genuinely durable AI value being created today?
At the application layer, I don’t think we’re in an AI bubble. AI is expanding rapidly across business use cases—how we process legal and technical documents, manage customer success workflows, conduct research, and much more. I would like to see more investment in AI applications and inference infrastructure. Right now, there simply isn’t enough inference capacity, and rate limits remain a real concern.
The more interesting question about a potential bubble sits in the model training layer, where infrastructure spending continues to surge. If any risk exists, it’s highest there, because the largest investments are concentrated among a small number of players. When companies build highly specialized training hardware that can be repurposed for inference only at some inefficiency, the risk of overbuilding increases. I don’t think we’re overbuilding right now, but if any part of the AI market faces that possibility, it’s the training layer.
As the industry moves beyond a single-model mindset toward more diverse agentic systems, how should enterprises think about AI architecture? Is there likely to be one dominant framework for building scalable, real-world AI systems—or will organizations need a more flexible approach?
Software can range from five lines of code to massive systems that run for years. Because of that range, there won’t be a one-size-fits-all approach to building or governing these systems. Just as we don’t use a single framework to manage everything from simple scripts to enterprise platforms, we won’t rely on one architecture for agentic AI. Human work itself is incredibly diverse—from basic tasks like spell-checking to analyzing complex financial documents. Since the work varies so much, the AI systems we build will also need to vary.
One principle my teams follow when building agentic AI systems is speed, because continuous improvement is essential. Our typical cycle involves building carefully to avoid major risks, testing with users, gathering feedback, and refining the system until it truly works well. That rapid loop is what helps teams build reliable, high-performing systems faster.
Agentic AI is rapidly increasing systems’ ability to reason and act with limited human intervention. Does the rise of agentic architectures meaningfully accelerate the path toward AGI, or are we still far from true general intelligence?
Most of the public thinks of AGI as AI that is as intelligent as people, and one useful definition is AI that can perform any intellectual task a human can. You and I could learn to fly an airplane with maybe 20 hours of training, learn to drive a truck through a forest, or spend a few years writing a PhD thesis. Most humans can do these things. We’re still very far from AI meeting that definition of AGI.
Under alternative definitions that some businesses have put forward—definitions that dramatically lower the bar—you could argue we have already achieved AGI. There’s a good chance that under these lower-bar definitions, some businesses will soon try to declare success. But that won’t mean AI has reached human-level intelligence; it will simply mean the definition has been reworked to fit a much lower threshold.
Maybe a year ago, AGI felt 50 years away. Over the past year, perhaps we’ve made a solid 2% of progress, with another 49 years to go. These numbers are metaphorical, so don’t take them too seriously. [Laughs] But we are closer than before, yet many decades away from an AI that matches human intelligence. If you stick with the original definition—aligned with what people genuinely imagine AGI to be—we remain very, very far away.
Is geopolitical fragmentation reshaping global AI strategy for both governments and enterprises?
One of the other big themes I’m seeing is sovereign AI. The world is becoming more fragmented, and there’s a lot of discussion about how nation-states want to make sure they have access to AI without depending on other nations, or on any single company they may not fully trust in the long term. Governments and regions are thinking carefully about how to build and maintain their own AI capabilities so they can remain competitive and secure.
As AI becomes more central to economic growth and national security, this question of who controls the infrastructure and models becomes much more important. So alongside enterprise adoption, there’s also a growing geopolitical dimension to AI deployment.
In 2026, as enterprises search for real economic returns from AI, what leadership decisions and workforce shifts will ultimately determine whether organizations capture meaningful value from agentic systems?
Leadership matters. When I work with CEOs, I see decisive moments when the C-suite must think strategically about what to invest in and then place those bets thoughtfully, guided by a clear understanding of what the technology can and cannot do—not just the surrounding hype. In periods of transformation, leadership decisions determine whether an organization captures real value from AI or merely experiments at the margins.
I often speak with CEOs before they set a major strategic direction. No one knows exactly where AI will be in a few years, so we are operating in a kind of fog of war. But uncertainty does not mean we don’t know anything. Teams and partners who understand the technology well can narrow that uncertainty significantly and make far more informed decisions.
At the same time, everyone should learn to code—or at least learn to build software with AI. AI has lowered the barrier to creating custom tools. Today my marketers, recruiters, HR professionals, and financial analysts who use AI to write code are already more productive than those who do not. When I hire, I increasingly prefer people who know how to build with AI assistance. I may have been early on this shift, but I now see more startups and established companies moving in the same direction.
Just as it became unthinkable to hire someone who could not search the web or use email, I am already at the point where I hesitate to hire knowledge workers who cannot use AI to build or automate with code.