
Generative AI is evolving along two distinct tracks: on one side, savvy users are building their own retrieval-augmented generation (RAG) pipelines, personal agents, or even small language models (SLMs) tailored to their contexts and data. On the other, the majority are content with “LLMs out of the box”: Open a page, type a query, copy the output, paste it elsewhere. That divide — between builders and consumers — is shaping not only how AI is used but also whether it delivers value at all.
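To make the builder track concrete, here is a deliberately naive sketch of the RAG pattern: retrieve the most relevant snippets from your own documents, then hand them to the model alongside the question. The word-overlap scoring and the stubbed call_llm are illustrative stand-ins, not a real pipeline; production systems use embeddings, a vector store, and an actual model API.

```python
# Minimal RAG sketch: retrieve context from your own documents, then
# prompt the model with it. Scoring is naive word overlap on purpose;
# call_llm is a placeholder for whatever model API you actually use.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, context: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

def call_llm(prompt: str) -> str:
    # Stand-in for a real API or local model call.
    return f"[model response to: {prompt[:50]}...]"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available weekdays from 9 to 5.",
    "The Q3 report flagged rising churn in Europe.",
]
question = "What is our refund policy?"
print(call_llm(build_prompt(question, retrieve(question, docs))))
```

The point is the shape, not the scoring: the model answers from your data, in a pipeline you control end to end.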
The difference is not just individual skill. It’s also organizational. Companies are discovering that there are two categories of AI use: the administrative (summarize a report, draft a memo, produce boilerplate code) and the strategic (deploy agentic systems to automate functions, replace SaaS applications, and transform workflows). The first is incremental. The second is disruptive. But right now, the second is mostly failing.
Why 95% of pilots fail
The Massachusetts Institute of Technology recently found that 95% of corporate GenAI pilots fail to deliver measurable returns. The reason? Most organizations are avoiding “friction”: They want drop-in replacements that work seamlessly, without confronting the hard questions of data governance, integration, and control. This pattern is consistent with the Gartner Hype Cycle: an initial peak of inflated expectations followed by a trough of disillusionment as the technology proves more complex, messy, and political than promised.
Why are so many projects failing? Because large language models from the big platforms are black boxes. Their training data is opaque, their biases unexplained, their outputs increasingly influenced by hidden incentives. Already, there are companies advertising “SEO for GenAI algorithms,” or “Answer Engine Optimization” (AEO): optimizing content not for truth, but to game the invisible criteria of a model’s output. The natural endpoint is hallucinations and sponsored answers disguised as objectivity. How will you know whether an LLM recommends a product because it’s correct or because someone paid for it to be recommended?
For organizations, that lack of transparency is fatal. You cannot build mission-critical processes on systems whose reasoning is unknowable and whose answers may be monetized without disclosure.
From “out of the box” to “personal assistant”
The trajectory for savvy users is clear. They are moving from using LLMs as is toward building personal assistants: systems that know their context, remember their preferences, and integrate with their tools. That shift introduces a corporate headache known as shadow AI: employees bringing their own models and agents into the workplace, outside of IT’s control.
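What does such an assistant look like, stripped to its skeleton? A minimal sketch follows, assuming nothing more than a local preferences file and two stub tools; the tool registry and routing logic here are illustrative, not any particular framework’s API.

```python
# Toy "personal assistant" skeleton: persistent preferences plus tools
# the agent can invoke. Everything here is a stand-in, not a framework.
import json
import pathlib

MEMORY = pathlib.Path("assistant_memory.json")

def remember(key: str, value: str) -> None:
    """Persist a preference across sessions, unlike a bare chat page."""
    state = json.loads(MEMORY.read_text()) if MEMORY.exists() else {}
    state[key] = value
    MEMORY.write_text(json.dumps(state))

def recall() -> dict:
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else {}

# Stub tools the assistant can call on the user's behalf.
TOOLS = {
    "calendar": lambda arg: f"Meeting '{arg}' scheduled.",
    "search": lambda arg: f"Top result for '{arg}'.",
}

def assistant(request: str) -> str:
    prefs = recall()  # context the assistant carries with it
    tool = "calendar" if "schedule" in request else "search"
    return f"(prefs: {prefs}) {TOOLS[tool](request)}"

remember("timezone", "CET")
print(assistant("schedule design review"))
```

Note what makes this a governance problem: the memory file travels with the employee, not with the company.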
I argued in a recent piece, “BYOAI is a serious threat to your company,” that shadow AI is the new shadow IT. What happens when a brilliant hire insists on working with her own model, fine-tuned to her workflow? Do you ban it (and risk losing talent) or do you integrate it (and lose control)? What happens when she leaves and takes her personal agent, trained on your company’s data, with her? Who owns that knowledge?
Corporate governance was designed for shared software and centralized systems. It was not designed for employees walking around with semiautonomous digital companions trained on proprietary data.
SaaS under siege
At the same time, companies are beginning to glimpse what comes next: agents that do not just sit alongside software as a service (SaaS) but replace it outright. With enterprise resource planning systems, you work for the software. With agents, the software works for you.
Some companies are already testing the waters. Salesforce is reinventing itself through its Einstein 1 platform and, more pointedly, Agentforce, effectively repositioning customer relationship management, or CRM, around agentic workflows. Klarna has announced it is dropping SaaS providers such as Salesforce and Workday and replacing them with internally built AI. These first attempts may not succeed, but the direction is unmistakable: Agents are on a collision course with the subscription SaaS model.
The key question is whether companies will build these platforms on black boxes they cannot control, or on open, auditable systems. Because the more strategic the use case, the higher the cost of opacity.
Open source as the real answer
This is why open source matters. If your future platform is an agent that automates workflows, manages sensitive data, and substitutes for your SaaS stack, can you really afford to outsource it to a system you cannot inspect?
China provides a telling example. Despite being restricted from importing the most advanced chips, Chinese AI companies, under government pressure, have moved aggressively toward open-source models; DeepSeek and Alibaba’s Qwen are the most visible examples. The results are striking: They are catching up faster than many expected, precisely because the ecosystem is transparent, collaborative, and auditable. Open source has become their work-around for hardware limits, and also their engine of progress.
For Western companies, the lesson is clear. Open source is not just about philosophy. It’s about sovereignty, reliability, and trust.
The role of hybrid clouds
Of course, there is still the question of where the data lives. Are companies comfortable uploading their proprietary knowledge into someone else’s black-box cloud? For many, the answer will increasingly be no. This is where hybrid cloud architectures become essential: They allow organizations to balance scale with governance, keeping sensitive workloads in environments they control while still accessing broader compute resources when needed.
Hybrid approaches are not a panacea, but they are a pragmatic middle ground. They make it possible to experiment with agents, RAG pipelines, and SLMs without surrendering your crown jewels to a black box.
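In code, the hybrid principle is just a routing decision. A minimal sketch, assuming a keyword-based sensitivity check and placeholder model endpoints (neither is any vendor’s real API):

```python
# Hybrid routing sketch: sensitive prompts stay on a model you host;
# everything else may use external compute. The sensitivity check and
# both "endpoints" are placeholder assumptions, not a product's API.

SENSITIVE_MARKERS = ("customer", "salary", "contract", "credential")

def is_sensitive(prompt: str) -> bool:
    return any(marker in prompt.lower() for marker in SENSITIVE_MARKERS)

def run_local(prompt: str) -> str:
    return f"[self-hosted open model answers: {prompt!r}]"  # e.g., an SLM

def run_cloud(prompt: str) -> str:
    return f"[external API answers: {prompt!r}]"  # larger frontier model

def route(prompt: str) -> str:
    return run_local(prompt) if is_sensitive(prompt) else run_cloud(prompt)

print(route("Summarize this public press release."))
print(route("Draft an email about the customer contract dispute."))
```

Real deployments classify with policy engines and data labels rather than keywords, but the architecture is the same: the boundary is yours to draw.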
The way forward
Generative AI is splitting in two directions. For the unsophisticated, it will remain a copy-and-paste tool: useful, incremental, but hardly transformative. For the sophisticated, it’s becoming a personal assistant. And for organizations, potentially, a full substitute for traditional software.
But if companies want to make that leap from administrative uses to strategic ones, they must abandon the fantasy that black-box LLMs will carry them there. They won’t. The future of corporate AI belongs to those who insist on transparency, auditability, and sovereignty, which means building on open source rather than on proprietary opacity.
Anything else is just renting intelligence you don’t control while your competitors are busy building agents that work for them, not for someone else’s business model.