
Amara’s Law, coined by the American scientist and futurist Roy Amara, says humans “tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” If the first half of 2025 is anything to go by, in the AI era, the “runs” are getting shorter, and the effects of the technology will be larger than we’ve seen in a generation.
In a matter of months, the conversation in companies has accelerated far beyond whether AI is a useful productivity tool to where and when it can be applied. Across industries and geographies, executives are acknowledging that AI is a general-purpose business solution, not just a technical one.
Despite widespread workplace adoption, the focus on cybersecurity has not kept pace. In the rush to adopt AI systems, applications, and agents, companies are failing to consider that rapid deployment of these new technologies could lead to data breaches and other security risks.
That matters because AI models are not only getting more powerful but also more useful for enterprises.
More enterprises are using AI agents
As of early June, OpenAI’s base of “paying business users” reached 3 million, up from 2 million in February. In a move for that market, ChatGPT can now connect to popular business apps such as Google Drive, Dropbox, and SharePoint, allowing workers to quickly access answers that are locked in dispersed documents and spreadsheets.
Confusion, and even fear, about AI agents has given way to exploration and adoption. Among US-based organizations with annual revenues of $1 billion or more, 65% were piloting AI agents in the first quarter of this year, up from 37% just one quarter earlier.
Microsoft’s Azure AI Foundry, its platform for building AI agents, processed 100 trillion tokens in the first three months of 2025 (with one token representing the smallest unit of text that an AI model processes)—a five-fold increase year-on-year. At the same time, the cost per token more than halved, spurring higher use and creating virtuous cycles of innovation.
As John Chambers, the former CEO of Cisco, says, AI is this generation’s internet revolution but “at five times the speed, with three times the outcome.” Beyond the hype that haunts the sector, there are signs of enterprise AI adoption everywhere.
In his latest letter to shareholders, Alex Karp, CEO of Palantir Technologies, describes a “ravenous whirlwind of adoption” of AI. IBM, which has rolled out its AI strategy to 270,000 employees, reports that AI already handles 94% of routine human resources tasks.
At Shopify, the e-commerce group, “AI usage is now a baseline expectation,” CEO Tobias Lütke said in an employee memo. The workplace automation company Zapier, which took steps to embed AI across its workforce, says that 89% of employees actively use AI in their daily work.
The list goes on—and it’s not just technology companies. JP Morgan, the world’s largest bank, has rolled out GenAI tools to 200,000 staff members, and says employees have each gained one to two hours of productivity per week.
AI acquisitions are plentiful
The shift from novel to mass-market tech is reflected in the business strategies of the main AI model makers, which are reimagining themselves as application companies. In the space of two weeks, OpenAI, the ChatGPT parent, appointed a CEO of Applications and then acquired io, the AI device startup founded by former Apple designer Jony Ive, for $6.5 billion.
Meta, perceived to be behind in the AI race, has invested $14.3 billion in Scale AI, which provides data and evaluation services to develop applications for AI. Meanwhile, Apple is reported to have had internal talks about buying Perplexity AI, a two-and-a-half-year-old AI model maker.
AI app security is rarely discussed
Companies are naturally focused on the potential and performance of AI systems, but it’s striking how rarely security is part of the story. The reality is that the speed of deployment of AI apps and agents is leaving companies exposed to breaches, data loss, and brand damage.
For example, an AI system or agent that has access to employee HR data or a bank’s internal systems leaves a company open to possible cyberattacks by bad actors. In business-critical applications, risks emerge at every stage of the development cycle, from choosing which AI model to use and what systems to give it access to, right through to deployment and daily use.
In our work on testing the security of AI models with simulated attacks—known as red-teaming—and creating the CalypsoAI Model Security Leaderboards, we have discovered that, despite performance improvements, new or updated AI models are often less secure than existing ones. At the same time, existing models can see their security score slip over time.
Why? Because attacks keep evolving and bad actors keep learning new tricks. New techniques for breaking or bypassing AI model safeguards are invented all the time. Put simply, attack methods are improving, and they are causing even recently launched AI models to become less secure.
That means that organizations that begin using an AI system or agent today, but don’t stay up to date with the latest threat intel, will be more vulnerable as attack techniques increase in capability and frequency. As corporate AI systems gain autonomy and access to sensitive data, what is safe today may not be safe tomorrow.
The research firm Gartner has forecast that 15% of day-to-day business decisions will be made autonomously by agents by 2028, and that figure may yet prove conservative. Against that backdrop, virtually all the security protocols and permissions in enterprises are built for human workers, not for AI agents that can roam through company networks and learn on the job.
That mismatch opens up vulnerabilities, such as the possibility of agents accessing sensitive information and sharing it inappropriately. Poorly secured agents will be prime targets for hackers, particularly where they have access to valuable data or functions such as money transfers. The consequences include financial loss and reputational damage.
Final thoughts
Securing these new systems will be critical to AI adoption and to successful return on investment for the companies involved. A new security paradigm, using the capabilities of agentic AI to secure enterprise AI, is needed to allow innovation to thrive and agents to reach their potential.
While the development of AI models and systems so far can reasonably be summarized as “better, cheaper, less secure,” the final part of that equation must improve significantly as the emerging application-first AI era accelerates. Once that happens, Roy Amara seems certain to be proven right once again.
Donnchadh Casey is CEO of CalypsoAI.