
I recently had an unsettling rideshare experience. Let me paint the picture: a Tesla doing its self-driving thing, a guy just sitting in the driver’s seat “supervising,” and a terrified human (me) in the backseat looking on in horror.
Finally, I said, “Please keep your hands on the wheel when you’re driving me, OK?”
Tesla’s autonomous functionality might be safe, but I don’t have enough trust yet to allow a Tesla to get me from Point A to Point B without a human steering it.
There’s a parallel between self-driving cars and the current perceptions of AI and agents. You might be comfortable letting one of these automobiles make a simple right-hand turn, but turning left to cross through busy traffic? Probably not so much.
We’re in the early days of agentic transformation: the shift from traditional software to a more autonomous enterprise that runs on software able to act independently. Businesses are eager to embed agents in their processes to make operations more efficient. Yet when it comes to trust, or the lack of it, we’re in a remarkably similar place to autonomous vehicles.
Implementing simple agentic pilot use cases is one thing. Yielding control of critical workflows is another.
WHAT RESEARCH TELLS US
This trust gap isn’t just a hunch on my part. It’s backed by respected research.
In a recent KPMG International study of more than 48,000 people in 47 countries, 66% of respondents said they regularly use artificial intelligence, yet 54% were unwilling to trust it. A McKinsey & Company report described something similar, calling it the GenAI paradox: almost eight in 10 companies use generative AI, but roughly the same share report no significant bottom-line impact. This is why the biggest AI challenge isn’t technical, the report stated. Instead, “It will be human: earning trust, driving adoption, and establishing the right governance to manage agent autonomy and prevent uncontrolled sprawl.”
AI adoption is happening, but it’s not happening with great confidence. Every business leader should also think about a Pew Research Center study that found a vast chasm between the views of AI experts and the general public. The public is far less optimistic and enthusiastic about the technology.
Bridging this skepticism divide will be the difference between success and failure for businesses as they “agentify” their operations.
So, we still don’t fully trust AI to make meaningful decisions for us. We’ve all heard the stories of AI hallucinations and of businesses rolling out initiatives that, in hindsight, weren’t ready for prime time. Caution is warranted, but caution is not the same thing as moving slowly. The companies that figure out sooner how to use agents adeptly in their businesses will be the ones that move faster, execute smarter, and operate leaner.
Building trust is the key that unlocks it all.
WHY TRUST IS THE HARD PART
Of course, distrust of technology predates AI’s arrival. I’ve worked in software for three decades. For much of that time, the focus has been on digital transformation: making businesses more efficient by digitizing their processes. One of the biggest obstacles companies have long faced is a deep distrust of their own data.
Incomplete, inaccessible, or inaccurate data can paralyze an organization. Businesses don’t know what to trust. When facing critical decisions that can shape a company’s trajectory, some of the most intense leadership team discussions are about whether to believe what the data is saying. I’ve been part of those conversations.
Now, as we shift into agentic transformation, failing to get the data right can make the problem 10 times worse. AI models and agents use the available data as fuel, making decisions and generating outputs based on probabilities and likelihoods, often in a “black box” environment that lacks transparency. Because AI responses and actions rest on those probabilities, they will never be 100% accurate. (Much like humans, by the way.) But AI can be made as trustworthy as possible. It’s all predicated on:
- Accurate data.
- Access to information.
Without a strong foundation for managing data and fully connecting systems so that information moves where it’s needed, you won’t have the conviction required to support your AI initiatives. You’ll understandably doubt the actions agents are taking within your organization, and externally on your business’s behalf.
WHAT EVERY LEADER SHOULD CONSIDER
Change only happens at the speed of trust. If we believe in something, we’ll use it. Building confidence in AI models and agents requires control and governance. It starts with the foundation I mentioned: well-managed data and well-integrated systems. Solving the age-old “garbage in, garbage out” problem of poor data is a crucial first step. It will give AI what it needs to make more accurate and responsible decisions.
Then there are the agents themselves. We’ve reached a point where every organization can build agents, and every vendor is making them part of their products. But something else is more essential: managing them.
You need to know about every agent in your operations. You’ll need visibility into what they’re doing and how they’re performing their assigned tasks. If they’re not acting as expected, you must be able to fix the issue quickly.
These guardrails are the foundation on which trust is built as this new agentic world evolves and matures. For leaders, that means striking the right balance between speed and responsible innovation, so that AI enhances efficiency while amplifying human capability. Human interactions, such as helping customers and managing employees, will always be the more challenging “left-hand turns,” where we want people making the decisions that require empathy and judgment.
Just as we’re moving toward a self-driving car that inspires unequivocal trust, we’re also on a path to the self-driving enterprise, powered by agents. For now, though, most of us aren’t yet willing to take our hands off our business’s steering wheel.
Agents have to earn that kind of trust through governance.
Steve Lucas is chairman and CEO of Boomi.