Agentic AI systems are expected to generate immense economic value, yet widespread organizational adoption is being held back by a significant and growing trust deficit. According to new research from Capgemini, AI agents are poised to generate a total economic value of $450 billion by 2028 in surveyed countries, a figure that includes both revenue uplift and cost savings. Despite this potential, the report, titled Rise of agentic AI: How trust is the key to human-AI collaboration, reveals that trust in autonomous systems is declining sharply, and most organizations are still in the earliest stages of implementation.
The state of adoption and economic potential
While the benefits of agentic AI are clear, including elevated operational efficiency and accelerated business growth, adoption remains in its infancy. Only 2% of organizations have implemented AI agents at scale, and another 12% have achieved partial-scale implementation. The majority are much further behind: 23% are piloting some use cases, 30% have just started exploring the potential, and a further 31% are only considering experimenting within the next six to twelve months.
When organizations do implement the technology, most prefer to use agents already available within existing enterprise solutions: 62% prefer to partner with solution providers such as SAP or Salesforce and use their built-in agents, while a smaller but still substantial share, 53%, opts for a hybrid approach that combines in-house development with external solutions. The business functions expected to see the greatest adoption of AI agents are customer service, IT, and sales. Looking ahead, organizations expect the role of these autonomous systems to grow, with the share of business processes handled by AI agents projected to rise from 15% over the next year to 25% within one to three years.
A crisis of trust and readiness
Despite the clear potential, several significant barriers are impeding the widespread adoption of agentic AI. The most critical of these is a sharp decline in trust. The share of organizations expressing trust in fully autonomous AI agents has plummeted from 43% to just 27% in the past year alone. This erosion of confidence is coupled with significant ethical concerns; nearly two in five executives believe the risks of implementing AI agents may outweigh the potential benefits.
This trust deficit is compounded by a lack of internal knowledge and technical readiness. Only half of the organizations surveyed report having adequate knowledge and understanding of AI agents and their capabilities. The technological foundation required to support these systems is also lagging. Fewer than one in five organizations report having high levels of data readiness, and a vast majority—over 80%—lack a mature AI infrastructure.
To overcome these issues, the research points to human-agent collaboration as the key to building trust. Organizations report that keeping humans involved in processes handled by AI agents delivers significant benefits, including a 65% increase in employee engagement on high-value tasks and a 53% increase in creativity. The dominant model for this collaboration is also expected to evolve: over the next 12 months, most organizations envision AI agents augmenting human team members, but within one to three years the prevailing model is expected to shift toward AI agents acting as integrated members of human-supervised teams.