

AI agents are undoubtedly on the rise, and their potential to transform companies is enormous. Experts around the world are convinced of this: in a representative study, Boomi surveyed 300 business and technology executives, including some from Germany. Nearly three quarters (73%) believe that AI agents represent the biggest change their company has seen in the last five years.
However, the study also shows that only two percent of the AI agents currently in use are fully responsible for their actions and subject to continuous, consistent governance. Conversely, this means that 98 percent operate under no governance rules, or at best inadequate ones. And this is precisely where the danger for companies lies: without control, the performance of AI agents cannot be properly channeled.
Responsibility for AI agents
Until recently, the prevailing opinion was that critical business areas, such as managing security risks or approving investments and budgets, required human expertise alone. This has changed with the rapid development of AI agents: managers are now increasingly willing to entrust these areas, at least in part, to an AI agent.
This places enormous responsibility on the technology. Managers and IT teams are no longer aware of all the ways in which sensitive data is being used, which can lead to breaches of security or compliance regulations. For any organization, such uncontrolled autonomy of AI agents is an unacceptable risk.
However, current standards for the governance of AI agents are inadequate; often, even the minimum requirements for a governance strategy are not met. For example, fewer than a third of the companies surveyed have a governance framework for AI agents, and only 29 percent offer regular training for employees and managers on their responsible use.
When it comes to specific processes such as bias assessment protocols or contingency planning for AI agent failures, even fewer companies are prepared (only around a quarter in each case). Companies therefore need to start treating digital employees, i.e. AI agents, the same way they treat human ones. For the latter, it is common practice to verify skills and to check backgrounds for ethical violations. AI agents should be held to the same standard and screened for a history of bias or hallucinations, for example, as the sketch below illustrates.
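To make that last point concrete, the following Python sketch shows what a minimal pre-deployment screening gate for an AI agent could look like: the agent is run against a small set of questions with known, grounded answers, and deployment is blocked if its failure rate exceeds a threshold. Everything here is an illustrative assumption rather than a method from the Boomi study; the ask_agent stub stands in for a real agent call, and the evaluation set and five-percent threshold are placeholders.

```python
# Hypothetical pre-deployment screening gate for an AI agent.
# The agent call, evaluation set and threshold are illustrative
# assumptions, not part of the Boomi study.

EVAL_SET = [
    # (prompt, substring the answer must contain to count as grounded)
    ("What year was the company founded?", "2001"),
    ("Which region generated the most revenue last quarter?", "EMEA"),
]

HALLUCINATION_THRESHOLD = 0.05  # block deployment above a 5% failure rate


def ask_agent(prompt: str) -> str:
    """Placeholder for the real agent call (e.g. an internal API)."""
    return "The company was founded in 2001."


def screen_agent() -> bool:
    """Run the agent against the eval set and return True if it passes."""
    failures = 0
    for prompt, expected in EVAL_SET:
        answer = ask_agent(prompt)
        if expected.lower() not in answer.lower():
            failures += 1
            print(f"FLAGGED: {prompt!r} -> {answer!r}")
    rate = failures / len(EVAL_SET)
    print(f"Hallucination rate: {rate:.0%}")
    return rate <= HALLUCINATION_THRESHOLD


if __name__ == "__main__":
    if not screen_agent():
        raise SystemExit("Agent failed screening; deployment blocked.")
```

In practice, a gate like this would run inside a deployment pipeline with a far larger evaluation set and separate checks for bias, and its results would be logged for audit, mirroring the background checks applied to human hires.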
Universal governance is mandatory
The universal governance of AI agents is not just a nice extra. It's essential for data security and for improving business performance. Companies with advanced governance perform better on a variety of key business metrics than those with only a basic level. They also protect themselves against compliance breaches, reputational damage and, ultimately, security incidents triggered by untested AI agents, along with the significant financial damage those incidents can cause. After all, in the current competitive environment, even a small advantage can make the difference between being ahead and falling behind.
Source: Boomi




