Risk with ERP AI from SAP
Simple Finance fraud after Cum-ex
To put it simply, the perpetrators of the Cum-ex tax fraud read and analyzed the relevant tax laws until they found a supposed loophole. Many experts were unsure about the assessment for many years; only a comprehensive review classified this procedure as tax fraud.
Put simply, a large language model (LLM), the foundation of almost all generative AI, does little else. Billions of pieces of data are read and statistically recorded; AI has more to do with statistics than many users believe. The result is an answer based on the processed data. You can imagine that a generative AI, after studying the relevant literature, would probably have discovered the tax loophole as well.
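The statistical core of this idea can be illustrated with a toy example. The sketch below is not how a real LLM works internally, only a minimal bigram model, an assumption for illustration: it reads a tiny corpus, counts which word most often follows which, and "answers" with the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for "billions of pieces of data".
corpus = "the model reads data and the model records statistics and the model answers".split()

# Count bigram frequencies: which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "model", the most frequent successor in this corpus
```

The answer is statistically correct with respect to the data the model has seen, which is exactly the point: nothing in the mechanism asks whether the answer is moral, economical, or legal.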
What kind of answers can an S/4 user expect when a large language model is supplied with data from all SAP Simple Finance installations? The answer does not have to be morally or economically correct. It does not even have to be legal, see the Cum-ex tax fraud. But the answer will, of course, be statistically correct in terms of the AI algorithm and the LLM.
Generative AI without a meta level
Of course, generative AI cannot be a moral compass, but a few days ago Google's Gemini AI software demonstrated just how far AI answers can stray from reality, despite the best intentions, transparency, and diversity.
The Google program was asked to show soldiers in German uniforms from World War II. In the age of diversity, this resulted in images of soldiers with Asian facial features and dark skin. Another request resulted in female popes: perhaps wishful thinking, but not reality. Google has promised to revise the AI image generator. Read more on the website of the German-language magazine Handelsblatt.
Many AI programs lack a meta level that takes corrective action. A simple algorithm on the history of the Catholic Church would have made a female pope very unlikely, despite all sympathy for diversity! The real world cannot be explained and represented by statistics alone, even if these functions are extremely sophisticated and demanding: a Ph.D. for the AI is the minimum requirement.
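Such a meta level need not be sophisticated. The sketch below is a hypothetical rule layer, assumed for illustration only, that vetoes generated answers contradicting a small set of known historical facts before they reach the user:

```python
# Hypothetical meta-level check: before an AI answer is shown,
# a simple rule layer vetoes outputs that contradict known facts.
# The rule set and its contents are assumptions for illustration only.

HISTORICAL_RULES = {
    # claim pattern -> is it plausible given the historical record?
    "female pope": False,
}

def plausibility_check(answer: str) -> bool:
    """Return False if the answer contains a known-implausible claim."""
    for claim, plausible in HISTORICAL_RULES.items():
        if claim in answer.lower() and not plausible:
            return False
    return True

print(plausibility_check("An image of a female pope in 1450"))  # -> False (vetoed)
print(plausibility_check("A portrait of a medieval pope"))      # -> True (passes)
```

The point of the sketch is the architecture, not the rules: a deterministic corrective layer sits between the statistical model and the user, which is precisely what many generative AI products currently lack.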
SAP shirks all responsibility
AI also plays a crucial role in SAP's ERP world. SAP invested in the German startup Aleph Alpha, but failed to provide answers on how AI customizing could work with S/4. To many experts, this sounds like "we want to, but don't dare". SAP does not currently provide any kind of risk assessment or risk avoidance. What happens if the Simple Finance AI makes irregular recommendations?
The DSAG (German-speaking SAP user group) is apparently less critical of SAP's shifting of responsibility to users. Apparently, it is enough to provide tools. Whatever mischief users might do with them should be their own responsibility, right?
In this context, the DSAG sees the announcement of a generative SAP AI Hub at the last SAP TechEd in Bangalore, India, as positive. The existing SAP AI Core and SAP AI Launchpad on the SAP Business Technology Platform (BTP) will be extended by this generative AI Hub to control the connection to external LLMs, initially from OpenAI. Billing via a separate pricing model (AI units) is also in the works. To summarize: SAP facilitates the use of ChatGPT (OpenAI), charges for it, but assumes no liability or responsibility.
SAP's new focus on business AI is also evident in the launch of SAP Joule. This voice-controlled AI assistant based on generative AI is designed to understand business context and be integrated directly into the cloud portfolio for business-critical processes of user companies. In his keynote speech at the DSAG Annual Congress 2023, Thomas Saueressig, SAP board member for product engineering, had already specified this focus on business processes in connection with AI. According to the software maker, SAP will increasingly rely on partners such as OpenAI. What SAP did not mention at the DSAG Annual Congress 2023 in Bremen, however, is the assumption of responsibility, the presentation of risks, and the prevention of damage if Joule gives a corrupt or illegitimate answer. What then?
"In principle, we welcome this strategic direction from SAP—especially in light of the dynamic developments around big models such as OpenAI, etc. However, we still have a few unanswered questions from a commercial, professional, and technological perspective," says DSAG CTO Sebastian Westphal, urging a little more responsibility and risk assessment. It must be verifiable that valid guidelines are implemented and documented when AI makes process decisions. From a technological perspective, it would be important to know how sensitive company information is handled when AI is used. How does the AI learn? What data does the AI use? This is what SAP should focus on next: indirect use.