The global and independent platform for the SAP community.

Self-commitment of the economy is not enough

Given the rapid success of AI, companies and scientists should think about ethical boundaries now - before it's too late. If necessary, legislators will have to intervene. But do governments always have an interest in doing so?
Dinko Eror, Dell EMC
October 4, 2018
AI Column
This text has been automatically translated from German to English.

After extremely positive initial experiences in individual areas, companies are increasingly moving toward using AI strategically, because they see it as a competitive factor.

So does more AI lead to more success in the end? Possibly. But in the race for the best positions - whether in business or science - those involved should be skeptical of unbridled AI upgrades.

After all, not everything that is technically possible is desirable. Elon Musk, for example, sees AI as "the greatest threat to the survival of mankind," and the physicist Stephen Hawking argued as early as 2014 that humans would ultimately be "displaced."

The impact of AI is felt most keenly when errors occur. Amazon's facial recognition system, for example, recently mistook 28 members of the U.S. Congress for people who had been arrested.

Transfer this five percent error rate to the ambitions of the U.S. Department of Defense, and ethical doubts quickly become palpable: The Pentagon wants to equip drones and other weapons with AI so that they can identify targets themselves and "make decisions on their own."

Many AI researchers view such developments with disgust; thousands of them have signed a voluntary commitment not to research autonomous weapons systems. But what do the other thousands think?

Danger also comes from a completely different area: with frighteningly little effort, deceptively realistic fake images and videos can now be produced, even with free apps. It is hard to imagine what would happen in the age of fake news if a fake politician declared war on a country in a fake video.

Even profiling social media users is no longer a hurdle for AI. Combined with today's computing power, the technology can analyze gigantic amounts of data and recognize patterns.

One case that has not been forgotten is Cambridge Analytica's unauthorized analysis of data from numerous Facebook profiles with the aim of influencing the 2016 U.S. elections. There are numerous other examples that raise ethical questions.

Even IT companies, the pioneers of AI, are beginning to have second thoughts. Microsoft, for example, sees AI-based facial recognition as a threat to privacy and freedom of expression.

So is a voluntary commitment by industry and research the right way to set ethical limits? Economic history has shown: unfortunately, no. Whether it was the diesel scandal, the smoking ban or, most recently, the standardization of charging cables for smartphones: companies have always valued potential sales advantages over ethical behavior. It will be no different with AI.

Legal regulation is therefore indispensable. The German government's announced strategy on artificial intelligence comes late, but emphasizes the need for ethical standards in many places. The EU has also recently announced an AI action paper that focuses on ethics. This is to be welcomed.

The question remains whether other governments also have an interest in restraining themselves in this way. The USA presented a strategic AI plan as early as 2016 and placed strong emphasis on "ethical AI" in it.

It remains to be seen, however, how it will reconcile its announced aggressive defense plans with this. China, a country not particularly squeamish about privacy, as its recent push for ubiquitous facial recognition shows, will tend to give ethical aspects less priority.

For many years, there have been calls for a new economic order. According to the vast majority of Germans, it should replace growth at any price in favor of greater justice and environmental protection. In view of the potential dangers of AI, ethics should also be high on the agenda.

Dinko Eror is senior vice president and managing director of Dell EMC in Germany.



Working on the SAP basis is crucial for a successful S/4 conversion.

This gives the Competence Center strategic importance for existing SAP customers. Regardless of the S/4 Hana operating model, topics such as automation, monitoring, security, application lifecycle management and data management form the basis for S/4 operations.

For the second time, E3 magazine is organizing a summit for the SAP community in Salzburg to provide comprehensive information on all aspects of S/4 Hana groundwork.

Venue

More information will follow shortly.

Event date

Wednesday, May 21, and
Thursday, May 22, 2025

Early Bird Ticket

Available until Friday, January 24, 2025
EUR 390 excl. VAT

Regular ticket

EUR 590 excl. VAT

Venue

Hotel Hilton Heidelberg
Kurfürstenanlage 1
D-69115 Heidelberg

Event date

Wednesday, March 5, and
Thursday, March 6, 2025

Tickets

Early Bird Ticket
Available until December 20, 2024
EUR 390 excl. VAT

Regular ticket
EUR 590 excl. VAT
The event is organized by the E3 magazine of the publishing house B4Bmedia.net AG. The presentations will be accompanied by an exhibition of selected SAP partners. The ticket price includes attendance at all presentations of the Steampunk and BTP Summit 2025, a visit to the exhibition area, participation in the evening event and catering during the official program. The lecture program and the list of exhibitors and sponsors (SAP partners) will be published on this website in due course.