Self-commitment by industry is not enough
After extremely positive initial experiences in individual areas, companies are increasingly using AI strategically, because they view it as a competitive factor.
So does more AI ultimately lead to more success? Possibly. But in the race for the top positions - whether in business or in science - those involved should be skeptical of an unbridled build-up of AI.
After all, not everything that is technically possible is desirable. Elon Musk, for example, sees AI as "the greatest threat to the survival of mankind," and the physicist Stephen Hawking argued as early as 2014 that humans would ultimately be "displaced."
The impact of AI becomes particularly clear when errors occur. In a recent test, for example, Amazon's facial recognition system falsely matched 28 members of the U.S. Congress with police mugshots.
Transfer this error rate - 28 of 535 members, roughly five percent - to the ambitions of the U.S. Department of Defense, and ethical doubts quickly become palpable: the Pentagon wants to equip drones and other weapons with AI so that they can identify targets themselves and "make decisions on their own."
Many AI researchers view such developments with revulsion; thousands of them have signed a pledge not to work on autonomous weapons systems. But what about the thousands who have not signed?
Danger also looms from a completely different direction: with frighteningly little effort, deceptively realistic fake images and videos can now be produced, even with free apps. It is hard to imagine what would happen in the age of fake news if a politician appeared to declare war on another country in a faked video.
Profiling social media users is no longer a hurdle for AI either. Combined with today's computing power, the technology can analyze gigantic volumes of data and recognize patterns in them.
One unforgotten example is Cambridge Analytica's unauthorized analysis of data from millions of Facebook profiles with the aim of influencing the 2016 US elections. Numerous other examples raise similar ethical questions.
Even the IT companies that pioneered AI are beginning to have second thoughts. Microsoft, for example, sees AI-based facial recognition as a threat to privacy and freedom of expression.
So is voluntary self-commitment by industry and research the right way to set ethical limits? Economic history has shown: unfortunately, no. Whether in the diesel scandal, the smoking ban or, most recently, the standardization of charging cables for smartphones: companies have consistently put potential sales advantages ahead of ethical behavior. It will be no different with AI.
Legal regulation is therefore indispensable. The German government's announced artificial intelligence strategy comes late, but it emphasizes the need for ethical standards in many places. The EU, too, has recently announced an AI action paper with a focus on ethics. Both moves are to be welcomed.
The question remains whether other governments also have an interest in restraining themselves in this way. The USA presented a strategic AI plan back in 2016 and placed strong emphasis on "ethical AI" in it.
It remains to be seen, however, how the country will reconcile its announced aggressive defense plans with this. China, which is not particularly squeamish about privacy - as its recent push for ubiquitous facial recognition shows - is likely to give ethical considerations even lower priority.
For many years there have been calls for a new economic order, one that, according to the vast majority of Germans, should replace growth at any price with greater justice and environmental protection. In view of the potential dangers of AI, ethics should rank just as high on that agenda.