When it comes to artificial intelligence risks, it's better to ask more than just the EU
Artificial intelligence (AI) is revolutionizing the economy and society. The USA is leading the way, while China is rapidly catching up - and plans to invest more than 150 billion US dollars by 2030. Europe must therefore take care not to fall behind.
The EU's announcement that it now intends to invest 20 billion euros per year in this future technology should be read in this light. So states the "White Paper on Artificial Intelligence" that the EU Commission recently published. The sum is to be raised jointly by the EU, the member states and companies - how exactly remains to be seen.
Beyond financing, the AI White Paper also addresses a number of other fundamental issues, such as the risk of using AI. On this point the EU remains vague - and thereby effectively invites further thought on risk minimization, which should cover both the prerequisites for sensible AI applications and starting points for companies.
According to the White Paper, the necessary trust is to be created by AI legislation that is proportionate to the risks but does not stifle innovation. The EU sees high risk in the healthcare, transport, police and justice sectors.
An AI solution is classified as critical if it is likely to have legal consequences, endanger life, or cause damage or injury. The White Paper cites medical technology, automated driving and decisions on social security benefits as examples of risky uses of AI.
The EU is now calling for strict regulations governing conformity testing, controls and sanctions to ensure that "AI systems with high risk are transparent, traceable and under human control". The Commission argues:
"Authorities must be able to check AI systems in the same way as cosmetics, cars and toys." Other AI applications could be labeled voluntarily. Thinking further, the regulations mean for the healthcare sector, for example: Only expert systems may be used.
These make decisions according to defined rules and work transparently, but they do not recognize patterns in X-ray images and do not learn. The situation is different with machine learning applications that use neural networks.
There, a neuron is modeled as a function with inputs, parameters and an output. Photos, texts, numbers, videos or audio files serve as input data; they train the model to recognize patterns on its own, deliver increasingly better results and ultimately evaluate unknown data.
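To make this contrast concrete, here is a minimal Python sketch - not taken from the White Paper, and with purely hypothetical function names and thresholds: an expert system decides via an explicit, auditable rule, while a single neuron computes its output from learned parameters that a training step adjusts from data.

```python
# Minimal illustrative sketch (hypothetical names and thresholds):
# a transparent rule-based decision versus a single learned "neuron".

# Expert system: the decision logic is explicit and can be audited line by line.
def approve_benefit(income: float, household_size: int) -> bool:
    # Hypothetical threshold rule, fully defined in advance.
    return income < 15_000 * household_size

# A neuron as a function: output = activation(weights . inputs + bias).
def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 if z > 0 else 0.0  # simple step activation

# Training nudges the parameters based on example data instead of fixed rules
# (one perceptron-style update step shown here).
def train_step(inputs, weights, bias, target, lr=0.1):
    error = target - neuron(inputs, weights, bias)
    weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    bias = bias + lr * error
    return weights, bias
```

The rule can be read and checked at a glance; the neuron's behavior, by contrast, lives in parameters shaped by training data - which is precisely where the EU's demands for transparency, traceability and human oversight come in.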
Comprehensible certification is the key to creating the necessary trust in AI systems. It should not be the sole task of politicians to define the criteria for certification.
Business and research are also called upon. The EU has provided the template for this with its AI White Paper. Everyone involved should also consider what happens if a system's intended use changes.
Companies, for their part, have a duty to establish a suitable data infrastructure and the expert knowledge needed for the sensible use of AI. Those who fail to do so will significantly weaken their ability to innovate. This is the AI risk that companies themselves are responsible for - and can shape.