AI - Hype or Revolution?
The term artificial intelligence was coined as early as the 1950s. However, the initially very high expectations could not be met, largely because the necessary computing power was lacking, and a long period known as the AI winter followed.
Since the turn of the millennium, the basic prerequisites for AI have improved considerably. First, large amounts of data for training neural networks have become available from a variety of sources, above all the Internet ("big data").
Second, computing capacity has grown exponentially and is now widely accessible thanks to highly parallel GPU architectures. Third, there have been enormous advances in algorithms.
The breakthrough in public awareness came in 2016, when Google DeepMind's AlphaGo, a system built from multiple neural networks, defeated Go champion Lee Sedol, a feat that even experts had not expected for another decade.
Today, we are in a similar situation with AI as we were with the Internet in the early 1990s: we know that huge changes are coming, even though their exact effects cannot yet be fully foreseen.
There is a similar spirit of optimism in research and industry, and AI is already ubiquitous in both business and consumer applications.
Deep learning is used in voice assistants such as Alexa and Siri to recognize spoken language, analyze the question, generate an answer, and output it again as natural-sounding speech.
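As a rough illustration of this pipeline, the following Python sketch chains four stages: speech recognition, question analysis, answer generation, and speech synthesis. All function names and the keyword-based logic are invented placeholders for this article; in a real assistant, each stage would be backed by its own deep learning model.

```python
# Minimal sketch of a voice-assistant pipeline (illustrative only).
# Each stage stands in for a separate deep learning model; the function
# names and the trivial keyword "logic" below are invented placeholders.

def transcribe(audio: bytes) -> str:
    """Speech recognition: audio in, text out (stubbed)."""
    return "what is the weather like today"

def analyze_question(text: str) -> dict:
    """Language understanding: map the text to an intent (stubbed)."""
    intent = "weather_query" if "weather" in text else "unknown"
    return {"intent": intent, "text": text}

def generate_answer(analysis: dict) -> str:
    """Answer generation based on the detected intent (stubbed)."""
    if analysis["intent"] == "weather_query":
        return "Today it will be sunny with a high of 24 degrees."
    return "Sorry, I did not understand the question."

def synthesize_speech(text: str) -> bytes:
    """Text-to-speech: text in, audio out (stubbed)."""
    return text.encode("utf-8")  # placeholder for synthesized audio

def assistant(audio: bytes) -> bytes:
    """Chain the four stages: recognize, analyze, answer, speak."""
    text = transcribe(audio)
    analysis = analyze_question(text)
    answer = generate_answer(analysis)
    return synthesize_speech(answer)

if __name__ == "__main__":
    print(assistant(b"...raw audio..."))
```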
AI will permeate many areas. The most prominent application area at present, and probably the one attracting the greatest research effort, is autonomous driving. It is expected to become generally available between 2025 and 2030 and to revolutionize the entire logistics and personal transportation ecosystem.
Potentially, AI can be applied wherever patterns must be recognized or predictions made in problems too complex to be captured by explicitly hand-crafted algorithms. Examples include image recognition, face recognition, machine translation, speech-to-text conversion, intelligent chatbots, predictive maintenance, and many more.
In medicine, systems that detect cancer cells or evaluate radiological images already achieve detection rates that match or exceed those of doctors for certain tasks.
AI also has the potential to disrupt many processes and business models. Companies, regardless of size or industry, would be well advised to take a close look at it now.
Companies need to build a general understanding of the technology and its possible uses at all levels of management and develop an AI strategy.
At the same time, they should gain experience through initial projects or pilots, exchange ideas with others and, where necessary, bring AI expertise into the company.
In Germany, a lively scene of start-ups and small and medium-sized enterprises (SMEs) has emerged that develops innovative products and services based on AI technologies.
The goal of the AI Association is to represent the interests of these companies and to play an active part in the discussion about AI's impact on society.
Among other things, this means shaping favorable framework conditions for AI companies and cooperating with research and industry, with the aim of building structures in Germany and Europe that can compete with providers from the USA and, increasingly, China.
It is also important that all parts of the population acquire a basic understanding of information technology. Among other things, this requires a change in education policy.
We will not only need more computer scientists in the future; everyone should know the basics of the technologies that surround us every day and understand how they interconnect.