What's next in terms of technology

AI: The next winter is definitely coming

The general public still has a very vague picture of what AI technologies actually are - and what they can actually do. Even the term "artificial intelligence" quickly evokes false associations, up to and including scary scenarios like "Terminator" or "I, Robot". In fact, AI is about something that sounds far less "sexy": rudimentary learning methods for computers - "machine learning" (ML).

Even this term is misleading, because ML has little in common with learning as we know it. Humans are able to grasp relationships quickly and from relatively few meaningful examples, and to draw logical conclusions based on the resulting mental model. Machines cannot do that. Current methods need endless amounts of high-quality examples that cover as many cases as possible, and these have to be run through thousands of times to achieve useful results. And even then there is no understanding of the content, just a mathematical approximation function that maps inputs to outputs. That has nothing to do with intelligence.
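To make that point concrete, here is a minimal sketch in Python (NumPy and the sine example are purely illustrative choices, not anything from the text): fitting a function to many noisy examples yields an approximation of the input-output mapping, nothing more.

```python
import numpy as np

# Many noisy (input, output) examples of an unknown relationship
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=10_000)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

# "Learning" here is nothing more than fitting an approximation
# function (a degree-7 polynomial) that maps inputs to outputs.
model = np.poly1d(np.polyfit(x, y, deg=7))

# The model approximates well near the training data ...
print(model(1.0), np.sin(1.0))
# ... but it has no notion of what a sine wave "is": far outside
# the training range the approximation falls apart.
print(model(10.0), np.sin(10.0))
```

The output mirrors the argument above: inside the range covered by the examples the approximation looks impressive, outside it the lack of any real understanding becomes obvious.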

The possibilities of machine learning

Without a doubt, ML has made amazing strides in recent years and has cracked problems that resisted algorithmic solutions for decades. On the other hand, the core methods have also been known for decades and, given their hunger for computing power and memory, have only now become really practicable:

  • Supervised learning is the most frequently and most successfully used method. A system is repeatedly presented with data pairs consisting of an input and the desired result. The main areas of application are classification and regression. Systems already exist that can recognize arbitrary objects and living beings in pictures or videos, and even specific people and their emotions. AIs trained in this way are also used in medical technology, for example to analyze X-ray images for evidence of hidden tumors. The same applies to language processing, e.g. speech recognition and generation as well as the analysis of texts (a minimal sketch follows after this list).
  • Unsupervised learning, in contrast to supervised learning, does not use data pairs that belong together, but only input data for training, which is often available in large quantities, e.g. shopping baskets in e-commerce. Accordingly, the types of problems it can solve are different: it is mainly about clustering, dimensionality reduction and the detection of anomalies (see the clustering sketch after this list). The most spectacular application of unsupervised learning at the moment is the generation of artificial content. For this, two systems are combined: the first generates something (the generator) and the second evaluates the result (the discriminator) and provides feedback. If the discriminator is trained, e.g. with supervised learning, to recognize landscapes, the generator can learn to generate landscape images from random inputs. These Generative Adversarial Networks (GANs) work so astonishingly well that you can no longer trust any picture or video (keyword: deepfakes).
  • Reinforcement learning points in a similar direction to GANs: a system is not pre-trained directly with the correct results, but learns during operation to make the right decisions on the basis of positive or negative feedback (a toy sketch follows below). Despite some impressive examples such as self-learning game-playing agents (e.g. AlphaStar) or applications for predicting protein folding for the development of new drugs (e.g. AlphaFold), reinforcement learning is still in its infancy compared to the other two approaches. It can be used for problems that require continuous control and whose results can be meaningfully scored on a scale - which often turns out to be difficult.
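As a concrete illustration of the supervised case from the first bullet, here is a minimal sketch using scikit-learn (the digits dataset and logistic regression are illustrative choices, not part of the text): the system sees many input/label pairs and is then evaluated on examples it has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Supervised learning: many (input, desired result) pairs,
# here images of handwritten digits and their labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "learning" step is fitting a classifier to the example pairs.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Classification accuracy on inputs the model has never seen.
print(clf.score(X_test, y_test))
```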
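For the unsupervised case in the second bullet, a minimal clustering sketch (again scikit-learn; the synthetic data merely stands in for something like customer purchase profiles): only inputs are provided, and the algorithm groups them by similarity without any labels.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unsupervised learning: only inputs, no desired results.
# Synthetic points stand in for, e.g., customer purchase profiles.
X, _ = make_blobs(n_samples=1000, centers=4, random_state=0)

# k-means groups the inputs into clusters purely by similarity.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))  # size of each discovered cluster
```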
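And for the third bullet, a toy reinforcement learning sketch in plain Python (a simple multi-armed bandit, purely illustrative and far removed from AlphaStar or AlphaFold): the system is never told the correct action, it only receives positive or negative feedback and gradually shifts towards the decisions that pay off.

```python
import random

# A toy "environment": three possible actions with hidden payoffs.
TRUE_REWARDS = [0.2, 0.5, 0.8]

def pull(action: int) -> float:
    """Return noisy feedback for the chosen action."""
    return TRUE_REWARDS[action] + random.gauss(0, 0.1)

# Running estimate of each action's value, learned only from feedback.
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1  # small chance of trying a random action (exploration)

for _ in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore
    else:
        action = estimates.index(max(estimates))  # exploit best guess
    reward = pull(action)
    counts[action] += 1
    # Incremental average: nudge the estimate towards the feedback.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # approaches the hidden payoffs [0.2, 0.5, 0.8]
```

The crucial requirement mentioned in the text is visible here: the environment must deliver feedback on a meaningful scale, which in this toy example is trivially given and in real applications often is not.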