A machine can act intelligently

Artificial intelligence: Of people and machines, and machines that act like people

Maria Paula Marxer reports from the CAS Multichannel Management class with Dr. Marcel Blattner.

Machines can now read people's thoughts and display them in a picture.

Film or reality?

The answer: reality. Thanks to artificial intelligence (AI), machines are able to measure brain waves and reconstruct them as images.

How new is artificial intelligence?


AI is not as new a technology as the current hype might suggest. Research in this area was already being carried out in the 1950s, and the media reported on it. At that time, people were concerned with fundamental questions: Will machines do our jobs? Will machines one day also be able to think? There is no right or wrong answer to this.
The development of technologies is strongly influenced by cultural and economic factors, because technologies are developed and programmed to meet the needs of a society. One therefore cannot speak of neutral technologies.

How is AI perceived?


Perception is usually shaped by people who have no in-depth knowledge of the subject, and this leads to polarizing opinions: artificial intelligence will either save or destroy the world! Culture and origin also play a major role: Western countries tend toward pessimism and associate AI with war and with robots that take over our jobs and at some point turn against us and eliminate us. In Asian countries, on the other hand, and especially in Japan, people have few inhibitions: they feel comfortable around robots and consider their use positive. The difference may be due to animism, which is part of Shinto belief. Animism is the idea that all objects have a spirit. The fact is that we have lived surrounded by AI-powered machines for more than 50 years.

How can you define artificial intelligence?

There is no single, strict definition. We are talking about an interdisciplinary research area that aims to create machines that solve tasks in a meaningful way. The research aims to answer questions such as:

  • Philosophy: How does knowledge arise in our brain? What triggers an action?
  • Mathematics: How do we process uncertain information?
  • Economics: How can we use machines to make money?
  • Neuroscience: How does our brain process information?
  • Psychology: How do humans and animals think and act?
  • Computer engineering: How can we build an efficient computer?

The four categories according to Russell & Norvig

  1. Human thinking → The machines should think like humans: modeled on the cognitive system of the brain.
  2. Human action → The machines should act like humans: the Turing test approach.
  3. Rational thinking → The machines should think logically and intelligently: reach logical conclusions.
  4. Rational action → The machines should act logically and intelligently: achieve the best result.

Classification of Artificial Intelligence

  • Strong Artificial Intelligence: Machines have the same cognitive abilities as humans. (Possibly also consciousness.)
  • Weak Artificial Intelligence: Machines can solve very specific tasks in a clearly defined environment. (Possibly better than humans.)

Methodology - specialization

AI - Artificial Intelligence: Any technology that allows a computer to mimic human behavior.

ML - Machine Learning: Subset of AI techniques that use statistical methods to enable machines to improve through experience.

DL - Deep Learning: Subset of ML techniques that enable the computation of intermediate layers between input and output. This leads to more robust learning by the machine.
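
To make the idea of intermediate layers between input and output more concrete, here is a minimal Python sketch (an illustration added here, not part of the lecture; the layer sizes and random weights are assumptions):

    import numpy as np

    # Toy "deep" forward pass: input -> intermediate (hidden) layer -> output.
    # The weights are random here; in real deep learning they are learned from data.
    rng = np.random.default_rng(0)

    x = rng.normal(size=4)          # input vector, e.g. 4 measured features
    W1 = rng.normal(size=(3, 4))    # weights of the intermediate layer
    W2 = rng.normal(size=(2, 3))    # weights of the output layer

    hidden = np.maximum(0, W1 @ x)  # intermediate representation (ReLU activation)
    output = W2 @ hidden            # final output, e.g. scores for two classes

    print("hidden:", hidden)
    print("output:", output)

In actual deep learning, many such intermediate layers are stacked and their weights are learned from data rather than drawn at random, which is what makes the learning more robust.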

How does a machine learn?


There are three different ways you can teach a machine something:

Supervised learning


In supervised learning, each example is a pair consisting of an input object (the data) and a desired output value (also known as the supervisory signal). A supervised algorithm analyzes the training data and derives a function that can be used to map new examples. Around 95% of machines are trained in this way. For example, a machine can use learned images to recognize objects, numbers or animals and reproduce them graphically. The results are given as percentages.
Using a real example, we were then able to experience how a mobile app recognizes objects with its built-in camera. The picture shows how the app identifies the chair with 57.4% confidence.
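
To show what training on such labeled pairs can look like in code, here is a minimal supervised-learning sketch in Python with scikit-learn; the toy measurements and class names are made up for illustration and are unrelated to the app demonstrated in class:

    from sklearn.linear_model import LogisticRegression

    # Each training example is a pair: an input object (here two made-up
    # measurements) and the desired output value (the label).
    X_train = [[0.5, 1.0], [0.6, 0.9], [1.8, 0.4], [2.0, 0.5]]
    y_train = ["chair", "chair", "table", "table"]

    model = LogisticRegression()
    model.fit(X_train, y_train)        # derive a function from the labeled pairs

    new_object = [[0.55, 0.95]]        # an example the model has never seen
    probabilities = model.predict_proba(new_object)[0]

    for label, p in zip(model.classes_, probabilities):
        print(f"{label}: {p * 100:.1f}%")   # result reported as a percentage

The model derives a function from the labeled pairs and then reports its confidence for a new, unseen example as percentages, in the same spirit as the 57.4% shown by the app.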