14.06.2017

Expert: Clashes with AI are battles that people fight against themselves

"We are afraid that artificial intelligence (AI) will be better than us, but the duels with game masters are really still battles that people fight against themselves," Dr. Aleksandra Przegalińska, a philosopher and artificial intelligence expert, told PAP.

PAP: Recently the media reported on the computer program AlphaGo's triumphs over Go masters. It is also often said that the next challenge for artificial intelligence will be the complicated strategy game StarCraft. Why are we so fascinated by reports of artificial intelligence's victories over people?

Aleksandra Przegalińska, Kozminski University and the Massachusetts Institute of Technology (MIT): The prevailing view remains that as a species we are kings of the world and our competence - in any given area of intelligence - is very high. That makes the moment we are defeated a shock. This is a very interesting phenomenon for me as a researcher of artificial intelligence.

On the one hand, we are afraid of artificial intelligence, we fear that it will be better than us. And that is slowly coming true: it started in the 1990s with Kasparov's games against Deep Blue, and now increasingly advanced and increasingly difficult games are within reach of machines. But even as we fear the dominance of artificial intelligence, it somehow attracts us. We want to see whether this actually can happen. I think the whole artificial intelligence project is so attractive because it gives us the power to create. Let's not forget that while AlphaGo plays Go, it is really backed by a whole team of people who trained this machine for a long time. I would say that in a sense this is still people playing against people. Or put differently: one paradigm of thinking about how to learn to play, pitted against another. So I have the impression that the battle over artificial intelligence is, in the end, a battle that people fight against themselves.

PAP: What is the difference between the way of "thinking" of machines and humans? Can it be defined at all?

A.P.: This is very difficult to answer. It is certainly true that computational intelligence has its own specifics. We have recently heard that children are supposed to be taught algorithmic thinking - which, of course, means that so far we have had less of such thinking. This way of thinking can be mastered by people - it is taught even in a simple logic course, which I attended while studying philosophy. However, although this way of thinking is understandable to us, it does not come naturally. As people, we use various shortcuts, we add context to each piece of information. There are data from the world that are not very clear to a machine, but they are clear to us - because of the context. On the other hand, data that are very clear and easily operable for a machine often exceed our capabilities. We do a lot better in "noisy situations": where there are a lot of diverse data that are difficult to clean. My conversation with you is an example of such a situation: we both know more or less how to behave, we know all these codes, we know at what height we should hold the microphone so that it can catch my answers. A machine would struggle with such details. So there are things that are very simple for us, such as bodily intelligence, emotional and social intelligence, contextual understanding - and these things are extremely difficult for a machine. This is exactly where we differ - but that does not mean that there are no points of contact. Otherwise, man would not have invented machine learning.

PAP: Speaking of which - how do machines learn?

A.P.: In the field of machine learning today we have several different paradigms for how to train a machine to analyse different types of data and how to lead the machine to correct conclusions. We can mention, for example, reinforcement learning, with rewards and punishments; unsupervised learning, where the correct answer to the problem is unknown, which gives the computer more freedom; and now there is also deep learning - a method that uses multi-layer neural networks to create systems that can detect certain features in large amounts of unlabeled data.
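The reinforcement-learning idea mentioned above - learning through rewards and punishments - can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not AlphaGo's actual method, which combines deep networks with tree search): a tabular Q-learning agent in a five-cell corridor learns, purely from a reward signal, that walking right leads to the goal.

```python
import random

# Toy reinforcement-learning sketch: an agent in a 5-cell corridor
# (cells 0..4) receives a reward of 1 only upon reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]            # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reaching the last cell yields reward 1 and ends the episode."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy choice: mostly exploit what was learned, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy in every non-terminal cell is "step right" (+1).
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

No one tells the agent which moves are good; it discovers the policy solely from the delayed reward - the "carrot and stick" dynamic described above, in its simplest tabular form.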

PAP: What are the typical features of today's artificial intelligence?

A.P.: We are undoubtedly experiencing some form of fascination with biological systems. This means that in the field of artificial intelligence we have looked at how living organisms learn about the real world. Remember that this was not always the case. At the dawn of artificial intelligence, we focused on building micro-worlds and systems with internal representations of reality - the machine moved only within this inner world. Then we decided that it would be better for the machine to learn through experience - this time imitating our own forms of learning, for example through conditioning as simple as punishment and reward.

PAP: Can we create a truly intelligent machine by imitating our own forms of learning?

A.P.: First of all, it is important to remember that in our discipline there is a division: artificial intelligence, which is an existing and implemented project, and the so-called artificial general intelligence (AGI), which does not exist at this point. The Terminator is not waiting for us around the corner.

PAP: What would we need to create such a peculiar machine?

A.P.: I would answer - note that this is still the answer of a person who defines the machine through herself and her own cognitive abilities - that we would need self-consciousness and some form of subjectivity. A will to perform a task, a desire to do something, would be necessary - perhaps aroused by curiosity. In order to create such a structure, it is necessary to have an "I" which is somehow self-reflexive.

PAP: So the machine would have to be self-conscious?

A.P.: Some biologists would argue that in simple organisms self-reflexivity or self-consciousness is negligible - yet they do function in the world. I would still say that to create a fully intelligent machine, which would additionally have this computing component, consciousness is necessary. This is really a big and difficult problem, as we are only able to simulate in a machine the phenomena that we fully understand. It may happen that some machine, in the course of its development and growing complexity, will simply make a phase transition, as physicists call it, and suddenly become self-aware, without our participation. But it may also happen that we will develop neuroscience and brain research - and in the course of that work we will learn to simulate consciousness in a machine. So there are several possible paths.

Interview by Katarzyna Florencka, PAP - Science and Scholarship in Poland

kflo/ agt/ kap/

tr. RL

Copyright © Foundation PAP 2017