Will Artificial Intelligence lead to a better life or end up controlling us?
As AI (artificial intelligence) systems improve in more and more areas, sometimes outperforming humans, concerns are growing that computers will soon dominate us, or that people will end up connecting their brains to AI.
Dr. Krzysztof Walas from the Institute of Robotics and Machine Intelligence at the Poznań University of Technology talks to Science in Poland about the opportunities and threats related to AI.
Science in Poland: Elon Musk has just warned that in five years artificial intelligence will become smarter than people. From that moment on, it is supposed to start developing exponentially at such a pace that we will be unable to keep up with it. Is this possible?
Krzysztof Walas: I think that such a scenario is possible, and most people predict this course of events. The differences concern the timing: whether it will happen in a few years or in several decades. There is a dispute over this issue, not only in the scientific community but also in business, which increasingly uses artificial intelligence.
SiP: Will such systems encounter any limitations related, for example, to energy consumption or equipment capabilities? If biology hasn't developed the brain more for some reason, maybe electronic systems won't be able to do that either? On the other hand, products of nature do not fly into space, while rockets do...
K.W.: Perhaps we will need new processor architectures. There is a lot of talk now about quantum computers, which would completely change the computing paradigm. It may even turn out that we will look at today's computers (processors, memory, hard drives) a bit like we now look at some technological products from the beginning of the 20th century. The change could be total. For example, energy barriers may turn out to be completely different from what we think today.
SiP: But intelligence is not the same as awareness. We do not really know what it is and how AI will function without it. Psychology indicates, however, that low self-awareness tends to lead to trouble and destructive actions.
K.W.: The debate on AI awareness could last several days. It is a complex philosophical problem. The very definition of awareness is problematic: how do we know whether someone is aware or not? And how do we apply this to a machine? Scientists argue about the stage at which biological organisms become conscious. Transferring this dispute to artificial beings is even more problematic.
SiP: Aware or not, AI can be a threat, some say. But for now artificial intelligence recognizes faces, learns to make medical diagnoses and drive cars. There are no big dangers here.
K.W.: We can already observe that AI has beaten us in some narrow areas, because it handles large data sets very well. But such programs have very specific, limited tasks. On the other hand, specialists are debating whether a system with general intelligence can be created, and what will happen when such a system becomes more intelligent than people: whether it will threaten us.
SiP: The question is how? After all, we are not likely to let it decide on launching nuclear warheads, like in The Terminator.
K.W.: Artificial intelligence already makes some decisions about our lives, although we may not be aware of it. For example, it often decides whether someone gets a bank loan. Based on your and other people's behaviour, AI systems recommend movies that you might like to watch, select news for you, and recommend purchases. In cars such as the Tesla, many decisions are made by the autopilot. If AI is allowed to learn from random people, it can become hostile. This was the case with a chatbot, a conversational program, one implementation of which was presented by Microsoft. The company disconnected it because it learned things like racism from users. There is a lot of hate on the Internet. What if an intelligent system learned how to interact with people only by analysing online forums?
SiP: Yes, but these threats do not seem serious. The chatbot did not have access to weapons or other tools that intelligent computers could use to harm people.
K.W.: I see two possibilities here. Firstly, such systems can heavily affect people by non-physical means. They could manipulate them, for example with the help of social media, even on a mass scale. They could, for instance, persuade people to hate their neighbours or another ethnic group. Imagine such a situation in a country with easy access to weapons. A clever program could theoretically manipulate some people into doing something wrong. The computer may not have access to weapons, but it has access to the people who use them.
SiP: And the second option?
K.W.: The second option concerns physical actions of robots connected to the network. With the help of the Internet, for example, algorithms could be changed and such machines could do something that would harm people. I myself sometimes think that if a completely autonomous car kidnapped me, there would be nothing I could do. Of course, this is a worst case scenario. For now we are talking about theoretical, but possible threats.
SiP: Super-intelligent computers could theoretically do various things, also because we would be no match for them mentally. Elon Musk claims that in order to prevent this, we ourselves should connect to artificial intelligence with chips. Systems such as Neuralink, developed by his company, are intended to serve this purpose.
K.W.: In a way, we are connected to a computer every day, only with a very slow interface - the keyboard. The difference between this and an implantable chip will be like the difference between walking and driving. The flow of information will be much faster.
SiP: On the one hand, this may mean easy access to artificial intelligence resources and computers in general, but on the other hand, a greater possibility for humans to be manipulated. All the more so because such chips could even regulate the secretion of certain hormones.
K.W.: That is correct. The primal structures of the human brain react faster than the higher ones responsible for rational thinking, because they were formed earlier in evolution. They are responsible, for example, for the so-called fight-or-flight response. Because they work so fast and so strongly, people often do something emotionally that they later regret and apologize for. Influencing hormone levels would affect these centres and could theoretically be dangerous.
SiP: Can we, as mankind, protect ourselves?
K.W.: We are talking about threats, but remember that AI is, above all, a very useful tool. You just need to handle it properly. For example, the European Union is developing a code of ethics to regulate AI development. This puts us in a slightly worse market position than, for example, the US or China, but this is probably the right thing to do.
SiP: So artificial intelligence needs to be raised well?
K.W.: There are ideas that we should teach it ethics, showing it choices that lead to happiness and to doing good. Then, even when it surpasses people's capabilities, perhaps its reward will be working for the good of mankind, rather than something that could harm us.
SiP: Even then, we could find ourselves under its control to some extent. People usually don't like this idea.
K.W.: It might happen like that, although its development could be managed in such a way that it would become a service and advisory system for people.
SiP: Can such assumptions be implemented so effectively that no dangerous system gets out of hand?
K.W.: It can be difficult, as the history of various medical studies shows. There are people who think that if something can be done, they will do it regardless of the project's ethics. In conclusion, I think it is not possible to stop the development of artificial intelligence, because the whole world would have to stop at once, so that no country or corporation could gain a strategic advantage simply by continuing its research. However, it is important to work intensively now on the implications of its application.
Interview by Marek Matacz, PAP - Science in Poland