Artificial intelligence can be tricked
A sticker on a road sign can mislead an autonomous car, and a voice assistant confused by a coded signal can transfer your money to an account chosen by an attacker. Attacks on self-learning systems have more to do with everyday life than you might think.
A SYSTEM CHEATING A SYSTEM
Dr. Rafał Kasprzyk from the Military University of Technology in Warsaw said: “Artificial intelligence observes us, listens to us and learns about us - whether we want it or not. In the world of Web 3.0, intelligent systems are used on every web portal. Depending on what we are looking for online, what we buy, what we talk about and write about on our smartphones, content is selected for us. Social networks suggest friends, online stores suggest products and services that we 'need'.”
But he points out that even these well-functioning systems can be tricked. The way of cheating artificial intelligence is described in the scientific literature as adversarial machine learning. In short, a second system learns how the first system (which itself learns) works.
Thanks to this procedure, the machine can be made to 'swallow' maliciously crafted input data. To a human, such data look clear and not misleading; the machine, however, can be fooled. The scientist explains this using the example of autonomous cars.
He describes that cars such as Tesla stay in their lanes very well and recognize numerous road signs, even those on Polish roads. Even in bad weather, or if someone painted a STOP sign green, the sensors would activate the accident-protection mechanisms.
But if the system is carefully attacked, it may be enough to stick an inconspicuous element on a STOP sign for the car to perceive it as a 'priority road' sign. The consequences are not difficult to imagine: an unexplained series of crashes.
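The mechanism behind such an attack can be illustrated with a toy sketch. The classifier, its weights and the label names below are all invented for illustration; real attacks apply the same idea - a small, targeted perturbation of the input - to deep neural networks rather than to a linear model.

```python
import random

random.seed(0)
# Toy linear classifier standing in for a sign-recognition model.
# The weights are random and purely illustrative.
w = [random.gauss(0, 1) for _ in range(64)]

def classify(x):
    """Score > 0 reads as 'STOP', otherwise as 'priority road'."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    return "STOP" if score > 0 else "priority road"

# An input the model confidently classifies as STOP.
norm = sum(wi * wi for wi in w) ** 0.5
x = [wi / norm for wi in w]
assert classify(x) == "STOP"

# Fast-gradient-sign-style perturbation: shift each input feature
# by a small amount (epsilon) against the gradient of the score.
# Like an inconspicuous sticker, the change per feature is modest,
# yet it flips the model's decision.
epsilon = 0.3
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]
assert classify(x_adv) == "priority road"
```

The attacker here knows the model's weights; in practice, attacks of this family also work with only query access to the model, which is what makes a sticker on a physical sign feasible.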
TWO WAYS OF MACHINE LEARNING
Dr. Kasprzyk explains that the knowledge of an expert can be used in the construction of intelligent machines. A person - a specialist in a given field - determines how the machine should function. A computer scientist then 'translates' this into code in a chosen programming language, developing explicit rules that determine how the machine will work.
It can be compared to programming a coffee machine ourselves so that, without having to set everything up every time, it serves our preferred coffee at our favourite time.
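In code, this first approach amounts to explicitly written rules. Everything below - the function name, the drinks, the time thresholds - is invented for illustration; the point is only that a human author decides every branch in advance.

```python
def choose_coffee(hour: int) -> str:
    """Hand-written expert rules: a specialist decided each branch."""
    if hour < 10:
        return "double espresso"   # a strong start to the morning
    elif hour < 16:
        return "latte"
    else:
        return "decaf"             # no caffeine in the evening

print(choose_coffee(8))    # double espresso
print(choose_coffee(20))   # decaf
```

The rules are transparent: anyone reading the code can see exactly why the machine behaves as it does.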
The second way is more abstract because it involves a huge amount of data - no expert or group of experts could grasp the mechanisms governing it. And yet, although the rules governing the machine are not explicit or known to scientists, a system programmed by humans to learn works properly, often much better than a human would.
This approach, in turn, can be compared to a coffee machine that automatically makes the right coffee on its own. It takes into account the time, the temperature and humidity in the room, the day of the week - and perhaps more. It learns from how the user has used it so far at different times of the day, month and year.
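A minimal sketch of this second approach, assuming the machine logs only the hour and the drink chosen: instead of hand-written rules, the preference is inferred from usage history. The log entries are invented example data.

```python
from collections import Counter, defaultdict

# Invented usage log: (hour of day, drink the user chose).
history = [
    (7, "double espresso"), (8, "double espresso"), (9, "latte"),
    (13, "latte"), (14, "latte"), (15, "latte"),
    (20, "decaf"), (21, "decaf"),
]

# 'Training': count the drinks chosen in each six-hour part of the day.
counts = defaultdict(Counter)
for hour, drink in history:
    counts[hour // 6][drink] += 1

def predict_drink(hour: int) -> str:
    """Return the drink most often chosen in this part of the day."""
    return counts[hour // 6].most_common(1)[0][0]

print(predict_drink(8))    # double espresso
print(predict_drink(14))   # latte
```

Nobody wrote a rule saying "espresso in the morning" - it emerged from the data, and it changes automatically as the user's habits change. Real learned systems replace this frequency count with statistical models over far more features, which is why their internal rules are no longer readable to a human.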
There are many machine learning algorithms - the methods people use to build machines that are 'fed' with vast amounts of data. Later, the system picks up a 'baton' that the human mind can no longer follow. Information is collected by sensors surrounding us in the real and virtual worlds. Intelligent machines process these data and look for patterns, on the basis of which they continue to learn.
Scientists can teach intelligent machines to identify suspected terrorists. In a Pentagon project, the Maven system was created, which can locate, identify and then track objects in images recorded by satellites or unmanned aerial vehicles.
Scientists, including researchers from the Faculty of Cybernetics of the Military University of Technology, are looking for possible ways to mislead machines. Knowing these methods, they will be able to prevent such attacks.
PAP - Science in Poland, Karolina Duszczyk
kol/ agt/ kap/