Endurance of Artificial Intelligence Systems Cyber Security: Analysis of Vulnerabilities, Attacks and Countermeasures

As the field of AI application widens, cybercriminals take a growing interest in this direction. Modern artificial intelligence systems (hereinafter, AIS) are built on methods that are vulnerable to destructive attacks, which pose a serious danger to their operation. As a result, attackers can gain control over an AIS and manipulate it quite freely to change its behaviour and, ultimately, to directly affect user security. Providing cyber protection for AIS is therefore a relevant and important direction of research and development.

There is a defined set of attack vectors against AIS. To understand the state of this area, these attacks must be classified and the main ones examined in detail. Based on the classification, the attacks should be analysed according to the level of danger they pose to the system as a whole. From the results of that analysis, one can then determine which attacks can cause the most damage and against which the available countermeasures are insufficient. The critical directions identified through such an analysis will form the basis for further research on this topic.

The analysis of AIS cyber security showed that such systems are vulnerable both to classic attacks on software and to specific attack vectors inherent only in these complex systems.

It was determined that the specific vectors of attacks on AIS fall into three main groups: “attacks on the platform”, “attacks on the algorithm” and “attacks on the data”. Attacks on the platform are essentially very close to classic software attacks: data modification, denial of service and information leaks are areas already familiar to information security specialists, and many methods exist to counter these types of attacks. Since, according to our estimates, the severity of their consequences is below average, this direction will not be a priority in further research.
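To make the difference from classic software attacks concrete, the following minimal sketch illustrates one well-known “attack on the algorithm”: an evasion attack in the style of the fast gradient sign method (FGSM), here applied to a toy logistic-regression classifier. The model weights, the input and the perturbation budget are all illustrative assumptions, not taken from the sources above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained model: weights w and bias b are assumed values
# chosen so that the clean input x is classified as class 0 ("benign").
w = np.array([2.0, -1.0, 0.5])
b = -0.25
x = np.array([0.1, 0.6, 0.2])

def predict(x):
    # Probability of class 1 for input x.
    return sigmoid(w @ x + b)

# For a linear model, the gradient of the class-1 score with respect to
# the input is simply w; an FGSM-style attacker steps in the sign of
# that gradient within a small perturbation budget eps.
eps = 0.4
x_adv = x + eps * np.sign(w)

print(predict(x))      # below 0.5: classified "benign"
print(predict(x_adv))  # above 0.5: flipped by the small perturbation
```

The point of the sketch is that the perturbation is bounded (each feature moves by at most eps), yet the classifier's decision flips; no software vulnerability in the platform is exploited, only a property of the learned decision boundary.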

Sources: 

Neretin O., Kharchenko V. (2023). Information Systems and Networks, Issue 12, 2022. https://science.lpnu.ua/sites/default/files/journal-paper/2023/jan/29738/221029maket-9-24.pdf

Herping S. (2019). Securing Artificial Intelligence – Part I. https://www.stiftung-nv.de/sites/default/files/securing_artificial_intelligence.pdf

Povolny S. (2020). Model Hacking ADAS to Pave Safer Roads for Autonomous Vehicles. McAfee Labs. https://www.mcafee.com/blogs/other-blogs/mcafee-labs/model-hacking-adas-to-pave-safer-roads-for-autonomousvehicles/

Catak F. O., Yayilgan S. Y. (2021). Deep Neural Network based Malicious Network Activity Detection Under Adversarial Machine Learning Attacks. In International Conference on Intelligent Technologies and Applications, 280–291. https://doi.org/10.1007/978-3-030-71711-7_23

Author(s): Nadiia Serhienko, Kharkiv National University of Internal Affairs