Threats and risks of the use of Artificial Intelligence

Among the systems currently available to the public that shape public opinion about the capabilities of artificial intelligence, two generative artificial intelligence systems (AIS) stand out: ChatGPT, a chatbot created by OpenAI that holds a dialogue with the user in natural language and generates texts on a given topic, and Midjourney, a service from the company of the same name that generates images from textual descriptions of the desired result.

The article analyses the advantages of using Artificial Intelligence (AI) in various fields and the risks of its impact on information security and cyber security tasks as integral components of national security. It was determined that the development of AI has become a key priority for many countries, while at the same time raising questions about the safety of this technology and the consequences of its use.

Despite its positive properties, the explosive development of artificial intelligence technology and its applications carries significant potential risks.

The most important stage in managing the risks that arise from the use of artificial intelligence systems is assessing the landscape of possible risks and identifying them. This is an iterative process of discovering new types of risks and profiling their main characteristics for further interpretation, analysis and treatment. The task of risk identification can be posed as a task of finding anomalies in arrays of activity data from the domain in which risk management is applied. Anomalous observations in such data can be explained by relationships and interactions between the objects and subjects of activity that either lead to as yet unidentified risk situations and their consequences, or are potential sources of such situations in the future.
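
This anomaly-detection framing can be illustrated with a short sketch. The example below is a hypothetical illustration, not a method taken from the cited literature: it assumes activity records have already been reduced to numeric feature vectors, and the synthetic data, feature dimensionality and contamination rate are assumptions made purely for demonstration. It uses an Isolation Forest, one standard off-the-shelf anomaly detector:

```python
# Hypothetical sketch: risk identification posed as anomaly detection,
# assuming activity records are already encoded as numeric feature vectors.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "activity data": mostly routine behaviour plus a few outliers
# standing in for not-yet-identified risk situations (illustrative only).
routine = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 4))
activity = np.vstack([routine, outliers])

# Isolation Forest flags observations that are easy to isolate as anomalies;
# `contamination` is an assumed prior share of risky observations.
detector = IsolationForest(contamination=0.02, random_state=42)
labels = detector.fit_predict(activity)        # -1 = anomaly, 1 = normal
scores = detector.decision_function(activity)  # lower = more anomalous

# Profile the flagged observations for further interpretation and analysis.
flagged = np.where(labels == -1)[0]
for idx in flagged:
    print(f"observation {idx}: anomaly score {scores[idx]:.3f}")
```

In practice, the flagged observations would feed the profiling step described above, where their characteristics are interpreted and analysed before any risk-treatment decision is made.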


Author(s): Nadiia Serhienko, Kharkiv National University of Internal Affairs