Building trust when using artificial intelligence
https://doi.org/10.26425/2658-3445-2021-4-2-28-36
Abstract
In the 21st century, trust has become a category that manifests itself in many ways and affects many areas of human activity, including the economy and business. With the development of information and communication technologies and end-to-end technologies, this influence is becoming increasingly noticeable. Human trust in interactions with artificial intelligence and machine learning systems occupies a special place among digital technologies. Here, trust becomes a potential stumbling block to the further development of interaction between artificial intelligence and humans, while playing a key role in ensuring artificial intelligence's recognition in society and its continuous progress and development.
The article examines human trust in artificial intelligence and machine learning systems from several perspectives. The main objectives of the paper are to structure existing research on the subject and to identify the most important ways to build trust among potential consumers of artificial intelligence products. The article investigates attitudes towards artificial intelligence in different countries and the need for trust among users of artificial intelligence systems, and analyses the impact of distrust on business. The authors identify the factors that are crucial to forming an initial level of trust and to developing continuous trust in artificial intelligence.
About the Authors
A. A. Dashkov
Russian Federation
Andrey A. Dashkov, Cand. Sci. (Tech.), Assoc. Prof.
Moscow
Yu. O. Nesterova
Russian Federation
Yulia O. Nesterova, Graduate Student
Moscow
References
1. Ferrario A., Loi M. and Vigano E. (2020), “In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions”, Philosophy & Technology, vol. 33, issue 3, pp. 523–539. https://doi.org/10.1007/s13347-019-00378-3
2. Gillespie N., Lockey S. and Curtis C. (2021), Trust in artificial intelligence: A five country study, The University of Queensland and KPMG, Australia. https://doi.org/10.14264/e34bfa3
3. Glikson E. and Woolley A. (2020), “Human trust in artificial intelligence: Review of empirical research”, Academy of Management Annals, vol. 14, no. 2, pp. 627–660. https://doi.org/10.5465/annals.2018.0057
4. Potapova E.G. and Shklyaruk M.S. [eds]. (2021), Ethics and “digit”: from problems to solutions, The Russian Presidential Academy of National Economy and Public Administration Publishing House, Moscow, Russia. (In Russian).
5. Ryan M. (2020), “In AI we trust: Ethics, artificial intelligence, and reliability”, Science and Engineering Ethics, vol. 26, issue 5, pp. 2749–2767. https://doi.org/10.1007/s11948-020-00228-y
6. Siau K. and Wang W. (2018), “Building trust in artificial intelligence, machine learning, and robotics”, Cutter Business Technology Journal, vol. 31, no. 2, pp. 47–53.
For citations:
Dashkov A.A., Nesterova Yu.O. Building trust when using artificial intelligence. E-Management. 2021;4(2):28-36. (In Russ.) https://doi.org/10.26425/2658-3445-2021-4-2-28-36