The Dynamics of User Trust in Artificial Intelligence–Based Information Systems for Organizational Decision Making
DOI: https://doi.org/10.55927/fjst.v5i1.384

Keywords: Artificial Intelligence, User Trust, Information Systems, Organizational Decision Making, Service Organizations

Abstract
The increasing adoption of artificial intelligence (AI)–based information systems in Indonesia's service sector raises challenges related to user trust in organizational decision making. This study analyzes the factors that influence user trust in AI-based information systems and examines the role of trust in enhancing decision-making quality and users' intention to rely on AI. Using an explanatory sequential mixed-methods design, quantitative data were collected through a survey of 60 AI-system users in service organizations, complemented by in-depth interviews with six key informants. Quantitative data were analyzed with linear regression and mediation analysis, while qualitative data were analyzed thematically. The results indicate that AI transparency, system explainability, and perceived reliability positively affect user trust, whereas perceived risk has a negative effect. User trust significantly improves decision-making quality and the intention to rely on AI, and it mediates the relationship between AI system characteristics and decision-making outcomes. The study contributes to the literature on trust in AI-based information systems and offers practical guidance for service organizations in designing and managing AI implementations that foster user trust.
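The mediation analysis described above can be sketched in code. The following is a minimal illustration only, using synthetic data: the variable names (transparency as the predictor, trust as the mediator, decision quality as the outcome), the effect sizes, and the regression approach are all assumptions for demonstration and are not taken from the study's data. It follows the classic two-regression approach to mediation, where the indirect effect is the product of the X→M and M→Y (controlling for X) coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # matches the study's reported sample size

# Synthetic illustration (hypothetical effect sizes):
# transparency (X) -> trust (M) -> decision quality (Y)
transparency = rng.normal(size=n)
trust = 0.6 * transparency + rng.normal(scale=0.5, size=n)
quality = 0.5 * trust + 0.1 * transparency + rng.normal(scale=0.5, size=n)

def ols_slopes(x_cols, y):
    """Least-squares slope coefficients for y ~ intercept + x_cols."""
    X = np.column_stack([np.ones(len(y))] + list(x_cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols_slopes([transparency], trust)[0]                 # path a: X -> M
b, c_prime = ols_slopes([trust, transparency], quality)  # paths b and c'
indirect = a * b  # indirect (mediated) effect of transparency via trust
print(f"a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}, indirect={indirect:.2f}")
```

In practice, the significance of the indirect effect would be assessed with bootstrapped confidence intervals (e.g., via the PROCESS macro described in Hayes, 2022) rather than from the point estimate alone.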
License
Copyright (c) 2026 Indra Indra, Muh Fuad Mansyur, Adi Heri

This work is licensed under a Creative Commons Attribution 4.0 International License.