Assessing User Confidence and Acceptance of AI Enhanced Through Multi-LLM Consensus Mechanisms
DOI: https://doi.org/10.62760/iteecs.4.4.2025.131

Keywords: Large Language Models, LLM Consensus, AI Acceptance, Multi-LLM Frameworks, Systematic Literature Review

Abstract
Large Language Models (LLMs) have transformed how AI is used across many domains, yet user trust and acceptance remain crucial to successful deployment. This paper reviews 16 recent publications (2021-2025) on the impact of LLM consensus mechanisms on user trust and on the determinants of AI system acceptance. The review identifies structured frameworks such as LLMs-as-Judges, Mixture-of-Agents, and Big Loop/Atomization as effective methods for improving the reliability, consistency, transparency, and interpretability of AI outputs. By minimizing errors, reducing output variability, and enhancing robustness, these consensus mechanisms directly strengthen user confidence in AI systems. The analysis also identifies the most important considerations for user acceptance, including explainability, transparency, fairness, bias reduction, and integration into real-world practice. Applications in healthcare, education, and smart grid systems illustrate how these factors interact to shape the adoption and successful use of consensus-driven AI. Taken together, the results demonstrate the need for AI systems that are not only technically sound but also aligned with user expectations. This work will be useful to researchers and practitioners seeking to build reliable, user-friendly AI systems, and it indicates directions for future research on optimizing multi-LLM consensus mechanisms and adapting them to domain-specific settings.
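To make the idea concrete, a minimal sketch of the simplest consensus flavor described above, majority voting across several independent models, is shown below. The `models` list and its lambda stand-ins are hypothetical placeholders; a real deployment would call separate LLM APIs, and frameworks such as LLMs-as-Judges or Mixture-of-Agents would replace the plain vote with richer aggregation.

```python
from collections import Counter
from typing import Callable, List, Tuple

def consensus_answer(models: List[Callable[[str], str]],
                     prompt: str) -> Tuple[str, float]:
    """Query several independent models with the same prompt and return
    the majority answer plus an agreement ratio, a simple proxy for the
    kind of confidence signal the reviewed consensus mechanisms aim at."""
    answers = [model(prompt) for model in models]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / len(answers)

# Hypothetical stand-ins for three LLM backends (not real API calls).
models = [
    lambda p: "Paris",
    lambda p: "Paris",
    lambda p: "Lyon",
]
answer, agreement = consensus_answer(models, "What is the capital of France?")
# Two of three models agree, so the consensus is "Paris" with agreement 2/3.
```

Exposing the agreement ratio alongside the answer is one concrete way such mechanisms support the transparency and interpretability goals discussed in the abstract: low agreement can be surfaced to the user as a warning rather than hidden behind a single confident-sounding output.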
References
D. H. Poyatos, C. P. González, C. Zuheros, A. H. Poyatos, V. Tejedor, F. Herrera, and R. Montes, "An overview of model uncertainty and variability in LLM-based sentiment analysis. Challenges, mitigation strategies and the role of explainability", arXiv.org, 2025. https://arxiv.org/abs/2504.04462
H. Li, Q. Dong, J. Chen, H. Su, Y. Zhou, Q. Ai, Z. Ye, and Y. Liu, "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods", arXiv.org, 2024. https://arxiv.org/abs/2412.05579
M. M. Karim, S. Khan, D. H. Van, X. Liu, C. Wang, and Q. Qu, "Transforming Data Annotation with AI Agents: A Review of Architectures, Reasoning, Applications, and Impact", Future Internet, Vol. 17, No. 8, art. no. 353, 2025. https://doi.org/10.3390/fi17080353
A. P. D. Mortanges, H. Luo, S. Z. Shu, A. Kamath, Y. Suter, M. Shelan, A. Pöllinger, and M. Reyes, "Orchestrating explainable artificial intelligence for multimodal and longitudinal data in medical imaging", npj Digital Medicine, Vol. 7, art. no. 195, 2024. https://doi.org/10.1038/s41746-024-01190-w
S. Sicari, J. F. Cevallos, A. Rizzardi, and A. Coen-Porisini, "Open-Ethical AI: Advancements in Open Source Human Centric Neural Language Models", ACM Computing Surveys, Vol. 57, No. 4, pp. 1-47, 2024. https://doi.org/10.1145/3703454
D. Nawara and R. Kashef, "A Comprehensive Survey on LLM-Powered Recommender Systems: From Discriminative, Generative to Multi-Modal Paradigms", IEEE Access, Vol. 13, pp. 145772-145798, 2025. https://doi.org/10.1109/access.2025.3599832
A. Triantafyllidis, S. Segkouli, S. Kokkas, A. Alexiadis, E. Lithoxoidou, G. Manias, A. Antoniades, K. Votis, and D. Tzovaras, "Large Language Models for Cardiovascular Disease, Cancer, and Mental Disorders: A Review of Systematic Reviews", Preprints.org, 2025. https://doi.org/10.20944/preprints202510.2480.v1
N. S. Agarwal and S. S. Kumar, "A Review on Large Language Models for Visual Analytics", arXiv.org, 2025. https://arxiv.org/abs/2503.15176
Z. Hu, Y. Huang, J. Feng, and C. Deng, "Big Loop and Atomization: A Holistic Review on the Expansion Capabilities of Large Language Models", Applied Sciences, Vol. 15, No. 17, art. no. 9466, 2025. https://doi.org/10.3390/app15179466
K. Hong and Y. Park, "Large Language Models for Semantic Join: A Comprehensive Survey", IEEE Access, Vol. 13, pp. 184478-184493, 2025. https://doi.org/10.1109/access.2025.3625753
Y. He, K. Xu, S. Cao, Y. Shi, Q. Chen, and N. Cao, "Leveraging Foundation Models for Crafting Narrative Visualization: A Survey", IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 10, pp. 9303-9323, 2025. https://doi.org/10.1109/tvcg.2025.3542504
H. Albasry, E. Carmona-Cejudo, A. Rauf, and D. Chen, "A systematically derived AI-based framework for student-centered learning in higher education", Social Sciences & Humanities Open, Vol. 12, art. no. 102085, 2025. https://doi.org/10.1016/j.ssaho.2025.102085
D. A. Scherbakov, N. C. Hubig, L. A. Lenert, A. V. Alekseyenko, and J. S. Obeid, "Natural Language Processing and Social Determinants of Health in Mental Health Research: An Artificial Intelligence Assisted Scoping Review", JMIR Mental Health, Vol. 12, 2024. https://doi.org/10.2196/67192
Y. Lu, A. Aleta, C. Du, L. Shi, and Y. Moreno, "LLMs and Generative Agent-Based Models for Complex Systems Research", Physics of Life Reviews, Vol. 51, pp. 283-293, 2024. https://doi.org/10.1016/j.plrev.2024.10.013
S. S. Bavirthi, D. P. Sreya, and T. Poojitha, "Comparative analysis of Mixture-of-Agents models for natural language inference with ANLI data", Natural Language Processing Journal, Vol. 11, art. no. 100140, 2025. https://doi.org/10.1016/j.nlp.2025.100140
Y. M. Banad, S. S. Sharif, and Z. Rezaei, "Artificial intelligence and machine learning for smart grids: from foundational paradigms to emerging technologies with digital twin and large language model driven intelligence", Energy Conversion and Management: X, Vol. 28, art. no. 101329, 2025. https://doi.org/10.1016/j.ecmx.2025.101329
Copyright (c) 2025 Lohit Sai Andra, Sandeep Kumar Srivastava, Shruthi Krishna, Phaneendra Siddana, Karri Sairamakrishna BuchiReddy

This work is licensed under a Creative Commons Attribution 4.0 International License.