AI, Culture, and Trust: A Global Look at User Confidence in Virtual Assistants
by Hampo, JohnPaul A.C, Mega Ohis Grace, Onovughe Anthonia Okeme, Umukoro Gift
Published: December 23, 2025 • DOI: 10.51584/IJRIAS.2025.101100116
Abstract
Virtual assistants (VAs) powered by AI, such as Siri, Alexa, and Google Assistant, are increasingly embedded in everyday life. Their adoption depends critically on user trust, which is shaped not only by system performance but also by cultural context. This paper investigates the dynamics of trust in VAs by synthesizing empirical findings from recent studies (n ≈ 1,250 participants across healthcare, consumer, and enterprise domains). We examine four principal antecedents (perceived competence, transparency/explainability, privacy and security, and anthropomorphism) and analyze how cultural dimensions moderate their influence. Findings indicate that competence and privacy consistently drive trust across contexts, whereas the weight of transparency and anthropomorphism varies with cultural orientation: cultures high in uncertainty avoidance demand transparency, while collectivist cultures emphasize social endorsement. We propose a conceptual model linking culture, trust antecedents, and adoption, and conclude with implications for design and governance.