Can You Trust a Chatbot?

Personalization boosts how human-like and engaging AI chatbots feel, but integrity and competence remain the true cornerstones of trust.

Research: When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot

"Hello, ChatGPT. Can you help me?" "Of course, how can I help you? :-)" Exchanges between users and chatbots, based on artificial intelligence (AI), quickly seem like conversations with another person.

Dr. Fanny Lalot and Anna-Marie Bertram from the Faculty of Psychology at the University of Basel wanted to know how much people trust AI chatbots and what factors contribute to this trust. They focused on text-based systems, such as ChatGPT, rather than voice assistants, such as Siri or Alexa.

Test subjects were shown examples of interactions between users and a chatbot called Conversea, created specifically for the study, and then imagined interacting with Conversea themselves. The study used a "design fiction" approach, allowing the researchers to simulate interactions under controlled conditions and better analyze trust dynamics. The results are published in the Journal of Experimental Psychology: General.

Trust Propensity and Anthropomorphism

Our level of trust in other people depends on various factors, including our personality, the other person's behavior, and the specific situation. "Impressions from childhood influence how much we are able to trust others, but a certain openness is also needed in order to want to trust," explains social psychologist Fanny Lalot. Characteristics that promote trust include integrity, competence, and benevolence.

The new study shows that what applies to relationships between humans also applies to AI systems. Competence and integrity, in particular, are essential criteria that lead humans to perceive an AI chatbot as reliable. Benevolence, however, is less important as long as the other two dimensions are present. Notably, the study revealed that trust propensity—particularly the general inclination to trust smart technology—plays a significant role in shaping perceptions of AI trustworthiness.

"Our study demonstrates that the participants attribute these characteristics to the AI directly, not just to the company behind it. They do think of AI as if it was an independent entity," according to Lalot.

Additionally, there are differences between personalized and impersonal chatbots. When a chatbot addressed users by name and referenced previous conversations, for example, study participants rated it as especially benevolent and competent.

"They anthropomorphize the personalized chatbot. Anthropomorphism increases perceptions of the chatbot's ability and benevolence, contributing to greater intentions to use the tool and share personal information with it," according to Lalot. However, the test subjects did not attribute significantly more integrity to the personalized chatbot, and overall trust was not substantially higher than in the impersonal chatbot.

Integrity is More Important Than Benevolence

According to the study, integrity is a more important factor in trust than benevolence. For this reason, it is important to develop technology that prioritizes integrity above all else. Designers should also bear in mind that personalized AI is perceived as more benevolent, competent, and human, which can encourage proper use of these tools. However, the researchers caution that increased anthropomorphism may not directly translate into higher trust, highlighting the need for thoughtful AI design. Other research has shown that lonely, vulnerable people, in particular, risk becoming dependent on AI-based friendship apps.

"Our study makes no statements about whether it is good or bad to trust a chatbot," Lalot emphasizes. She sees the AI chatbot as a tool we must learn to navigate, much like the opportunities and risks of social media.

However, some recommendations can be derived from their results. "We project more onto AI systems than is actually there," says Lalot. This makes it even more important that AI systems be reliable. A chatbot should neither lie to us nor unconditionally endorse everything we say.

If an AI chatbot is too uncritical and agrees with everything a user says, it fails to provide reality checks and risks creating an echo chamber that, in the worst case, can isolate people from their social environment. "A [human] friend would hopefully intervene at some point if someone developed ideas that are too crazy or immoral," Lalot says.

Betrayed by AI?

In human relationships, broken trust can have serious consequences for future interactions. Might this also be the case with chatbots? "That is an exciting question. Further research would be needed to answer it," says Dr. Lalot. "I can certainly imagine that someone might feel betrayed if advice from an AI chatbot has negative consequences."

Ethical Accountability

Laws must hold developers accountable. For example, an AI platform could show how it arrives at a conclusion by openly revealing the sources it used, and it could say when it doesn't know something rather than invent an answer. Furthermore, perceived integrity is critical, as it can influence user trust even when actual integrity is absent.

Journal reference:
  • Lalot, F., & Bertram, A.-M. (2024). When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot. Journal of Experimental Psychology: General. Advance online publication. DOI: 10.1037/xge0001696, https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxge0001696
