Insuring Trust: Understanding Policyholders' Acceptance of Chatbots in Insurance

In an article published in the journal Humanities & Social Sciences Communications, researchers from Spain investigated the factors influencing policyholders’ attitudes and behavioral intentions toward conversational bots in the insurance context. They employed a technology acceptance model (TAM) and identified trust as a crucial factor explaining chatbot acceptance among insurance customers.

Study: Insuring Trust: Understanding Policyholders' Acceptance of Chatbots in Insurance. Image credit: TeeStocker/Shutterstock

The research focused on customers' intention to use and attitude toward chatbots when communicating with the insurer to manage their policies. Additionally, the authors examined the drivers of that intention and attitude in the context of policy management assisted by conversational robots.

Background

Chatbots are automated systems or conversational agents that interact with humans using natural language through voice or text. They are widely used in industries such as finance, retail, and healthcare to provide customer service, information, and advice. Artificial intelligence (AI) algorithms enable them to understand and respond to user queries.

Industry 4.0 is the integration of technologies, such as AI, machine learning, blockchain, and the Internet of Things, into various sectors of the economy. These technologies enable the creation of new products and services, the improvement of internal processes, and the enhancement of customer experience. The insurance industry is one of the sectors that has been impacted by Industry 4.0, as evidenced by the growth of Insurtech, which is the application of technological innovations to insurance services.

One such innovation is chatbots, which can offer several benefits for the insurance sector, such as reducing costs, increasing efficiency, and providing 24/7 service. However, they also pose some challenges, such as low customer acceptance, trust issues, the complexity of insurance procedures, ethical concerns, and the potential for errors and fraud.

About the Research

In the present paper, the authors explored the acceptance of chatbots by policyholders. They focused on the use of chatbots to communicate with the insurer about existing policies, such as reporting claims or modifying coverage. Moreover, they proposed a model based on the technology acceptance model (TAM), a framework that explains the adoption of new technologies.

The model includes three main constructs: perceived usefulness, the degree to which a user believes that technology will improve his or her performance; perceived ease of use, the degree to which a user believes that technology will require minimal effort; and trust, the degree to which a user believes that technology is reliable, secure, and beneficial. It hypothesized that these constructs influence the attitude of users toward chatbots, which in turn influences the user’s behavioral intention to use them.

To test the model, the authors surveyed 226 policyholders, each holding more than two insurance policies. The survey asked respondents to rate their agreement with statements related to the model's constructs on an 11-point Likert scale. The data were analyzed using partial least squares structural equation modeling (PLS-SEM), a statistical technique for testing relationships between latent variables.

Research Findings

The outcomes showed that the proposed model fit the data well and explained a large proportion of the variance in attitude and behavioral intention. The results supported most of the model's hypotheses; the exception was the direct effect of perceived ease of use on perceived usefulness. The study highlighted trust as the most influential factor in explaining chatbot acceptance, followed by perceived ease of use and perceived usefulness.

Trust had a positive effect on attitude, perceived usefulness, and perceived ease of use, suggesting that users who trust chatbots are more likely to perceive them as useful and easy to use and thus have a more favorable attitude toward them. Perceived usefulness and perceived ease of use also had a positive effect on attitude, indicating that users who perceive chatbots as beneficial and effortless are more likely to have a positive attitude toward them. Finally, attitude had a positive effect on behavioral intention, implying that users who have a positive attitude toward chatbots are more likely to use them for their insurance needs.
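The path structure described above implies that trust influences intention both directly through attitude and indirectly via perceived usefulness and perceived ease of use. The arithmetic sketch below makes that mediation explicit; all coefficient values are invented for illustration and are not the paper's estimates.

```python
# Hypothetical standardized path coefficients illustrating the mediation
# structure reported in the study (values are invented, not the paper's).
paths = {
    ("trust", "attitude"): 0.30,
    ("trust", "usefulness"): 0.40,
    ("trust", "ease_of_use"): 0.50,
    ("usefulness", "attitude"): 0.25,
    ("ease_of_use", "attitude"): 0.20,
    ("attitude", "intention"): 0.60,
}

# Total effect of trust on attitude = direct path + indirect paths
# through perceived usefulness and perceived ease of use.
trust_on_attitude = (
    paths[("trust", "attitude")]
    + paths[("trust", "usefulness")] * paths[("usefulness", "attitude")]
    + paths[("trust", "ease_of_use")] * paths[("ease_of_use", "attitude")]
)

# Attitude then mediates the effect on behavioral intention.
trust_on_intention = trust_on_attitude * paths[("attitude", "intention")]

print(round(trust_on_attitude, 3))   # 0.3 + 0.4*0.25 + 0.5*0.2 = 0.5
print(round(trust_on_intention, 3))  # 0.5 * 0.6 = 0.3
```

Because every indirect path passes through attitude, improving trust pays off twice: it lifts attitude directly and makes the chatbot seem more useful and easier to use, which lifts attitude again.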

The authors recommended the following for insurance companies or service providers:

  • Designing and implementing chatbots to enhance customer trust. This involves ensuring the reliability, security, and transparency of chatbots, along with providing human support when needed.
  • Developing and promoting chatbots that demonstrate value and convenience for customers. This includes providing chatbots capable of handling complex and diverse insurance procedures, offering personalized and accurate information, and featuring a user-friendly and intuitive interface.
  • Fostering a positive attitude toward chatbots among customers. This involves educating and informing customers about the benefits and features of chatbots, as well as addressing their concerns and expectations.

Conclusion

Overall, the study comprehensively examined chatbot acceptance by policyholders in the insurance industry. The paper proposed and tested a TAM extended with trust, alongside perceived usefulness and perceived ease of use, as the factors influencing attitude and behavioral intention toward chatbots, and showed that trust is the most impactful factor. The authors also found that attitude significantly mediates the effect of these factors on behavioral intention.

The researchers acknowledged limitations such as the absence of control variables, including age, gender, education, and income, and the use of a self-reported questionnaire, which may introduce biases. Despite these limitations, the study provides valuable insights for the insurance sector and chatbot developers.


Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Osama, Muhammad. (2024, January 23). Insuring Trust: Understanding Policyholders' Acceptance of Chatbots in Insurance. AZoAi. Retrieved on November 22, 2024 from https://www.azoai.com/news/20240123/Insuring-Trust-Understanding-Policyholders-Acceptance-of-Chatbots-in-Insurance.aspx.


