Mitigating Emotional Risks in Human-Social Robot Interactions with VIEI

In a recent publication in the journal Humanities and Social Sciences Communications, researchers argued that humans often perceive social robots as partners rather than tools, which gives rise to emotional risks. They introduced the concept of a virtual interactive environment (VIE), likening the emotions these interactions evoke to those elicited by reading novels.

Study: Mitigating Emotional Risks in Human-Social Robot Interactions with VIEI. Image credit: Stock-Asso/Shutterstock

Background

Social robots, designed to engage with humans socially, have demonstrated the capacity to elicit human emotions. Their applications range from therapeutic companions, such as the robot seal, to service robots in healthcare and public service. Humans innately project emotions onto inanimate objects, a phenomenon known as anthropomorphism: the attribution of human characteristics, mental states, and emotions to non-human entities.

Experiments by Riether et al. suggest that these robots enhance human task performance. Whether through human-like forms or animal-like traits, these robots fulfil emotional needs in social interactions. However, the anthropomorphism of these robots has raised concerns about deception, disappointment, and reverse manipulation, posing moral and emotional risks.

Anthropomorphism in social robots

Anthropomorphism in social robots, whereby they are perceived as partners or even group members, is driven by suggestive language and by robot designs that resemble humans or animals. Even within robotics research, anthropomorphic language is common, such as describing a robot as "smiling" or "frowning," which leads people to project human-like qualities onto these machines and raises their expectations.

Existing social relationships shape how humans interact with social robots, leading people to evaluate these robots as if they were real individuals. Consistent with media equation theory, users often perceive social robots as partners. This expectation sometimes conflicts with the reality of the robots' limited capabilities and can lead to emotional dependence.

Rodogno argues that interacting with social robots resembles engaging with narratives, and the emotions generated are a result of imagination. However, deriving emotional satisfaction from active deception raises ethical concerns. There is a need to make the public aware of anthropomorphism and the risks of active deception in human-social robot interactions. It is crucial to differentiate between interpersonal interactions and interactions with social robots, considering the limitations and ethical implications of the latter.

Several approaches attempt to address the risks associated with human interactions with social robots. However, each of the approaches presents its own set of challenges.

The VIE framework

To improve the understanding and portrayal of the relationship between humans and social robots, an exploration of the VIE is essential. Such an account can alleviate the challenges related to social robot anthropomorphism and diversify how social robots are perceived across various scenarios. The virtual environment of social robots revolves around virtual interactions, manifesting primarily as emotional connections formed through human-social robot interaction.

Three distinctive features characterize the interaction between humans and social robots within the virtual environment. First, the interaction is framed from a human perspective, treating social robots as entities capable of participating in virtual interactions. Second, it reflects the distance between the virtual nature of human-social robot interaction and the instrumental nature of the social robot itself. Unlike traditional tools that serve predefined functions, social robots engage in virtual interactions driven by the need to fulfil human emotional companionship, and this experience leads users to become unconsciously immersed in the virtual environment. Finally, understanding of virtual interaction evolves through three progressive stages, shaping individuals' perceptions of the relationship between humans and social robots.

To address the challenges posed by anthropomorphism and promote a more nuanced understanding of human-social robot interactions, the researchers introduce the concept of VIE Indication (VIEI): the process of clearly identifying and declaring the virtual nature of human-social robot interaction during the deployment and use of social robot products.

This procedure ensures that human participants are adequately informed and aware that they are engaging in a virtual interaction with social robots. Consequently, it encourages them to remain cautious about potential deception risks arising from anthropomorphism.

VIEI offers several advantages. It redefines the responsibility of social robot producers, emphasizing the protection of emotional rights for vulnerable groups such as children. It distinguishes between the responsibilities of social robots and animals, highlighting the producer's role in managing potential emotional risks. VIEI aligns with the principles of corporate social responsibility, holding manufacturers accountable for the societal impact of their products. It also reallocates responsibility between manufacturers and users based on their respective impacts on outcomes.

Moreover, VIEI helps users avoid disappointment in social robots by setting realistic expectations. It constructs diverse images of social robots, reducing the likelihood that users develop unrealistic expectations and feelings of disappointment or deception. It fosters a rational understanding of social robots and promotes their better adaptation to societal mechanisms. Finally, it contributes to a more comprehensive portrayal of robot ethics by emphasizing the role of interaction in shaping ethical practices.

Conclusion

In summary, the researchers explored the emotional risks linked to the anthropomorphization of social robots. They highlighted the limitations of existing coping methods and introduced the concept of the VIE in the context of human-social robot interactions. To address concerns about active deception, VIEI is proposed as a process by which social robot producers clearly declare the virtual nature of the interaction during design and deployment. Future research should further clarify the application and regulation of this concept, aiding a better understanding of human-social robot interaction processes.


Written by

Dr. Sampath Lonka

Dr. Sampath Lonka is a scientific writer based in Bangalore, India, with a strong academic background in Mathematics and extensive experience in content writing. He has a Ph.D. in Mathematics from the University of Hyderabad and is deeply passionate about teaching, writing, and research. Sampath enjoys teaching Mathematics, Statistics, and AI to both undergraduate and postgraduate students. What sets him apart is his unique approach to teaching Mathematics through programming, making the subject more engaging and practical for students.

