Enhancing Human Welfare Through AI-Enabled Stingy Bots in Sharing Networks

In an article recently published in the journal Scientific Reports, researchers demonstrated that stingy bots can improve human welfare in experimental sharing networks.

Study: Enhancing Human Welfare Through AI-Enabled Stingy Bots in Sharing Networks. Image credit: Generated using DALL·E 3

Background

Artificial intelligence (AI)-powered machines are increasingly being incorporated into different social and economic interactions in human groups, including resource sharing. For instance, robots share roles and tasks with human teammates, chatbots share time to coordinate and communicate with humans, and autonomous vehicles share roads with human drivers.

Human-like social and moral preferences in AI and machines are crucial for improving human welfare in these hybrid systems. However, the kind of individual behavior that provides collective benefits depends on the network dynamics and structure where individuals interact with each other.

People develop power inequality and social relations through the social exchange of limited resources, such as time, space, food, and explicit commitments. Machines that simply reward their immediate partners, while ignoring the specific network mechanisms of social exchange and resource sharing, cannot reduce power disparities within a human group or improve human welfare.

The study

In this study, researchers performed an online virtual-lab experiment in which simple networks of humans played an economic resource-sharing game, to which the researchers sometimes added artificial agents (bots). The experiment investigated two opposing machine allocation policies: reciprocal bots, which share all of their resources reciprocally, and stingy bots, which share none.
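The two policies can be sketched as simple allocation rules. This is a hypothetical illustration: the function names and the proportional-return rule for reciprocity are assumptions, not the paper's implementation.

```python
def stingy_allocation(budget, partners):
    # Stingy policy: keep the entire budget; every partner receives 0.
    return {p: 0 for p in partners}

def reciprocal_allocation(budget, partners, received):
    # Reciprocal policy: share the whole budget, splitting it in
    # proportion to what each partner gave last round (equally when
    # nothing has been received yet).
    total = sum(received.get(p, 0) for p in partners)
    if total == 0:
        return {p: budget / len(partners) for p in partners}
    return {p: budget * received.get(p, 0) / total for p in partners}
```

For example, with a budget of 10 and two partners who previously gave 3 and 1, a reciprocal bot under this rule returns 7.5 and 2.5 respectively, while a stingy bot returns nothing to either.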

The impact of both bot policies on collective welfare in human groups was evaluated along four dimensions: wealth, wealth inequality, satisfaction, and satisfaction inequality. Based on network exchange theory, the researchers hypothesized that stingy bots placed at a specific network position can improve and maintain collective welfare among human participants by enabling reciprocity between them.
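Inequality along the wealth and satisfaction dimensions is commonly summarized with a Gini coefficient. A minimal sketch follows (an illustrative metric choice; the study's exact inequality measure is not specified here):

```python
def gini(values):
    # Gini coefficient: 0 for perfect equality, approaching 1 as one
    # member holds everything. Uses the sorted cumulative-share form.
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * v for i, v in enumerate(vals))
    return (2 * cum) / (n * total) - (n + 1) / n
```

A four-person group with equal wealth scores 0, while concentrating all wealth in one member scores 0.75.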

Four hypotheses, designated H1 through H4, were evaluated in the experiment. H1 held that a stingy bot located at the central node would facilitate reciprocal exchanges between individuals and improve group-wide satisfaction, while H2 held that a stingy bot located at a semi-peripheral node would hinder reciprocal exchanges and reduce group-wide satisfaction.

H3 held that a stingy bot located at a peripheral node would affect reciprocal exchanges between individuals without impacting group-wide satisfaction, while H4 held that a reciprocal bot would yield no substantial improvement in group-wide satisfaction or sharing dynamics, irrespective of its network position.

Researchers randomly assigned 496 human participants to one of seven conditions: a bot-free control condition and six treatment conditions combining the bots' network location and allocation policy. The experiment comprised 120 sessions, with 15 to 20 sessions per condition.

Each participant was randomly assigned to a node position in a five-node path network and played the resource-sharing game for 10 rounds. After the game, participants rated their satisfaction with it on a 7-point scale.
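The session structure can be sketched as a toy simulation of the five-node path network (nodes 0-1-2-3-4, with node 2 central, 1 and 3 semi-peripheral, and 0 and 4 peripheral), with an optional stingy bot at one position. The `run_session` helper and the naive-reciprocator model of the human players are assumptions for illustration, not the study's behavioral model.

```python
def run_session(bot_position=None, rounds=10, budget=10):
    # Five-node path network 0-1-2-3-4. An optional stingy bot sits at
    # `bot_position`; every other node is modeled as a naive
    # reciprocator that shares its whole budget in proportion to what
    # each neighbor gave it last round (equally at first).
    edges = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    received = {n: {p: 0.0 for p in edges[n]} for n in edges}
    wealth = {n: 0.0 for n in edges}

    def split(partners, recv):
        total = sum(recv.values())
        if total == 0:
            return {p: budget / len(partners) for p in partners}
        return {p: budget * recv[p] / total for p in partners}

    for _ in range(rounds):
        # Decide all transfers first, based on last round's receipts.
        sent = {}
        for n, partners in edges.items():
            if n == bot_position:
                sent[n] = {p: 0.0 for p in partners}  # stingy policy
            else:
                sent[n] = split(partners, received[n])
        # Deliver transfers; unshared budget is kept as wealth.
        for n in edges:
            wealth[n] += budget - sum(sent[n].values())
            for p, amount in sent[n].items():
                wealth[p] += amount
                received[p][n] = amount
        # (Satisfaction was measured separately via the 7-point survey.)
    return wealth
```

Under these assumptions, total wealth per round equals the group's combined budget, so placement only redistributes it; the study's welfare effects operate through the reciprocity patterns the bot's position induces.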

Study findings

The experimental results fully supported the pre-registered H1, H2, and H3 hypotheses and partially supported H4. Reciprocal bots did little to change the unequal distribution of resources among people, largely because their policy resembled that of most human participants.

However, the results showed that reciprocal bots improved collective welfare when located at a semi-peripheral node position. This exception to H4 occurred because reciprocal bots treated their local partners more fairly than humans did.

Stingy bots shifted the wealth and satisfaction of human groups significantly, both positively and negatively, because their behavior differed from typical human behavior: unlike humans, they were insensitive to reciprocity norms. When placed at the central node, stingy bots improved people's satisfaction by enabling reciprocal transactions between them. Central stingy bots reduced the average satisfaction of players at semi-peripheral positions but significantly increased that of players at peripheral positions, improving group-wide satisfaction overall.

This finding was evidence against a dyadic economic view of AI-human interaction. Specifically, people consider both economic outcomes and the social processes leading to those outcomes when assessing subjective welfare. Thus, stingy bots placed at a specific network position can act as social catalysts, balancing structural power and improving collective welfare in human groups without bestowing any wealth on people.

However, introducing stingy bots also carries significant risk: when misplaced, they can lead to adverse consequences. This indicates that machine behavior may need to be designed differently from human behavior in order to break down structural barriers.

To summarize, the study's findings emphasize the need to incorporate the human norms of reciprocity and relational interdependence when designing machine behavior in sharing networks.


Written by

Samudrapom Dam

Samudrapom Dam is a freelance scientific and business writer based in Kolkata, India. He has been writing articles related to business and scientific topics for more than one and a half years. He has extensive experience in writing about advanced technologies, information technology, machinery, metals and metal products, clean technologies, finance and banking, automotive, household products, and the aerospace industry. He is passionate about the latest developments in advanced technologies, the ways these developments can be implemented in a real-world situation, and how these developments can positively impact common people.

Citations

To cite this article in your essay, paper, or report:

  • APA

    Dam, Samudrapom. (2023, October 25). Enhancing Human Welfare Through AI-Enabled Stingy Bots in Sharing Networks. AZoAi. Retrieved on July 06, 2024 from https://www.azoai.com/news/20231025/Enhancing-Human-Welfare-Through-AI-Enabled-Stingy-Bots-in-Sharing-Networks.aspx.


