In an article recently published in the journal Scientific Reports, researchers demonstrated the effectiveness of stingy bots in improving human welfare in experimental sharing networks.
Background
Artificial intelligence (AI)-powered machines are increasingly being incorporated into different social and economic interactions in human groups, including resource sharing. For instance, robots share roles and tasks with human teammates, chatbots share time to coordinate and communicate with humans, and autonomous vehicles share roads with human drivers.
Human-like social and moral preferences in AI and machines are crucial for improving human welfare in these hybrid systems. However, the kind of individual behavior that provides collective benefits depends on the network dynamics and structure where individuals interact with each other.
People develop social relations and power inequalities through the social exchange of limited resources, such as explicit commitments, time, space, and food. Machines that reward their immediate partners while ignoring the network mechanisms of social exchange and resource sharing cannot reduce power disparities within a human group or benefit human welfare.
The study
In this study, researchers performed an online (virtual lab) experiment in which simple networks of humans played an economic resource-sharing game, and to which the researchers sometimes added artificial agents (bots). The experiment investigated two opposite machine allocation policies: reciprocal bots, which reciprocally share all of their resources, and stingy bots, which share no resources at all.
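To make the two policies concrete, here is a minimal sketch in Python. The function names and the proportional-split rule for the reciprocal bot are illustrative assumptions; the summary states only that reciprocal bots share all of their resources and stingy bots share none.

```python
def stingy_allocation(endowment, neighbors):
    """A stingy bot shares nothing: every neighbor receives zero."""
    return {neighbor: 0 for neighbor in neighbors}


def reciprocal_allocation(endowment, neighbors, received_last_round):
    """A reciprocal bot gives away its whole endowment, here split in
    proportion to what each neighbor gave it in the previous round
    (the proportional rule is an assumption for illustration)."""
    total = sum(received_last_round.get(n, 0) for n in neighbors)
    if total == 0:
        # Nothing to reciprocate yet: split the endowment evenly.
        share = endowment // len(neighbors)
        return {n: share for n in neighbors}
    return {n: endowment * received_last_round.get(n, 0) // total for n in neighbors}


# Example: a bot with 10 units and two neighbors, one of whom gave 4 last round.
print(stingy_allocation(10, ["left", "right"]))       # {'left': 0, 'right': 0}
print(reciprocal_allocation(10, ["left", "right"], {"left": 4, "right": 0}))
```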
The impact of both bot policies on collective welfare in human groups was evaluated across four dimensions: wealth, wealth inequality, satisfaction, and satisfaction inequality. Based on network exchange theory, the researchers hypothesized that stingy bots located at a specific network position could improve and maintain collective welfare by enabling reciprocity among the human participants.
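As an illustration of how these four dimensions might be computed for one group, here is a hedged sketch. The summary does not name the paper's inequality measure, so the Gini coefficient, a common choice, is assumed here purely for illustration, and the example numbers are hypothetical.

```python
def gini(values):
    """Gini coefficient: 0 means perfect equality; values near 1 mean
    one player holds nearly everything."""
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    ordered = sorted(values)
    # Standard formula based on the rank-weighted sum of the ordered values.
    weighted_sum = sum((rank + 1) * v for rank, v in enumerate(ordered))
    return (2 * weighted_sum) / (n * total) - (n + 1) / n


wealth = [12, 3, 20, 4, 11]        # hypothetical per-player resource totals
satisfaction = [6, 3, 7, 2, 5]     # hypothetical 7-point satisfaction ratings

group_welfare = {
    "wealth": sum(wealth),
    "wealth_inequality": gini(wealth),
    "satisfaction": sum(satisfaction) / len(satisfaction),
    "satisfaction_inequality": gini(satisfaction),
}
print(group_welfare)
```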
Four pre-registered hypotheses, designated H1 through H4, were evaluated in the experiment. H1 held that a stingy bot placed at the central node would facilitate reciprocal exchanges between individuals and improve their group-wide satisfaction, while H2 held that a stingy bot placed at a semi-peripheral node would hinder people's reciprocal exchanges and reduce their group-wide satisfaction.
H3 held that a stingy bot placed at a peripheral node would alter reciprocal exchanges between individuals without impacting their group-wide satisfaction, while H4 held that a reciprocal bot would yield no substantial improvement in group-wide satisfaction or sharing dynamics, irrespective of the bot's position in the network.
The researchers randomly assigned 496 human participants to one of seven conditions: a control condition with no bot, and six treatment conditions crossing the bot's network location (central, semi-peripheral, or peripheral) with its allocation policy (reciprocal or stingy). The experiment ran for 120 sessions in total, with 15 to 20 sessions per condition.
Within each session, participants were randomly assigned to node positions in a five-node path network and played the resource-sharing game for 10 rounds. After the tenth round, participants rated their satisfaction with the game on a 7-point scale.
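The design can be summarized in a short sketch: a five-node path network whose endpoints are peripheral, whose middle node is central, and whose remaining two nodes are semi-peripheral, crossed with the two bot policies to give the six treatment conditions plus a control. The position labels are inferred from the hypotheses above; variable names are illustrative.

```python
from itertools import product

# Five-node path network: 0 - 1 - 2 - 3 - 4. Players exchange resources
# only with direct neighbors along these edges.
PATH_EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]

# Structural roles on the path, inferred from the hypotheses above.
POSITIONS = {
    "peripheral": [0, 4],        # endpoints with a single neighbor
    "semi-peripheral": [1, 3],   # between an endpoint and the center
    "central": [2],              # middle node bridging both sides
}

ROUNDS = 10

# One control condition (no bot) plus six treatments crossing the bot's
# allocation policy with its network location.
conditions = [("control", None)] + list(
    product(["reciprocal", "stingy"], ["central", "semi-peripheral", "peripheral"])
)
for policy, location in conditions:
    print(policy, location)
```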
Study findings
The experimental results fully supported the pre-registered H1, H2, and H3 hypotheses and partially supported H4. Reciprocal bots did little to change the unequal distribution of resources among people; they were less effective because their allocation policy resembled that of most of the human participants.
However, the results also showed that reciprocal bots improved collective welfare when, and only when, they were located at a semi-peripheral node. This exception to H4 occurred because the reciprocal bots treated their local partners more fairly than humans did.
Stingy bots, in contrast, shifted the satisfaction and wealth of human groups both negatively and positively, because their behavior differed from that of typical humans: unlike humans, they were insensitive to reciprocity norms. When placed at the central node, these bots improved people's satisfaction by enabling reciprocal transactions between them. Central stingy bots reduced the average satisfaction of players at the semi-peripheral nodes but significantly increased the average satisfaction of players at the peripheral nodes, leading to improved group-wide satisfaction.
This finding is evidence against a purely dyadic, economic view of AI-human interaction: when assessing their subjective welfare, people consider both economic outcomes and the social processes that lead to those outcomes. Thus, when placed at the right network position, stingy bots can act as social catalysts that balance structural power and improve collective welfare in human groups without bestowing any wealth on people.
However, introducing stingy bots also carries significant risk: when misplaced, these bots can produce adverse consequences. This indicates that machine behavior may need to be designed differently from human behavior in order to break up structural barriers.
To summarize, the study's findings emphasized the need to account for the human norms of reciprocity and relational interdependence when designing machine behavior in sharing networks.