In a recent paper published in the journal Scientific Reports, researchers explored how social context affects trust when cooperating with an algorithm.
Background
Algorithms play a pivotal role in contemporary society across domains such as entertainment, administration, and healthcare. Trust strongly shapes how individuals engage with algorithms, including artificial intelligence systems, and is molded not only by objective algorithmic attributes but also by the context in which these interactions take place.
Research on human-algorithm cooperation, particularly in economic and social dilemmas, consistently shows that people are less cooperative and trusting when interacting with algorithms than with humans. This holds even when the algorithm's behavior adapts to maximize cooperation: the mere knowledge that one is interacting with an algorithm impedes cooperation. Social context and framing also shape cooperation in various ways. The present study investigates whether social presence can mitigate this distrust of algorithms.
Experimental Design and Procedure
The experiment involved 101 participants, who either volunteered or received course credit for their participation. They were recruited through the second author's social network and Sona Systems and were randomly assigned to one of three conditions: Other-Person-as-Opponent (OPO), Algorithm-as-Opponent (AO), and Algorithm-as-Opponent-in-Social-Context (AOSC). A convenience sample was used because of the complexity of the experimental sessions.
The trust game, hosted on Gorilla, involved trust granting and trust honoring over successive rounds: one player decided whether to trust the opponent, who could then honor or abuse that trust. Participants alternated between the trustor and trustee roles, and the frequency of the algorithm's cooperative (trust-honoring) responses varied across blocks of the game.
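To make the game's structure concrete, the following is a minimal sketch (not the authors' implementation) of an opponent whose trust-honoring frequency differs by block, mirroring the 75 percent vs. 25 percent trust-honoring blocks described below; the round count and the trustor's random policy are illustrative assumptions.

```r
# Minimal illustrative sketch, not the authors' code.
# The opponent honors granted trust with a block-specific probability.
set.seed(1)

play_block <- function(n_rounds, p_honor) {
  granted <- runif(n_rounds) < 0.5                  # placeholder trustor policy
  honored <- granted & (runif(n_rounds) < p_honor)  # opponent honors with prob. p_honor
  data.frame(round = seq_len(n_rounds), granted = granted, honored = honored)
}

high_block <- play_block(n_rounds = 20, p_honor = 0.75)  # 75% trust-honoring block
low_block  <- play_block(n_rounds = 20, p_honor = 0.25)  # 25% trust-honoring block
```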
Participants in the OPO and AOSC conditions engaged in a video call with the experimenter via Microsoft Teams before starting the experiment. In the AO condition, participants were directed to the experiment through a link in the Sona recruitment system, with no video call. In all conditions, participants received instructions and gave informed consent before beginning the trust game.
Data analysis was conducted in R, focusing on the two cooperative responses (trust honoring and trust granting). Generalized linear mixed models were fitted with fixed effects of block type (75 percent vs. 25 percent trust honoring) and condition. Game round was included as a numerical predictor to capture trends over the course of the game, and a random intercept was estimated per participant. Individual differences in the effects of block type and game round were modeled as random slopes.
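A hedged sketch of this model family, assuming the lme4 package (the data frame and column names are hypothetical, and the authors' exact formula may differ):

```r
# Logistic GLMM sketch: a binary trust response predicted by block type,
# condition, and game round, with a per-participant random intercept and
# random slopes for block type and round. All names are illustrative only.
library(lme4)

m_grant <- glmer(
  trust_granted ~ block_type * condition * round +
    (1 + block_type + round | participant),
  data   = trust_data,                 # hypothetical data frame of game responses
  family = binomial(link = "logit")
)
summary(m_grant)
```

The random-slope terms in the formula correspond to the individual differences in the block-type and game-round effects mentioned above.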
Statistical Analysis
In terms of trust granting, condition had a significant main effect: the probability of granting trust was higher in the OPO condition than in the AO condition, while the AOSC condition did not differ significantly from either the OPO or the AO condition. Block type also had a significant main effect, with a higher trust-granting probability in the 75 percent trust-honoring blocks than in the 25 percent trust-honoring blocks. Trust-granting probability decreased over game rounds.
The interaction between condition and game round was significant: the decline in trust granting was significantly steeper in the AO condition than in the OPO condition, but not significantly steeper than in the AOSC condition, and the difference in decline between the AOSC and OPO conditions was also not significant. Block type likewise interacted significantly with game round, with trust granting declining more steeply in the 25 percent trust-honoring blocks than in the 75 percent trust-honoring blocks. Block type and condition did not interact significantly, nor was the three-way interaction significant.
For trust honoring, condition had no significant main effect. Block type had a significant main effect, with a higher trust-honoring probability in the 75 percent trust-honoring blocks than in the 25 percent trust-honoring blocks, and trust-honoring probability also decreased over game rounds. The interaction between condition and game round was significant: the decline in trust honoring was markedly steeper in the AO condition than in both the OPO and AOSC conditions, while the slopes of the OPO and AOSC conditions did not differ significantly. There were no significant interactions between game round and block type or between block type and condition, and no significant three-way interaction.
Additional analyses were conducted to test the robustness of the findings, taking further factors and statistical power into account. These analyses confirmed the main findings, including the positive influence of social presence on cooperative responses during the trust game.
In the combined analysis of trust granting and trust honoring, the main effect of condition remained significant, but the specific difference between the AO and OPO conditions did not. Simplifying the model's random structure yielded similar results, with condition significantly affecting trust granting. Centered treatment contrasts further highlighted the differences between conditions, with the AO condition showing a steeper decrease in trust granting and trust honoring than the other conditions.
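For readers unfamiliar with centered treatment contrasts, the snippet below is a hedged illustration (the factor and data frame names are assumptions): treatment coding is shifted so each contrast column sums to zero, meaning lower-order effects are evaluated at the grand mean of the conditions rather than at a reference level.

```r
# Centered treatment contrasts: subtract each contrast column's mean so the
# columns sum to zero across the three conditions. Names are illustrative.
trust_data$condition <- factor(trust_data$condition,
                               levels = c("OPO", "AO", "AOSC"))
centered <- contr.treatment(3) - matrix(1 / 3, nrow = 3, ncol = 2)
contrasts(trust_data$condition) <- centered
```

Centering matters here because the model contains interactions: with uncentered treatment coding, main effects would be estimated at the reference condition rather than averaged across all conditions.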
Conclusion
In conclusion, the researchers found that people trust algorithms more when interacting with them in the presence of others. Although this finding was predicted by social presence theory, future research must delve deeper into the cognitive mechanisms responsible for the effect. Nonetheless, these results highlight a functional relationship between social presence and trust, offering potential insights for future interventions even in the absence of a complete understanding of the underlying cognitive mechanisms.