How Is Generative AI Redefining Trust?

Uncover the critical shifts in trust dynamics as AI chatbots challenge traditional beliefs, and learn why developers hold the key to future digital trust.

Generative AI and Its Implications for Definitions of Trust

In an article published in the journal Information, researchers critically examined how generative artificial intelligence (GenAI) chatbots challenge traditional assumptions about trust.

They identified four key assumptions—default trust levels, identifying human versus AI agents, quantifying trust, and the absence of expected gains in trust relationships—that may no longer apply to AI chatbots. The authors proposed adjustments to the definition and object-oriented (OO) model of trust to better reflect the complexities introduced by generative AI, emphasizing the expanded role of developers and the influence of training data.

Background

The topic of trust in AI gained attention with Microsoft's Tay, a chatbot shut down in 2016 for generating inappropriate content. Previous research highlighted the unpredictability of learning software and emphasized the developer's ethical responsibility when software interacts with users.

However, GenAI has evolved significantly since the Tay incident, raising new concerns about autonomy, control, and accountability in AI systems. Misinformation and deepfakes have further eroded trust in AI, though these concerns echo earlier issues such as transparency and responsibility. The paper addresses gaps in prior work by reassessing how GenAI chatbots affect traditional models of trust and by proposing improvements to accommodate these challenges.

Trust Dynamics in AI Agents

The OO model of trust was applied to understanding trust in artificial agents (AAs): autonomous entities that interact with their environment and adapt accordingly. AAs, including GenAI chatbots, represent a shift in trust dynamics between humans and machines.

Trust, as defined in the model, is a relationship between a trustor (A) and a trustee (B), where both can be human or artificial. The trustor delegates a task to the trustee, and trust involves risk and an expectation of gain. The paper highlights that for AAs, risk and trust are often explicitly quantified, unlike in humans.

The model distinguishes two primary contexts for trust: face-to-face (f2f) trust, which requires physical presence, and electronic trust (e-trust), which occurs via digital interaction. It also identifies four classes of trust relationships: human-to-human, human-to-AA, AA-to-human, and AA-to-AA.
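
To make this structure concrete, the following is a minimal Python sketch of how such a model might be encoded. The names here (AgentKind, TrustContext, TrustRelationship, risk, expected_gain) are our own illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class AgentKind(Enum):
    """Kinds of agents the model distinguishes."""
    HUMAN = auto()
    ARTIFICIAL = auto()  # AAs, including GenAI chatbots


class TrustContext(Enum):
    """The two primary contexts for trust."""
    FACE_TO_FACE = auto()  # f2f: requires physical presence
    ELECTRONIC = auto()    # e-trust: occurs via digital interaction


@dataclass
class TrustRelationship:
    """Trustor A delegates a task to trustee B; the relationship
    involves risk and an expectation of gain."""
    trustor: AgentKind
    trustee: AgentKind
    context: TrustContext
    task: str
    risk: Optional[float] = None           # often explicit for AAs
    expected_gain: Optional[float] = None  # often tacit for humans

    def relationship_class(self) -> str:
        """Return one of the four classes: human-to-human,
        human-to-AA, AA-to-human, or AA-to-AA."""
        label = {AgentKind.HUMAN: "human", AgentKind.ARTIFICIAL: "AA"}
        return f"{label[self.trustor]}-to-{label[self.trustee]}"


# Example: a student asking a chatbot for help is a human-to-AA
# relationship in the e-trust context.
r = TrustRelationship(AgentKind.HUMAN, AgentKind.ARTIFICIAL,
                      TrustContext.ELECTRONIC, task="summarize an article")
print(r.relationship_class())  # human-to-AA
```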

Generative AI chatbots (GenAICs) significantly impact these trust dynamics, especially in education, business, and social media. In education, for instance, GenAICs fundamentally alter the traditional student-teacher trust dynamic by integrating AAs into the learning process, raising new challenges in assessing authenticity. Similarly, AI-driven deepfakes have blurred the line between reality and deception in business and politics, making trust harder to establish. The OO model of trust therefore requires adaptation to these evolving contexts.

Re-Examining Trust and Its Underlying Assumptions

The researchers revisited the assumptions underlying trust in human-agent interactions, specifically in the context of AAs and GenAI chatbots. A key assumption in Taddeo's trust model was that agents can categorize each other as human or AA, but recent examples show this is not always the case. To address this, the revised model adds an "uncategorized agent" category to account for situations where an agent's identity is unknown, particularly in electronic interactions.
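
In terms of the sketch above, this revision amounts to one extra category value; the name UNCATEGORIZED below is our own placeholder for the paper's "uncategorized agent".

```python
from enum import Enum, auto


class AgentKind(Enum):
    """Revised categorization: a trustor may be unable to tell
    whether the other party is human or artificial."""
    HUMAN = auto()
    ARTIFICIAL = auto()
    UNCATEGORIZED = auto()  # identity unknown; common in e-trust settings
                            # where GenAI can pass as human
```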

The study emphasizes the complexity of trust in GenAI chatbots, highlighting that their behaviors often stem from learned patterns and cannot be easily quantified, unlike those of simpler AAs. This has implications for how trust is established and maintained in these systems: the revised trust facets acknowledge that trust behaviors in GenAI might emerge from training data rather than explicit programming.

Additionally, facet 4 was expanded to include the role of training data in shaping an artificial agent's decision-making and trust expectations. Facet 5 now considers the growing uncertainty regarding the awareness and identity of agents, given GenAI's ability to mimic human behavior, while facet 6 remains unchanged because it already incorporates neural networks.
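
As a rough illustration of the facet-4 revision, trust expectations about an AA could reference the provenance of its behavior; the class, field, and method names below are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class BehaviorProvenance:
    """Where an artificial agent's trust-relevant behavior comes from."""
    explicitly_programmed: bool                       # rules written by developers
    training_corpora: list[str] = field(default_factory=list)

    def source(self) -> str:
        """Programmed behavior can be inspected and quantified by design;
        learned behavior emerges from training data and is harder to
        bound in advance."""
        if self.explicitly_programmed and not self.training_corpora:
            return "programmed"
        return "emergent"
```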

Do We Need to Create a New Class?

The authors discussed the complexities GenAI introduces into trust models, particularly in relationships involving AAs. Four of the twelve trust subclasses were revised to address these complexities. Human-to-human trust relationships remain unchanged, but the roles of developers and training data are now emphasized in GenAI trust dynamics: trust in an AA often includes implicit trust in its developers and training data, even when the trustor is unaware of these influences.
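
To illustrate that revision, a trustee object for a GenAI chatbot might carry references to its developers and training-data sources, so that the full set of implicitly trusted parties can be enumerated. All names below are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class GenAITrustee:
    """A GenAI chatbot as trustee: trusting it implicitly extends
    trust to its developers and its training data, whether or not
    the trustor is aware of them."""
    name: str
    developers: list[str] = field(default_factory=list)
    training_data_sources: list[str] = field(default_factory=list)

    def implicit_trustees(self) -> list[str]:
        """Every party the trustor is implicitly relying on."""
        return [self.name, *self.developers, *self.training_data_sources]


# Hypothetical example: a person trusting a chatbot's answer is also,
# implicitly, trusting its (possibly unknown) developers and data.
bot = GenAITrustee("ExampleChat", ["ExampleCorp"],
                   ["public web crawl", "licensed text corpora"])
print(bot.implicit_trustees())
# ['ExampleChat', 'ExampleCorp', 'public web crawl', 'licensed text corpora']
```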

The analysis highlighted the risks of trusting unknown entities, especially in education and social media, where GenAI outputs might distort trust relationships. For example, teachers' trust in students might erode if GenAI-generated work includes misinformation, while on social media, people might unknowingly trust manipulated content created by AAs. The revised model addresses these challenges by acknowledging the unseen roles that developers and training data play in GenAI interactions.

Conclusion

The researchers highlighted the need to re-evaluate traditional trust models in the context of GenAI. The rise of AI-driven systems introduces complexities that challenge long-standing assumptions about trust, particularly regarding the evolving roles of humans and AAs.

The inclusion of developers and training data in trust relationships underscores the importance of transparency and accountability. As AI technologies continue to evolve, incorporating trust models into the development process becomes essential to mitigate risks and to ensure that ethical considerations are addressed before harm occurs in societal applications.

Journal reference:
  • Wolf, M. J., Grodzinsky, F., & Miller, K. W. (2024). Generative AI and Its Implications for Definitions of Trust. Information, 15(9), 542. DOI: 10.3390/info15090542. https://www.mdpi.com/2078-2489/15/9/542

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine Learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

