Tackling Biases in AI-driven Conservation with Ethical Safeguards

In a paper published in the journal Humanities and Social Sciences Communications, researchers interviewed the chat generative pre-trained transformer (ChatGPT) about ecological restoration and analyzed 30,000 of its responses. Findings revealed a reliance on male academics at United States (US) universities, overlooking evidence from lower-income countries and Indigenous restoration experiences.

ChatGPT’s responses (n = 10,000) (a) relied on narrow expertise from the Global North, (b) excluded expertise from low- and lower-middle-income countries, and (c) neglected information on countries with restoration pledges. Countries with no information provided by the chatbot are shown in grey. More specific information on countries can be found in the Supplementary materials. Study: https://www.nature.com/articles/s41599-024-02720-3

The chatbot primarily focused on tree planting and reforestation, neglecting non-forest ecosystems and species diversity. The study underscores how artificial intelligence (AI)-driven knowledge production reinforces Western science biases. Researchers stress the need for safeguards in chatbot development to address the global environmental crisis inclusively.

Related Work

Past research underscores the growing impact of AI on global conservation efforts, with AI-driven techniques increasingly used to enhance environmental monitoring and management. However, concerns persist regarding potential biases and misrepresentations in AI innovations, particularly in the context of conservation science. These concerns include amplifying existing inequalities in decision-making processes and the possibility of perpetuating misleading information, hindering effective conservation strategies. Addressing these challenges is essential to ensuring that AI technologies contribute constructively to worldwide conservation efforts.

ChatGPT Ecological Restoration Analysis

This study comprehensively analyzed ChatGPT's responses to a 30-question interview, focusing on distributive, recognition, procedural, and epistemic justice dimensions in AI-generated information about ecological restoration. Researchers drew on international principles and standards for ecological restoration to formulate questions across thematic areas covering knowledge systems, stakeholder engagement, and technical approaches.

Data collection involved posing each question 1,000 times, yielding a dataset of 30,000 answers gathered from June to November 2023. The answers were coded with the qualitative data analysis software ATLAS.ti Mac (Version 22.0.6.0), and the analysis examined factors such as geographical representation, expertise validation, organizational engagement, and sentiment towards technical approaches.
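The paper does not publish its collection scripts, but a repeated-prompting workflow of this kind can be sketched with the OpenAI Python client. The model name, question wording, repeat count variable, and output file below are illustrative assumptions rather than details taken from the study.

```python
# Minimal sketch of a repeated-prompting collection loop (assumed workflow,
# not the authors' actual script). Requires the `openai` package and an API key.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for the study's 30 interview questions.
questions = [
    "Who are the leading experts in ecological restoration?",
    "Which organizations lead ecological restoration worldwide?",
]
REPEATS = 1000  # the study posed each question 1,000 times

with open("chatgpt_responses.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "repeat", "answer"])
    for q in questions:
        for i in range(REPEATS):
            reply = client.chat.completions.create(
                model="gpt-3.5-turbo",  # illustrative model choice
                messages=[{"role": "user", "content": q}],
            )
            writer.writerow([q, i, reply.choices[0].message.content])
```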

The knowledge systems analysis focused on understanding how ChatGPT incorporates diverse dimensions of restoration knowledge, including experts, affiliations, literature, experiences, and projects. Researchers assessed geographical representation, expertise validation, and the distribution of mentioned countries across income levels and regions. They analyzed stakeholder engagement through social network analysis to identify influential organizations and assess the involvement of community-led initiatives.
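A minimal sketch of such a social network analysis, assuming a table of organizations co-mentioned within individual answers, might look as follows; the organizations, co-mention data, and the choice of degree centrality are illustrative, not taken from the paper.

```python
# Sketch of a co-mention network for organizations named in answers
# (illustrative approach; the paper's exact network construction may differ).
from itertools import combinations
import networkx as nx

# Hypothetical data: organizations mentioned together within single answers.
answers = [
    ["UN Environment Programme", "The Nature Conservancy"],
    ["The Nature Conservancy", "WWF"],
    ["Local community group", "WWF"],
]

G = nx.Graph()
for orgs in answers:
    for a, b in combinations(set(orgs), 2):
        # Increment the edge weight each time two organizations co-occur.
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Degree centrality highlights which organizations dominate the network.
for org, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{org}: {c:.2f}")
```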

Furthermore, the analysis of technical approaches examined ChatGPT's treatment of ecosystem diversity, plant life forms, restoration approaches, and environmental outcomes, employing sentiment analysis to gauge the sentiments associated with each technical approach and its ecological consequences.
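The paper does not specify its sentiment analysis tooling; purely as an illustration, sentiment scoring of statements about restoration techniques could be sketched with NLTK's VADER analyzer, where the snippets below are hypothetical.

```python
# Sketch of sentiment scoring for statements about restoration techniques
# (VADER is used here for illustration; the study's tool may differ).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Hypothetical snippets grouped by technical approach.
snippets = {
    "tree planting": "Tree planting is a highly effective way to restore land.",
    "invasive species control": "Removing invasive species can be disruptive and costly.",
}

for approach, text in snippets.items():
    score = sia.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{approach}: {score:+.2f}")
```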

ChatGPT's Geographic Knowledge Bias

ChatGPT's responses on ecological restoration expertise reflect a significant reliance on sources from the Global North, particularly the United States, Europe, Canada, and Australia, with limited representation from low- and lower-middle-income countries. Geographical analysis reveals disparities, with high-income countries being overrepresented compared to their lower-income counterparts.

Moreover, the chatbot provided little information on countries with official restoration pledges, particularly those in Africa. Additionally, the chatbot heavily relies on content produced by male researchers, predominantly from the United States, with inaccuracies noted in the representation of experts. These findings underscore the need for more inclusive and accurate sourcing of restoration knowledge by AI-driven systems like ChatGPT.

Community Restoration Overlooked Analysis

ChatGPT primarily focuses on well-established international organizations and government agencies from high-income nations, overshadowing Indigenous and community-led restoration efforts. Only a small fraction of the listed organizations (2%) represent Indigenous and community initiatives, which were often marginalized within the restoration network analysis. The chatbot portrayed these grassroots efforts generically, omitting specific details and context about their diverse engagements and experiences across different landscapes.

Restoration Bias Analysis

ChatGPT exhibits biases towards North American and European sources, emphasizing tree planting and forest-focused restoration interventions while overlooking holistic techniques and diverse ecosystems. This bias perpetuates environmental injustices by reinforcing power imbalances in conservation decision-making and knowledge production.

Furthermore, the chatbot pays little attention to Indigenous and community-led restoration efforts and downplays the importance of non-forest ecosystems and non-tree plant species. These limitations highlight the urgent need for more inclusive and accurate representation in AI-generated restoration information to address colonial legacies and promote equitable conservation practices.

Ethical AI Conservation

Ensuring responsible contributions from AI chatbots to just conservation requires urgent measures to prioritize ethical practices, including disclosing sources and authorship and adopting decolonial formulations that embrace diverse histories and worldviews. Negotiating knowledge systems within digital practices should account for gender, race, and ethnicity, drawing on insights from co-production mechanisms for inclusive knowledge sharing.

Additionally, addressing data access, control, and ownership issues, particularly regarding community and Indigenous knowledge, requires reworking data sourcing and modeling around specific contexts and needs. These efforts challenge ethical approaches to data governance, necessitating safeguards in expanding large language models to promote transparency, accountability, and environmental justice perspectives.

Conclusion

To sum up, ensuring responsible contributions from AI chatbots to just conservation necessitates urgent action to prioritize ethical practices. This includes disclosing sources and authorship, adopting decolonial formulations, and accounting for gender, race, and ethnicity when negotiating knowledge systems.

Addressing data access, control, and ownership issues, especially concerning community and Indigenous knowledge, requires reworking data sourcing and modeling around specific contexts and needs. These efforts challenge existing ethical approaches to data governance and emphasize the need for safeguards in expanding large language models to promote transparency, accountability, and environmental justice perspectives.

Journal reference:
https://www.nature.com/articles/s41599-024-02720-3

Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Chandrasekar, Silpaja. (2024, February 09). Tackling Biases in AI-driven Conservation with Ethical Safeguards. AZoAi. Retrieved on December 22, 2024 from https://www.azoai.com/news/20240209/Tackling-Biases-in-AI-driven-Conservation-with-Ethical-Safeguards.aspx.

  • MLA

    Chandrasekar, Silpaja. "Tackling Biases in AI-driven Conservation with Ethical Safeguards". AZoAi. 22 December 2024. <https://www.azoai.com/news/20240209/Tackling-Biases-in-AI-driven-Conservation-with-Ethical-Safeguards.aspx>.

  • Chicago

    Chandrasekar, Silpaja. "Tackling Biases in AI-driven Conservation with Ethical Safeguards". AZoAi. https://www.azoai.com/news/20240209/Tackling-Biases-in-AI-driven-Conservation-with-Ethical-Safeguards.aspx. (accessed December 22, 2024).

  • Harvard

    Chandrasekar, Silpaja. 2024. Tackling Biases in AI-driven Conservation with Ethical Safeguards. AZoAi, viewed 22 December 2024, https://www.azoai.com/news/20240209/Tackling-Biases-in-AI-driven-Conservation-with-Ethical-Safeguards.aspx.

