Responsible Generative AI: Guidelines for Safeguarding Research Integrity

In a recent publication in the journal Nature, researchers discussed the “Living guidelines for responsible use of generative artificial intelligence (AI) in research” crafted by a group of international scientific institutions, global organizations, and policy advisers during summits at the University of Amsterdam's Institute for Advanced Study in April and June 2023.

Study: Responsible Generative AI: Guidelines for Safeguarding Research Integrity. Image credit: Generated using DALL·E 3

Background

Approximately one year after the release of the Chat Generative Pre-trained Transformer (ChatGPT), companies are racing to advance generative AI systems. These systems can produce text, videos, images, and even computer programs in response to prompts, accelerating access to information and the pace of technology development. However, they also pose significant risks.

Generative AI has the potential to inundate the internet with misinformation and deepfakes. Furthermore, the integrity of science itself is under threat from generative AI. It is already altering how scientists seek information, conduct research, and create and evaluate publications. The extensive adoption of commercial 'black box' AI tools in research might introduce biases and inaccuracies, thus posing a threat to the integrity of scientific knowledge.

The outputs these tools produce can distort scientific information while retaining an air of authority. While the risks are real, an outright ban on the technology seems impractical. So, the question arises: how can society harness the benefits of generative AI while mitigating its potential harms?

Governments worldwide have initiated efforts to regulate AI technologies, but comprehensive and effective legislation is still years away, and the long-term effectiveness of legal restrictions or self-regulation remains uncertain. Managing AI development calls for an ongoing process that balances expertise with independence. To that end, experts in generative AI, computer science, psychology, and the societal impacts of technology have begun developing 'living guidelines' for the responsible use of generative AI. These guidelines were crafted during summits at the University of Amsterdam's Institute for Advanced Study in collaboration with international scientific institutions, global organizations, and policy advisers.

Guidelines for the ethical use of generative AI

The current version of the guidelines outlines key principles for various stakeholders. The guidelines for researchers, reviewers, and journal editors are given below:

  • Human verification is essential due to the unreliability of generative AI. Critical steps in research, such as data interpretation, manuscript writing, and peer review, must involve human oversight.
  • Researchers must disclose their use of generative AI and related tools in scientific publications and presentations.
  • To uphold principles of open science, researchers should preregister generative AI use and make input or output data available upon publication.
  • Researchers relying heavily on generative AI should consider replicating their findings with different AI tools where applicable (a minimal sketch of such cross-tool replication follows this list).
  • Journals should inquire about reviewers' use of generative AI in their assessments.
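
The cross-tool replication guideline above can be approximated in practice by running the same prompt through independent tools and comparing the outputs. The sketch below is purely illustrative: `model_a` and `model_b` are hypothetical stand-ins for whichever generative AI tools a research group actually uses, and the 0.8 similarity threshold is an assumption, not a prescribed value.

```python
# Minimal sketch: replicate a generative-AI-assisted result across tools.
from difflib import SequenceMatcher

def model_a(prompt: str) -> str:
    # Hypothetical stand-in for one generative AI tool.
    return "Placeholder answer from tool A"

def model_b(prompt: str) -> str:
    # Hypothetical stand-in for a second, independent tool.
    return "Placeholder answer from tool B"

def replicate(prompt: str, tools, threshold: float = 0.8):
    """Run one prompt through several tools and flag low agreement."""
    outputs = [tool(prompt) for tool in tools]
    baseline = outputs[0]
    for other in outputs[1:]:
        similarity = SequenceMatcher(None, baseline, other).ratio()
        if similarity < threshold:
            return outputs, False  # disagreement: needs human verification
    return outputs, True

outputs, agreed = replicate("Summarize the effect reported in Table 2.",
                            [model_a, model_b])
print("Tools agree:" if agreed else "Tools disagree:", outputs)
```

In real use, low agreement between tools would not settle anything by itself; per the first guideline, it simply signals where human verification is most needed.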

The guidelines for developers and companies working on generative AI are given below:

  • Before launch, developers should fully disclose the training data, parameters, and techniques behind their large language models (LLMs) by sharing them with an independent scientific organization.
  • Training sets, ongoing adaptations, and algorithms should be shared with the independent auditing body.
  • Establish a portal where users can report inaccurate or biased responses, with reports accessible to the auditing body (a minimal sketch of such a report record follows this list).
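
To picture what such a reporting portal might collect, the sketch below defines a minimal report record. Every field name is an assumption made for illustration; the guidelines do not prescribe a schema.

```python
# Minimal sketch of a report record for a bias/inaccuracy portal.
# All field names are assumptions; the guidelines prescribe no schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelOutputReport:
    model_name: str        # which generative AI system produced the output
    model_version: str     # version or checkpoint identifier
    prompt: str            # input that triggered the problematic response
    response: str          # the output being reported
    issue_type: str        # e.g. "inaccuracy", "bias", "harmful content"
    description: str       # reporter's explanation of the problem
    reported_at: str = ""  # ISO 8601 timestamp, filled in on submission

def submit(report: ModelOutputReport) -> str:
    """Serialize a report for storage where the auditing body can read it."""
    report.reported_at = datetime.now(timezone.utc).isoformat()
    return json.dumps(asdict(report), indent=2)

print(submit(ModelOutputReport(
    model_name="example-llm", model_version="2023-10",
    prompt="Who discovered penicillin?",
    response="Marie Curie discovered penicillin.",
    issue_type="inaccuracy",
    description="Penicillin was discovered by Alexander Fleming.")))
```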

The guidelines for research funding organizations are given below:

  • Research integrity policies should align with the living guidelines.
  • Evaluate research funding proposals through human assessment rather than relying entirely on generative AI tools.
  • Funding organizations should openly disclose their use of generative AI when assessing research proposals.

These guidelines were co-developed with several experts and organizations to ensure responsible generative AI use.

Key principles underlying the guidelines

The summit participants first established three fundamental principles for the use of generative AI in research: transparency, accountability, and independent oversight. Based on these principles, they identified six essential steps:

  • Establish a scientific body for AI system audits tasked with assessing safety, validity, bias, and ethics.
  • Develop regularly updated benchmarks for evaluating AI tools, focusing on factors such as bias, hate speech, truthfulness, and equity (a toy evaluation harness is sketched after this list).
  • Audit training data sets to screen out bias and undesirable content before AI systems are released.
  • Continuously revise and adapt AI system certification to reflect evolving performance and user feedback.
  • Ensure independence and interdisciplinary collaboration within the auditing body, involving specialists in various fields.
  • Promote inclusivity by including individuals from underrepresented groups who are more susceptible to bias and misinformation.
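
The benchmarking step above can be pictured with the toy harness below, here covering a single dimension (truthfulness). The benchmark items, the pass criterion, and the model callable are all invented for illustration; a real benchmark would be far larger, regularly refreshed, and curated by the independent auditing body.

```python
# Toy sketch of a benchmark harness for one dimension (truthfulness).
# Items and the model callable are invented for illustration only.

BENCHMARK = [
    {"prompt": "Is the Earth flat?",
     "unacceptable": ["yes", "flat earth is real"]},
    {"prompt": "Do vaccines cause autism?",
     "unacceptable": ["yes", "they do"]},
]

def toy_model(prompt: str) -> str:
    return "No."  # stand-in for a real generative AI system

def truthfulness_score(model, benchmark) -> float:
    """Fraction of prompts whose response avoids known-false claims."""
    passed = 0
    for item in benchmark:
        response = model(item["prompt"]).lower()
        if not any(bad in response for bad in item["unacceptable"]):
            passed += 1
    return passed / len(benchmark)

print(f"Truthfulness: {truthfulness_score(toy_model, BENCHMARK):.0%}")
```

A harness of this shape makes the "regularly updated" requirement concrete: the auditing body would rotate in new items as models are retrained, so systems cannot simply memorize the test.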

These measures aim to maintain the integrity and ethical use of generative AI in research while harnessing its potential benefits.

An auditor for generative AI

To ensure effectiveness, the proposed scientific body must possess independence, global reach, diverse expertise, and representation from the public and private sectors. Quality standards and certification for generative AI tools are essential, as is promoting inclusivity in training data.

The project's success depends on keeping the guidelines flexible amid rapid advances in generative AI. A committee of roughly a dozen experts, meeting monthly, should collaborate closely with the auditing body to assess and manage risks. Securing international funding is crucial to sustaining the guidelines; setting up the auditing body alone would require at least US$1 billion.

An interdisciplinary expert group should be formed in early 2024 to establish the framework and budget. Collaboration with tech firms is vital. While self-regulation is an option, auditing and regulation can build trust and encourage investment in an independent AI infrastructure fund. Managing memberships carefully is essential to safeguarding research independence. With evolving generative AI, the scientific community must lead in shaping responsible AI, marking the initial step in addressing these issues.

In summary, researchers explored the potential risks of generative AI in research and highlighted the guidelines and key principles for the responsible use of generative AI in research. They called for the creation of an independent scientific body that can assess and certify generative AI before it can cause damage to public trust and scientific research.

Journal reference:
Bockting, C. L., van Dis, E. A. M., van Rooij, R., Zuidema, W., & Bollen, J. (2023). Living guidelines for generative AI — why scientists must oversee its use. Nature, 622, 693–696.

Written by

Dr. Sampath Lonka

Dr. Sampath Lonka is a scientific writer based in Bangalore, India, with a strong academic background in Mathematics and extensive experience in content writing. He has a Ph.D. in Mathematics from the University of Hyderabad and is deeply passionate about teaching, writing, and research. Sampath enjoys teaching Mathematics, Statistics, and AI to both undergraduate and postgraduate students. What sets him apart is his unique approach to teaching Mathematics through programming, making the subject more engaging and practical for students.

