How Large Language Models Are Changing the Face of Academic Research

A global survey reveals how AI-powered tools like ChatGPT are revolutionizing academic publishing while raising pressing questions about privacy, bias, and integrity in research practices.

Study: Use of large language models as artificial intelligence tools in academic research and publishing among global clinical researchers. Image Credit: metamorworks / Shutterstock

In an article recently published in the journal Scientific Reports, the authors explored global researchers' perceptions of large language models (LLMs) in academic research, focusing on their usage, benefits, and ethical concerns.

Surveying 226 researchers across 59 countries and 65 specialties, they highlighted how LLMs assist in tasks like editing, writing, and literature review while revealing low disclosure rates of LLM usage (40.5%) and ethical dilemmas surrounding acknowledgment.

The study emphasized the need for comprehensive guidelines to regulate LLM use, ensure ethical practices, and address potential misuse in academic research.

Background

LLMs have transformed natural language processing (NLP) and artificial intelligence (AI), offering advanced capabilities for text generation and analysis.

The introduction of the transformer architecture in 2017, followed by models like bidirectional encoder representations from transformers (BERT) and generative pre-trained transformers (GPTs), democratized access to NLP tools, enabling diverse applications.

Tools such as ChatGPT gained widespread popularity, aiding tasks like literature review, manuscript drafting, and data extraction in academic research.

Despite their advantages, LLMs face ethical challenges, including generating fake citations, amplifying biases, and raising concerns about authorship integrity.

These issues have contributed to an "AI-driven infodemic," posing risks to public trust in research. Previous studies largely highlight LLM potential but lack insight into researchers' attitudes and practices.

This study bridged that gap by surveying medical researchers in Harvard's global clinical scholars research training (GCSRT) program, analyzing their awareness, usage, and perspectives on LLMs to inform ethical guidelines and publication policies.

Methodology and Study Design

This global cross-sectional survey was conducted between April and June 2024 among medical and paramedical researchers trained at Harvard’s GCSRT program.

The authors aimed to assess researchers' awareness of LLMs, their current use in academic research, and their potential future impact and ethical implications. Participants included researchers from diverse specialties, career stages, and over 50 countries, making them an ideal group for this analysis.

Eligible participants were GCSRT program members from 2020 to 2024, proficient in English, and accessible via unofficial class WhatsApp groups.

Researchers outside this scope, non-medical researchers, and those without English proficiency were excluded.

A structured questionnaire with four sections—background, awareness of LLMs, impact of LLMs, and future policy—was distributed via Google Forms, and responses from 226 researchers (41% response rate) across 59 countries were analyzed.

The survey employed a mix of descriptive statistics, thematic analysis, and statistical tests, including analysis of variance (ANOVA) and Chi-squared tests, using Stata/MP 17.0. Ethical approval was obtained, and all respondents provided informed consent, ensuring data confidentiality.
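
The paper reports its statistics from Stata, and the analysis code is not published. As a minimal illustrative sketch, the snippet below reproduces the two named tests in Python with SciPy; the variable names, group bands, and all counts and scores are invented assumptions for demonstration, not the study's data:

```python
# Illustrative sketch of the survey's statistical tests.
# The study used Stata/MP 17.0; SciPy is substituted here, and all
# counts/scores below are invented for demonstration only.
import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# Hypothetical contingency table: rows = PubMed publication bands,
# columns = (aware of LLMs, not aware).
awareness_by_pubs = np.array([
    [45, 20],   # 0-5 publications
    [70, 6],    # 6-20 publications
    [83, 2],    # >20 publications
])
chi2, p_chi, dof, _ = chi2_contingency(awareness_by_pubs)
print(f"Chi-squared = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")

# Hypothetical one-way ANOVA: self-rated LLM familiarity (1-5 Likert)
# across three career-stage groups.
early, mid, senior = [2, 3, 3, 4, 2], [3, 4, 4, 3, 5], [4, 4, 5, 5, 3]
f_stat, p_anova = f_oneway(early, mid, senior)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```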

Results and Discussion

Among respondents, 87.6% were aware of LLMs, and awareness was strongly associated with a higher number of PubMed-indexed publications (p < 0.001).

Most participants were somewhat or moderately familiar with LLMs, and 18.7% had previously used these tools, primarily for grammatical corrections (64.9%), writing (45.9%), and editing (45.9%).

Respondents generally expected LLMs to have a transformative impact on academic publishing, particularly in areas such as grammar correction, formatting, and editing. However, tasks such as methodology design, journal selection, and ideation were seen as less likely to be significantly affected, with over 70% of respondents rating the expected impact there as low to moderate.

This optimism was tempered by ethical concerns, including potential biases, plagiarism, and privacy risks. Approximately 8% of respondents specifically voiced ethical apprehensions, while 78.3% supported regulatory measures such as journal policies and AI review boards.

Interestingly, 81.3% of respondents who were aware of LLMs had never used them in their research, in contrast to other studies where usage was more prevalent. Reported reasons for not disclosing AI use included a limited understanding of how the tools integrate into research workflows and skepticism about their role in research.

Overall, 50.8% viewed LLMs positively, while 32.6% remained uncertain. The results underlined the importance of balancing AI's efficiency with ethical considerations, suggesting the need for robust oversight to guide its application in research and publishing.

Conclusion

In conclusion, the authors explored the global perspectives of researchers on the use of LLMs in academic research, shedding light on both their potential benefits and ethical concerns.

While most participants acknowledged the transformative role of LLMs in tasks such as editing and writing, concerns about privacy, bias, and authorship integrity remained prevalent.

The findings highlighted the urgent need for ethical guidelines and regulatory frameworks to govern LLM usage in academic publishing.

Despite recognizing the efficiency of LLMs, researchers emphasized the necessity of safeguarding against misuse through policies like AI review boards and journal-specific regulations.

As LLMs continue to shape academic practices, it is crucial to balance their benefits with responsible safeguards against misuse, ensuring ethical and transparent use in the research community. The findings also emphasize the need for consistent acknowledgment of LLM usage to maintain accountability in research.

Journal reference:
  • Mishra et al. (2024). Use of large language models as artificial intelligence tools in academic research and publishing among global clinical researchers. Scientific Reports, 14(1). DOI: 10.1038/s41598-024-81370-6, https://www.nature.com/articles/s41598-024-81370-6

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine Learning. He has extensive experience in Data Analytics, Machine Learning, and Python, and has worked on group projects involving Computer Vision, Image Classification, and App Development.

