In a recent article published in the journal Technovation, researchers explored the rapidly evolving landscape of generative large language models (LLMs), with a specific focus on the widely used Chat Generative Pre-trained Transformer (ChatGPT) system. Employing a data-driven approach, they identified and analyzed the tasks users assign to these generative LLMs, unveiling a broad spectrum of applications and showcasing their potential to revolutionize business processes and services.
Background
Generative LLMs represent a significant leap forward in artificial intelligence (AI), bringing capabilities previously reserved for human intelligence into the realm of machines. Built on probabilistic algorithms, these models serve as the cornerstone of natural language processing techniques, enabling the understanding and generation of human language. The term "large" refers to the extensive number of parameters required to train these models, while "generative" signifies their ability to produce new content autonomously.
With applications spanning various domains, from summarization to translation, generative LLMs often engage users through conversational interfaces, mimicking human-like interactions. This approach fosters user engagement and contributes to the widespread adoption of these systems.
According to the technology acceptance model, perceived ease of use and perceived usefulness strongly influence whether users accept a technology. Adoption of generative LLMs has been rapid: ChatGPT, OpenAI's conversational system built on the GPT model, leads this progress and surpassed 100 million users as of January 2023. Other notable generative LLMs, including Microsoft's Bing Chat, Google's PaLM, and Meta's LLaMA, offer their own strengths.
About the Research
In this paper, the authors aimed to address a gap in the literature by examining how users engage with generative LLMs, focusing specifically on ChatGPT. Their research question, "Which tasks are users asking generative LLMs to perform?", guided a case study of 3,821,843 ChatGPT-related tweets collected from X (formerly Twitter) between November 2022 and May 2023.
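As an illustration of that collection step, and not the authors' actual pipeline, keyword-and-date filtering of an exported tweet dataset might look like the following Python sketch; the file name and column names are assumptions.

```python
import pandas as pd

# Hypothetical export of collected posts; "tweets.csv" and its column
# names are illustrative assumptions, not the dataset used in the study.
tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

# Keep posts that mention ChatGPT within the study window
# (November 2022 to May 2023).
mask = (
    tweets["text"].str.contains("chatgpt", case=False, na=False)
    & tweets["created_at"].between("2022-11-01", "2023-05-31")
)
chatgpt_tweets = tweets.loc[mask]
print(f"{len(chatgpt_tweets):,} posts retained for analysis")
```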
To analyze the data comprehensively, the study combined two natural language processing techniques: named entity recognition (NER) to identify the specific tasks users assigned to ChatGPT, and BERTopic, a topic-modeling technique built on bidirectional encoder representations from transformers (BERT), to cluster these tasks into semantically similar groups. This combined approach provided a nuanced understanding of ChatGPT's usage patterns, revealing insights into its potential applications across diverse industries and emerging domains.
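A minimal sketch of this extract-then-cluster approach is shown below, using spaCy and the BERTopic library. It illustrates the general technique rather than the authors' trained models: off-the-shelf NER labels do not cover "tasks", so noun-chunk extraction stands in here for their task-oriented NER, and the input texts are placeholders.

```python
import spacy
from bertopic import BERTopic

# Tweet texts to analyse, e.g. chatgpt_tweets["text"].tolist() from the
# filtering sketch above; the strings below are illustrative placeholders.
texts = [
    "I asked ChatGPT to write a poem about my cat",
    "Using ChatGPT to debug Python code is a time saver",
    # ... in practice, the full corpus of tweets
]

# Step 1: task extraction. The study used NER to pull out the tasks users
# assigned to ChatGPT; a general-purpose spaCy model's noun chunks serve
# as a rough stand-in for that custom extraction.
nlp = spacy.load("en_core_web_sm")
tasks = []
for doc in nlp.pipe(texts, batch_size=256):
    tasks.extend(chunk.text.lower() for chunk in doc.noun_chunks)

# Step 2: topic modeling. BERTopic embeds the extracted phrases with a
# sentence transformer, reduces dimensionality, and clusters them into
# semantically similar topics; it needs a reasonably large corpus to work.
topic_model = BERTopic(min_topic_size=20)
topics, probs = topic_model.fit_transform(tasks)

# Inspect the largest clusters (analogous to the paper's top-10 topics).
print(topic_model.get_topic_info().head(10))
```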
Research Findings
The analysis of over 3.8 million tweets revealed that users were asking generative LLMs, particularly ChatGPT, to perform a wide spectrum of tasks, ranging from programming assistance to creative content generation. The authors identified 31,747 unique tasks, which were then clustered into 389 topics. The top 10 topics, in terms of the number of tweets, highlighted the versatility of ChatGPT, with users employing it for tasks such as writing code, articles, poems, answering questions, and generating stories and emails.
Furthermore, the outcomes showed that users were employing generative AI for tasks that required a high level of abstraction, indicating an initial exploratory stage of technology adoption. This suggested that users were driven by curiosity and a desire to explore the capabilities of generative LLMs. Moreover, the paper highlighted that ChatGPT was being perceived as a potential game changer in the search engine market, posing a threat to established players like Google.
Applications
This research has implications for both theory and practice. The quantitative examination of the tasks users assign to generative LLMs enables practitioners to study how users interact with these emerging technologies, complementing a scientific discourse that has so far relied largely on opinion papers and qualitative research. This information can also help companies explore potential applications of LLMs in their specific contexts and identify the priorities of early adopters of the technology.
From a theoretical perspective, the study contributes to the ongoing discourse on the impact of AI on innovation management. The researchers proposed a research agenda that intersects the six identified areas of ChatGPT application (human resources, programming, social media, office automation, search engines, and education) with the four stages of the innovation process: idea generation, screening/idea selection, development, and diffusion/sales/marketing. This framework can guide future research exploring the opportunities and challenges presented by generative LLMs in the context of innovation management.
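To make the shape of that agenda concrete, the six-by-four cross-tabulation can be sketched as a simple grid; the cell questions below are placeholders, not the authors' wording.

```python
from itertools import product

application_areas = [
    "human resources", "programming", "social media",
    "office automation", "search engines", "education",
]
innovation_stages = [
    "idea generation", "screening/idea selection",
    "development", "diffusion/sales/marketing",
]

# Each (area, stage) pair is one cell of the proposed research agenda.
agenda = {
    (area, stage): f"How do generative LLMs support {stage} in {area}?"
    for area, stage in product(application_areas, innovation_stages)
}

print(len(agenda))  # 6 areas x 4 stages = 24 research-agenda cells
```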
Conclusion
Overall, the research provided a data-driven exploration of the evolving landscape of generative LLMs, with a focus on the tasks users assigned to ChatGPT. Its findings highlighted the versatility of these models, which were utilized across a wide range of business areas, from programming assistance to creative content generation.
The proposed research agenda offers a roadmap for innovation management scholars to delve deeper into the implications of generative LLMs for the various stages of the innovation process. Moving forward, as generative LLMs continue to advance and become more widely adopted, understanding the tasks users ask of these systems will be crucial for researchers and practitioners alike.
Journal reference:
- Chiarello, F., Giordano, V., Spada, I., et al. (2024). Future applications of generative large language models: A data-driven case study on ChatGPT. Technovation, 133, 103002. https://doi.org/10.1016/j.technovation.2024.103002, https://www.sciencedirect.com/science/article/pii/S016649722400052X