In a paper published in the AIS Electronic Library, researchers examined the pivotal role of human creativity in fostering innovative ideas. Although innovators generate numerous ideas, only a few receive feedback, because producing feedback is a resource-intensive part of innovation.
To address this challenge, the study explored the potential of generative artificial intelligence (AI) to offer automated feedback, which could transform creative work. Drawing on dual-coding and media synchronicity theories, the researchers ran a series of experiments in which humans and generative AI collaborated to develop ideas.
They conceptualized numeric and visual feedback to overcome cognitive barriers, strategically manipulating feedback modality and timing to personalize the interaction. The research offers valuable insights into optimal co-creative arrangements between humans and generative AI, shedding light on the circumstances that favor specific collaborative approaches in innovation.
Background
Business growth relies on creativity and innovation, with individuals contributing ideas through imagination. Ideation involves generating novel and valuable concepts crucial for innovation. While creativity fuels idea generation, innovation encompasses implementing those ideas; in many innovation settings, the ideator is not responsible for implementation. The ideation process involves three key phases: idea generation, development, and evaluation.
Research primarily concentrates on idea generation and, to a lesser extent, evaluation, with feedback playing a critical role in enhancing idea quality. However, creating sufficient feedback is time-consuming and resource-intensive, often posing a bottleneck in innovation.
Exploring Feedback in Creative Imagination
The researchers conducted a series of three experiments to explore the impact of feedback on creative imagination. Study One used a 2x2 experimental design to test hypotheses H1a to H1c, presenting participants with no feedback, numeric feedback, visual feedback, or a combination of both after they submitted their ideas. Study Two replicated Study One but focused solely on visual feedback, using transformer-based language models to predict feedback prompts for visualization. Study Three examined the creative turn-taking process, extending the platform so that users could either request feedback at will or receive automatic feedback every minute, resulting in a 3x2 experimental design.
The researchers developed an online platform featuring two AIs trained on a large-scale ideation dataset. One AI predicted idea quality as a numeric score, using a Distilled Bidirectional Encoder Representations from Transformers (DistilBERT) classifier trained on German-language text collected from innovation challenges over eight years. The other AI generated visual feedback through OpenAI's DALL·E 2 API, combining a prompt prefix with the participant's idea to request an image.
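To make the setup concrete, the following minimal Python sketch shows how such a two-model feedback pipeline could be wired together. The checkpoint name, prompt prefix, and example idea are illustrative assumptions, not details from the paper; only the model families (a fine-tuned DistilBERT classifier and DALL·E 2 via the OpenAI API) come from the study.

```python
# Minimal sketch of a two-model feedback pipeline (illustrative, not the authors' code).
# Assumes a hypothetical fine-tuned DistilBERT checkpoint and an OPENAI_API_KEY env var.
from transformers import pipeline
from openai import OpenAI

# Numeric feedback: a DistilBERT classifier fine-tuned on German ideation data.
# "my-org/distilbert-idea-quality" is a placeholder checkpoint name.
quality_classifier = pipeline("text-classification", model="my-org/distilbert-idea-quality")

client = OpenAI()  # visual feedback via OpenAI's image API
PROMPT_PREFIX = "A simple illustration of the following idea: "  # assumed prefix

def numeric_feedback(idea_text: str) -> dict:
    """Return the predicted quality label and the classifier's confidence."""
    return quality_classifier(idea_text)[0]  # e.g. {"label": "HIGH", "score": 0.91}

def visual_feedback(idea_text: str) -> str:
    """Generate an image visualizing the idea and return its URL."""
    response = client.images.generate(
        model="dall-e-2",
        prompt=PROMPT_PREFIX + idea_text,
        n=1,
        size="512x512",
    )
    return response.data[0].url

idea = "Flexible seating zones that convert to standing room during rush hour."
print(numeric_feedback(idea))
print(visual_feedback(idea))
```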
In these experiments, participants contributed ideas to a real-world ideation challenge drawn from the large-scale ideation dataset. The challenge focused on improving a crowded commuter rail service, with criteria such as suitability for the masses, feasibility, non-luxuriousness, and sustainability. Participants, familiar with common public transport problems, were asked to contribute ideas, mimicking tasks in crowdsourcing scenarios.
Across all three studies, the researchers measured creative performance using Amabile's Consensual Assessment Technique and evaluated controls covering self-reported creativity and perception of feedback. These evaluations used scales measuring originality, imagination, inspiration, competence, autonomy, and enjoyment. The sample comprised German native speakers, with stringent criteria ensuring data quality, including a high acceptance rate, complete platform submissions, and passed attention checks.
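As a rough illustration of this measurement approach, the sketch below scores ideas with the Consensual Assessment Technique by averaging independent judge ratings and checks inter-judge agreement with Cronbach's alpha. The ratings are made up; the paper does not publish its scoring code.

```python
# Illustrative CAT scoring with hypothetical judge ratings (not the study's data).
import numpy as np

def cronbach_alpha(matrix: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) rating matrix."""
    k = matrix.shape[1]
    item_variances = matrix.var(axis=0, ddof=1).sum()
    total_variance = matrix.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Rows = ideas, columns = independent expert judges rating creativity on a 1-7 scale.
judge_ratings = np.array([
    [5, 6, 5],
    [3, 4, 3],
    [6, 6, 7],
    [4, 3, 4],
])

cat_scores = judge_ratings.mean(axis=1)    # one creativity score per idea
agreement = cronbach_alpha(judge_ratings)  # judges treated as scale "items"
print(cat_scores, round(agreement, 2))
```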
The researchers refrained from additional screening so that the sample would span diverse control variables such as innovation knowledge and self-assessed creativity. This comprehensive approach aimed to elucidate the nuanced dynamics of feedback on creative imagination within a German-speaking sample across varied experimental conditions.
Preliminary Insights on Feedback Perception
In an online pre-study, the researchers recruited 20 participants from Prolific, requiring German as their primary language and residency in the DACH region. The participants, with an average age of 32.0 years (SD = 9.74) and 25% female, completed the experiment in a median of 10 minutes and 44 seconds. Before the task was explained, participants indicated their proficiency in German and familiarity with innovation, then assessed their self-reported creativity, ensuring these measures were independent of the subsequent tasks and manipulations.
After introducing the task, the researchers divided participants into four treatment groups: a control group receiving no feedback and three groups receiving visual, numeric, or combined feedback. Each participant took part in an innovation competition presented on the experiment platform and submitted at least one idea. To submit an idea, participants in the treatment groups first had to request feedback by clicking "Get numeric (/visual) feedback." Over the course of the experiment, participants in the treatment groups generated 21 ideas and received 14 instances of visual and 12 instances of numeric feedback, while the control group generated five ideas.
Post-task, participants in the treatment conditions reported their perceptions of the feedback via scales with satisfactory Cronbach's alpha scores: originality (0.78), imagination (0.83), and inspiration (0.89). Preliminary analysis, limited by the study's low statistical power, revealed notably higher perceived originality for visual feedback (t = 4.39, p = 0.003) and a tentative increase in imagination (t = 1.64, p = 0.14).
Participants also rated their perception of the creative task, yielding Cronbach's alpha scores of 0.93 for creative task enjoyment, 0.70 for creative task autonomy, and 0.76 for creative task competence. The visual feedback group reported higher task competence (t = 2.97, p = 0.019), while the effect weakened in the combined numeric and visual feedback group (t = 2.15, p = 0.064). Although task enjoyment and creative autonomy appeared higher with access to numeric or visual feedback, these differences did not reach statistical significance.
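For group comparisons like the ones above, an independent-samples t-test is the standard tool. The snippet below shows what such a comparison might look like with SciPy, again using invented ratings rather than the study's data.

```python
# Illustrative two-group comparison of perception ratings (invented data).
from scipy import stats

visual = [5.7, 6.0, 5.3, 6.3, 5.7]   # e.g. perceived originality, visual-feedback group
control = [4.0, 4.7, 3.7, 4.3, 4.0]  # e.g. perceived originality, control group

t_stat, p_value = stats.ttest_ind(visual, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```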
As a manipulation check, participants successfully identified the type of feedback present on the platform. However, because the pre-study focused on validating scales, infrastructure, and manipulations, hypothesis testing and assessment of idea quality were not conducted.
Conclusion
In summary, this paper explores human-AI collaboration in the creative process, focusing specifically on AI-generated feedback. By offering real-time AI feedback during experiments and observing user behavior on the platform, it contributes to augmented innovation research, extending beyond text generation to introduce visual and numeric feedback. The study aims to understand how these AI-generated feedback modalities influence idea evaluation and human creativity, expands empirical insights into the collaborative dynamics between humans and AI in creative labor, and paves the way for further work incorporating textual feedback.