In a paper published in the journal Education Sciences, researchers examined the impact of generative artificial intelligence (AI), such as ChatGPT, on assessments in higher education. They highlighted concerns about academic integrity, including plagiarism and cheating, as well as the potential for biased and misleading outputs from AI models trained on internet data.
The article proposed the Against, Avoid, and Adopt (AAA) principle for assessment redesign, arguing that assessment approaches must be rethought in light of advances in generative AI. It suggested that simply policing AI use is unlikely to address the underlying assessment challenges.
Related Work
Past work has extensively explored the impact of generative AI, particularly since the launch of ChatGPT in late 2022. This technology has revolutionized various sectors, including business and healthcare, offering novel applications to streamline operations and improve productivity. Within education, especially tertiary institutions, generative AI has reshaped learning, teaching, and assessment practices.
Students can now use tools like ChatGPT to generate assignments across diverse subjects. This poses challenges to academic integrity, prompting institutions to develop policies and detection mechanisms. While debate continues over how to regulate generative AI in assessments, there is a need to understand why students misuse these tools and to explore opportunities for assessment redesign that account for generative AI's capabilities and risks.
AI in Education: Dynamics
Since 2023, researchers have extensively explored the potential benefits of integrating generative AI tools into educational assessments. They have identified clusters of affordances such as accessibility, personalization, automation, and interactivity, where generative AI can provide timely assistance, personalized learning recommendations, and streamlined generation of instructional materials. Efforts have also been made to automate traditional assessment practices, moving from discrete to continuous and adaptive methods to make assessment more feasible and meaningful in learning contexts.
However, generative AI also presents challenges and ethical implications alongside these benefits. Concerns include the lack of explainability and transparency in AI processes, academic integrity risks, biases in AI-generated responses, overdependence on AI, digital divide issues, and privacy and security concerns. These challenges emphasize the importance of careful consideration and ethical oversight in integrating generative AI into educational assessments.
Academic Integrity Concerns
The emergence of generative AI in tertiary education has raised significant concerns about academic integrity. Moorhouse et al. highlighted the development of policies and guidelines by top-ranking tertiary institutions addressing these concerns. Although academic integrity issues predated generative AI, the widespread availability and accessibility of such tools have made academic dishonesty more prevalent. Detecting AI-generated content remains a significant challenge, necessitating a deeper understanding of the underlying issues for effective assessment redesign in tertiary education.
Researchers have analyzed students' intentions to use generative AI through the Theory of Planned Behaviour, which considers attitudes toward cheating, subjective norms, and perceived behavioral control. Studies have shown that perceptions of generative AI's benefits positively influence attitudes towards academic integrity. Moreover, assessment practices in tertiary education serve multiple purposes, including support for learning, accountability, and certification. Understanding the balance between these purposes is crucial for designing assessments that are fit for the future.
Additionally, the types of assessment outputs need careful consideration, especially regarding the integration of generative AI and the alignment with program learning outcomes. Furthermore, managing the assessment load is essential to prevent surface learning and mitigate the risk of academic dishonesty, particularly with the ease of access to generative AI tools. These insights provide a foundation for addressing academic integrity challenges and reshaping assessment practices in tertiary education.
Navigating Generative AI
The AAA principle offers a structured approach to navigating the complexities introduced by generative AI in tertiary education assessments. Delineating assessments into three distinct clusters (Against, Avoid, and Adopt) gives educators a framework to integrate generative AI strategically while safeguarding academic integrity. This principle acknowledges the transformative potential of generative AI while emphasizing the importance of maintaining assessment quality and integrity in an evolving educational landscape.
In the Against cluster, assessments strictly prohibit generative AI, particularly for tasks that assess lower-order skills or where generative AI is not suitable. Examples include invigilated traditional examinations and interactive formats such as debates and viva voce examinations. This approach underscores the value of authentic student engagement and proficiency in fundamental skills, fostering a learning environment that prioritizes human interaction and critical thinking over reliance on technology.
Conversely, the Avoid cluster encompasses assessments designed to exploit the limitations of generative AI by focusing on tasks that demand higher-order cognitive abilities, human interaction, and contextual understanding. Assessments within this cluster, such as performance assessments and portfolio evaluations, require nuanced responses that exceed the capabilities of current generative AI technology. By leveraging human-centric approaches, the Avoid cluster keeps assessments resilient to generative AI influence, preserving their integrity and validity in measuring student learning outcomes.
Conclusion
In conclusion, as generative AI gained prominence, educational institutions grappled with upholding academic integrity. The AAA principle offered a framework for aligning assessment practices with the evolving AI landscape, balancing the technology's potential against the need to maintain integrity.
Categorizing assessments into Against, Avoid, and Adopt clusters guided educators in integrating AI strategically; doing so required understanding AI's limitations and opportunities in order to ensure fair assessment practices. Furthermore, AI integration promised personalized learning experiences and transformed teaching methods, heralding a new era in education.