Generative AI is a branch of artificial intelligence that involves training models to generate new and original content, such as images, text, music, and video, based on patterns learned from existing data.
A recent article in Education Sciences addresses the impact of generative AI on higher education assessments, highlighting academic integrity concerns. Researchers propose the "against, avoid, and adopt" (AAA) principle for assessment redesign to balance AI's potential with maintaining academic standards.
This study demonstrated the potential of T5 large language models (LLMs) to translate between drug molecules and their indications, aiming to streamline drug discovery and enhance treatment options. Using datasets from ChEMBL and DrugBank, the research showcased initial success, particularly with larger models, while identifying areas for future improvement to optimize AI's role in medicine.
Researchers advocate for a user-centric evaluation framework for healthcare chatbots, emphasizing trust-building, empathy, and language processing. Their proposed metrics aim to enhance patient care by assessing chatbots' performance comprehensively, addressing challenges and promoting reliability in healthcare AI systems.
This study explores the ethical dimensions of employing AI, particularly ChatGPT, for political microtargeting, offering insights into its effectiveness and ethical dilemmas. Through empirical investigations, it unveils the persuasive potency of personalized political ads tailored to individuals' personality traits, prompting discussions on regulatory frameworks to mitigate potential misuse.
This research, featured in a Nature article, investigates ChatGPT's integration into programming education, emphasizing the factors that shape learners' problem-solving effectiveness. It underscores the importance of AI literacy, programming knowledge, and cognitive understanding, offering insights for educators and learners amid the AI-driven transformation of education.
Digital Science introduces Dimensions Research GPT and Dimensions Research GPT Enterprise, enhancing research discovery on ChatGPT with data from millions of publications, grants, clinical trials, and patents.
This Stanford University study examines the use of intelligent social agents (ISAs), such as the chatbot Replika powered by advanced language models, by students dealing with loneliness and suicidal thoughts. Combining quantitative and qualitative data, the research uncovers positive outcomes, including reduced anxiety and increased well-being, and sheds light on both the potential benefits and the challenges of employing ISAs for mental health support among students facing high levels of stress and loneliness.
Researchers conducted an omnibus survey of 1,150 participants to examine attitudes towards occupations according to their likelihood of automation, uncovering a general discomfort with AI management. The findings highlight demographic influences and unexpected correlations, showing that public attitudes towards AI differ from those towards other technological innovations, and the authors advocate a thoughtful approach to AI integration across occupational domains.
Researchers showcase the prowess of MedGAN, a generative artificial intelligence model, in drug discovery. By fine-tuning the model to focus on quinoline-scaffold molecules, the study achieves remarkable success, generating thousands of novel compounds with drug-like attributes. This advancement holds promise for accelerating drug design and development, marking a significant stride in the intersection of artificial intelligence and pharmaceutical innovation.
Researchers discuss the transformative role of Multimodal Large Language Models (MLLMs) in science education. Focusing on content creation, learning support, assessment, and feedback, the study demonstrates how MLLMs provide adaptive, personalized, and multimodal learning experiences, illustrating their potential in various educational settings beyond science.
This paper delves into the transformative impact of machine learning (ML) in scientific research while highlighting critical challenges, particularly in COVID-19 diagnostics using AI-driven algorithms. The study underscores concerns about misleading claims, flawed methodologies, and the need for standardized guidelines to ensure credibility and reproducibility. It addresses issues such as data leakage, inadequate reporting, and overstatement of findings, emphasizing the importance of proper training and standardized methodologies in the rapidly evolving field of health-related ML.
This article proposes the Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy (GREAT PLEA) ethical principles for generative AI applications in healthcare. Drawing inspiration from existing military and healthcare ethical principles, the GREAT PLEA framework aims to address ethical concerns, protect clinicians and patients, and guide the responsible development and implementation of generative AI in healthcare settings.
This study, published in AISeL, explores the user experience of integrating AI technologies like ChatGPT into knowledge work. Through interviews with 31 users, distinct phases were identified, ranging from pre-use curiosity and anxiety to the establishment of a tight intertwinement with ChatGPT as a collaborative assistant. The findings emphasize the emotional dimensions of AI adoption and raise important considerations for individuals, organizations, and society regarding potential dependencies, deskilling, and the evolving role of AI in the workplace.
This paper explores the pivotal role of generative AI in providing automated feedback to foster human creativity in innovation. The researchers conducted a series of experiments, utilizing generative AI to offer visual and numeric feedback in real-time. Preliminary insights indicate that visual feedback enhances perceived originality, imagination, and task competence, shedding light on the potential of AI-driven feedback in augmenting creative endeavors.
This study examined how people perceive advice from generative AI, exemplified by ChatGPT, on societal and personal challenges. The research, involving 3,308 participants, revealed that while AI advisors were perceived as less competent when their identity was transparent, positive experiences mitigated this aversion, highlighting the potential value of clear and understandable AI recommendations for addressing real-world challenges.
Researchers discussed the development of "Living guidelines for responsible use of generative artificial intelligence (AI) in research." These guidelines, crafted by a collaboration of international scientific institutions, organizations, and policy advisers, aim to address the potential risks posed by generative AI and provide key principles for its responsible use in scientific research.
This review explores the landscape of social robotics research, addressing knowledge gaps and implications for business and management. It highlights the need for more studies on social robotic interactions in organizations, trust in human-robot relationships, and the impact of virtual social robots in the metaverse, emphasizing the importance of balancing technology integration with societal well-being.
This article discusses the electricity consumption of artificial intelligence (AI) technologies, focusing on the training and inference phases of AI models. With AI's rapid growth and increasing demand for AI chips, the study examines the potential impact of AI on global data center energy use and the need for a balanced approach to address environmental concerns while harnessing AI's potential.
This paper explores the increasing presence of autonomous artificial intelligence (AI) systems in healthcare and the associated concerns related to liability, regulatory compliance, and financial aspects. It discusses how evolving regulations, such as those from the FDA, aim to ensure transparency and accountability, and how payment models like Medicare Physician Fee Schedule (MPFS) are adapting to accommodate autonomous AI integration.
This study delves into the ongoing debate about whether Generative Artificial Intelligence (GAI) chatbots can rival human creativity. The findings indicate that GAI chatbots can generate original ideas comparable to humans, emphasizing the potential for synergy between humans and AI in the creative process, with chatbots serving as valuable creative assistants.