AI is employed in healthcare for various applications, including medical image analysis, disease diagnosis, personalized treatment planning, and patient monitoring. It utilizes machine learning, natural language processing, and data analytics to improve diagnostic accuracy, optimize treatment outcomes, and enhance healthcare delivery, leading to more efficient and effective patient care.
Researchers emphasize the growing significance of radar-based human activity recognition (HAR) in safety and surveillance, highlighting its advantages over vision-based sensing in challenging conditions. The study reviews classical machine learning (ML) and deep learning (DL) approaches, weighing DL's ability to learn features automatically, without manual feature extraction, against ML's robust empirical track record. A comparative study on benchmark datasets evaluates recognition performance and computational efficiency, aiming to establish a standardized assessment framework for radar-based HAR techniques.
Researchers delve into the intricate relationship between speech pathology and the performance of deep learning-based automatic speaker verification (ASV) systems. The research investigates the influence of various speech disorders on ASV accuracy, providing insights into potential vulnerabilities in the systems. The findings contribute to a better understanding of speaker identification under diverse conditions, offering implications for applications in healthcare, security, and biometric authentication.
This publication analyzes challenges in the European AI Act (AIA), offering insights applicable to subsequent versions. The research, focused on the April 2021 draft, categorizes critiques into regulation, compliance, and anticipated impact themes. Notable concerns include the AIA's broad scope, ambiguous wording, unrealistic provider requirements, and potential negative effects on innovation and industry, providing valuable guidance for further AI regulation research.
Researchers introduce a pioneering framework leveraging IoT and wearable technology to enhance the adaptability of AR glasses in the aviation industry. The multi-modal data processing system, employing kernel theory-based design and machine learning, classifies performance, offering a dynamic and adaptive approach for tailored AR information provision.
This study critically evaluates the Cigna StressWaves Test (CSWT), an AI-based tool in Cigna's stress management toolkit that is marketed as providing 'clinical grade' stress assessment. The research, conducted with 60 participants, reveals significant concerns about the CSWT's reliability and validity, challenging its claimed efficacy. The study underscores the importance of stringent validation processes for AI-driven health tools, particularly in mental health assessment, and highlights broader challenges associated with speech-based health measures.
The paper addresses concerns about the accuracy of AI-driven chatbots, focusing on large language models (LLMs) like ChatGPT, in providing clinical advice. The researchers propose the Chatbot Assessment Reporting Tool (CHART) as a collaborative effort to establish structured reporting standards, involving a diverse group of stakeholders, from statisticians to patient partners.
Researchers have explored the feasibility of using a camera-based system combined with machine learning, specifically an AdaBoost classifier, to assess the quality of functional tests. Focusing on the Single Leg Squat Test and Step Down Test, and supported by expert physiotherapist input, the study demonstrated that this approach offers an efficient, cost-effective method for evaluating functional tests, with the potential to improve the accuracy and reliability of diagnosing and treating movement disorders.
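As a minimal sketch of the classification step described above, the snippet below trains scikit-learn's AdaBoost classifier on synthetic stand-in data; the feature names (knee valgus angle, trunk lean, pelvic drop) are hypothetical examples of camera-derived movement features, not the study's actual inputs.

```python
# Sketch: AdaBoost on hypothetical camera-derived movement features
# (synthetic data stands in for real pose measurements).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-repetition features: knee valgus angle, trunk lean, pelvic drop.
n = 400
good = rng.normal([5.0, 8.0, 2.0], [2.0, 3.0, 1.0], size=(n // 2, 3))
poor = rng.normal([15.0, 14.0, 6.0], [3.0, 3.0, 1.5], size=(n // 2, 3))
X = np.vstack([good, poor])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = good form, 1 = poor form

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out accuracy on the synthetic task
```

In practice the features would come from pose estimation on video frames, with expert labels such as the physiotherapist ratings the study mentions.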
Researchers introduced Relay Learning, a novel deep-learning framework designed to keep clinical data physically isolated from external intruders. This secure multi-site approach significantly enhances data privacy and security while demonstrating superior performance across diverse multi-site clinical settings, setting a new standard for AI-aided medical solutions and cross-site data sharing in healthcare.
Researchers outlined six principles for the ethical use of AI and machine learning in Earth and environmental sciences. These principles emphasize transparency, intentionality, risk mitigation, inclusivity, outreach, and ongoing commitment. The study also highlights the importance of addressing biases, data disparities, and the need for transparency initiatives like explainable AI (XAI) to ensure responsible and equitable AI-driven research in these fields.
This article explores the challenges and approaches to imparting human values and ethical decision-making in AI systems, with a focus on large language models like ChatGPT. It discusses techniques such as supervised fine-tuning, auxiliary models, and reinforcement learning from human feedback to imbue AI systems with desired moral stances, emphasizing the need for interdisciplinary perspectives from fields like cognitive science to align AI with human ethics.
A recent research publication explores the profound impact of artificial intelligence (AI) on urban sustainability and mobility. The study highlights the role of AI in supporting dynamic and personalized mobility solutions, sustainable urban mobility planning, and the development of intelligent transportation systems.
Researchers introduced the Lightweight Hybrid Vision Transformer (LH-ViT) network for radar-based Human Activity Recognition (HAR). LH-ViT combines convolution operations with self-attention, utilizing a Residual Squeeze-and-Excitation (RES-SE) block to reduce computational load. Experimental results on two human activity datasets demonstrated LH-ViT's advantages in expressiveness and computing efficiency over traditional approaches.
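The summary does not give the RES-SE block's exact architecture, but it builds on the standard Squeeze-and-Excitation idea: pool each channel to a descriptor, pass it through a small bottleneck MLP, and use a sigmoid gate to reweight channels. A NumPy sketch of that gating step, with arbitrary weights:

```python
# Sketch of Squeeze-and-Excitation (SE) channel gating in NumPy,
# the recalibration idea the RES-SE block builds on (details assumed).
import numpy as np

def se_block(x, w1, w2):
    """x: feature map (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    # Squeeze: global average pool over spatial dims -> per-channel descriptor.
    z = x.mean(axis=(1, 2))                     # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate per channel.
    s = np.maximum(z @ w1, 0.0)                 # (C//r,)
    g = 1.0 / (1.0 + np.exp(-(s @ w2)))         # (C,), gates in (0, 1)
    # Scale: reweight each channel by its learned gate.
    return x * g[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                         # r is the reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
out = se_block(x, w1, w2)                       # same shape as x, channels rescaled
```

Because each gate lies in (0, 1), the block only attenuates channels, which is one way such designs keep computational and representational overhead low.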
Researchers have introduced an innovative IoT-based system for recognizing negative emotions, such as disgust, fear, and sadness, using multimodal biosignal data from wearable devices. This system combines EEG signals and physiological data from a smart band, processed through machine learning, to achieve high accuracy in emotion recognition.
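One common way to combine EEG and smart-band signals, as the system above does, is feature-level (early) fusion: concatenate the per-window feature vectors from each modality and train a single classifier. The sketch below assumes that strategy and uses synthetic data; the paper's actual pipeline and features may differ.

```python
# Sketch of feature-level (early) fusion of two biosignal modalities.
# The fusion strategy and feature choices here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
# Hypothetical per-window features for each modality.
eeg = rng.normal(size=(n, 5))                  # e.g. EEG band-power features
band = rng.normal(size=(n, 3))                 # e.g. heart rate, EDA, skin temp
y = (eeg[:, 0] + band[:, 0] > 0).astype(int)   # synthetic emotion label

X = np.hstack([eeg, band])                     # early fusion: one joint feature vector
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validated accuracy
mean_acc = scores.mean()
```

Later-stage alternatives, such as training one classifier per modality and merging their predictions, trade simplicity for robustness when one sensor stream drops out.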
Researchers have introduced FACTCHD, a framework for detecting fact-conflicting hallucinations in large language models (LLMs). They developed a benchmark that provides interpretable data for evaluating the factual accuracy of LLM-generated responses and introduced the TRUTH-TRIANGULATOR framework to enhance hallucination detection.
Researchers explored the application of distributed learning, particularly Federated Learning (FL), for Internet of Things (IoT) services in the context of emerging 6G networks. They discussed the advantages and challenges of distributed learning in IoT domains, emphasizing its potential for enhancing IoT services while addressing privacy concerns and the need for ongoing research in areas such as security and communication efficiency.
This review explores the landscape of social robotics research, addressing knowledge gaps and implications for business and management. It highlights the need for more studies on social robotic interactions in organizations, trust in human-robot relationships, and the impact of virtual social robots in the metaverse, emphasizing the importance of balancing technology integration with societal well-being.
This study investigates the role of social presence in shaping trust when collaborating with algorithms. The research reveals that the presence of others can enhance people's trust in algorithms, offering valuable insights into human-algorithm interactions and trust dynamics.
This study explores the development and usability of the AIIS (Artificial Intelligence, Innovation, and Society) collaborative learning interface, a metaverse-based educational platform designed for undergraduate students. The research demonstrates the potential of immersive technology in education and offers insights and recommendations for enhancing metaverse-based learning systems.
This research paper delves into the black box problem in clinical artificial intelligence (AI) and its implications for health professional-patient relationships. Drawing on African scholarship, the study highlights the importance of trust, transparency, and explainability in clinical AI to ensure ethical healthcare practices and genuine fiduciary relationships between healthcare professionals and patients.
This paper explores the increasing presence of autonomous artificial intelligence (AI) systems in healthcare and the associated concerns related to liability, regulatory compliance, and financial aspects. It discusses how evolving regulations, such as those from the FDA, aim to ensure transparency and accountability, and how payment models like Medicare Physician Fee Schedule (MPFS) are adapting to accommodate autonomous AI integration.