Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
This study introduces a deep learning-based Motor Assessment Model (MAM) designed to automate General Movement Assessment (GMA) in infants and predict the risk of cerebral palsy (CP). The MAM, utilizing 3D pose estimation and a Transformer architecture, demonstrated high accuracy, sensitivity, and specificity in identifying fidgety movements, which are essential for CP risk assessment. Thanks to its interpretability, the model can aid GMA beginners and holds promise for streamlined, accessible, and early CP screening, potentially transforming video-based diagnostics for infant motor abnormalities.
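For readers unfamiliar with this kind of pipeline, the sketch below shows how a Transformer encoder can classify fixed-length sequences of 3D joint coordinates into fidgety versus non-fidgety movement. The joint count, network size, and layer choices are illustrative assumptions, not the authors' MAM.

```python
# Minimal sketch (not the authors' MAM): a Transformer encoder that classifies
# fixed-length sequences of 3D joint coordinates as fidgety vs. non-fidgety.
import torch
import torch.nn as nn

class PoseSequenceClassifier(nn.Module):
    def __init__(self, num_joints=17, d_model=128, num_layers=4, num_classes=2):
        super().__init__()
        # Each frame is flattened to (num_joints * 3) coordinates, then projected.
        self.embed = nn.Linear(num_joints * 3, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, poses):            # poses: (batch, frames, num_joints, 3)
        b, t, j, c = poses.shape
        x = self.embed(poses.reshape(b, t, j * c))   # (batch, frames, d_model)
        x = self.encoder(x)                          # temporal self-attention
        return self.head(x.mean(dim=1))              # pool over time, classify

# Example: a batch of 2 clips, 100 frames, 17 joints in 3D.
logits = PoseSequenceClassifier()(torch.randn(2, 100, 17, 3))
```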
This paper emphasizes the crucial role of machine learning (ML) in detecting and combating fake news amid the proliferation of misinformation on social media. The study reviews various ML techniques, including deep learning, natural language processing (NLP), ensemble learning, transfer learning, and graph-based approaches, highlighting their strengths and limitations in fake news detection. The researchers advocate for a multifaceted strategy, combining different techniques and optimizing computational strategies to address the complex challenges of identifying misinformation in the digital age.
This article covers breakthroughs and innovations in natural language processing, computer vision, and data security. From addressing logical reasoning challenges with the discourse graph attention network to advancements in text classification using BERT models, lightweight mask detection in computer vision, sports analytics employing network graph theory, and data security through image steganography, the authors showcase the broad impact of AI across various domains.
This study introduces a Digital Twin (DT)-centered Fire Safety Management (FSM) framework for smart buildings. Harnessing technologies like AI, IoT, AR, and BIM, the framework enhances decision-making, real-time information access, and FSM efficiency. Evaluation by Facility Management professionals affirms its effectiveness, with a majority expressing confidence in its clarity, data security, and utility for fire evacuation planning and Fire Safety Equipment (FSE) maintenance.
Researchers introduced Swin-APT, a deep learning-based model for semantic segmentation and object detection in Intelligent Transportation Systems (ITSs). The model, incorporating a Swin-Transformer-based lightweight network and a multiscale adapter network, demonstrated superior performance in road segmentation and marking detection tasks, outperforming existing models on various datasets and achieving a remarkable 91.2% mIoU on the BDD100K dataset.
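The mIoU figure quoted here is the standard segmentation metric, computed per class and averaged. The minimal, generic illustration below is not Swin-APT's evaluation code.

```python
# Generic sketch of mean Intersection-over-Union (mIoU), the segmentation
# metric quoted above; this is not Swin-APT's evaluation code.
import numpy as np

def mean_iou(pred, target, num_classes):
    """pred, target: integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:            # class absent from both maps: skip it
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Example on a toy 2x3 label map with 3 classes.
pred   = np.array([[0, 1, 1], [2, 2, 0]])
target = np.array([[0, 1, 2], [2, 2, 0]])
print(mean_iou(pred, target, num_classes=3))
```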
LlamaGuard, a safety-focused LLM, employs a robust safety risk taxonomy for content moderation in human-AI conversations. Leveraging fine-tuning and instruction-following frameworks, it excels in adaptability, outperforming existing tools on internal and public datasets. LlamaGuard's versatility positions it as a strong baseline for content moderation, showcasing superior overall performance and efficiency in handling diverse taxonomies with minimal retraining effort.
Researchers from Nanjing University of Science and Technology present a novel scheme, Spatial Variation-Dependent Verification (SVV), utilizing convolutional neural networks and textural features for handwriting identification and verification. The scheme outperforms existing methods, achieving 95.587% accuracy, providing a robust solution for secure handwriting recognition and authentication in diverse applications, including security, forensics, banking, education, and healthcare.
The CYBERSECEVAL benchmark addresses cybersecurity risks in Large Language Models (LLMs) used for coding support. The evaluation, involving seven models, highlights significant concerns, revealing a 30% occurrence of insecure code suggestions and a 53% compliance rate in aiding cyberattacks. This benchmark emphasizes the critical need to integrate security considerations in LLM development, providing a robust framework for ongoing research to enhance AI safety in the context of evolving LLM usage.
Researchers employ deep neural networks and machine learning to predict facial landmarks and pain scores in cats using the Feline Grimace Scale. The study demonstrates advanced CNN models accurately predicting facial landmarks and an XGBoost model achieving high accuracy in distinguishing painful from non-painful cats. This breakthrough paves the way for an automated smartphone application, addressing the challenge of non-verbal pain assessment in felines and marking a significant advancement in veterinary care.
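As a rough illustration of the second stage, the sketch below trains a gradient-boosted classifier on stand-in features of the kind one might derive from predicted landmarks; the feature set and hyperparameters are hypothetical and do not reproduce the study's pipeline.

```python
# Illustrative sketch only: a gradient-boosted classifier over geometric
# features derived from predicted facial landmarks (e.g., ear angles,
# muzzle-to-eye distances). Features here are random stand-ins.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # stand-in for landmark-derived features
y = rng.integers(0, 2, size=200)        # 0 = non-painful, 1 = painful

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)
print(model.predict_proba(X[:3]))       # per-cat pain probabilities
```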
Researchers from China introduce the SZU-EmoDage dataset, a pioneering facial dataset crafted with StyleGAN, featuring Chinese individuals of diverse ages and expressions. This innovative dataset, validated for authenticity by human raters, surpasses existing ones, offering applications in cross-cultural emotion studies and advancements in facial perception technology. The study emphasizes the dataset's value in exploring cognitive processes, detecting disorders, and enhancing technologies like face recognition and animation.
Researchers present a groundbreaking privacy-preserving dialogue model framework, integrating Fully Homomorphic Encryption (FHE) with dynamic sparse attention (DSA). This innovative approach enhances efficiency and accuracy in dialogue systems while prioritizing user privacy. Experimental analyses demonstrate significant improvements in precision, recall, accuracy, and latency, positioning the proposed framework as a powerful solution for secure natural language processing tasks in the information era.
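Sparse attention in general keeps only the strongest query-key interactions to cut computation. The minimal top-k variant below is a generic illustration, not the paper's DSA, and the homomorphic-encryption layer is omitted entirely.

```python
# Generic top-k sparse attention sketch: keep only the strongest query-key
# interactions per query, drop the rest before the softmax.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, topk=4):
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5    # (batch, len, len)
    kth = scores.topk(topk, dim=-1).values[..., -1:]         # k-th largest score
    scores = scores.masked_fill(scores < kth, float("-inf")) # drop weak links
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 16, 64)      # batch of one 16-token sequence
out = topk_sparse_attention(q, k, v)
```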
Researchers from Meta present Audiobox, a novel model integrating flow-matching techniques for controllable and versatile audio generation. Audiobox demonstrates unprecedented controllability across various audio modalities, such as speech and sound, addressing limitations in existing generative models. The proposed Joint-CLAP evaluation metric correlates strongly with human judgment, showcasing Audiobox's potential for transformative applications in podcasting, movies, ads, and audiobooks.
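Flow matching trains a network to predict the velocity that transports noise to data along a simple path. The sketch below shows the generic conditional flow-matching objective this family of models builds on; the placeholder network and feature shapes are assumptions and not Audiobox's architecture.

```python
# Hedged sketch of a generic conditional flow-matching training step; the tiny
# velocity network and 80-dim features are placeholders, not Audiobox.
import torch
import torch.nn as nn

class TinyVelocityNet(nn.Module):
    """Placeholder velocity field; real models condition on text/audio context."""
    def __init__(self, dim=80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.GELU(), nn.Linear(256, dim))

    def forward(self, xt, t):                    # xt: (batch, frames, dim)
        t = t[:, None, None].expand(-1, xt.shape[1], 1)
        return self.net(torch.cat([xt, t], dim=-1))

def flow_matching_loss(model, x1):
    x0 = torch.randn_like(x1)                    # noise sample
    t = torch.rand(x1.shape[0])                  # random time in [0, 1] per sample
    xt = (1 - t[:, None, None]) * x0 + t[:, None, None] * x1   # linear path
    return ((model(xt, t) - (x1 - x0)) ** 2).mean()            # regress velocity

loss = flow_matching_loss(TinyVelocityNet(), torch.randn(4, 100, 80))
```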
Researchers detail a groundbreaking approach for creating realistic train-and-test datasets to evaluate machine learning models in software bug assignments. The novel method, based on time dependencies, addresses limitations in existing techniques, ensuring more reliable assessments in real-world scenarios. The proposed method offers potential applications in telecommunication, software quality prediction, and maintenance, contributing to the development of bug-free software applications.
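The core idea, a time-dependent split, can be illustrated in a few lines: train only on bug reports filed before a cutoff date and test on later ones, so no future information leaks into training. The field names below are illustrative.

```python
# Minimal sketch of a time-dependent train/test split for bug-assignment data.
from datetime import datetime

bugs = [
    {"id": 1, "filed": datetime(2023, 1, 10), "assignee": "alice"},
    {"id": 2, "filed": datetime(2023, 3, 2),  "assignee": "bob"},
    {"id": 3, "filed": datetime(2023, 6, 21), "assignee": "alice"},
    {"id": 4, "filed": datetime(2023, 9, 5),  "assignee": "carol"},
]

cutoff = datetime(2023, 6, 1)
train = [b for b in bugs if b["filed"] < cutoff]   # only past reports
test  = [b for b in bugs if b["filed"] >= cutoff]  # evaluate on the future
print(len(train), len(test))
```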
The article presents a groundbreaking approach for identifying sandflies, crucial vectors for various pathogens, using Wing Interferential Patterns (WIPs) and deep learning. Traditional methods are laborious, and this non-invasive technique offers efficient sandfly taxonomy, especially under field conditions. The study demonstrates exceptional accuracy in taxonomic classification at various levels, showcasing the potential of WIPs and deep learning for advancing entomological surveys in medical vector identification.
This paper delves into the transformative impact of machine learning (ML) in scientific research while highlighting critical challenges, particularly in COVID-19 diagnostics using AI-driven algorithms. The study underscores concerns about misleading claims, flawed methodologies, and the need for standardized guidelines to ensure credibility and reproducibility. It addresses issues such as data leakage, inadequate reporting, and overstatement of findings, emphasizing the importance of proper training and standardized methodologies in the rapidly evolving field of health-related ML.
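One of the pitfalls named above, data leakage, is easy to illustrate: fitting a preprocessing step on the full dataset lets test-set statistics influence training, whereas fitting it on the training fold alone does not. A minimal, generic example follows.

```python
# Data leakage illustration: scaling before vs. after the train/test split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, random_state=0)

# Leaky: the scaler sees the test rows before the split.
X_scaled = StandardScaler().fit_transform(X)
X_tr_leaky, X_te_leaky, _, _ = train_test_split(X_scaled, y, random_state=0)

# Correct: split first, fit the scaler on training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
```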
This article introduces an AI-based solution for real-time detection of safety helmets and face masks on municipal construction sites. The enhanced YOLOv5s model, leveraging ShuffleNetv2 and ECA mechanisms, demonstrates a 4.3% increase in mean Average Precision with significant resource savings. The study emphasizes the potential of AI-powered systems to improve worker safety, reduce accidents, and enhance efficiency in urban construction projects.
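Assuming ECA here refers to Efficient Channel Attention, such a block reweights feature-map channels using a lightweight 1D convolution; the sketch below is a generic version, and the kernel size and integration details in the actual YOLOv5s variant may differ.

```python
# Generic Efficient Channel Attention (ECA) block sketch, not the paper's code.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                          # x: (batch, channels, H, W)
        y = x.mean(dim=(2, 3))                     # global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # 1D conv across channels
        return x * self.sigmoid(y)[:, :, None, None]  # reweight each channel

out = ECA()(torch.randn(2, 64, 32, 32))
```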
This research explores Unique Feature Memorization (UFM) in deep neural networks (DNNs) trained for image classification tasks, where networks memorize specific features occurring only once in a single sample. The study introduces methods, including the M score, to measure and identify UFM, highlighting its privacy implications and potential risks for model robustness. The findings emphasize the need for mitigation strategies to address UFM and enhance the privacy and generalization of DNNs, especially in fields like medical imaging and computer vision.
This research, published in PLOS One, investigates the protective feature preferences of the adult Danish population in various AI decision-making scenarios. With a focus on both public and commercial sectors, the study explores the nuanced interplay of demographic factors, societal expectations, and trust in shaping preferences for features such as AI knowledge, human responsibility, non-discrimination, human explainability, and system performance.
This article proposes the Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy (GREAT PLEA) ethical principles for generative AI applications in healthcare. Drawing inspiration from existing military and healthcare ethical principles, the GREAT PLEA framework aims to address ethical concerns, protect clinicians and patients, and guide the responsible development and implementation of generative AI in healthcare settings.
A groundbreaking study introduces the IGP-UHM AI v1.0 model, utilizing deep learning and XAI to enhance El Niño-Southern Oscillation (ENSO) prediction. The 2023–2024 forecast reveals sustained yet weakening EN conditions, with Layerwise Relevance Propagation (LRP) explanations lending credibility to the model's output. The research underscores the need for ongoing refinement and human oversight, and raises crucial questions about ENSO predictability limits in the context of climate change.
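Layerwise Relevance Propagation redistributes a prediction's relevance backward through the network in proportion to each input's contribution. The sketch below applies the generic LRP-epsilon rule to a single dense layer; it is not the IGP-UHM model's implementation.

```python
# Generic LRP-epsilon rule for one dense layer: relevance from the output
# neurons is redistributed to the inputs in proportion to a_j * w_jk.
import numpy as np

def lrp_epsilon(activations, weights, relevance_out, eps=1e-6):
    """activations: (n_in,), weights: (n_in, n_out), relevance_out: (n_out,)."""
    contributions = activations[:, None] * weights            # z_jk = a_j * w_jk
    z = contributions.sum(axis=0)                              # z_k per output
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)              # stabilized z_k
    return (contributions / denom) @ relevance_out             # relevance per input

a = np.array([0.5, 1.0, -0.2])
W = np.random.default_rng(0).normal(size=(3, 2))
print(lrp_epsilon(a, W, relevance_out=np.array([1.0, 0.0])))
```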