Feature extraction is a machine learning process that derives relevant, informative features from raw data. It transforms the input into a more compact representation that captures the characteristics essential for a particular task. Feature extraction is often performed to reduce the dimensionality of the data, remove noise, and highlight relevant patterns, improving the performance and efficiency of machine learning models. Techniques such as Principal Component Analysis (PCA), wavelet transforms, and deep learning-based methods can be used for feature extraction.
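As a minimal illustration of the idea, the sketch below performs PCA-style feature extraction with plain NumPy, projecting 5-dimensional data onto its top two principal components. The array names and dimensions are our own illustrative choices, not taken from any of the studies summarized here.

```python
import numpy as np

# Illustrative sketch: PCA via SVD, reducing 5-D raw data to 2-D features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 samples, 5 raw features

X_centered = X - X.mean(axis=0)          # PCA requires zero-mean data
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

n_components = 2
# Project onto the top-2 principal directions (rows of Vt).
features = X_centered @ Vt[:n_components].T

print(features.shape)                    # (100, 2): compact representation
```

The singular values in `S` are returned in descending order, so keeping the first rows of `Vt` retains the directions of greatest variance — the sense in which the compact representation preserves the "essential characteristics" of the data.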
Researchers leverage synchrotron X-ray imaging and machine learning models, including deep convolutional neural networks (ConvNets) and semantic segmentation, to predict laser absorptance and segment vapor depressions in metal additive manufacturing. The end-to-end and modular approaches showcase efficient and interpretable solutions, offering potential for real-time monitoring and decision-making in industrial processes.
Researchers have unveiled an innovative method, utilizing lidar data and AI techniques, to precisely delineate river channels' bankfull extents. This approach streamlines large-scale topographic analyses, offering efficiency in flood risk mapping, stream rehabilitation, and tracking channel evolution, marking a significant leap in environmental mapping workflows.
This study proposes an innovative approach to enhance road safety by introducing a CNN-LSTM model for driver sleepiness detection. Combining facial movement analysis and deep learning, the model outperforms existing methods, achieving over 98% accuracy in real-world scenarios, paving the way for effective implementation in smart vehicles to proactively prevent accidents caused by driver fatigue.
This paper unveils the Elderly and Visually Impaired Human Activity Monitoring (EV-HAM) system, a pioneering solution utilizing artificial intelligence, digital twins, and Wi-Sense for accurate activity recognition. Employing Deep Hybrid Convolutional Neural Networks on Wi-Fi Channel State Information data, the system achieves a remarkable 99% accuracy in identifying micro-Doppler fingerprints of activities, presenting a revolutionary advancement in elderly and visually impaired care through continuous monitoring and crisis intervention.
This paper emphasizes the crucial role of machine learning (ML) in detecting and combating fake news amid the proliferation of misinformation on social media. The study reviews various ML techniques, including deep learning, natural language processing (NLP), ensemble learning, transfer learning, and graph-based approaches, highlighting their strengths and limitations in fake news detection. The researchers advocate for a multifaceted strategy, combining different techniques and optimizing computational strategies to address the complex challenges of identifying misinformation in the digital age.
Researchers introduce an innovative weed detection solution for rice fields. Utilizing YOLOX technology, particularly the YOLOX-tiny model, the approach outshines competitors, promising accurate herbicide application by agricultural robots during the vulnerable rice seedling stage. The breakthrough addresses challenges in weed control, marking a significant advancement in precision agriculture.
Researchers present G-YOLOv5s-SS, a novel lightweight architecture based on YOLOv5 for efficient detection of sugarcane stem nodes. Achieving high accuracy (97.6% AP) with reduced model size, parameters, and FLOPs, this algorithm holds promise for advancing mechanized sugarcane cultivation, addressing challenges in seed cutting efficiency and offering potential applications in broader agricultural tasks.
Researchers introduce a novel multi-task learning approach for recognizing low-resolution text in logistics, addressing challenges in the rapidly growing e-commerce sector. The proposed model, incorporating a super-resolution branch and attention-based decoding, outperforms existing methods, offering substantial accuracy improvements for handling distorted, low-resolution Chinese text.
Researchers from Nanjing University of Science and Technology present a novel scheme, Spatial Variation-Dependent Verification (SVV), utilizing convolutional neural networks and textural features for handwriting identification and verification. The scheme outperforms existing methods, achieving 95.587% accuracy, providing a robust solution for secure handwriting recognition and authentication in diverse applications, including security, forensics, banking, education, and healthcare.
The article presents a groundbreaking approach for identifying sandflies, crucial vectors for various pathogens, using Wing Interferential Patterns (WIPs) and deep learning. Traditional methods are laborious, and this non-invasive technique offers efficient sandfly taxonomy, especially under field conditions. The study demonstrates exceptional accuracy in taxonomic classification at various levels, showcasing the potential of WIPs and deep learning for advancing entomological surveys in medical vector identification.
This research introduces FakeStack, a powerful deep learning model combining BERT embeddings, a Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM) for accurate fake news detection. Trained on diverse datasets, FakeStack outperforms benchmarks and alternative models across multiple metrics, demonstrating its efficacy in combating the impact of false news on public opinion.
Utilizing machine learning, a PLOS One study delves into the correlation between Japanese TV drama success and various metadata, including facial features extracted from posters. Analyzing 800 dramas from 2003 to 2020, the study reveals the impact of factors like genre, cast, and broadcast details on ratings, emphasizing the unexpected significance of facial information in predicting success.
Researchers developed a cutting-edge robot welding guidance system, integrating an enhanced YOLOv5 algorithm with a RealSense Depth Camera. Overcoming limitations of traditional sensors, the system enables precise weld groove detection, enhancing welding robot autonomy in complex industrial environments. The experiment showcased superior accuracy, reaching 90.8% mean average precision, and real-time performance at 20 FPS, marking a significant stride in welding automation and precision.
Researchers propose an innovative fault monitoring approach for high-voltage circuit breakers, utilizing a specialized device and deep learning techniques. The unsupervised deep learning method showcases over 95% accuracy in fault diagnosis, outperforming traditional algorithms in feature extraction and computation speed. The study suggests a practical and efficient solution for real-time fault monitoring, holding promise for enhancing reliability in high-voltage systems.
Researchers unveil a pioneering method for accurately estimating food weight using advanced boosting regression algorithms trained on a vast Mediterranean cuisine image dataset. Achieving remarkable accuracy with a mean absolute error of 3.93 g in estimated weight, this innovative approach addresses challenges in dietary monitoring and offers a promising solution for diverse food types and shapes.
A groundbreaking study from Kyoto Prefectural University of Medicine introduces an advanced AI system leveraging deep neural networks and CT scans to objectively and accurately determine the biological sex of deceased individuals based on skull morphology. Outperforming human experts, this innovative approach promises to enhance forensic identification accuracy, addressing challenges in reliability and objectivity within traditional methods.
Researchers emphasize the growing significance of radar-based human activity recognition (HAR) in safety and surveillance, highlighting its advantages over vision-based sensing in challenging conditions. The study reviews classical Machine Learning (ML) and Deep Learning (DL) approaches, noting DL's ability to avoid manual feature extraction and ML's robust empirical grounding. A comparative study on benchmark datasets evaluates performance and computational efficiency, aiming to establish a standardized assessment framework for radar-based HAR techniques.
The paper explores recent advancements and future applications in robotics and artificial intelligence (AI), emphasizing spatial and visual perception enhancement alongside reasoning. Noteworthy studies include the development of a knowledge distillation framework for improved glioma segmentation, a parallel platform for robotic control, a method for discriminating neutron and gamma-ray pulse shapes, HDRFormer for high dynamic range (HDR) image quality improvement, a unique binocular endoscope calibration algorithm, and a tensor sparse dictionary learning-based dose image reconstruction method.
Researchers introduce an innovative approach for speech-emotion analysis employing a multi-stage process involving spectro-temporal modulation, entropy features, convolutional neural networks, and a combined GC-ECOC classification model. Evaluating against Berlin and ShEMO datasets, the method showcases remarkable performance, achieving average accuracies of 93.33% and 85.73%, respectively, surpassing existing methods by at least 2.1% in accuracy and showing significant potential for improved emotion recognition in speech across various applications.
Researchers introduce a pioneering framework leveraging IoT and wearable technology to enhance the adaptability of AR glasses in the aviation industry. The multi-modal data processing system, employing kernel theory-based design and machine learning, classifies performance, offering a dynamic and adaptive approach for tailored AR information provision.