Feature extraction is a process in machine learning in which relevant, informative features are derived from raw data. It transforms the input into a more compact representation that captures the characteristics essential for a given task. Feature extraction is often performed to reduce the dimensionality of the data, remove noise, and highlight relevant patterns, improving the performance and efficiency of machine learning models. Techniques such as Principal Component Analysis (PCA), wavelet transforms, and deep learning-based methods are commonly used for feature extraction.
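As a concrete illustration of dimensionality reduction by PCA, the sketch below projects data onto its top principal components using only NumPy. It is a minimal example, not a production implementation; the function name and the random demo data are our own.

```python
import numpy as np

def pca_features(X, n_components):
    """Project data onto its top principal components.

    X: (n_samples, n_features) array; returns (n_samples, n_components).
    """
    # Center each feature so the components capture variance, not the mean.
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions,
    # ordered by decreasing singular value (i.e. explained variance).
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Keep only the leading directions as the extracted features.
    return X_centered @ Vt[:n_components].T

# Example: reduce 100 samples of 5 raw features to 2 extracted features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca_features(X, 2)
```

The first extracted column always carries at least as much variance as the second, which is what makes PCA useful for compacting data before a downstream model.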
This article in Nature features a groundbreaking approach for monitoring marine life behavior using Lite3D, a lightweight deep learning model. The real-time anomalous behavior recognition system, focusing on cobia and tilapia, outperforms traditional and AI-based methods, offering precision, speed, and efficiency. Lite3D's application in marine conservation holds promise for monitoring and protecting underwater ecosystems impacted by global warming and pollution.
This study introduces an innovative framework for early plant disease diagnosis, leveraging fog computing, IoT sensor technology, and a novel GWO algorithm. The hybrid approach, incorporating deep learning models like AlexNet and GoogleNet, coupled with modified GWO for feature selection, demonstrates superior performance in plant disease identification.
The paper published in the journal Electronics explores the crucial role of Artificial Intelligence (AI) and Explainable AI (XAI) in Visual Quality Assurance (VQA) within manufacturing. While AI-based Visual Quality Control (VQC) systems are prevalent in defect detection, the study advocates for broader applications of VQA practices and increased utilization of XAI to enhance transparency and interpretability, ultimately improving decision-making and quality assurance in the industry.
This study addresses the simulation mis-specification problem in population genetics by introducing domain-adaptive deep learning techniques. The researchers reframed the issue as an unsupervised domain adaptation problem, effectively improving the performance of population genetic inference models, such as SIA and ReLERNN, when faced with real data that deviates from simulation assumptions.
Researchers introduced a groundbreaking hybrid model for short text filtering that combines an Artificial Neural Network (ANN) for new word weighting and a Hidden Markov Model (HMM) for accurate and efficient classification. The model excels in handling new words and informal language in short texts, outperforming other machine learning algorithms and demonstrating a promising balance between accuracy and speed, making it a valuable tool for real-world short text filtering applications.
Researchers reviewed the application of machine learning (ML) techniques to bolster the cybersecurity of industrial control systems (ICSs). ML plays a vital role in detecting and mitigating cyber threats within ICSs, encompassing supervised and unsupervised approaches, and can be integrated into intrusion detection systems (IDS) for improved outcomes.
This study, published in Nature, explores the application of Convolutional Neural Networks (CNN) to identify and detect diseases in cauliflower crops. By using advanced deep-learning models and extensive image datasets, the research achieved high accuracy in disease classification, offering the potential to enhance agricultural efficiency and ensure food security.
Researchers have improved inkjet print head monitoring in digital manufacturing by employing machine learning algorithms to classify nozzle jetting conditions based on self-sensing signals, achieving over 99.6% accuracy. This approach offers real-time detection of faulty nozzle behavior, ensuring the quality of printed products and contributing to the efficiency of digital manufacturing processes.
Researchers introduced the Lightweight Hybrid Vision Transformer (LH-ViT) network for radar-based Human Activity Recognition (HAR). LH-ViT combines convolution operations with self-attention, utilizing a Residual Squeeze-and-Excitation (RES-SE) block to reduce computational load. Experimental results on two human activity datasets demonstrated LH-ViT's advantages in expressiveness and computing efficiency over traditional approaches.
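The RES-SE block builds on the standard squeeze-and-excitation idea: globally pool each channel to a scalar, pass the result through a small bottleneck, and use the output to reweight the channels. The NumPy sketch below illustrates that generic mechanism only; it is not the paper's LH-ViT implementation, and the weights `w1`/`w2` are random placeholders.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Reweight the channels of a (C, H, W) feature map.

    Squeeze: global average pool to one scalar per channel.
    Excite: a two-layer bottleneck producing per-channel gates in (0, 1).
    """
    # Squeeze: (C, H, W) -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excite: bottleneck MLP with ReLU, then sigmoid gating.
    h = np.maximum(w1 @ z, 0.0)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Scale each channel by its gate.
    return feature_map * gates[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2                        # channels and reduction ratio
x = rng.normal(size=(C, 16, 16))
w1 = rng.normal(size=(C // r, C))  # squeeze C down to C / r
w2 = rng.normal(size=(C, C // r))  # expand back to C gates
y = squeeze_excite(x, w1, w2)
```

Because every gate lies strictly between 0 and 1, the block can only attenuate channels, which is one reason such modules add little computational load relative to full self-attention.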
Researchers have introduced a novel self-supervised learning framework to improve underwater acoustic target recognition models, addressing the challenges of limited labeled samples and abundant unlabeled data. The four-stage learning framework, including semi-supervised fine-tuning, leverages advanced self-supervised learning techniques, resulting in significant improvements in model accuracy, especially under few-shot conditions.
This research presents an innovative method called TF2 for generating synchronized talking face videos driven by speech audio. The system utilizes generative adversarial networks (GANs) and a Multi-level Wavelet Transform (MWT) to transform speech audio into different frequency domains, improving the realism of the generated video frames.
This study explores the application of artificial intelligence (AI) models for indoor fire prediction, specifically focusing on temperature, carbon monoxide (CO) concentration, and visibility. The research employs computational fluid dynamics (CFD) simulations and deep learning algorithms, including Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Transpose Convolution Neural Network (TCNN).
This review explores the applications of artificial intelligence (AI) in studying fishing vessel (FV) behavior, emphasizing the role of AI in monitoring and managing fisheries. The paper discusses data sources for FV behavior research, AI techniques used in monitoring FV behavior, and the uses of AI in identifying vessel types, forecasting fishery resources, and analyzing fishing density.
Researchers have introduced a lightweight yet efficient safety helmet detection model, SHDet, based on the YOLOv5 architecture. This model optimizes the YOLOv5 backbone, incorporates upsampling and attention mechanisms, and achieves impressive performance with faster inference speeds, making it a promising solution for real-world applications on construction sites.
Researchers have harnessed the power of Vision Transformers (ViT) to revolutionize fashion image classification and recommendation systems. Their ViT-based models outperformed CNN and pre-trained models, achieving impressive accuracy in classifying fashion images and providing efficient and accurate recommendations, showcasing the potential of ViTs in the fashion industry.
The paper introduces the ODEL-YOLOv5s model, designed to address the challenges of obstacle detection in coal mines using deep learning target detection algorithms. This model improves detection accuracy, real-time responsiveness, and safety for driverless electric locomotives in the challenging coal mine environment. It outperforms other target detection algorithms, making it a promising solution for obstacle identification in coal mines.
Researchers have developed an enhanced YOLOv8 model for detecting wildfire smoke using images captured by unmanned aerial vehicles (UAVs). This approach improves accuracy in various weather conditions and offers a promising solution for early wildfire detection and monitoring in complex forest environments.
Researchers introduced a groundbreaking object tracking algorithm, combining Siamese networks and CNN-based methods, that achieves high precision and success scores on benchmark datasets. This innovation holds promise for various applications in computer vision, including autonomous driving and surveillance.
Researchers have developed a comprehensive approach to improving ship detection in synthetic aperture radar (SAR) images using machine learning and artificial intelligence. By selecting relevant papers, identifying key features, and employing the graph theory matrix approach (GTMA) for ranking methods, this research provides a robust framework for enhancing maritime operations and security through more accurate ship detection in challenging sea conditions and weather.
Researchers have developed a "semantic guidance network" to improve video captioning by addressing challenges like redundancy and omission of information in existing methods. The approach incorporates techniques for adaptive keyframe sampling, global encoding, and similarity-based optimization, resulting in improved accuracy and generalization on benchmark datasets. This work opens up possibilities for various applications, including video content search and assistance for visually impaired users.