A Convolutional Neural Network (CNN) is a type of deep learning algorithm primarily used for image processing, video analysis, and natural language processing. It uses convolutional layers that slide learned filters across the input, and it is particularly effective at identifying spatial hierarchies of patterns, making it well suited to tasks like image and speech recognition.
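To make the sliding-window idea concrete, here is a minimal sketch of a single 2D convolution (strictly speaking, cross-correlation, as most deep learning libraries implement it) in plain NumPy. The `conv2d` helper, the example image, and the edge-detecting kernel are illustrative choices, not code from any of the studies below; a real CNN would stack many such filters with learned weights, nonlinearities, and pooling.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding, stride 1) and
    return the feature map of dot products at each window position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # output height
    ow = image.shape[1] - kw + 1   # output width
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted vertical-edge filter applied to an image with a sharp edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                          # left half dark, right half bright
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)   # responds to left-to-right brightness jumps
feature_map = conv2d(image, kernel)
print(feature_map.shape)                    # (4, 4): each side shrinks by kernel size - 1
```

The filter produces strong responses only at window positions straddling the edge, which is the sense in which convolutional layers detect local spatial patterns regardless of where they appear in the image.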
This article delves into the use of deep convolutional neural networks (DCNN) to detect and differentiate synthetic cannabinoids based on attenuated total reflectance Fourier-transform infrared (ATR-FTIR) spectra. The study demonstrates the effectiveness of DCNN models, including a vision transformer-based approach, in classifying and distinguishing synthetic cannabinoids, offering promising applications for drug identification and beyond.
Researchers introduce a groundbreaking object tracking algorithm, combining Siamese networks and CNN-based methods, achieving high precision and success scores in benchmark datasets. This innovation holds promise for various applications in computer vision, including autonomous driving and surveillance.
This study investigates the impact of cross-validation methods on the diagnostic performance of deep-learning-based computer-aided diagnosis (CAD) systems using augmented neuroimaging data. Using EEG data from post-traumatic stress disorder patients and controls, the researchers found that data augmentation improved performance.
Researchers introduce the UIBVFEDPlus-Light database, an extension of the UIBVFED virtual facial expression dataset, to explore the critical impact of lighting conditions on automatic human expression recognition. The database includes 100 virtual characters expressing 33 distinct emotions under four lighting setups.
Explore the cutting-edge advancements in image processing through reinforcement learning and deep learning, promising enhanced accuracy and real-world applications, while acknowledging the challenges that lie ahead for these transformative technologies.
Researchers present MGB-YOLO, an advanced deep learning model designed for real-time road manhole cover detection. Through a combination of MobileNet-V3, GAM, and BottleneckCSP, this model offers superior precision and computational efficiency compared to existing methods, with promising applications in traffic safety and infrastructure maintenance.
Researchers introduce Espresso, a deep-learning model for global precipitation estimation using geostationary satellite input and calibrated with Global Precipitation Measurement Core Observatory (GPMCO) data. Espresso outperforms other products in storm localization and intensity estimation, making it an operational tool at Meteo-France for real-time global precipitation estimates every 30 minutes, with potential for further improvement in higher latitudes.
Researchers have leveraged machine learning and deep learning techniques, including BiLSTM networks, to classify maize gene expression profiles under biotic stress conditions. The study's findings not only demonstrate the superior performance of the BiLSTM model but also identify key genes related to plant defense mechanisms, offering valuable insights for genomics research and applications in developing disease-resistant maize varieties.
Researchers have developed a novel method that combines geospatial artificial intelligence (GeoAI) with satellite imagery to predict soil physical properties such as clay, sand, and silt. They utilized a hybrid CNN-RF model and various environmental parameters to achieve accurate predictions, which have significant implications for agriculture, erosion control, and environmental monitoring.
Researchers explore the use of a two-stage detector based on Faster R-CNN for precise and real-time Personal Protective Equipment (PPE) detection in hazardous work environments. Their model outperforms YOLOv5, achieving 96% mAP50, improved precision, and reduced inference time, showcasing its potential for enhancing worker safety and compliance.
This article explores the emerging role of Artificial Intelligence (AI) in weather forecasting, discussing the use of foundation models and advanced techniques like transformers, self-supervised learning, and neural operators. While still in its early stages, AI promises to revolutionize weather and climate prediction, providing more accurate forecasts and deeper insights into climate change's effects.
This paper presents a novel approach to pupil tracking using event camera imaging, a technology known for its ability to capture rapid and subtle eye movements. The research employs machine-learning-based computer vision techniques to enhance eye tracking accuracy, particularly during fast eye movements.
Researchers introduce ClueCatcher, an innovative method for detecting deepfakes. By analyzing inconsistencies and disparities introduced during facial manipulation, ClueCatcher identifies subtle artifacts, achieving high accuracy and cross-dataset generalizability. This research addresses the growing threat of increasingly deceptive deepfakes and highlights the importance of automated detection methods that do not rely on human perception.
Researchers have developed a robust web-based malware detection system that utilizes deep learning, specifically a 1D-CNN architecture, to classify malware within portable executable (PE) files. This innovative approach not only showcases impressive accuracy but also bridges the gap between advanced malware detection technology and user accessibility through a user-friendly web interface.
Researchers have introduced a groundbreaking deep-learning model, the Convolutional Block Attention Module (CBAM) Spatio-Temporal Convolution Network-Transformer (CSTCN), to accurately predict mobile network traffic. By integrating temporal convolutional networks, attention mechanisms, and Transformers, the CSTCN outperforms traditional models, offering potential benefits for resource allocation and network service quality enhancement.
Researchers have developed a novel approach that combines ResNet-based deep learning with Grad-CAM visualization to enhance the accuracy and interpretability of medical text processing. This innovative method provides valuable insights into AI model decision-making processes, making it a promising tool for improving healthcare diagnostics and decision support systems.
This study introduces an innovative framework for speech emotion recognition by utilizing dual-channel spectrograms and optimized deep features. The incorporation of a novel VTMel spectrogram, deep learning feature extraction, and dual-channel fusion significantly improves emotion recognition accuracy, offering valuable insights for applications in human-computer interaction, healthcare, education, and more.
Researchers developed a novel mobile user authentication system that uses motion sensors and deep learning to improve security on smart mobile devices in complex environments. By combining S-transform and singular value decomposition for data preprocessing and employing a semi-supervised Teacher-Student tri-training algorithm to reduce label noise, this approach achieved high accuracy and robustness in real-world scenarios, demonstrating its potential for enhancing mobile security.
This study introduces a novel spiking neural network (SNN) based model for predicting brain activity patterns in response to visual stimuli, addressing differences between artificial neural networks and biological neurons. The SNN approach outperforms traditional models, showcasing its potential for applications in neuroscience, bioengineering, and brain-computer interfaces.
Researchers propose a novel approach for accurate drug classification using a smartphone Raman spectrometer and a convolutional neural network (CNN). The system captures two-dimensional Raman spectral intensity maps and spectral barcodes of drugs, allowing the identification of chemical components and drug brand names.