Deep Learning is a subset of machine learning that uses artificial neural networks with multiple layers (hence "deep") to model and understand complex patterns in datasets. It's particularly effective for tasks like image and speech recognition, natural language processing, and translation, and it's the technology behind many advanced AI systems.
Researchers harness convolutional neural networks (CNNs) to recognize Shen embroidery, achieving 98.45% accuracy. By employing transfer learning and enhancing MobileNet V1 with spatial pyramid pooling, they provide crucial technical support for safeguarding this cultural art form.
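The spatial pyramid pooling (SPP) layer mentioned above is what lets a fixed-size classifier head accept feature maps of varying spatial size. As an illustration of the general SPP idea (not the paper's actual MobileNet V1 implementation; the function name and pyramid levels here are illustrative choices), a minimal numpy sketch:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool an H x W x C feature map into a fixed-length vector.

    At each pyramid level n, the map is split into an n x n grid and
    max-pooled per cell, so the output length depends only on C and the
    levels, not on the input's spatial size: C * sum(n*n for n in levels).
    """
    h, w, c = feature_map.shape
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # Integer cell boundaries; the max() guard keeps cells non-empty
                r0, r1 = (i * h) // n, max((i + 1) * h // n, (i * h) // n + 1)
                c0, c1 = (j * w) // n, max((j + 1) * w // n, (j * w) // n + 1)
                pooled.append(feature_map[r0:r1, c0:c1, :].max(axis=(0, 1)))
    return np.concatenate(pooled)

# Feature maps of different spatial sizes yield the same output length:
# 32 channels * (1 + 4 + 16) cells = 672
v1 = spatial_pyramid_pool(np.random.rand(7, 7, 32))
v2 = spatial_pyramid_pool(np.random.rand(13, 9, 32))
print(v1.shape, v2.shape)  # → (672,) (672,)
```

This size-invariance is why SPP pairs well with transfer learning: a pretrained backbone can be fed images at their native resolution while the classifier head stays fixed.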
Researchers present an innovative ML-based approach, leveraging GANs for synthetic data generation and LSTM for temporal patterns, to tackle data scarcity and temporal dependencies in predictive maintenance. Despite challenges, their architecture achieves promising results, underlining AI's potential in enhancing maintenance practices.
Researchers introduced a multi-stage progressive detection method utilizing a Swin transformer to accurately identify water deficit in vertical greenery plants. By integrating classification, semantic segmentation, and object detection, the approach significantly improved detection accuracy compared to traditional methods like R-CNN and YOLO, offering promising solutions for urban greenery management.
In a recent paper published in Scientific Reports, researchers introduced a novel image denoising approach that combines dense block architectures and residual learning frameworks. The Sequential Residual Fusion Dense Network efficiently handles Gaussian and real-world noise by progressively integrating shallow and deep features, demonstrating superior performance across diverse datasets.
Researchers developed a real-time underwater video processing system leveraging object detection models and edge computing to count Nephrops in demersal trawl fisheries. Systematic experiments identified configurations that balance processing speed and accuracy, highlighting the potential of informed catch monitoring to improve sustainability.
Researchers introduced auto tiny classifiers, a methodology generating classifier circuits from tabular data, achieving high prediction accuracy with minimal hardware resources. These circuits, synthesized on flexible integrated circuits, outperformed conventional machine learning models in power consumption, size, and yield, offering promising applications in various domains.
Researchers introduce a novel method for edge detection in color images by integrating Support Vector Machine (SVM) with Social Spider Optimization (SSO) algorithms. The two-stage approach demonstrates superior accuracy and quality compared to existing methods, offering potential applications in various domains such as object detection and medical image analysis.
Researchers propose a solution for the Flexible Double Shop Scheduling Problem (FDSSP) by integrating a reinforcement learning (RL) algorithm with a Deep Temporal Difference Network (DTDN), achieving superior performance in minimizing makespan.
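Makespan, the objective the RL agent minimizes, is simply the completion time of the last operation in a schedule. The sketch below (plain Python, not the paper's DTDN; the greedy list-scheduling baseline is an illustrative stand-in) shows how makespan is computed for a machine schedule:

```python
# Minimal makespan computation for a machine schedule.
def makespan(schedule):
    """schedule: list of (machine, start_time, duration) operations."""
    return max(start + dur for _, start, dur in schedule)

def greedy_schedule(jobs, n_machines):
    """Baseline: assign each job to the earliest-free machine (list scheduling).
    An RL scheduler would learn this assignment policy instead."""
    free_at = [0] * n_machines
    schedule = []
    for dur in jobs:
        m = min(range(n_machines), key=lambda i: free_at[i])
        schedule.append((m, free_at[m], dur))
        free_at[m] += dur
    return schedule

jobs = [4, 2, 7, 1, 5, 3]          # processing times
sched = greedy_schedule(jobs, n_machines=2)
print(makespan(sched))             # → 12
```

A learned policy improves on such greedy baselines by anticipating how early assignments constrain later ones, which is where temporal-difference learning comes in.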
Researchers introduced Deep5HMC, a machine learning model combining advanced feature extraction techniques and deep neural networks to accurately detect 5-hydroxymethylcytosine (5HMC) in RNA samples. Deep5HMC surpassed previous methods, offering promise for early disease diagnosis, particularly in conditions like cancer and cardiovascular disease, by efficiently identifying RNA modifications.
Researchers introduced a deep convolutional neural network (DCNN) model for accurately detecting and classifying grape leaf diseases. Leveraging a dataset of grape leaf images, the DCNN model outperformed conventional CNN models, demonstrating superior accuracy and reliability in identifying black rot, ESCA, leaf blight, and healthy specimens.
Researchers introduced SCB-YOLOv5, integrating ShuffleNet V2 and convolutional block attention modules (CBAM) into YOLOv5 for detecting standardized gymnast movements. SCB-YOLOv5 showed enhanced precision, recall, and mean average precision (mAP), making it effective for on-site athlete action detection. Extensive experiments validated its effectiveness, highlighting its potential for practical sports education in resource-limited settings.
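The precision, recall, and mAP figures cited for SCB-YOLOv5 all rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of that matching criterion (generic detection-metric machinery, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection typically counts as a true positive when IoU >= 0.5 with a
# ground-truth box; mAP averages precision over recall levels and classes.
gt   = (10, 10, 50, 50)
pred = (20, 20, 60, 60)
print(round(iou(gt, pred), 3))  # → 0.391, below 0.5, so a false positive
```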
Researchers introduced RST-Net, a novel deep learning model for plant disease prediction, combining residual convolutional networks and Swin transformers. Testing on a benchmark dataset showed superior performance over state-of-the-art models, with potential applications in smart agriculture and precision farming.
This study explores the transformative impact of deep learning (DL) techniques on computer-assisted interventions and post-operative surgical video analysis, focusing on cataract surgery. By leveraging large-scale datasets and annotations, researchers developed DL-powered methodologies for surgical scene understanding and phase recognition.
Researchers introduce LCEFormer, a novel approach for remote sensing image dehazing, integrating CNN-based local context enrichment with transformer networks. Experiments on multiple datasets demonstrate state-of-the-art performance, surpassing existing methods in hazy scene restoration.
Researchers introduced OCTDL, an open-access dataset comprising over 2000 labeled OCT images of retinal diseases, including AMD, DME, and others. Utilizing high-resolution OCT scans obtained from an Optovue Avanti RTVue XR system, the dataset facilitated the development of deep learning models for disease classification. Validation with VGG16 and ResNet50 architectures demonstrated high performance, indicating OCTDL's potential for advancing automatic processing and early disease detection in ophthalmology.
Researchers developed a deep neural network (DNN) ensemble to automatically detect and classify epiretinal membranes (ERMs) in optical coherence tomography (OCT) scans of the macula. Leveraging over 11,000 images, the ensemble achieved high accuracy, particularly in identifying small ERMs, aided by techniques like mixup for data augmentation and t-distributed stochastic neighbor embedding (t-SNE) for dimensionality reduction.
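Mixup, the augmentation technique mentioned above, trains on convex combinations of sample pairs and their labels rather than on raw examples. A minimal numpy sketch of the general technique (the alpha value and toy data are illustrative, not the study's settings):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two samples and their one-hot labels with a Beta-drawn weight."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Two toy "images" with one-hot labels for classes 0 and 1
x_a, y_a = np.ones((4, 4)), np.array([1.0, 0.0])
x_b, y_b = np.zeros((4, 4)), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)
print(y_mix.sum())  # → 1.0: the soft label weights still sum to one
```

Because the mixed label is soft, the network is discouraged from making overconfident predictions near class boundaries, which is especially useful for subtle findings like small ERMs.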
Researchers developed a novel AI method, P-GAN, to improve the visualization of retinal pigment epithelial (RPE) cells using adaptive optics optical coherence tomography (AO-OCT). By transforming single noisy images into detailed representations of RPE cells, this approach enhances contrast and reduces imaging time, potentially revolutionizing ophthalmic diagnostics and personalized treatment strategies for retinal conditions.
The paper explores human action recognition (HAR) methods, emphasizing the transition to deep learning (DL) and computer vision (CV). It discusses the evolution of techniques, including the significance of large datasets and the emergence of HARNet, a DL architecture merging recurrent and convolutional neural networks (CNN).
Scholars utilized machine learning techniques to analyze instances of sexual harassment in Middle Eastern literature, employing lexicon-based sentiment analysis and deep learning architectures. The study identified physical and non-physical harassment occurrences, highlighting their prevalence in Anglophone novels set in the region.
Researchers explored the integration of artificial intelligence (AI) and machine learning (ML) in two-phase heat transfer research, focusing on boiling and condensation phenomena. AI was utilized for meta-analysis, physical feature extraction, and data stream analysis, offering new insights and solutions to predict multi-phase flow patterns. Interdisciplinary collaboration and sustainable cyberinfrastructures were emphasized for future advancements in thermal management systems and energy conversion devices.