Deep Learning is a subset of machine learning that uses artificial neural networks with multiple layers (hence "deep") to model and understand complex patterns in datasets. It's particularly effective for tasks like image and speech recognition, natural language processing, and translation, and it's the technology behind many advanced AI systems.
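To make the "multiple layers" idea concrete, here is a minimal sketch of a forward pass through a tiny fully connected network in plain Python. The layer sizes and weights are illustrative constants, not trained values from any real system.

```python
def relu(x):
    # Rectified linear activation, applied element-wise.
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # Fully connected layer: one weight vector per output neuron.
    return [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
            for w, b in zip(weights, biases)]

# Toy "deep" network: 3 inputs -> 4 hidden -> 2 hidden -> 1 output.
W1, b1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], [0, 0, 0, -10]
W2, b2 = [[1, 1, 1, 1], [1, -1, 0, 0]], [0, 0]
W3, b3 = [[0.5, 1.0]], [1.0]

def forward(x):
    h1 = relu(dense(x, W1, b1))   # first hidden layer
    h2 = relu(dense(h1, W2, b2))  # second hidden layer
    return dense(h2, W3, b3)      # output layer

print(forward([1.0, 2.0, 3.0]))  # → [4.0]
```

Each stacked layer lets the network compose the previous layer's features into more abstract ones; real models differ only in scale and in learning the weights from data.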
Researchers show that AI can predict household energy expenses from passive building-design features, offering a tool for reducing the energy burden on low-income households and advancing energy justice.
Researchers from the University of Ostrava delve into the intricate landscape of AI's societal implications, emphasizing the need for ethical regulation and alignment with democratic values. Through interdisciplinary analysis and policy evaluation, they advocate for transparent, participatory AI deployment, fostering societal welfare while addressing inequalities and safeguarding human rights.
Researchers present a hybrid recommendation system for virtual learning environments, employing bi-directional long short-term memory (BiLSTM) networks to capture users' evolving interests. Achieving remarkable accuracy and low loss, the system outperforms existing methods by integrating attention mechanisms and compression algorithms, offering personalized resource suggestions based on both short-term and long-term user behaviors.
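The attention mechanism mentioned above can be sketched independently of the BiLSTM: given one hidden state per time step of a user's history, a query vector scores each step, and a softmax-weighted sum pools them into a single interest vector. The states and query below are hypothetical stand-ins, not the paper's architecture.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    # Score each time step against the query, then return the
    # attention-weighted sum of the per-step hidden states.
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query))
              for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# A query aligned with the first state makes that step dominate.
pooled = attention_pool([[1.0, 0.0], [0.0, 1.0]], [10.0, 0.0])
```

In a recommender, this lets recent or salient interactions dominate the pooled representation instead of averaging all behavior equally.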
Researchers propose a groundbreaking framework utilizing social media data and deep learning techniques to assess urban park management effectively. By analyzing visitor comments on seven parks in Wuhan City, the study evaluated various management aspects and identified improvement suggestions, demonstrating the potential of this approach to enhance park service quality and management efficiency. The framework's dynamic visualization capabilities and scalability make it a valuable tool for improving public spaces and contributing to the development of smart cities. Future research could extend the approach to other urban areas and data sources.
The article discusses the application of autoencoder neural networks in archaeometry, specifically in reducing the dimensions of X-ray fluorescence spectra for analyzing cultural heritage objects. Researchers utilized autoencoders to compress data and extract essential features, facilitating efficient analysis of elemental composition in painted materials. Results demonstrated the effectiveness of this approach in attributing paintings to different creation periods based on pigment composition, highlighting its potential for automating and enhancing archaeological analyses.
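The autoencoder's dimensionality reduction can be sketched as a pair of linear maps around a narrow bottleneck. The 1024-bin spectrum, 8-dimensional latent size, and random untrained weights below are illustrative assumptions, not the study's configuration.

```python
import random

random.seed(0)

SPECTRUM_BINS = 1024   # channels in a hypothetical XRF spectrum
LATENT_DIM = 8         # compressed bottleneck representation

def linear(x, weights):
    # One row of coefficients per output unit.
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# Randomly initialised weights: this illustrates only the shape of
# the computation, not a fitted model.
enc_w = [[random.gauss(0, 0.01) for _ in range(SPECTRUM_BINS)]
         for _ in range(LATENT_DIM)]
dec_w = [[random.gauss(0, 0.01) for _ in range(LATENT_DIM)]
         for _ in range(SPECTRUM_BINS)]

spectrum = [random.random() for _ in range(SPECTRUM_BINS)]
latent = linear(spectrum, enc_w)        # encode: 1024 -> 8
reconstruction = linear(latent, dec_w)  # decode: 8 -> 1024
```

Training minimizes the gap between `spectrum` and `reconstruction`, forcing the 8-dimensional latent code to retain the spectrum's essential features, which downstream analyses (here, pigment attribution) then work with.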
Researchers introduce a lightweight enhancement to the YOLOv5 algorithm for vehicle detection, adding integrated perceptual attention (IPA) and multiscale spatial channel reconstruction (MSCCR) modules. The method reduces model parameters while boosting accuracy, making it well suited to intelligent traffic management systems. Experimental results showcase superior performance compared to existing algorithms, promising advancements in efficiency and functionality for vehicle detection in diverse traffic environments.
Researchers employed deep convolutional neural networks (CNNs) to denoise X-ray diffraction and resonant X-ray scattering data, overcoming challenges in structural analysis caused by experimental noise. By training CNNs with experimental data, they achieved remarkable accuracy in preserving structural features while removing noise, demonstrating the effectiveness of computational methods in advancing materials science research.
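At its core, a denoising CNN stacks learned 1-D convolutions over the measured signal. The sketch below shows the convolution operation itself, with a fixed averaging kernel as a crude stand-in for learned filters; the kernel and count data are illustrative, not from the study.

```python
def conv1d(signal, kernel):
    # 'Same'-size 1-D convolution with edge padding -- the basic
    # operation a denoising CNN stacks and learns end to end.
    half = len(kernel) // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(signal))]

# Uniform averaging kernel as a crude stand-in for learned filters.
noisy_counts = [10.0, 14.0, 9.0, 30.0, 11.0, 13.0, 10.0]
smooth = conv1d(noisy_counts, [1/3, 1/3, 1/3])
```

A trained network replaces the hand-picked kernel with many learned ones, which is what lets it suppress noise while preserving the sharp diffraction features an average would blur away.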
Researchers addressed challenges in Federated Learning (FL) within Space-Air-Ground Information Networks (SAGIN) by introducing the LCNSFL algorithm. LCNSFL, based on a Double Deep Q Network (DDQN), strategically selects nodes to minimize time and energy costs. Simulation results demonstrate LCNSFL's superiority over traditional methods, offering efficient convergence and resource utilization in dynamic network environments, essential for practical applications in SAGIN.
Researchers present a pioneering method for identifying Aedes mosquito species solely from wing images using convolutional neural networks (CNNs). By leveraging the standardized morphology of wings and a shallow CNN architecture, the study achieved remarkable precision and sensitivity, offering a cost-effective and efficient solution for mosquito species differentiation crucial in disease control efforts.
Researchers unveil an upgraded version of MobileNetV2 tailored for agricultural product recognition, revolutionizing farming practices through precise identification and classification. By integrating novel Res-Inception and efficient multi-scale cross-space learning modules, the enhanced model exhibits substantial accuracy improvements, offering promising prospects for optimizing production efficiency and economic value in agriculture.
Researchers propose a novel approach utilizing ChatGPT and artificial bee colony (ABC) algorithms to advance low-carbon transformation in resource-based cities. Their study demonstrates significant improvements in energy efficiency, carbon emissions reduction, and traffic congestion alleviation, highlighting the potential of these methods in promoting green development and sustainable urban planning.
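A minimal artificial bee colony (ABC) loop, minimizing a toy quadratic cost as a stand-in for the study's far more elaborate urban-planning objective, might look like this; all parameters here are illustrative assumptions.

```python
import random

random.seed(42)

def cost(x):
    # Toy objective (e.g. a congestion/energy proxy); lower is better.
    return sum(v * v for v in x)

def neighbor(x, sources):
    # Perturb one dimension relative to a randomly chosen partner.
    partner = random.choice(sources)
    j = random.randrange(len(x))
    y = list(x)
    y[j] += random.uniform(-1, 1) * (x[j] - partner[j])
    return y

def abc_minimize(dim=2, n_sources=10, limit=20, iters=200):
    sources = [[random.uniform(-5, 5) for _ in range(dim)]
               for _ in range(n_sources)]
    trials = [0] * n_sources
    for _ in range(iters):
        # Employed-bee phase: greedy local search around each source.
        for i, x in enumerate(sources):
            y = neighbor(x, sources)
            if cost(y) < cost(x):
                sources[i], trials[i] = y, 0
            else:
                trials[i] += 1
        # Onlooker phase: roulette-wheel focus on better sources.
        fits = [1.0 / (1.0 + cost(x)) for x in sources]
        total = sum(fits)
        for _ in range(n_sources):
            r, acc, i = random.random() * total, 0.0, 0
            for i, f in enumerate(fits):
                acc += f
                if acc >= r:
                    break
            y = neighbor(sources[i], sources)
            if cost(y) < cost(sources[i]):
                sources[i], trials[i] = y, 0
            else:
                trials[i] += 1
        # Scout phase: abandon exhausted sources and re-explore.
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i] = [random.uniform(-5, 5) for _ in range(dim)]
                trials[i] = 0
    return min(sources, key=cost)

best = abc_minimize()
```

The three phases balance exploitation (employed and onlooker bees refining good sources) against exploration (scouts replacing stagnant ones), which is why ABC suits rugged, multi-objective planning landscapes.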
Researchers present an innovative upper-limb exoskeleton system leveraging deep learning (DL) to predict and enhance human strength. Integrating soft wearable sensors and cloud-based DL, the system achieves a remarkable 96.2% accuracy in real-time motion prediction and reduces muscle activity by a factor of 3.7 on average. This user-friendly solution addresses age- and stroke-related strength decline, marking a transformative leap in robotic exoskeleton technology for assisting individuals with neuromotor disorders in daily tasks.
Researchers propose SmartMuraDetection, a novel organic light emitting diode (OLED) defect detection method based on small-sample deep learning (DL), targeting mura defects. Utilizing gradient edge linear stretching for preprocessing and a TinyDetection model for small-scale target detection, the method achieves a high accuracy of 96% in point mura defect detection, surpassing previous approaches. While effective for point mura defects, further research is needed to address limitations in detecting other types of mura defects.
This study presents a novel approach to landslide prediction by incorporating full seismic waveform data into a deep learning model. By leveraging a modified transformer neural network and synthetic waveforms from the 2015 Gorkha earthquake in Nepal, the researchers demonstrated significant improvements over traditional models that rely solely on scalar intensity parameters. Their findings highlight the importance of considering waveform characteristics and spatial distribution for more accurate landslide risk assessment during earthquakes, offering valuable insights for disaster risk reduction efforts.
"npj Digital Medicine" presents a scoping review on AI applications in home-based virtual rehabilitation (VRehab), showing its effectiveness in stroke, cardiac, and orthopedic rehabilitation. AI-driven VRehab offers personalized feedback, enhances patient outcomes, and overcomes barriers to traditional rehabilitation, heralding a new era in accessible and efficient healthcare delivery. Further research is needed to standardize evaluation methods and ensure privacy while maximizing the potential of AI in personalized rehabilitation programs.
Researchers unveil a novel workflow employing deep learning and machine learning techniques to assess the vulnerability of East Antarctic vegetation to climate change. Utilizing high-resolution multispectral imagery from UAVs, XGBoost and U-Net classifiers demonstrate robust performance, highlighting the transformative potential of combining UAV technology and ML for non-invasive monitoring in polar ecosystems. Future research should focus on expanding training data and exploring other ML algorithms to enhance segmentation outcomes, furthering our understanding of Antarctic vegetation dynamics amid environmental challenges.
Researchers present a remote access server system leveraging image processing and deep learning to classify coffee grinder burr wear accurately. With over 96% accuracy, this mobile-friendly service streamlines assessment, benefiting both commercial coffee chains and enthusiasts, while its practicality and low cost suggest broader applications in machinery wear prediction.
Researchers from the UK, Ethiopia, and India have developed an innovative robotic harvesting system that employs deep learning and computer vision techniques to recognize and grasp fruits. Tested in both indoor and outdoor environments, the system showcased promising accuracy and efficiency, offering a potential solution to the labor-intensive task of fruit harvesting in agriculture. With its adaptability to various fruit types and environments, this system holds promise for enhancing productivity and quality in fruit harvesting operations, paving the way for precision agriculture advancements.
Researchers from Egypt introduce a groundbreaking system for Human Activity Recognition (HAR) using Wireless Body Area Sensor Networks (WBANs) and Deep Learning. Their innovative approach, combining feature extraction techniques and Convolutional Neural Networks (CNNs), achieves exceptional accuracy in identifying various activities, promising transformative applications in healthcare, sports, and elderly care.
Researchers present the YOLOX classification model, aimed at accurately identifying and classifying tea buds with similar characteristics, crucial for optimizing tea production processes. Through comprehensive comparison experiments, the YOLOX algorithm emerged as the top performer, showcasing its potential for enabling mechanically intelligent tea picking and addressing challenges in the tea industry.