Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, machines can accurately identify and classify objects and then react to what they "see."
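At the core of most deep learning vision models is the convolution: sliding a small kernel over an image to produce a map of local feature responses. A minimal numpy sketch (an illustrative example, not any specific system's code) shows a Sobel-style kernel lighting up where a synthetic image has a vertical edge:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 8x8 image: dark left half, bright right half (a vertical edge).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Sobel kernel that responds to vertical intensity changes.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(image, sobel_x)
print(response[0])  # → [0. 0. 4. 4. 0. 0.] — peaks where the edge sits
```

A CNN stacks many such filters, but learns the kernel values from data instead of hand-designing them as done here.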
Researchers compared traditional feature-based computer vision methods with CNN-based deep learning for weed classification in precision farming, emphasizing that the traditional feature-based approach remains effective when only smaller datasets are available.
Researchers have introduced the human behavior detection dataset (HBDset) for computer vision applications in emergency evacuations, focusing on vulnerable groups like the elderly and disabled.
Researchers developed an automated system utilizing UAVs and deep learning to monitor and maintain remote gravel runways in Northern Canada. This system accurately detects defects and evaluates runway smoothness, proving more effective and reliable than traditional manual methods in harsh and isolated environments.
A systematic review in the journal Sensors analyzed 77 studies on facial and pose emotion recognition using deep learning, highlighting methods like CNNs and Vision Transformers. The review examined trends, datasets, and applications, providing insights into state-of-the-art techniques and their effectiveness in psychology, healthcare, and entertainment.
A comprehensive review highlights the evolution of object-tracking methods, sensors, and datasets in computer vision, guiding developers in selecting optimal tools for diverse applications.
Researchers have developed AI-based computer vision systems to identify growth-stunted salmon, with YOLOv7 achieving the highest accuracy. This technology offers efficient and reliable monitoring, improving fish welfare and production in aquaculture.
Researchers from China have integrated computer vision (CV) and LiDAR technologies to improve the safety and efficiency of autonomous navigation in port channels. This innovative approach utilizes advanced path-planning and collision prediction algorithms to create a comprehensive perception of the port environment, significantly enhancing navigation safety and reducing collision risks.
Researchers demonstrated a novel approach to structural health monitoring (SHM) in seismic contexts, combining self-sensing concrete beams, vision-based crack assessment, and AI-based prediction models. The study showed that electrical impedance measurements and the AI-based Prophet model significantly improved the accuracy of load and crack predictions, offering a robust solution for real-time SHM and early warning systems.
Researchers introduced a groundbreaking silent speech interface (SSI) leveraging few-layer graphene (FLG) strain sensing technology and AI-based self-adaptation. Embedded into a biocompatible smart choker, the sensor achieved high accuracy and computational efficiency, revolutionizing communication in challenging environments.
In a recent article published in Sensors, researchers conducted a thorough review of motion capture technology (MCT) in sports, comparing and evaluating various systems including cinematography capture, electromagnetic capture, computer vision capture, and multimodal capture.
Researchers harness convolutional neural networks (CNNs) to recognize Shen embroidery, achieving 98.45% accuracy. By employing transfer learning and enhancing MobileNet V1 with spatial pyramid pooling, they provide crucial technical support for safeguarding this cultural art form.
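The spatial pyramid pooling (SPP) mentioned above converts a feature map of any spatial size into a fixed-length vector by max-pooling over grids at several scales and concatenating the results. A minimal numpy sketch of the idea (the pyramid levels here are illustrative assumptions, not the authors' exact configuration):

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map over a pyramid of grids and
    concatenate the results into one fixed-length vector.

    Output length is sum(n * n for n in levels) regardless of the
    input's spatial size -- the property SPP is used for.
    """
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Split rows and columns into n roughly equal bins.
        row_edges = np.linspace(0, h, n + 1).astype(int)
        col_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[row_edges[i]:row_edges[i + 1],
                                   col_edges[j]:col_edges[j + 1]]
                pooled.append(cell.max())
    return np.array(pooled)

# Feature maps of different sizes yield the same output length (1+4+16 = 21).
v1 = spatial_pyramid_pool(np.random.rand(13, 17))
v2 = spatial_pyramid_pool(np.random.rand(32, 32))
assert v1.shape == v2.shape == (21,)
```

This is what lets a classifier head with a fixed input width sit on top of a backbone fed with variable-size images.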
Researchers introduced a multi-stage progressive detection method utilizing a Swin transformer to accurately identify water deficit in vertical greenery plants. By integrating classification, semantic segmentation, and object detection, the approach significantly improved detection accuracy compared to established detectors such as R-CNN and YOLO, offering promising solutions for urban greenery management.
Researchers present a groundbreaking study on the crystallization kinetics of (Ba,Ra)SO4 solid solutions, vital in subsurface energy applications. Leveraging microfluidic experiments coupled with computer vision techniques, they unveil crystal growth rates and morphologies, overcoming challenges posed by radium's radioactivity.
Researchers introduced a deep convolutional neural network (DCNN) model for accurately detecting and classifying grape leaf diseases. Leveraging a dataset of grape leaf images, the DCNN model outperformed conventional CNN models, demonstrating superior accuracy and reliability in identifying black rot, ESCA, leaf blight, and healthy specimens.
Researchers integrated gradient quantization (GQ) into DenseNet architecture to improve image recognition (IR). By optimizing feature reuse and introducing GQ for parallel training, they achieved superior accuracy and accelerated training speed, overcoming communication bottlenecks.
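Gradient quantization reduces the communication cost of parallel training by compressing gradients before workers exchange them. One common scheme, shown here as a hedged numpy sketch rather than the paper's exact method, keeps only the sign of each entry plus a single per-tensor scale:

```python
import numpy as np

def quantize_grad(grad):
    """Compress a gradient tensor to 1 bit per element plus one float:
    keep only the signs, with the mean magnitude as a shared scale."""
    scale = np.abs(grad).mean()
    signs = np.sign(grad).astype(np.int8)   # ~1 bit of information per entry
    return signs, scale

def dequantize_grad(signs, scale):
    """Reconstruct an approximate gradient on the receiving worker."""
    return signs.astype(np.float64) * scale

rng = np.random.default_rng(0)
grad = rng.normal(size=1000)

signs, scale = quantize_grad(grad)
approx = dequantize_grad(signs, scale)

# Every entry's direction is preserved; magnitudes only on average,
# which is why such schemes trade some accuracy for bandwidth.
assert np.all(np.sign(approx) == np.sign(grad))
```

Sending an `int8` sign array plus one float in place of a `float64` tensor cuts the payload roughly eightfold, easing exactly the communication bottleneck the summary describes.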
Researchers introduced enhancements to the YOLOv5 algorithm for real-time safety helmet detection in industrial settings. Leveraging FasterNet, Wise-IoU loss function, and CBAM attention mechanism, the algorithm achieved higher precision and reduced computational complexity. Experimental results demonstrated superior performance compared to existing models, addressing critical safety concerns and paving the way for efficient safety management systems in construction environments.
Chinese researchers present YOLOv8-PG, a lightweight convolutional neural network tailored for accurate detection of real and fake pigeon eggs in challenging environments. By refining key model components and leveraging a novel loss function, YOLOv8-PG outperforms existing models in accuracy while maintaining efficiency, offering promising applications for automated egg collection in pigeon breeding.
The paper explores human action recognition (HAR) methods, emphasizing the transition to deep learning (DL) and computer vision (CV). It discusses the evolution of techniques, including the significance of large datasets and the emergence of HARNet, a DL architecture that merges recurrent and convolutional neural networks (CNNs).
Researchers explored the integration of artificial intelligence (AI) and machine learning (ML) in two-phase heat transfer research, focusing on boiling and condensation phenomena. AI was utilized for meta-analysis, physical feature extraction, and data stream analysis, offering new insights and solutions to predict multi-phase flow patterns. Interdisciplinary collaboration and sustainable cyberinfrastructures were emphasized for future advancements in thermal management systems and energy conversion devices.
Researchers from China introduce CDI-YOLO, an algorithm marrying coordination attention with YOLOv7-tiny for swift and precise PCB defect detection. With superior accuracy and a balance between parameters and speed, it promises efficient quality control in electronics and beyond.