Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, machines can accurately identify and classify objects and then react to what they "see."
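As a minimal sketch of that identify-and-classify pipeline, the snippet below runs a single placeholder image through a pretrained CNN from torchvision. The model choice (ResNet-18) and the image path are assumptions for illustration only, not drawn from any study summarized on this page.

```python
# Minimal sketch: classify one image with a pretrained CNN (assumption: torchvision's
# ResNet-18). "photo.jpg" is a placeholder path, not a file from any study listed here.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, crop, convert to tensor, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode: no dropout or batch-norm updates

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add batch dimension -> (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
    class_id = logits.argmax(dim=1).item()  # highest-scoring ImageNet class index
print(f"Predicted ImageNet class index: {class_id}")
```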
Researchers developed ORACLE, an advanced computer vision model utilizing YOLO architecture for automated bird detection and tracking from drone footage. Achieving a 91.89% mean average precision, ORACLE significantly enhances wildlife conservation by accurately identifying and monitoring avian species in dynamic environments.
Researchers developed a neural network (NN) architecture based on You Only Look Once (YOLO) to automate the detection, classification, and quantification of mussel larvae from microscopic water samples.
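Neither ORACLE nor the larvae-counting network is reproduced here, but both build on YOLO-family detectors. As a rough sketch of that style of inference, the snippet below runs a generic pretrained model from the ultralytics package on a placeholder frame; the weights file, image path, and package choice are assumptions, not the authors' code.

```python
# Minimal sketch of YOLO-style object detection (assumption: the ultralytics package
# and its generic pretrained "yolov8n.pt" weights, not the ORACLE or larvae models).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # small general-purpose detector pretrained on COCO
results = model("frame.jpg")   # "frame.jpg" is a placeholder drone/microscope frame

for result in results:
    for box in result.boxes:
        cls_id = int(box.cls[0])                 # predicted class index
        conf = float(box.conf[0])                # detection confidence
        x1, y1, x2, y2 = box.xyxy[0].tolist()    # bounding-box corners in pixels
        print(f"{model.names[cls_id]}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```

Tracking, as in ORACLE, would typically chain such per-frame detections through an association step (the same package exposes this as `model.track(...)`), but that is beyond this sketch.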
Researchers have developed a markerless computer vision method to measure aircraft model attitudes during dynamic wind tunnel testing, providing accurate estimations without altering aerodynamic properties. This novel technique, validated through simulations and real-world data, significantly improves accuracy over traditional methods and offers versatility for various aircraft designs and applications.
Researchers have developed a bridge inspection method using computer vision and augmented reality (AR) to enhance fatigue crack detection. This innovative approach utilizes AR headset videos and computer vision algorithms to detect cracks, displaying results as holograms for improved visualization and decision-making.
Researchers introduced EMULATE, a novel gaze data augmentation library based on physiological principles, to address the challenge of limited annotated medical data in eye movement AI analysis. This approach demonstrated significant improvements in model stability and generalization, offering a promising advancement for precision and reliability in medical applications.
Researchers compared traditional feature-based computer vision methods with CNN-based deep learning for weed classification in precision farming, emphasizing the former's effectiveness with smaller datasets.
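The "traditional" side of that comparison typically means hand-crafted features fed to a classical classifier. The sketch below pairs HOG features with a linear SVM; the scikit-learn digits dataset stands in for weed imagery and is an assumption purely for illustration.

```python
# Minimal sketch of a feature-based pipeline: HOG features + linear SVM.
# The scikit-learn digits dataset stands in for weed images for illustration only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skimage.feature import hog

digits = load_digits()  # 8x8 grayscale images

# Hand-crafted features: a histogram of oriented gradients per image.
features = [
    hog(img, orientations=8, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
]

X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.2, random_state=0
)

clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.3f}")
```

A CNN-based counterpart would replace the hand-crafted HOG step with convolutional features learned directly from the images, which is why it generally needs more labeled data.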
Researchers have introduced the human behavior detection dataset (HBDset) for computer vision applications in emergency evacuations, focusing on vulnerable groups like the elderly and disabled.
Researchers developed an automated system utilizing UAVs and deep learning to monitor and maintain remote gravel runways in Northern Canada. This system accurately detects defects and evaluates runway smoothness, proving more effective and reliable than traditional manual methods in harsh and isolated environments.
A systematic review in the journal Sensors analyzed 77 studies on facial and pose emotion recognition using deep learning, highlighting methods like CNNs and Vision Transformers. The review examined trends, datasets, and applications, providing insights into state-of-the-art techniques and their effectiveness in psychology, healthcare, and entertainment.
A comprehensive review highlights the evolution of object-tracking methods, sensors, and datasets in computer vision, guiding developers in selecting optimal tools for diverse applications.
Researchers have developed AI-based computer vision systems to identify growth-stunted salmon, with YOLOv7 achieving the highest accuracy. This technology offers efficient and reliable monitoring, improving fish welfare and production in aquaculture.
Researchers from China have integrated computer vision (CV) and LiDAR technologies to improve the safety and efficiency of autonomous navigation in port channels. This innovative approach utilizes advanced path-planning and collision prediction algorithms to create a comprehensive perception of the port environment, significantly enhancing navigation safety and reducing collision risks.
Researchers demonstrated a novel approach to structural health monitoring (SHM) in seismic contexts, combining self-sensing concrete beams, vision-based crack assessment, and AI-based prediction models. The study showed that electrical impedance measurements and the AI-based Prophet model significantly improved the accuracy of load and crack predictions, offering a robust solution for real-time SHM and early warning systems.
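The Prophet component of that study is a time-series forecaster. As a hedged sketch of the fit-and-forecast mechanics, the snippet below trains Prophet on a synthetic series of hourly readings; the data is an assumption and does not represent the study's impedance or crack measurements.

```python
# Minimal sketch of forecasting with the Prophet library (assumption: synthetic
# hourly readings, not the study's actual sensor data).
import numpy as np
import pandas as pd
from prophet import Prophet

# Hypothetical series: 30 days of hourly readings with a daily cycle plus noise.
timestamps = pd.date_range("2024-01-01", periods=24 * 30, freq="h")
values = (
    100
    + 5 * np.sin(np.arange(len(timestamps)) * 2 * np.pi / 24)
    + np.random.normal(0, 0.5, len(timestamps))
)
df = pd.DataFrame({"ds": timestamps, "y": values})  # Prophet's required column names

model = Prophet()
model.fit(df)

future = model.make_future_dataframe(periods=48, freq="h")  # forecast two days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```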
Researchers introduced a groundbreaking silent speech interface (SSI) leveraging few-layer graphene (FLG) strain sensing technology and AI-based self-adaptation. Embedded into a biocompatible smart choker, the sensor achieved high accuracy and computational efficiency, revolutionizing communication in challenging environments.
Researchers introduced a novel deep learning approach based on faster R-CNN for segmenting vehicles in traffic videos, addressing challenges like occlusions and varying traffic densities. Through adaptive background modeling and topological active nets, the method achieved superior segmentation accuracy, showcasing its potential to enhance real-world traffic surveillance and management systems.
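The paper's adaptive background modeling and topological active nets are not reproduced here; as a baseline sketch of the Faster R-CNN step, the snippet below runs torchvision's pretrained detector on a placeholder traffic frame. The image path and the confidence threshold are assumptions.

```python
# Minimal sketch of Faster R-CNN inference with torchvision (a generic baseline,
# not the paper's adaptive-background / topological-active-net method).
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

image = read_image("traffic.jpg")       # "traffic.jpg" is a placeholder camera frame
batch = [weights.transforms()(image)]   # the model expects a list of 3xHxW tensors

with torch.no_grad():
    detections = model(batch)[0]

# Keep confident detections and report their COCO category names.
for label, score, box in zip(detections["labels"], detections["scores"], detections["boxes"]):
    if score > 0.7:
        name = weights.meta["categories"][int(label)]
        print(f"{name}: {score:.2f} at {box.tolist()}")
```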
In a recent article published in Sensors, researchers conducted a thorough review of motion capture technology (MCT) in sports, comparing and evaluating various systems including cinematography capture, electromagnetic capture, computer vision capture, and multimodal capture.
Researchers harness convolutional neural networks (CNNs) to recognize Shen embroidery, achieving 98.45% accuracy. By employing transfer learning and enhancing MobileNet V1 with spatial pyramid pooling, they provide crucial technical support for safeguarding this cultural art form.
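As a hedged sketch of the transfer-learning side of that pipeline, the snippet below freezes a MobileNet (V1) backbone pretrained on ImageNet and attaches a new classification head in Keras. The class count and input shape are assumptions, and the paper's spatial pyramid pooling module is not reproduced.

```python
# Minimal transfer-learning sketch: frozen MobileNet (V1) backbone + new head in Keras.
# The class count (5) is a placeholder; the paper's spatial pyramid pooling is omitted.
import tensorflow as tf

NUM_CLASSES = 5  # placeholder number of embroidery categories

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # keep ImageNet features fixed; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# Training would then call model.fit(train_ds, validation_data=val_ds, epochs=...)
```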
Researchers introduced a multi-stage progressive detection method utilizing a Swin transformer to accurately identify water deficit in vertical greenery plants. By integrating classification, semantic segmentation, and object detection, the approach significantly improved detection accuracy compared to traditional methods like R-CNN and YOLO, offering promising solutions for urban greenery management.
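The full multi-stage pipeline is not public here; as a sketch of the classification stage only, the snippet below runs a pretrained Swin Transformer from torchvision on a placeholder leaf image. The model variant (swin_t) and image path are assumptions, and the segmentation and detection stages are not shown.

```python
# Minimal sketch of the classification stage with a pretrained Swin Transformer
# (torchvision's swin_t), not the paper's full multi-stage method.
import torch
from torchvision.io import read_image
from torchvision.models import swin_t, Swin_T_Weights

weights = Swin_T_Weights.DEFAULT
model = swin_t(weights=weights).eval()

image = read_image("leaf.jpg")                      # placeholder plant image
batch = weights.transforms()(image).unsqueeze(0)    # preprocess and add batch dim

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
    top = probs.argmax(dim=1).item()
print(f"Top class: {weights.meta['categories'][top]} ({probs[0, top].item():.2f})")
```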
A recent study in Scientific Reports presents a novel framework for assessing urban heat exposure using Smart City Digital Twins (SCDT). By integrating meteorological sensors, computer vision, and predictive models, researchers demonstrated the effectiveness of SCDT in monitoring and forecasting heat stress, offering potential solutions to mitigate the impact of heat exposure in urban environments.
Researchers present a groundbreaking study on the crystallization kinetics of (Ba,Ra)SO4 solid solutions, vital in subsurface energy applications. Leveraging microfluidic experiments coupled with computer vision techniques, they unveil crystal growth rates and morphologies, overcoming challenges posed by radium's radioactivity.
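Coupling microfluidic imaging with computer vision typically reduces to segmenting crystals in each frame and tracking their area over time. The snippet below is a minimal sketch of that idea using OpenCV thresholding and contours; the frame filenames, the assumption that crystals appear brighter than the background, and the use of Otsu's threshold are all placeholders, not the authors' workflow.

```python
# Minimal sketch: per-frame crystal area via thresholding + contours in OpenCV.
# Filenames are placeholders; crystals are assumed brighter than the background
# (use THRESH_BINARY_INV otherwise).
import cv2

frame_paths = ["frame_000.png", "frame_001.png", "frame_002.png"]  # placeholder frames

for path in frame_paths:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue  # skip missing files

    # Otsu's method picks a global threshold separating crystals from background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    total_area_px = sum(cv2.contourArea(c) for c in contours)
    print(f"{path}: {len(contours)} regions, total area {total_area_px:.0f} px^2")
```

A growth rate would then follow from the change in segmented area between frames divided by the frame interval, with pixel areas converted to physical units via the microscope calibration.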