Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, machines can accurately identify and classify objects, and then react to what they "see."
This paper explores advanced drowning prevention technologies that integrate embedded systems, artificial intelligence (AI), and the Internet of Things (IoT) to enhance real-time monitoring and response in swimming pools. By utilizing computer vision and deep learning for accurate situation identification and IoT for real-time alerts, these systems significantly improve rescue efficiency and reduce drowning incidents.
Researchers developed TeaPoseNet, a deep neural network for estimating tea leaf poses, focusing on the Yinghong No.9 variety. Trained on a custom dataset, TeaPoseNet improved pose recognition accuracy by 16.33% using a novel algorithm, enhancing tea leaf analysis.
A systematic tertiary study analyzed 57 secondary studies from 2018 to 2023 on using drone imagery for infrastructure management. The research identified key application areas, assessed trends, and highlighted challenges, providing a valuable reference for researchers and practitioners in the field.
Researchers developed a three-step computer vision framework using YOLOv8 and image processing techniques for efficient concrete crack detection and measurement. The method demonstrated high accuracy but faced challenges with small cracks, complex backgrounds, and pre-marked reference frames.
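The paper's exact measurement procedure isn't given here, but the quantification step can be illustrated with a minimal sketch: given a hypothetical binary crack mask (1 = crack pixel), estimate the crack's area, length, and mean width. Length is crudely approximated by the crack's vertical extent, which assumes a roughly vertical crack; a real pipeline would skeletonize the mask instead.

```python
# Illustrative crack measurement on a binary mask (assumed inputs,
# not the authors' method). Length is approximated by the number of
# image rows containing crack pixels; mean width = area / length.

def measure_crack(mask):
    """Return (area_px, length_px, mean_width_px) for a binary mask."""
    area = sum(sum(row) for row in mask)
    rows_with_crack = [row for row in mask if any(row)]
    length = len(rows_with_crack)          # rough vertical extent
    mean_width = area / length if length else 0.0
    return area, length, mean_width

# Toy 5x5 mask with a crack two pixels wide spanning four rows
mask = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(measure_crack(mask))  # → (8, 4, 2.0)
```

Pixel measurements like these would still need a reference scale to convert to millimeters, which is where the pre-marked reference frames mentioned above come in.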
An innovative AI-driven platform, HeinSight3.0, integrates computer vision to monitor and analyze liquid-liquid extraction (LLE) processes in real time. Utilizing machine learning for visual cues like liquid levels and turbidity, this system significantly optimizes LLE, paving the way for autonomous lab operations.
A scaleless monocular vision method accurately measures plant heights by converting color images to binary data. Achieving high precision within 2–3 meters and minimal error, this non-contact technique demonstrates potential for reliable plant height measurement under varied lighting conditions.
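The binary-image step can be sketched in a few lines. This is a hedged illustration, not the authors' method: a grayscale image (list of pixel rows) is thresholded so dark pixels count as plant, and the plant's pixel height is the span of foreground rows. The scaleless conversion from pixels to meters described in the paper requires the camera model and is out of scope here; the threshold value is an assumption.

```python
# Illustrative binarization and pixel-height extraction (assumed
# threshold; pixel-to-meter conversion not shown).

THRESHOLD = 128  # assumed cutoff: pixels darker than this are "plant"

def pixel_height(gray):
    """Height in pixels of the foreground region of a grayscale image."""
    rows = [i for i, row in enumerate(gray)
            if any(px < THRESHOLD for px in row)]
    return rows[-1] - rows[0] + 1 if rows else 0

img = [
    [255, 255, 255],
    [255,  40, 255],
    [255,  30, 255],
    [255,  35, 255],
    [255, 255, 255],
]
print(pixel_height(img))  # → 3
```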
A study in Computers & Graphics examined model compression methods for computer vision tasks, enabling AI techniques on resource-limited embedded systems. Researchers compared various techniques, including knowledge distillation and network pruning, highlighting their effectiveness in reducing model size and complexity while maintaining performance, crucial for applications like robotics and medical imaging.
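Magnitude-based network pruning, one of the compression techniques the study compares, can be shown with a small sketch. The weight vector and sparsity target below are made-up examples: the fraction of weights with the smallest absolute values is zeroed out, shrinking the effective model while keeping the largest (most influential) weights.

```python
# Illustrative magnitude pruning on a flat weight list (toy example).
# Note: ties at the cutoff magnitude may zero slightly more than the
# requested fraction.

def prune_by_magnitude(weights, sparsity):
    """Zero the fraction `sparsity` of weights with the smallest |w|."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= cutoff else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(prune_by_magnitude(w, 0.5))  # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In practice, pruned networks are usually fine-tuned afterwards to recover any lost accuracy.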
Researchers leverage AI to optimize the design, fabrication, and performance forecasting of diffractive optical elements (DOEs). This integration accelerates innovation in optical technology, enhancing applications in imaging, sensing, and telecommunications.
Researchers have developed an automated system using computer vision (CV) and a collaborative robot (cobot) to objectively assess the rehydration quality of infant formula by measuring foam height, sediment height, and white particles. The system's accuracy in estimating these attributes closely matched human ratings, offering a reliable alternative for quality control in powdered formula rehydration.
Researchers developed an automated system using computer vision and machine learning to detect early-stage lameness in sows. The system, trained on video data and evaluated by experts, accurately tracked key points on sows' bodies, providing a precision livestock farming tool to assess locomotion and enhance animal welfare.
Generative adversarial networks (GANs) have transformed generative modeling since 2014, with significant applications across various fields. Researchers reviewed GAN variants, architectures, validation metrics, and future directions, emphasizing their ongoing challenges and integration with emerging deep learning frameworks.
Researchers detailed the impact of computer vision in textile manufacturing, focusing on identifying fabric imperfections and measuring cotton composition. They introduced a dataset of 1300 fabric images, expanded to 27,300 through augmentation, covering cotton percentages from 30% to 99%. This dataset aids in training machine learning models, streamlining traditionally labor-intensive cotton content assessments, and enhancing automation in the textile industry.
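The paper does not specify which augmentations produced the 21-fold expansion; the sketch below is a generic illustration of image augmentation using flips and 90-degree rotations on a 2D image represented as a list of rows.

```python
# Illustrative augmentation set (flips and rotations) -- not the
# specific transforms used in the textile study. Duplicates are
# possible for symmetric images.

def rot90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original image plus flipped and rotated variants."""
    variants = [img,
                [row[::-1] for row in img],   # horizontal flip
                img[::-1]]                    # vertical flip
    r = img
    for _ in range(3):                        # 90, 180, 270 degrees
        r = rot90(r)
        variants.append(r)
    return variants

img = [[1, 2],
       [3, 4]]
print(len(augment(img)))  # → 6
```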
Published in Intelligent Systems with Applications, this study introduces SensorNet, a hybrid model combining deep learning (DL) with chemical sensor data to detect toxic additives, such as formaldehyde, in fruits. SensorNet integrates convolutional layers for image analysis and sensor data preprocessing, achieving a high accuracy of 97.03% in distinguishing fresh from chemically treated fruits.
Researchers introduced the Virtual Experience Toolkit (VET) in the journal Sensors, utilizing deep learning and computer vision for automated 3D scene virtualization in VR environments. VET employs advanced techniques like BundleFusion for reconstruction, semantic segmentation with O-CNN, and CAD retrieval via ScanNotate to enhance realism and immersion.
Researchers used AI models to analyze Flickr images from global protected areas, identifying cultural ecosystem services (CES) activities. Their study reveals distinct regional patterns and underscores the value of social media data for conservation management.
Researchers developed ORACLE, an advanced computer vision model utilizing YOLO architecture for automated bird detection and tracking from drone footage. Achieving a 91.89% mean average precision, ORACLE significantly enhances wildlife conservation by accurately identifying and monitoring avian species in dynamic environments.
Researchers developed a neural network (NN) architecture based on You Only Look Once (YOLO) to automate the detection, classification, and quantification of mussel larvae from microscopic water samples.
Researchers have developed a markerless computer vision method to measure aircraft model attitudes during dynamic wind tunnel testing, providing accurate estimations without altering aerodynamic properties. This novel technique, validated through simulations and real-world data, significantly improves accuracy over traditional methods and offers versatility for various aircraft designs and applications.
Researchers have developed a bridge inspection method using computer vision and augmented reality (AR) to enhance fatigue crack detection. This innovative approach utilizes AR headset videos and computer vision algorithms to detect cracks, displaying results as holograms for improved visualization and decision-making.
Researchers introduced EMULATE, a novel gaze data augmentation library based on physiological principles, to address the challenge of limited annotated medical data in eye movement AI analysis. This approach demonstrated significant improvements in model stability and generalization, offering a promising advancement for precision and reliability in medical applications.