AI is employed in object detection to identify and locate objects within images or video. It utilizes deep learning techniques, such as convolutional neural networks (CNNs), to analyze visual data, detect objects of interest, and provide bounding box coordinates, enabling applications like autonomous driving, surveillance, and image recognition.
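As a minimal illustration of that workflow (not any of the specific models discussed below), the following sketch runs a pretrained torchvision Faster R-CNN and prints bounding boxes, class labels, and confidence scores; the image file name is a placeholder:

```python
# Minimal sketch of CNN-based object detection with a pretrained model.
# Uses torchvision's Faster R-CNN; "street_scene.jpg" is a placeholder.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("street_scene.jpg")            # placeholder input image
with torch.no_grad():
    preds = model([preprocess(img)])[0]          # boxes, labels, scores

for box, label, score in zip(preds["boxes"], preds["labels"], preds["scores"]):
    if score > 0.5:                              # keep confident detections only
        print(weights.meta["categories"][int(label)], box.tolist(), float(score))
```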
Researchers employed AI and computer vision techniques to improve pedestrian monitoring in crowded train stations. Utilizing YOLOv7 for object detection and AlphaPose for activity recognition, the study successfully tracked passenger movements and activities, providing critical insights for enhancing station safety and efficiency.
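The study's exact YOLOv7-plus-AlphaPose pipeline is not reproduced here; as a rough stand-in for its detection-and-tracking stage, the sketch below uses the ultralytics package's YOLOv8 detector with its built-in ByteTrack tracker, restricted to the person class, with a placeholder video path:

```python
# Hedged stand-in for the detection-and-tracking stage of a pedestrian
# monitoring pipeline (YOLOv8 + ByteTrack here instead of YOLOv7 + AlphaPose).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # small pretrained detector

# "station.mp4" is a placeholder video; classes=[0] keeps only people.
results = model.track(source="station.mp4", classes=[0],
                      tracker="bytetrack.yaml", stream=True)

for frame in results:
    for box in frame.boxes:
        track_id = int(box.id) if box.id is not None else -1
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"person {track_id}: ({x1:.0f}, {y1:.0f}) - ({x2:.0f}, {y2:.0f})")
```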
Researchers introduced Clus, a novel clustering swap prediction strategy for learning an image-text embedding space, which leverages distillation learning to achieve state-of-the-art performance in tasks like image-text retrieval and visual question answering.
Researchers have developed AI-based computer vision systems to identify growth-stunted salmon, with YOLOv7 achieving the highest accuracy. This technology offers efficient and reliable monitoring, improving fish welfare and production in aquaculture.
This study proposes an innovative method for detecting cracks in train rivets using fluorescent magnetic particle flaw detection (FMPFD) and instance segmentation, achieving high accuracy and recall. By enhancing the YOLOv5 algorithm and developing a single-coil non-contact magnetization device, the researchers achieved significant improvements in crack detection.
Researchers introduced a multi-stage progressive detection method utilizing a Swin transformer to accurately identify water deficit in vertical greenery plants. By integrating classification, semantic segmentation, and object detection, the approach significantly improved detection accuracy compared to traditional methods like R-CNN and YOLO, offering promising solutions for urban greenery management.
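As a hedged sketch of what the first (classification) stage of such a multi-stage pipeline might look like, the snippet below loads a pretrained Swin transformer from torchvision and replaces its head for a binary water-deficit decision; the class count and frozen-weights choice are illustrative assumptions, not the paper's configuration:

```python
# Hedged sketch of a classification stage built on a pretrained Swin transformer.
import torch.nn as nn
from torchvision.models import swin_t, Swin_T_Weights

backbone = swin_t(weights=Swin_T_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                   # reuse pretrained features
backbone.head = nn.Linear(backbone.head.in_features, 2)  # deficit vs. healthy
```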
Researchers developed a real-time underwater video processing system leveraging object detection models and edge computing to count Nephrops in demersal trawl fisheries. Through meticulous experimentation, optimal configurations balancing processing speed and accuracy were identified, highlighting the potential for enhanced sustainability through informed catch monitoring.
Researchers introduce a novel method for edge detection in color images by integrating Support Vector Machine (SVM) with Social Spider Optimization (SSO) algorithms. The two-stage approach demonstrates superior accuracy and quality compared to existing methods, offering potential applications in various domains such as object detection and medical image analysis.
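The paper's exact feature set and Social Spider Optimization tuning are not detailed here; the sketch below shows only the SVM stage, classifying pixels as edge or non-edge from simple gradient features, with an ordinary grid search standing in for SSO and placeholder input files:

```python
# Sketch of an SVM-based edge classifier on per-pixel gradient features.
# A plain grid search stands in for the paper's Social Spider Optimization,
# and the training labels are assumed to come from a hand-annotated edge map.
import numpy as np
from scipy import ndimage
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def pixel_features(gray):
    """Per-pixel features: Sobel gradients, gradient magnitude, local variance."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    mag = np.hypot(gx, gy)
    var = ndimage.generic_filter(gray, np.var, size=3)
    return np.stack([gx, gy, mag, var], axis=-1).reshape(-1, 4)

gray = np.load("image_gray.npy")          # placeholder grayscale image
labels = np.load("edge_map.npy").ravel()  # placeholder 0/1 edge annotations

X, y = pixel_features(gray), labels
search = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10], "gamma": ["scale", 0.1]})
search.fit(X[::50], y[::50])              # subsample pixels to keep fitting cheap

edge_pred = search.predict(X).reshape(gray.shape)
```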
Researchers introduced SCB-YOLOv5, integrating ShuffleNet V2 and convolutional block attention modules (CBAM) into YOLOv5 for detecting standardized gymnast movements. SCB-YOLOv5 showed enhanced precision, recall, and mean average precision (mAP), making it effective for on-site athlete action detection. Extensive experiments validated its effectiveness, highlighting its potential for practical sports education in resource-limited settings.
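CBAM is a published, general-purpose module; one common PyTorch formulation is sketched below (channel attention followed by spatial attention), though the exact variant and hyperparameters used in SCB-YOLOv5 may differ:

```python
# One common formulation of the Convolutional Block Attention Module (CBAM),
# the kind of block typically inserted into YOLOv5-style backbones.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))        # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))         # global max pooling
        return x * torch.sigmoid(avg + mx)[..., None, None]

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channel, self.spatial = ChannelAttention(channels), SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

cbam = CBAM(64)
print(cbam(torch.randn(1, 64, 32, 32)).shape)     # torch.Size([1, 64, 32, 32])
```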
Chinese researchers present YOLOv8-PG, a lightweight convolutional neural network tailored for accurate detection of real and fake pigeon eggs in challenging environments. By refining key model components and leveraging a novel loss function, YOLOv8-PG outperforms existing models in accuracy while maintaining efficiency, offering promising applications for automated egg collection in pigeon breeding.
This study in Nature explores the application of convolutional neural networks (CNNs) in classifying infrared (IR) images for concealed object detection in security scanning. Leveraging a ResNet-50 model and transfer learning, the researchers refined pre-processing techniques such as k-means and fuzzy c-means clustering to improve classification accuracy.
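A hedged sketch of the two ingredients mentioned, k-means pre-processing and ResNet-50 transfer learning, is shown below; the cluster count, class count, and frozen-backbone choice are illustrative assumptions rather than the paper's settings:

```python
# Sketch: k-means-based pre-processing of an IR image plus transfer learning
# on a pretrained ResNet-50 with only the classifier head replaced.
import numpy as np
import torch.nn as nn
from sklearn.cluster import KMeans
from torchvision.models import resnet50, ResNet50_Weights

def kmeans_segment(ir_image, n_clusters=3):
    """Quantize pixel intensities into clusters to separate object from background."""
    pixels = ir_image.reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    return labels.reshape(ir_image.shape)

# Transfer learning: reuse ImageNet features, retrain only the classifier head.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # e.g. concealed object vs. none
```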
Researchers explored the integration of artificial intelligence (AI) and machine learning (ML) in two-phase heat transfer research, focusing on boiling and condensation phenomena. AI was utilized for meta-analysis, physical feature extraction, and data stream analysis, offering new insights and solutions to predict multi-phase flow patterns. Interdisciplinary collaboration and sustainable cyberinfrastructures were emphasized for future advancements in thermal management systems and energy conversion devices.
Researchers from China proposed an innovative method to improve the accuracy of detecting small targets in aerial images captured by unmanned aerial vehicles (UAVs). By introducing a multi-scale detection network that combines different feature information levels, the study aimed to enhance detection accuracy while reducing interference from image backgrounds.
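One widely used way to combine feature levels of different scales is a feature pyramid network; the sketch below uses torchvision's FeaturePyramidNetwork purely as an illustration of the idea, with placeholder channel sizes and resolutions, not the paper's architecture:

```python
# Illustrative multi-scale feature fusion with torchvision's Feature Pyramid
# Network, in the spirit of combining feature levels for small-object detection.
from collections import OrderedDict
import torch
from torchvision.ops import FeaturePyramidNetwork

fpn = FeaturePyramidNetwork(in_channels_list=[64, 128, 256], out_channels=128)

features = OrderedDict([
    ("p3", torch.randn(1, 64, 80, 80)),    # high-resolution, fine spatial detail
    ("p4", torch.randn(1, 128, 40, 40)),
    ("p5", torch.randn(1, 256, 20, 20)),   # low-resolution, strong semantics
])

fused = fpn(features)
for name, fmap in fused.items():
    print(name, tuple(fmap.shape))          # all levels now share 128 channels
```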
Researchers from China introduce CDI-YOLO, an algorithm marrying coordinate attention with YOLOv7-tiny for swift and precise PCB defect detection. With superior accuracy and a balance between parameters and speed, it promises efficient quality control in electronics and beyond.
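Coordinate attention is a published module (Hou et al., CVPR 2021); one formulation is sketched below as an illustration, with an illustrative reduction ratio, and may differ in detail from the variant used in CDI-YOLO:

```python
# One formulation of coordinate attention: pool along height and width
# separately, encode both axes jointly, then re-weight the feature map.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.shared = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(),
        )
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.shared(torch.cat([pool_h, pool_w], dim=2))        # encode both axes
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.attn_h(y_h))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.attn_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w

ca = CoordinateAttention(64)
print(ca(torch.randn(1, 64, 40, 40)).shape)                        # (1, 64, 40, 40)
```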
Researchers in a Scientific Reports paper propose BiFEL-YOLOv5s, an advanced deep learning model, for real-time safety helmet detection in construction settings. By integrating innovative techniques like BiFPN, Focal-EIoU Loss, and Soft-NMS, the model achieves superior accuracy and recall rates while maintaining detection speed, offering a robust solution for safety monitoring in complex work environments.
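Soft-NMS is a known post-processing step; a minimal NumPy sketch of its Gaussian variant is given below, with illustrative thresholds rather than the paper's settings:

```python
# Minimal NumPy sketch of Soft-NMS (Gaussian variant): instead of discarding
# overlapping boxes outright, their scores are decayed in proportion to overlap.
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while len(boxes):
        i = scores.argmax()
        keep.append((boxes[i], scores[i]))
        box = boxes[i]
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        scores *= np.exp(-iou(box, boxes) ** 2 / sigma)   # Gaussian score decay
        mask = scores > score_thresh
        boxes, scores = boxes[mask], scores[mask]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
print(soft_nms(boxes, np.array([0.9, 0.8, 0.7])))
```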
Researchers introduce SceneScript, a novel method harnessing language commands to reconstruct 3D scenes, bypassing traditional mesh or voxel-based approaches. SceneScript demonstrates state-of-the-art performance in architectural layout estimation and 3D object detection, offering promising applications in virtual reality, augmented reality, robotics, and computer-aided design.
Researchers delve into the realm of object detection, comparing the performance of deep neural networks (DNNs) to human observers under simulated peripheral vision conditions. Through meticulous experimentation and dataset creation, they unveil insights into the nuances of machine and human perception, paving the way for improved alignment and applications in computer vision and artificial intelligence.
Researchers delve into the evolving landscape of crop-yield prediction, leveraging remote sensing and visible light image processing technologies. By dissecting methodologies, technical nuances, and AI-driven solutions, the article illuminates pathways to precision agriculture, aiming to optimize yield estimation and revolutionize agricultural practices.
This research presents a defect detection system leveraging an upgraded YOLOv4 model, augmented with DBSCAN clustering and ECA-DenseNet-BC-121 features. With high accuracy and real-time performance, it promises substantial gains in industrial surveillance.
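How DBSCAN is wired into the paper's pipeline is not spelled out here; as an illustration of the clustering step alone, the sketch below groups placeholder defect centres with scikit-learn's DBSCAN:

```python
# Illustrative use of DBSCAN to group detected defect centres into clusters;
# the centres are dummy values standing in for detector outputs.
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder (x, y) centres of detected defects, in pixels.
centres = np.array([[101, 98], [105, 103], [99, 95], [400, 410], [402, 415]])

clusters = DBSCAN(eps=15, min_samples=2).fit_predict(centres)
for label in np.unique(clusters):
    members = centres[clusters == label]
    print(f"cluster {label}: {len(members)} defects, centroid {members.mean(axis=0)}")
```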
Researchers from South China Agricultural University introduce a cutting-edge computer vision algorithm, blending YOLOv5s and StyleGAN, to improve the detection of sandalwood trees using UAV remote sensing data. Addressing the challenges of complex planting environments, this innovative technique achieves remarkable accuracy, revolutionizing sandalwood plantation monitoring and advancing precision agriculture.
Researchers propose RoboEXP, a novel robotic exploration system, utilizing interactive scene exploration to autonomously navigate and interact in dynamic environments. Through perception, memory, decision-making, and action modules, RoboEXP constructs action-conditioned scene graphs (ACSG), demonstrating superior performance in real-world scenarios and facilitating downstream manipulation tasks with diverse objects.