An autonomous vehicle, also known as a self-driving car, is a vehicle capable of sensing its environment and operating without human involvement. It uses a variety of sensors, such as cameras, lidar, and radar, together with AI and machine-learning algorithms to perceive its surroundings, make decisions, and navigate roads safely.
Researchers from the University of Birmingham unveil a novel 3D edge detection technique using unsupervised learning and clustering. This method, offering automatic parameter selection, competitive performance, and robustness, proves invaluable across diverse applications, including robotics, augmented reality, medical imaging, automotive safety, architecture, and manufacturing, marking a significant leap in computer vision capabilities.
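The paper's exact pipeline is not reproduced here, but the core idea, scoring each point by how non-planar its local neighborhood is and then letting a clustering step pick the edge/non-edge split automatically, can be sketched as follows. The synthetic point cloud, neighborhood size, and 1-D 2-means scheme below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def surface_variation(points, k=10):
    """Score each point by the smallest-eigenvalue share of its
    k-nearest-neighbor covariance: near 0 on flat patches, larger
    where the local surface bends (edges and creases)."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    scores = np.empty(len(points))
    for i, row in enumerate(dists):
        nbrs = points[np.argsort(row)[:k + 1]]    # the point plus k neighbors
        eig = np.linalg.eigvalsh(np.cov(nbrs.T))  # eigenvalues, ascending
        scores[i] = eig[0] / eig.sum()
    return scores

def detect_edges(points, k=10, iters=50):
    """Split points into edge / non-edge by 1-D 2-means clustering of the
    scores, so no manual threshold has to be chosen."""
    s = surface_variation(points, k)
    lo, hi = s.min(), s.max()                     # initial cluster centers
    for _ in range(iters):
        is_edge = np.abs(s - hi) < np.abs(s - lo)
        lo, hi = s[~is_edge].mean(), s[is_edge].mean()
    return is_edge

# Demo: two perpendicular sheets meeting along a crease at x = 1, z = 0.
t = np.linspace(0.0, 1.0, 15)
sheet = np.array([[x, y, 0.0] for x in t for y in t])
wall = np.array([[1.0, y, z] for z in t[1:] for y in t])
pts = np.vstack([sheet, wall])
edge_mask = detect_edges(pts)
```

On this toy cloud, the crease points receive the highest variation scores and land in the edge cluster without any hand-tuned cutoff.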
Researchers question the notion of artificial intelligence (AI) surpassing human thought. The study critiques Max Tegmark's definition of intelligence, highlighting differences in understanding, the implementation of goals, and the crucial role of creativity. The discussion extends to philosophical implications, emphasizing the overlooked aspects of the body, brain lateralization, and the vital role of glial cells, ultimately contending that the richness and complexity of human thought remain beyond current AI capabilities.
Researchers introduce a novel multi-task learning approach for recognizing low-resolution text in logistics, addressing challenges in the rapidly growing e-commerce sector. The proposed model, incorporating a super-resolution branch and attention-based decoding, outperforms existing methods, offering substantial accuracy improvements for handling distorted, low-resolution Chinese text.
Researchers introduce Swin-APT, a deep learning-based model for semantic segmentation and object detection in Intelligent Transportation Systems (ITSs). The model, incorporating a Swin-Transformer-based lightweight network and a multiscale adapter network, demonstrates superior performance in road segmentation and marking detection tasks, outperforming existing models on various datasets, including achieving a remarkable 91.2% mIoU on the BDD100K dataset.
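For context, the mIoU figure quoted above is the mean, over classes, of intersection-over-union between predicted and ground-truth segmentation masks. A minimal reference implementation of the metric (not taken from the Swin-APT code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """mIoU: average intersection-over-union across classes, computed
    from flattened label maps; classes absent from both maps are skipped."""
    pred, gt = np.ravel(pred), np.ravel(gt)
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with ground truth `[0, 0, 1, 1]` and prediction `[0, 1, 1, 1]`, class 0 scores 1/2 and class 1 scores 2/3, giving an mIoU of 7/12.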
This study introduces a sophisticated pedestrian detection algorithm enhancing the lightweight YOLOv5 model for autonomous vehicles. Integrating large-kernel attention mechanisms, lightweight coordinate attention, and adaptive loss tuning, the algorithm tackles challenges such as occlusion and positioning inaccuracies. Experimental results show a noticeable accuracy boost, especially for partially occluded pedestrians, offering promising advancements for safer interactions between vehicles and pedestrians in complex urban environments.
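The paper's specific loss is not given here; one widely used form of adaptive loss tuning for hard cases such as occluded pedestrians is focal-style down-weighting of easy examples, sketched below as a generic illustration rather than the study's own formulation:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=1.0):
    """Binary focal loss: scales cross-entropy by (1 - p_t)**gamma so
    confident, easy detections contribute little and training focuses on
    hard cases (e.g. partially occluded pedestrians). gamma=0 recovers
    plain cross-entropy."""
    p_t = np.where(y == 1, p, 1.0 - p)   # model's probability for the true class
    return float(-alpha * (1.0 - p_t) ** gamma * np.log(p_t))
```

With `gamma=2`, an easy detection at 90% confidence contributes only 1% of its plain cross-entropy value, leaving the gradient dominated by hard examples.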
Researchers propose PGL, a groundbreaking framework for autonomous and programmable graph representation learning in heterogeneous computing systems. Focused on optimizing program execution, especially in applications such as autonomous vehicles and machine vision, PGL leverages machine learning to dynamically map software computations onto CPUs and GPUs.
This paper introduces FollowNet, a pioneering initiative addressing challenges in modeling car-following behavior. With a unified benchmark dataset consolidating over 80K car-following events from diverse public driving datasets, FollowNet sets a standard for evaluating and comparing car-following models, overcoming format inconsistencies in existing datasets.
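Among the classical models a benchmark like FollowNet evaluates is the Intelligent Driver Model (IDM). A minimal IDM simulation is sketched below; the parameter values are illustrative defaults, not anything fitted to FollowNet data:

```python
import numpy as np

def idm_accel(v, v_lead, gap, v0=15.0, T=1.5, s0=2.0, a=1.0, b=1.5, delta=4):
    """Intelligent Driver Model acceleration. v0: desired speed, T: time
    headway, s0: jam gap, a / b: max acceleration / comfortable braking."""
    s_star = s0 + v * T + v * (v - v_lead) / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

def simulate(v_lead=10.0, gap=50.0, v=0.0, dt=0.1, steps=2000):
    """Follower closes in on a constant-speed leader and settles at an
    equilibrium speed and gap (simple Euler integration)."""
    for _ in range(steps):
        acc = idm_accel(v, v_lead, gap)
        v = max(0.0, v + acc * dt)
        gap += (v_lead - v) * dt      # gap shrinks while follower is faster
    return v, gap

final_v, final_gap = simulate()
```

With these parameters the follower settles at the leader's 10 m/s and a gap of roughly 19 m, the equilibrium implied by the model's headway terms.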
This study delves into customer preferences for automated parcel delivery modes, including autonomous vehicles, drones, sidewalk robots, and bipedal robots, in the context of last-mile logistics. Using an Integrated Nested Choice and Correlated Latent Variable model, the research reveals that cost and time performance significantly influence the acceptability of technology, with a growing willingness to explore novel delivery automation when cost and time align.
Researchers explored the influence of stingy bots in improving human welfare within experimental sharing networks. They conducted online experiments involving artificial agents with varying allocation behaviors, finding that stingy bots, when strategically placed, could enhance collective welfare by enabling reciprocal exchanges between individuals.
Researchers explored the application of distributed learning, particularly Federated Learning (FL), for Internet of Things (IoT) services in the context of emerging 6G networks. They discussed the advantages and challenges of distributed learning in IoT domains, emphasizing its potential for enhancing IoT services while addressing privacy concerns and the need for ongoing research in areas such as security and communication efficiency.
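The canonical FL aggregation rule, FedAvg, averages locally trained models weighted by client sample counts. A toy sketch on synthetic linear-regression clients follows; the data, learning rate, and round counts are illustrative assumptions, not values from the survey:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=10):
    """One client's work: full-batch gradient descent on its local MSE."""
    for _ in range(epochs):
        w = w - lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg(clients, rounds=20, dim=2):
    """FedAvg server loop: broadcast the global model, collect locally
    trained copies, and average them weighted by client sample counts.
    Raw data never leaves the clients; only model weights travel."""
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_update(w.copy(), X, y) for X, y in clients]
        w = sum(len(y) / total * u for (_, y), u in zip(clients, updates))
    return w

# Demo: three clients whose local data share the model y = x @ [2, -3].
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
clients = [(X, X @ true_w) for X in (rng.normal(size=(50, 2)) for _ in range(3))]
w_global = fed_avg(clients)
```

Because every client here draws from the same underlying model, the averaged weights recover it; heterogeneous (non-IID) client data is precisely where the open research questions mentioned above arise.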
This study introduces a novel approach to autonomous vehicle navigation by leveraging machine vision, machine learning, and artificial intelligence. The research demonstrates that vehicles can navigate unmarked roads using economical webcam-based sensing systems and deep learning, offering practical insights into enhancing autonomous driving in real-world scenarios.
This research presents a novel bi-level programming model aimed at improving Transit Signal Priority (TSP) systems to reduce delays for private vehicles. By considering both public and private transportation, utilizing a game theory approach and genetic algorithms, the study offers a comprehensive solution for optimizing urban traffic flow.
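The study's bi-level formulation is not reproduced here, but its genetic-algorithm ingredient can be illustrated on a toy lower level: splitting a fixed cycle's green time between two phases to minimize a stand-in delay function. The demands, delay model, and GA settings below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(42)
CYCLE = 60.0              # signal cycle length [s]
DEMANDS = (600.0, 300.0)  # per-phase demand [veh/h], toy values

def total_delay(g1):
    """Stand-in objective: each phase's delay grows as its green time
    shrinks (a placeholder for the paper's traffic model)."""
    return DEMANDS[0] / g1 + DEMANDS[1] / (CYCLE - g1)

def evolve(pop_size=30, gens=60, sigma=2.0):
    """Tiny real-coded GA: binary-tournament selection, midpoint
    crossover, Gaussian mutation, searching over phase-1 green time."""
    pop = rng.uniform(5.0, 55.0, pop_size)
    for _ in range(gens):
        children = []
        for _ in range(pop_size):
            parents = []
            for _ in range(2):
                a, b = rng.choice(pop, 2)
                parents.append(a if total_delay(a) < total_delay(b) else b)
            child = 0.5 * sum(parents) + rng.normal(0.0, sigma)
            children.append(float(np.clip(child, 5.0, 55.0)))
        pop = np.array(children)
    return min(pop, key=total_delay)

best_green = evolve()  # analytic optimum of this toy objective is ~35.1 s
```

For this convex toy objective the optimum can be checked by hand (green times proportional to the square roots of the demands), which makes it a convenient sanity test for the GA machinery.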
Researchers present MGB-YOLO, an advanced deep learning model designed for real-time road manhole cover detection. Through a combination of MobileNet-V3, GAM, and BottleneckCSP, this model offers superior precision and computational efficiency compared to existing methods, with promising applications in traffic safety and infrastructure maintenance.
Researchers have developed a groundbreaking framework for training privacy-preserving models that anonymize license plates and faces captured on fisheye camera images used in autonomous vehicles. This innovation addresses growing data privacy concerns and ensures compliance with data protection regulations while improving the adaptability of models for fisheye data.
Researchers introduce NUMERLA, an algorithm that combines meta-reinforcement learning and symbolic logic-based constraints to enable real-time policy adjustments for self-driving cars while maintaining safety. Experiments in simulated urban driving scenarios demonstrate NUMERLA's ability to handle varying traffic conditions and unpredictable pedestrians, highlighting its potential to enhance the development of safe and adaptable autonomous vehicles.
Researchers present an AI-driven solution for autonomous cars, leveraging neural networks and computer vision algorithms to achieve successful autonomous driving in a simulated environment and a real-world competition, marking a significant step toward safer and more efficient self-driving technology.
Researchers explore the innovative concept of Qualitative eXplainable Graphs (QXGs) for spatiotemporal reasoning in automated driving scenes. QXGs efficiently capture complex relationships, enhance transparency, and contribute to the trustworthy development of autonomous vehicles, setting a new standard for dependable, interpretable automated driving systems.
Researchers delve into the vulnerabilities of machine learning (ML) systems, specifically concerning adversarial attacks. Despite the remarkable strides made by deep learning in various tasks, this study uncovers how ML models are susceptible to adversarial examples—subtle input modifications that mislead models' predictions. The research emphasizes the critical need for understanding these vulnerabilities as ML systems are increasingly integrated into real-world applications.
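A concrete instance of such subtle input modifications is the Fast Gradient Sign Method (FGSM), shown here against a toy logistic-regression model. The data, model, and perturbation budget are illustrative assumptions; the paper's own attack setup may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a tiny logistic-regression "victim" on two separable blobs.
X = np.vstack([rng.normal(2.0, 1.0, (100, 2)), rng.normal(-2.0, 1.0, (100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])
w, b = np.zeros(2), 0.0
for _ in range(200):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

def fgsm(x, label, eps=0.5):
    """Fast Gradient Sign Method: move every input dimension eps in the
    direction that increases this model's loss on (x, label)."""
    grad_x = (sigmoid(x @ w + b) - label) * w   # exact input gradient here
    return x + eps * np.sign(grad_x)

x_clean = np.array([2.0, 2.0])              # confidently class-1 input
p_clean = sigmoid(x_clean @ w + b)
p_adv = sigmoid(fgsm(x_clean, 1.0) @ w + b) # confidence drops after attack
```

Because the perturbation follows only the sign of the gradient, it stays small in every dimension yet reliably pushes the model's confidence in the true class downward, which is the vulnerability the study examines.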
Researchers have introduced a groundbreaking solution, the Class Attention Map-Based Flare Removal Network (CAM-FRN), to tackle the challenge of lens flare artifacts in autonomous driving scenarios. This innovative approach leverages computer vision and artificial intelligence technologies to accurately detect and remove lens flare, significantly improving object detection and semantic segmentation accuracy.
Video-FocalNets present an innovative architecture that combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) for efficient and accurate video action recognition. By leveraging the spatio-temporal focal modulation technique, Video-FocalNets capture both local and global contexts, offering superior performance and computational efficiency compared to previous methods.