An autonomous vehicle, also known as a self-driving car, is a vehicle capable of sensing its environment and operating without human involvement. It uses sensors such as cameras, lidar, and radar, together with AI and machine-learning algorithms, to perceive its surroundings, make decisions, and navigate roads safely.
Researchers demonstrated how reinforcement learning (RL) can improve guidance, navigation, and control (GNC) systems for unmanned aerial vehicles (UAVs), enhancing robustness and efficiency in tasks like dynamic target interception and waypoint tracking.
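The summary above does not detail the researchers' GNC architecture, but the core RL loop behind tasks like waypoint tracking can be illustrated with a deliberately tiny stand-in: tabular Q-learning on a 1-D track where the agent must reach a target cell (the "waypoint"). This is a sketch of the general technique only; real UAV GNC work uses continuous state/action spaces and deep RL, and none of the names below come from the paper.

```python
import random

# Toy stand-in for RL-based waypoint tracking: tabular Q-learning on a
# 1-D track of N cells; the agent starts at cell 0 and must reach GOAL.
random.seed(0)
N, GOAL = 10, 9
ACTIONS = (-1, +1)                       # move left / move right
Q = [[0.0, 0.0] for _ in range(N)]       # Q[state][action]

def step(s, a):
    """Deterministic environment: clamp to track, reward on reaching goal."""
    s2 = min(max(s + ACTIONS[a], 0), N - 1)
    reward = 1.0 if s2 == GOAL else -0.01  # small step penalty
    return s2, reward, s2 == GOAL

alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(500):                     # training episodes
    s = 0
    for _ in range(50):
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # standard Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2
        if done:
            break

# Greedy rollout after training: follow the learned policy to the waypoint.
s, steps = 0, 0
while s != GOAL and steps < 50:
    s, _, _ = step(s, max((0, 1), key=lambda i: Q[s][i]))
    steps += 1
```

After training, the greedy policy drives the agent straight to the target cell; the same learn-then-act structure underlies far richer interception and tracking controllers.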
Waabi, a generative AI company, raised $200M in a Series B round led by Uber and Khosla Ventures to deploy fully driverless trucks by 2025. Its AI system aims to transform autonomous trucking with human-like reasoning and efficient, scalable technology.
A comprehensive review highlights the evolution of object-tracking methods, sensors, and datasets in computer vision, guiding developers in selecting optimal tools for diverse applications.
This study delves into the complex relationship between technology and psychology, examining how individuals perceive androids based on their beliefs about artificial beings. By investigating the impact of labeling human faces as "android," the research illuminates how cognitive processes shape human-robot interaction and social cognition, offering insights for designing more socially acceptable synthetic agents.
This paper presents the groundbreaking lifelong learning optical neural network (L2ONN), offering efficient and scalable AI systems through photonic computing. L2ONN's innovative architecture harnesses sparse photonic connections and parallel processing, surpassing traditional electronic models in efficiency, capacity, and lifelong learning capabilities, with implications for various applications from vision classification to medical diagnosis.
Researchers present a Bayesian learning framework, combined with interval continuous-time Markov chain model checking, to verify autonomous robots in challenging conditions. Demonstrated on an underwater vehicle mission, the technique provides robust estimates of mission success, safety, and energy consumption, offering a scalable solution for diverse autonomous systems in uncertain environments.
Researchers from India, Australia, and Hungary introduce a robust model employing a cascade classifier and a vision transformer to detect potholes and traffic signs in challenging conditions on Indian roads. The algorithm, showcasing impressive accuracy and outperforming existing methods, holds promise for improving road safety, infrastructure maintenance, and integration with intelligent transport systems and autonomous vehicles.
Researchers present ReAInet, a novel vision model aligning with human brain activity based on non-invasive EEG recordings. The model, derived from the CORnet-S architecture, demonstrates higher similarity to human brain representations, improving adversarial robustness and capturing individual variability, thereby paving the way for more brain-like artificial intelligence systems in computer vision.
Researchers present a groundbreaking Federated Learning (FL) model for passenger demand forecasting in Smart Cities, focusing on the context of Autonomous Taxis (ATs). The FL approach ensures data privacy by allowing ATs in different regions to collaboratively enhance their demand forecasting models without directly sharing sensitive passenger information. The proposed model outperforms traditional methods, showcasing superior accuracy while addressing privacy concerns in the era of smart and autonomous transportation systems.
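The blurb above does not specify the paper's aggregation scheme, but the core mechanic of such FL systems is typically FedAvg-style averaging: each region trains locally on its own demand data and only model weights, never passenger records, are shared. A minimal sketch, with all names and the linear toy model being illustrative assumptions rather than the paper's method:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on a linear model (illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's model by its local sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical taxi regions jointly fit demand = 2 * feature,
# without ever pooling their raw data.
rng = np.random.default_rng(0)
global_w = np.zeros(1)
for _ in range(20):                      # communication rounds
    updates, sizes = [], []
    for n in (50, 80):                   # per-region sample counts
        X = rng.uniform(0, 1, size=(n, 1))
        y = 2 * X[:, 0]
        updates.append(local_update(global_w, X, y))
        sizes.append(n)
    global_w = federated_average(updates, sizes)
```

The shared global model converges toward the true coefficient even though each region only ever transmits weight vectors, which is the privacy property the summary highlights.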
Researchers from the University of Birmingham unveil a novel 3D edge detection technique using unsupervised learning and clustering. This method, offering automatic parameter selection, competitive performance, and robustness, proves invaluable across diverse applications, including robotics, augmented reality, medical imaging, automotive safety, architecture, and manufacturing, marking a significant leap in computer vision capabilities.
This paper questions the notion of artificial intelligence (AI) surpassing human thought. It critiques Max Tegmark's definition of intelligence, highlighting differences in understanding, the implementation of goals, and the crucial role of creativity. The discussion extends to philosophical implications, emphasizing the overlooked aspects of the body, brain lateralization, and the vital role of glial cells, ultimately contending that the richness and complexity of human thought remain beyond current AI capabilities.
Researchers introduce a novel multi-task learning approach for recognizing low-resolution text in logistics, addressing challenges in the rapidly growing e-commerce sector. The proposed model, incorporating a super-resolution branch and attention-based decoding, outperforms existing methods, offering substantial accuracy improvements for handling distorted, low-resolution Chinese text.
Researchers introduced Swin-APT, a deep learning-based model for semantic segmentation and object detection in Intelligent Transportation Systems (ITSs). The model, incorporating a Swin-Transformer-based lightweight network and a multiscale adapter network, demonstrated superior performance in road segmentation and marking detection tasks, outperforming existing models on various datasets, including achieving a remarkable 91.2% mIoU on the BDD100K dataset.
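For readers unfamiliar with the 91.2% figure: mIoU (mean intersection-over-union) is the standard segmentation metric, averaging per-class overlap between predicted and ground-truth masks. A minimal sketch of the metric itself (not of Swin-APT):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                    # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny 2x3 label maps with two classes (0 = background, 1 = road marking).
pred   = np.array([[0, 0, 1], [1, 1, 1]])
target = np.array([[0, 1, 1], [1, 1, 0]])
score = mean_iou(pred, target, 2)        # (1/3 + 3/5) / 2 = 7/15
```

Here class 0 overlaps in 1 of 3 union pixels and class 1 in 3 of 5, so the mean is 7/15 ≈ 0.467; a 91.2% mIoU means this average overlap is 0.912 across all BDD100K classes.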
This study introduces a sophisticated pedestrian detection algorithm enhancing the lightweight YOLOv5 model for autonomous vehicles. Integrating extensive kernel attention mechanisms, lightweight coordinate attention, and adaptive loss tuning, the algorithm tackles challenges like occlusion and positioning inaccuracies. Experimental results show a noticeable accuracy boost, especially for partially obstructed pedestrians, offering promising advancements for safer interactions between vehicles and pedestrians in complex urban environments.
Researchers propose PGL, a framework for autonomous and programmable graph representation learning in heterogeneous computing systems. Focused on optimizing program execution, especially in applications like autonomous vehicles and machine vision, PGL leverages machine learning to dynamically map software computations onto CPUs and GPUs.
This paper introduces FollowNet, a pioneering initiative addressing challenges in modeling car-following behavior. With a unified benchmark dataset consolidating over 80K car-following events from diverse public driving datasets, FollowNet sets a standard for evaluating and comparing car-following models, overcoming format inconsistencies in existing datasets.
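The kind of car-following model FollowNet is built to benchmark is typified by the classic Intelligent Driver Model (IDM), a standard baseline in this literature (IDM is cited here for illustration; the summary does not say which models FollowNet evaluates). A minimal sketch with textbook parameter values:

```python
import math

def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration.

    v: follower speed (m/s), v_lead: leader speed, gap: bumper-to-bumper gap (m).
    v0: desired speed, T: time headway, a/b: max accel / comfortable decel,
    s0: minimum standstill gap.
    """
    dv = v - v_lead
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a * b))  # desired gap
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Simulate a follower starting from rest, 50 m behind a leader at 20 m/s.
v, gap, v_lead, dt = 0.0, 50.0, 20.0, 0.1
for _ in range(5000):                    # 500 s of simulated time
    acc = idm_accel(v, v_lead, gap)
    v = max(0.0, v + acc * dt)
    gap += (v_lead - v) * dt
```

The follower accelerates, then settles at the leader's speed with a stable equilibrium gap where the acceleration is exactly zero; benchmarks like FollowNet compare learned models against this kind of closed-loop trajectory on real car-following events.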
This study delves into customer preferences for automated parcel delivery modes, including autonomous vehicles, drones, sidewalk robots, and bipedal robots, in the context of last-mile logistics. Using an Integrated Nested Choice and Correlated Latent Variable model, the research reveals that cost and time performance significantly influence the acceptability of technology, with a growing willingness to explore novel delivery automation when cost and time align.
Researchers explored the influence of stingy bots in improving human welfare within experimental sharing networks. They conducted online experiments involving artificial agents with varying allocation behaviors, finding that stingy bots, when strategically placed, could enhance collective welfare by enabling reciprocal exchanges between individuals.
Researchers explored the application of distributed learning, particularly Federated Learning (FL), for Internet of Things (IoT) services in the context of emerging 6G networks. They discussed the advantages and challenges of distributed learning in IoT domains, emphasizing its potential for enhancing IoT services while addressing privacy concerns and the need for ongoing research in areas such as security and communication efficiency.
This study introduces a novel approach to autonomous vehicle navigation by leveraging machine vision, machine learning, and artificial intelligence. The research demonstrates that it's possible for vehicles to navigate unmarked roads using economical webcam-based sensing systems and deep learning, offering practical insights into enhancing autonomous driving in real-world scenarios.