An autonomous vehicle, also known as a self-driving car, is a vehicle capable of sensing its environment and operating without human involvement. It combines a variety of sensors, such as cameras, lidar, and radar, with AI and machine learning algorithms to perceive its surroundings, make decisions, and navigate roads safely.
Researchers explore green AI as a key approach to minimizing AI's environmental impact through energy-efficient algorithms and hardware, driving sustainability without sacrificing performance.
Google's next-generation AI assistant, Gemini, is poised to transform the driving experience by integrating advanced AI capabilities directly into Android Auto, potentially replacing Google Assistant with a more interactive and intuitive in-car system.
A research paper examines the complexities of global AI governance, proposing a cautious approach to developing an international regulatory framework that balances innovation with ethical and societal needs.
MIT researchers demonstrated that large language models (LLMs) could develop an understanding of reality through internal simulations without direct physical experience. This breakthrough in AI suggests LLMs' potential for complex problem-solving across robotics and natural language processing.
This research introduces a framework for verifying Lyapunov-stable neural network controllers, advancing robot safety in dynamic, sensor-driven environments.
A study in Computers & Graphics examined model compression methods for computer vision tasks, enabling AI techniques on resource-limited embedded systems. Researchers compared various techniques, including knowledge distillation and network pruning, highlighting their effectiveness in reducing model size and complexity while maintaining performance, crucial for applications like robotics and medical imaging.
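One of the compression techniques compared, network pruning, can be illustrated with a minimal sketch. The function below is a generic example of unstructured magnitude pruning (zeroing the smallest-magnitude fraction of a weight matrix), not code from the study itself; the weight matrix and sparsity level are illustrative.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, 0.9)
achieved = 1.0 - np.count_nonzero(pruned) / pruned.size  # ~0.9
```

In practice the pruned model is usually fine-tuned afterwards to recover any lost accuracy, which is part of why the surveyed methods can shrink models substantially while maintaining performance.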
Generative adversarial networks (GANs) have transformed generative modeling since 2014, with significant applications across various fields. Researchers reviewed GAN variants, architectures, validation metrics, and future directions, emphasizing their ongoing challenges and integration with emerging deep learning frameworks.
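The adversarial objective shared by the reviewed variants is the minimax game from the original 2014 GAN formulation, in which a discriminator D and a generator G are trained against each other:

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

Most later architectures and validation metrics discussed in such reviews are refinements of this objective, e.g., alternative losses that stabilize training.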
Researchers presented a two-stage framework utilizing large language models (LLMs) for detecting and addressing anomalies in robotic systems. The fast anomaly classifier operates in an LLM embedding space, while a slower reasoning system ensures safe, trustworthy operation of dynamic robots, mitigating computational costs and enhancing control frameworks.
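The fast first stage can be sketched as a simple distance-based classifier in an embedding space. This is a hypothetical illustration of the general idea, not the paper's implementation: nominal observations are embedded offline, and at runtime an observation whose embedding lies far from the nominal centroid is flagged and escalated to the slower reasoning stage. The threshold rule and dimensions here are assumptions.

```python
import numpy as np

def fit_nominal(embeddings):
    """Return the centroid and a distance threshold from nominal embeddings."""
    centroid = embeddings.mean(axis=0)
    dists = np.linalg.norm(embeddings - centroid, axis=1)
    threshold = dists.mean() + 3.0 * dists.std()  # assumed 3-sigma rule
    return centroid, threshold

def is_anomalous(embedding, centroid, threshold):
    """Fast check: flag embeddings far from the nominal centroid."""
    return np.linalg.norm(embedding - centroid) > threshold

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, size=(500, 32))   # offline nominal embeddings
centroid, thr = fit_nominal(nominal)
flagged = is_anomalous(np.full(32, 4.0), centroid, thr)  # far-off observation
```

Because this check is a single distance computation, it adds negligible latency to the control loop, matching the paper's motivation of mitigating computational cost.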
Researchers introduced a new method for 3D object detection using monocular cameras, improving spatial perception and addressing depth estimation challenges. Their depth-enhanced deep learning approach significantly outperformed existing methods, proving valuable for autonomous driving and other applications requiring precise 3D localization and recognition from single images.
A study in Sensors introduces the RECPO method for safe, robust autonomous highway driving using reinforcement learning (RL). Tested in CARLA simulations, RECPO outperformed traditional methods, achieving zero collisions and improved decision-making stability by transforming the problem into a constrained Markov decision process (CMDP).
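The CMDP reformulation mentioned above has a standard form: the policy maximizes expected discounted reward subject to a bound on expected discounted cost. The notation below is the generic textbook formulation (cost function c and budget d are not specified in the summary):

```latex
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\right] \le d
```

Here c would encode safety violations such as collision risk, so satisfying the constraint corresponds to the zero-collision behavior reported in the CARLA tests.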
Researchers in Nature unveiled a new method for traffic signal control using deep reinforcement learning (DRL) that addresses convergence and robustness issues. The PN_D3QN model, incorporating dueling networks, double Q-learning, priority sampling, and noise parameters, processed high-dimensional traffic data and achieved faster convergence.
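One of the named ingredients, double Q-learning, is easy to show in tabular form. The sketch below is a generic illustration of the double-Q update (not the PN_D3QN model itself): one table selects the next action and a second table evaluates it, which reduces the overestimation bias that plain Q-learning suffers from.

```python
import numpy as np

def double_q_update(qa, qb, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One double Q-learning update of table qa, using qb as the evaluator."""
    best_next = int(np.argmax(qa[s_next]))        # qa selects the action
    target = r + gamma * qb[s_next, best_next]    # qb evaluates it
    qa[s, a] += alpha * (target - qa[s, a])
    return qa

qa = np.zeros((4, 2))  # 4 states, 2 actions
qb = np.zeros((4, 2))
qa = double_q_update(qa, qb, s=0, a=1, r=1.0, s_next=2)
```

In deep variants the two tables become online and target networks, and the priority sampling and noisy parameters mentioned in the summary govern which transitions are replayed and how exploration is injected.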
Researchers demonstrated how reinforcement learning (RL) can improve guidance, navigation, and control (GNC) systems for unmanned aerial vehicles (UAVs), enhancing robustness and efficiency in tasks like dynamic target interception and waypoint tracking.
Waabi, an autonomous trucking company built on generative AI, raised $200M in a Series B round led by Uber and Khosla Ventures to deploy fully driverless trucks by 2025. Its AI system aims to transform autonomous trucking with human-like reasoning and efficient, scalable technology.
A comprehensive review highlights the evolution of object-tracking methods, sensors, and datasets in computer vision, guiding developers in selecting optimal tools for diverse applications.
This study delves into the complex relationship between technology and psychology, examining how individuals perceive androids based on their beliefs about artificial beings. By investigating the impact of labeling human faces as "android," the research illuminates how cognitive processes shape human-robot interaction and social cognition, offering insights for designing more socially acceptable synthetic agents.
This paper presents the groundbreaking lifelong learning optical neural network (L2ONN), offering efficient and scalable AI systems through photonic computing. L2ONN's innovative architecture harnesses sparse photonic connections and parallel processing, surpassing traditional electronic models in efficiency, capacity, and lifelong learning capabilities, with implications for various applications from vision classification to medical diagnosis.
Researchers present a groundbreaking Bayesian learning framework, combined with interval continuous-time Markov chain model checking, to verify autonomous robots in challenging conditions. Demonstrated on an underwater vehicle mission, the technique provides robust estimates for mission success, safety, and energy consumption, offering a scalable solution for diverse autonomous systems in uncertain environments.
Researchers from India, Australia, and Hungary introduce a robust model employing a cascade classifier and a vision transformer to detect potholes and traffic signs in challenging conditions on Indian roads. The algorithm, showcasing impressive accuracy and outperforming existing methods, holds promise for improving road safety, infrastructure maintenance, and integration with intelligent transport systems and autonomous vehicles.
Researchers present ReAInet, a novel vision model aligning with human brain activity based on non-invasive EEG recordings. The model, derived from the CORnet-S architecture, demonstrates higher similarity to human brain representations, improving adversarial robustness and capturing individual variability, thereby paving the way for more brain-like artificial intelligence systems in computer vision.
Researchers present a groundbreaking Federated Learning (FL) model for passenger demand forecasting in Smart Cities, focusing on the context of Autonomous Taxis (ATs). The FL approach ensures data privacy by allowing ATs in different regions to collaboratively enhance their demand forecasting models without directly sharing sensitive passenger information. The proposed model outperforms traditional methods, showcasing superior accuracy while addressing privacy concerns in the era of smart and autonomous transportation systems.
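The privacy mechanism described above follows the federated averaging pattern: each region trains on its own data and only model parameters, never raw passenger records, are shared and aggregated. The sketch below is a generic, assumed illustration using linear demand models (the paper's actual model and aggregation details are not given in the summary).

```python
import numpy as np

def local_fit(X, y):
    """Least-squares weights for one region's private data; data never leaves."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, len(y)

def fed_avg(updates):
    """Sample-weighted average of (weights, n_samples) pairs from each region."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])            # shared underlying demand pattern
updates = []
for n in (100, 200, 300):                 # three regions of different sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    updates.append(local_fit(X, y))
global_w = fed_avg(updates)               # aggregated without sharing any X, y
```

Only the weight vectors cross region boundaries, which is what lets the fleets improve a shared forecaster while keeping passenger data local.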