An autonomous vehicle, also known as a self-driving car, is a vehicle capable of sensing its environment and operating without human involvement. It combines sensors such as cameras, lidar, and radar with AI and machine learning algorithms to perceive its surroundings, make decisions, and navigate roads safely.
University of Toronto engineers have developed a machine learning framework, AIDED, to rapidly optimize 3D metal printing settings, reducing trial and error.
Scientists have developed an AI-powered Intelligent Acting Digital Twin (IADT) that can autonomously control real-world machines in real-time, marking a shift from passive monitoring to active decision-making.
Researchers from Hohai University have developed a data-driven model to enhance GNSS positioning accuracy by systematically analyzing error-inducing factors and leveraging machine learning techniques, improving precision in signal-challenged environments.
Researchers have developed the BIG framework, a brain-inspired navigation system that enhances efficiency and reduces computational demands for autonomous exploration in complex environments.
Researchers from NJIT, Rutgers, and Temple University are developing AI security education programs to address adversarial machine learning threats, aiming to equip future engineers with robust defense strategies.
Study introduces a robust RGB-D dataset for 6D pose estimation, enabling robots to perform industrial pick-and-place tasks with greater precision. The dataset's evaluation with cutting-edge models highlights its potential for advancing robotic automation.
Study reveals that while Hour of Code activities excel at introducing AI basics, they often lack the depth, hands-on creativity, and critical engagement needed for a well-rounded understanding of artificial intelligence.
Explore GPD-1's transformative approach to motion planning and traffic simulation for smarter vehicles.
Researchers accelerate 3D LiDAR scene completion for autonomous vehicles using a novel distillation framework, achieving a fivefold speedup while preserving high completion quality.
Researchers explore green AI as a key approach to minimizing AI's environmental impact through energy-efficient algorithms and hardware, driving sustainability without sacrificing performance.
Google's next-generation AI assistant, Gemini, is poised to transform the driving experience by integrating advanced AI capabilities directly into Android Auto, potentially replacing Google Assistant with a more interactive and intuitive in-car system.
Research paper examines the complexities of global AI governance, proposing a cautious approach to developing an international regulatory framework that balances innovation with ethical and societal needs.
MIT researchers demonstrated that large language models (LLMs) could develop an understanding of reality through internal simulations without direct physical experience. This breakthrough in AI suggests LLMs' potential for complex problem-solving across robotics and natural language processing.
This research introduces a framework for verifying Lyapunov-stable neural network controllers, advancing robot safety in dynamic, sensor-driven environments.
A study in Computers & Graphics examined model compression methods for computer vision tasks, enabling AI techniques on resource-limited embedded systems. Researchers compared various techniques, including knowledge distillation and network pruning, highlighting their effectiveness in reducing model size and complexity while maintaining performance, crucial for applications like robotics and medical imaging.
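The two compression techniques named above can be illustrated with minimal sketches. Assumptions: the function names, the temperature-scaled KL distillation loss (in the style of Hinton et al.), and the unstructured magnitude-pruning rule are generic textbook formulations, not the specific methods compared in the study.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def magnitude_prune(weights, sparsity=0.5):
    """Unstructured pruning: zero the smallest-magnitude fraction of weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

The loss is zero when the student matches the teacher exactly and grows as their softened output distributions diverge; pruning trades a controlled amount of accuracy for a smaller, sparser model suited to embedded hardware.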
Generative adversarial networks (GANs) have transformed generative modeling since 2014, with significant applications across various fields. Researchers reviewed GAN variants, architectures, validation metrics, and future directions, emphasizing their ongoing challenges and integration with emerging deep learning frameworks.
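At the core of every GAN variant sits the adversarial objective from the original 2014 formulation. A minimal sketch of the two losses, computed on discriminator output probabilities (function names and the batch-as-list convention are illustrative assumptions):

```python
import math

def discriminator_loss(d_real, d_fake):
    """Original GAN discriminator loss: -[E log D(x) + E log(1 - D(G(z)))],
    where d_real / d_fake are discriminator outputs in (0, 1)."""
    return -(sum(math.log(p) for p in d_real) / len(d_real)
             + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: -E log D(G(z)); the generator
    improves by pushing the discriminator's fake-sample scores up."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)
```

When the discriminator is confident (real samples scored near 1, fakes near 0) its loss is small; a generator that fools it toward 0.5 raises that loss, which is the minimax tension the review surveys across GAN variants.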
Researchers presented a two-stage framework utilizing large language models (LLMs) for detecting and addressing anomalies in robotic systems. The fast anomaly classifier operates in an LLM embedding space, while a slower reasoning system ensures safe, trustworthy operation of dynamic robots, mitigating computational costs and enhancing control frameworks.
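The fast classifier stage can be pictured as a nearest-prototype check in embedding space: an observation whose embedding is far from every prototype of nominal behavior is flagged for the slower reasoning stage. This is a generic sketch under that assumption; the function names, cosine metric, and threshold are illustrative, not the paper's actual classifier.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_anomalous(embedding, nominal_prototypes, threshold=0.8):
    """Fast-path check: flag the observation if its embedding is not
    sufficiently close to any prototype of nominal behavior."""
    return max(cosine(embedding, p) for p in nominal_prototypes) < threshold
```

Because this test is a handful of dot products rather than an LLM forward pass, it can run inside a real-time control loop, deferring expensive language-model reasoning to the rare flagged cases.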
Researchers introduced a new method for 3D object detection using monocular cameras, improving spatial perception and addressing depth estimation challenges. Their depth-enhanced deep learning approach significantly outperformed existing methods, proving valuable for autonomous driving and other applications requiring precise 3D localization and recognition from single images.
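Why depth estimation is the crux of monocular 3D detection becomes clear from the pinhole camera model: once a per-pixel depth is predicted, each pixel can be lifted to a 3D point in the camera frame. A minimal sketch (the function name and parameter layout are assumptions; fx, fy, cx, cy are the camera intrinsics):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with estimated depth Z to a 3D camera-frame point
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)
```

Any error in the predicted depth scales the recovered X and Y directly, which is why depth-enhanced approaches yield better 3D localization from a single image.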
A study in Sensors introduces the RECPO method for safe, robust autonomous highway driving using reinforcement learning (RL). Tested in CARLA simulations, RECPO outperformed traditional methods, achieving zero collisions and improved decision-making stability by transforming the problem into a constrained Markov decision process (CMDP).
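A CMDP asks the policy to maximize reward subject to an expected-cost budget (e.g., collision risk). One standard way to handle the constraint is a Lagrangian relaxation with dual ascent on the multiplier. This is a generic CMDP sketch under that assumption, not the RECPO algorithm itself; the names, learning rate, and budget are illustrative.

```python
def lagrangian_return(mean_return, mean_cost, lam):
    """Surrogate objective the policy maximizes: reward minus penalized cost."""
    return mean_return - lam * mean_cost

def dual_update(lam, mean_episode_cost, cost_budget, lr=0.05):
    """Dual ascent on the Lagrange multiplier: raise lambda when the
    policy's expected cost exceeds the budget, lower it otherwise,
    projecting back onto lambda >= 0."""
    return max(0.0, lam + lr * (mean_episode_cost - cost_budget))
```

Training alternates the two steps: the policy maximizes the penalized return, and the multiplier adapts until the cost constraint (here, safety violations) is met on average.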
Researchers in Nature unveiled a new method for traffic signal control using deep reinforcement learning (DRL) that addresses convergence and robustness issues. The PN_D3QN model, incorporating dueling networks, double Q-learning, priority sampling, and noise parameters, processed high-dimensional traffic data and achieved faster convergence.
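Two of the named ingredients have compact standard forms: the dueling head combines a state value with mean-centered action advantages, and proportional prioritized replay samples transitions by TD-error magnitude. A minimal sketch of both (function names and the alpha/epsilon defaults are conventional assumptions, not the PN_D3QN code):

```python
def dueling_q(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the V/A split identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def priority_probs(td_errors, alpha=0.6, eps=1e-3):
    """Proportional prioritized replay: sample transition i with
    probability proportional to (|TD error_i| + eps) ** alpha."""
    prios = [(abs(d) + eps) ** alpha for d in td_errors]
    total = sum(prios)
    return [p / total for p in prios]
```

High-error transitions are replayed more often, which, together with double Q-learning's decoupled action selection and evaluation, is what drives the faster, more stable convergence the study reports.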