Search Results for author: John M. Dolan

Found 16 papers, 2 papers with code

WROOM: An Autonomous Driving Approach for Off-Road Navigation

1 code implementation • 12 Apr 2024 • Dvij Kalaria, Shreya Sharma, Sarthak Bhagat, Haoru Xue, John M. Dolan

Off-road navigation is a challenging problem both at the planning level, to obtain a smooth trajectory, and at the control level, to avoid flipping over, hitting obstacles, or getting stuck on a rough patch.

Autonomous Driving · Reinforcement Learning (RL) · +2

Synthesis and verification of robust-adaptive safe controllers

no code implementations • 1 Nov 2023 • Simin Liu, Kai S. Yun, John M. Dolan, Changliu Liu

Our raCBFs are currently the most effective way to guarantee safety for uncertain systems, achieving 100% safety and up to 55% performance improvement over a robust baseline.
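The snippet states the result rather than the construction. As a point of reference only, the sketch below shows a plain (non-robust, non-adaptive) control barrier function safety filter on a toy 1-D integrator; the system, barrier, and parameter values are illustrative assumptions, not the paper's raCBF synthesis.

```python
def cbf_safety_filter(x, u_nom, x_max=1.0, alpha=2.0):
    """Plain CBF safety filter for the scalar single integrator x_dot = u (toy example).

    Barrier: h(x) = x_max - x >= 0 encodes the safe set {x <= x_max}.
    CBF condition: h_dot(x, u) >= -alpha * h(x), i.e. -u >= -alpha * (x_max - x).
    For this system the filtered control is simply a clamp on the nominal control.
    """
    h = x_max - x
    u_upper = alpha * h          # largest control still satisfying the CBF condition
    return min(u_nom, u_upper)   # minimally modify the nominal control

# Example: the nominal controller pushes toward the boundary; the filter slows it down.
x, dt = 0.0, 0.05
for _ in range(100):
    u = cbf_safety_filter(x, u_nom=2.0)
    x += dt * u
assert x <= 1.0 + 1e-6  # the state never leaves the safe set
```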

Towards Optimal Head-to-head Autonomous Racing with Curriculum Reinforcement Learning

no code implementations • 25 Aug 2023 • Dvij Kalaria, Qin Lin, John M. Dolan

In this work, we propose a curriculum learning-based framework by transitioning from a simpler vehicle model to a more complex real environment to teach the reinforcement learning agent a policy closer to the optimal policy.
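The abstract describes the curriculum only at a high level. The sketch below shows one generic way such a model-to-model curriculum could be wired up; `make_racing_env`, the stage configurations, and the agent interface are hypothetical placeholders rather than the paper's actual training setup.

```python
# Hypothetical curriculum: train first on a simple kinematic model, then on progressively
# more realistic dynamics. Environment and agent names are placeholders.
CURRICULUM = [
    {"dynamics": "kinematic_bicycle", "episodes": 2000},  # simple, easy-to-learn model
    {"dynamics": "dynamic_bicycle",   "episodes": 2000},  # adds tire slip / friction effects
    {"dynamics": "full_sim",          "episodes": 4000},  # most realistic environment
]

def train_with_curriculum(make_racing_env, agent):
    for stage in CURRICULUM:
        env = make_racing_env(dynamics=stage["dynamics"])
        for _ in range(stage["episodes"]):
            obs, done = env.reset(), False
            while not done:
                action = agent.act(obs)
                next_obs, reward, done, info = env.step(action)
                # The agent's weights carry over between stages, so later stages
                # fine-tune the policy learned on the simpler model.
                agent.update(obs, action, reward, next_obs, done)
                obs = next_obs
```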

Friction · reinforcement-learning · +1

Risk-aware Safe Control for Decentralized Multi-agent Systems via Dynamic Responsibility Allocation

no code implementations • 22 May 2023 • Yiwei Lyu, Wenhao Luo, John M. Dolan

Decentralized control schemes are increasingly favored in various domains that involve multi-agent systems due to the need for computational efficiency as well as general applicability to large-scale systems.

Autonomous Driving · Computational Efficiency

State Dropout-Based Curriculum Reinforcement Learning for Self-Driving at Unsignalized Intersections

no code implementations • 10 Jul 2022 • Shivesh Khaitan, John M. Dolan

In this work, we address the problem of traversing unsignalized intersections using a novel curriculum for deep reinforcement learning.

Autonomous Driving · Motion Planning · +2

BATS: Best Action Trajectory Stitching

no code implementations • 26 Apr 2022 • Ian Char, Viraj Mehta, Adam Villaflor, John M. Dolan, Jeff Schneider

Past efforts to develop algorithms in this area have revolved around adding constraints to online reinforcement learning algorithms so that the actions of the learned policy are restricted to the logged data.
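As a concrete illustration of constraining a learned policy to the logged data, the sketch below shows a generic behavior-cloning-regularized actor loss in PyTorch (in the spirit of methods such as TD3+BC); it is not the BATS stitching procedure, and the `actor`, `critic`, and `batch` objects are assumed to exist.

```python
import torch

def constrained_actor_loss(actor, critic, batch, bc_weight=2.5):
    """Generic offline-RL actor loss: maximize Q while staying close to logged actions.

    batch["obs"] and batch["actions"] are tensors of logged transitions (assumed shapes
    [B, obs_dim] and [B, act_dim]). This illustrates the constraint idea the snippet
    refers to, not the BATS algorithm itself.
    """
    pi_actions = actor(batch["obs"])
    q_value = critic(batch["obs"], pi_actions).mean()
    bc_penalty = ((pi_actions - batch["actions"]) ** 2).mean()  # keep actions near the data
    return -q_value + bc_weight * bc_penalty
```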

reinforcement-learning · Reinforcement Learning (RL)

Learning to Robustly Negotiate Bi-Directional Lane Usage in High-Conflict Driving Scenarios

no code implementations • 22 Mar 2021 • Christoph Killing, Adam Villaflor, John M. Dolan

We train policies to robustly negotiate with opposing vehicles of an unobservable degree of cooperativeness using multi-agent reinforcement learning (MARL).
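One way to picture an "unobservable degree of cooperativeness" is an environment that samples a cooperativeness parameter for the opposing driver each episode but omits it from the ego observation. The gym-style skeleton below is a hypothetical toy, with placeholder dynamics and reward, not the paper's scenario definition.

```python
import random

class BidirectionalLaneEnv:
    """Toy skeleton: the opponent's cooperativeness is sampled per episode but never
    exposed in the ego observation, so the ego policy must infer it from behavior."""

    def reset(self):
        self.cooperativeness = random.uniform(0.0, 1.0)  # hidden opponent trait
        self.ego_pos, self.opp_pos = 0.0, 100.0
        return self._observe()

    def step(self, ego_action):
        # Placeholder dynamics: a more cooperative opponent yields (slows) more readily.
        opp_speed = 1.0 - 0.8 * self.cooperativeness
        self.ego_pos += float(ego_action)
        self.opp_pos -= opp_speed
        done = self.opp_pos <= self.ego_pos   # vehicles have met
        reward = 1.0 if done else -0.01       # placeholder reward shaping
        return self._observe(), reward, done, {}

    def _observe(self):
        # Note: cooperativeness is deliberately excluded from the observation.
        return (self.ego_pos, self.opp_pos)
```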

Autonomous Driving · Multi-agent Reinforcement Learning

Safe Trajectory Planning Using Reinforcement Learning for Self Driving

no code implementations • 9 Nov 2020 • Josiah Coad, Zhiqian Qiao, John M. Dolan

Self-driving vehicles must be able to act intelligently in diverse and difficult environments, marked by high-dimensional state spaces, a myriad of optimization objectives and complex behaviors.

Imitation Learning · reinforcement-learning · +2

Behavior Planning at Urban Intersections through Hierarchical Reinforcement Learning

no code implementations • 9 Nov 2020 • Zhiqian Qiao, Jeff Schneider, John M. Dolan

In this work, we propose a behavior planning structure based on reinforcement learning (RL) which is capable of performing autonomous vehicle behavior planning with a hierarchical structure in simulated urban environments.
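The snippet names the hierarchy without detail. A common realization is a high-level policy that selects a discrete maneuver and low-level controllers that turn the chosen maneuver into continuous controls; the skeleton below is a generic illustration with hypothetical names, not the paper's architecture.

```python
MANEUVERS = ["follow_lane", "stop_at_line", "proceed_through", "turn_left", "turn_right"]

class HierarchicalBehaviorPlanner:
    """Generic two-level structure: an RL-trained high-level policy selects a maneuver,
    and a per-maneuver low-level controller produces throttle/steering to execute it."""

    def __init__(self, high_level_policy, low_level_controllers):
        self.high_level_policy = high_level_policy            # maps observation -> maneuver index
        self.low_level_controllers = low_level_controllers    # dict: maneuver name -> controller

    def act(self, observation):
        maneuver_idx = self.high_level_policy(observation)
        maneuver = MANEUVERS[maneuver_idx]
        throttle, steering = self.low_level_controllers[maneuver](observation)
        return maneuver, (throttle, steering)
```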

Autonomous Vehicles · Hierarchical Reinforcement Learning · +2

Depth Completion via Inductive Fusion of Planar LIDAR and Monocular Camera

no code implementations • 3 Sep 2020 • Chen Fu, Chiyu Dong, Christoph Mertz, John M. Dolan

This late-fusion block uses the dense context features to guide the depth prediction, with the sparse depth features serving as demonstrations.
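As a rough picture of late fusion, the PyTorch sketch below concatenates dense RGB context features with sparse-depth features and regresses a dense depth map; the layer and channel sizes are arbitrary illustrations, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LateFusionBlock(nn.Module):
    """Minimal illustration: fuse dense context features (camera branch) with sparse
    depth features (planar-LIDAR branch) and regress a dense depth map.
    Channel counts are arbitrary; this is not the paper's exact architecture."""

    def __init__(self, context_ch=64, depth_ch=16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(context_ch + depth_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # one-channel dense depth map
        )

    def forward(self, context_feats, sparse_depth_feats):
        fused = torch.cat([context_feats, sparse_depth_feats], dim=1)  # channel-wise concat
        return self.fuse(fused)
```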

Autonomous Driving · Depth Completion · +2

Hierarchical Reinforcement Learning Method for Autonomous Vehicle Behavior Planning

no code implementations • 9 Nov 2019 • Zhiqian Qiao, Zachariah Tyree, Priyantha Mudalige, Jeff Schneider, John M. Dolan

In this work, we propose a hierarchical reinforcement learning (HRL) structure which is capable of performing autonomous vehicle planning tasks in simulated environments with multiple sub-goals.

Hierarchical Reinforcement Learning · reinforcement-learning · +1

Human Driver Behavior Prediction based on UrbanFlow

no code implementations • 9 Nov 2019 • Zhiqian Qiao, Jing Zhao, Zachariah Tyree, Priyantha Mudalige, Jeff Schneider, John M. Dolan

How autonomous vehicles and human drivers share public transportation systems is an important problem, as fully automatic transportation environments are still a long way off.

Autonomous Vehicles · Decision Making · +1

Low-cost LIDAR based Vehicle Pose Estimation and Tracking

no code implementations • 3 Oct 2019 • Chen Fu, Chiyu Dong, Xiao Zhang, John M. Dolan

Building on our previous optimization/criteria-based L-Shape fitting algorithm, we propose a data-driven and model-based method for robust vehicle segmentation and tracking.
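For context, the earlier criteria-based L-Shape fitting referred to here searches over candidate rectangle headings and keeps the one whose edges the projected LIDAR points hug most tightly. The NumPy sketch below is a simplified closeness-criterion search along those lines, not the data-driven method this paper proposes.

```python
import numpy as np

def fit_l_shape(points, angle_step_deg=1.0):
    """Simplified criteria-based L-shape fit: try candidate headings, project the 2-D
    LIDAR points of one vehicle cluster onto each rotated frame, and keep the heading
    whose bounding-rectangle edges the points lie closest to (a 'closeness' criterion).
    `points` is an (N, 2) array; returns the best heading in radians."""
    best_theta, best_score = 0.0, -np.inf
    for theta in np.deg2rad(np.arange(0.0, 90.0, angle_step_deg)):
        e1 = np.array([np.cos(theta), np.sin(theta)])
        e2 = np.array([-np.sin(theta), np.cos(theta)])
        c1, c2 = points @ e1, points @ e2
        # Distance of each point to its nearer rectangle edge along each axis.
        d1 = np.minimum(c1 - c1.min(), c1.max() - c1)
        d2 = np.minimum(c2 - c2.min(), c2.max() - c2)
        d = np.maximum(np.minimum(d1, d2), 0.01)   # avoid dividing by ~0
        score = np.sum(1.0 / d)                    # closeness criterion: larger is tighter
        if score > best_score:
            best_theta, best_score = theta, score
    return best_theta
```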

Segmentation · Vehicle Pose Estimation

Learning On-Road Visual Control for Self-Driving Vehicles with Auxiliary Tasks

no code implementations • 19 Dec 2018 • Yilun Chen, Praveen Palanisamy, Priyantha Mudalige, Katharina Muelling, John M. Dolan

In this paper, we leverage auxiliary information beyond raw images and design a novel network structure, called Auxiliary Task Network (ATN), to boost driving performance while retaining the advantages of minimal training data and end-to-end training.
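The snippet names the idea (auxiliary supervision alongside the driving output) without detail. The PyTorch sketch below shows a generic shared backbone with a steering head plus auxiliary segmentation and optical-flow heads; the layer sizes and structure are hypothetical, not the actual ATN.

```python
import torch
import torch.nn as nn

class AuxiliaryTaskDrivingNet(nn.Module):
    """Generic multi-task sketch: a shared convolutional backbone feeds a steering head
    and auxiliary heads (semantic segmentation, optical flow). Sizes are illustrative."""

    def __init__(self, num_classes=19):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
        )
        self.steer_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
        self.seg_head = nn.Conv2d(64, num_classes, kernel_size=1)   # auxiliary: segmentation
        self.flow_head = nn.Conv2d(64, 2, kernel_size=1)            # auxiliary: optical flow

    def forward(self, image):
        feats = self.backbone(image)
        return {
            "steering": self.steer_head(feats),  # main driving output
            "segmentation": self.seg_head(feats),
            "optical_flow": self.flow_head(feats),
        }
```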

Optical Flow Estimation · Semantic Segmentation · +2

Information-Theoretic Approach to Efficient Adaptive Path Planning for Mobile Robotic Environmental Sensing

no code implementations • 27 May 2013 • Kian Hsiang Low, John M. Dolan, Pradeep Khosla

The time complexity of solving MASP approximately depends on the map resolution, which limits its use in large-scale, high-resolution exploration and mapping.
