Search Results for author: Michael Everett

Found 32 papers, 16 papers with code

Learning Smooth State-Dependent Traversability from Dense Point Clouds

no code implementations4 Jun 2025 Zihao Dong, Alan Papalia, Leonard Jung, Alenna Spiro, Philip R. Osteen, Christa S. Robison, Michael Everett

A key open challenge in off-road autonomy is that the traversability of terrain often depends on the vehicle's state.

A Hybrid Framework for Efficient Koopman Operator Learning

no code implementations25 Apr 2025 Alexander Estornell, Leonard Jung, Alenna Spiro, Mario Sznaier, Michael Everett

Koopman analysis of a general dynamical system provides a linear Koopman operator and an embedded eigenfunction space, enabling the application of standard techniques from linear analysis.

Operator learning
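
As a rough illustration of the setup described in the abstract above, the sketch below fits a finite-dimensional Koopman approximation by plain least squares (EDMD-style) on a toy scalar system. The dynamics, the dictionary of observables, and the data sizes are all assumptions made for the example; this is not the paper's hybrid framework.

    # EDMD-style estimate of a finite-dimensional Koopman operator (illustrative
    # sketch only; toy dynamics and dictionary, not the paper's hybrid framework).
    import numpy as np

    def dictionary(x):
        # Lift the scalar state into a small observable space: [1, x, x^2]
        return np.array([1.0, x, x**2])

    # Toy dynamics x_{k+1} = 0.9 x_k + 0.05 x_k^2, sampled at random states
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=200)
    Y = 0.9 * X + 0.05 * X**2

    Phi_X = np.stack([dictionary(x) for x in X])   # (N, d) lifted states
    Phi_Y = np.stack([dictionary(y) for y in Y])   # (N, d) lifted successors

    # Least-squares fit of K such that Phi(x_{k+1}) ~= Phi(x_k) @ K
    K, *_ = np.linalg.lstsq(Phi_X, Phi_Y, rcond=None)

    # Eigen-decomposition of K gives approximate Koopman eigenvalues; the
    # corresponding eigenfunctions are psi_i(x) = dictionary(x) @ eigvecs[:, i],
    # which is what makes standard linear analysis applicable to the lifted system.
    eigvals, eigvecs = np.linalg.eig(K)
    print("approximate Koopman eigenvalues:", eigvals)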

Learning Verifiable Control Policies Using Relaxed Verification

no code implementations23 Apr 2025 Puja Chaudhury, Alexander Estornell, Michael Everett

To provide safety guarantees for learning-based control systems, recent work has developed formal verification methods to apply after training ends.

Active Learning For Repairable Hardware Systems With Partial Coverage

no code implementations20 Mar 2025 Michael Potter, Beyza Kalkanli, Deniz Erdoğmuş, Michael Everett

Identifying the optimal diagnostic test and hardware system instance to infer reliability characteristics using field data is challenging, especially when constrained by fixed budgets and minimal maintenance cycles.

Active Learning, Diagnostic

Continuously Optimizing Radar Placement with Model Predictive Path Integrals

1 code implementation29 May 2024 Michael Potter, Shuo Tang, Paul Ghanem, Milica Stojanovic, Pau Closas, Murat Akcakaya, Ben Wright, Marius Necsoiu, Deniz Erdogmus, Michael Everett, Tales Imbiriba

Continuously optimizing sensor placement is essential for precise target localization in various military and civilian applications.
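
To make the title's method concrete, here is a minimal, generic MPPI update step for repositioning a small team of sensors. The single-integrator dynamics, the toy nearest-radar-distance cost, and every parameter value are assumptions for illustration; the paper's actual localization objective is not reproduced here.

    # Generic MPPI update for moving sensors (illustrative sketch; toy coverage
    # cost, not the paper's localization objective).
    import numpy as np

    rng = np.random.default_rng(1)
    horizon, n_samples, n_radars, lam, sigma = 10, 256, 2, 1.0, 0.2
    targets = np.array([[5.0, 5.0], [2.0, 8.0]])       # assumed target positions
    radar_pos = np.zeros((n_radars, 2))                 # current radar positions
    u_nom = np.zeros((horizon, n_radars, 2))            # nominal velocity plan

    def rollout_cost(u_seq):
        pos, cost = radar_pos.copy(), 0.0
        for u in u_seq:
            pos = pos + u                               # single-integrator dynamics
            d = np.linalg.norm(pos[:, None, :] - targets[None, :, :], axis=-1)
            cost += d.min(axis=0).sum()                 # each target's nearest radar
        return cost

    # Sample perturbed control sequences and score them
    noise = rng.normal(0.0, sigma, size=(n_samples, horizon, n_radars, 2))
    costs = np.array([rollout_cost(u_nom + eps) for eps in noise])

    # Exponentially weight the samples and update the nominal plan
    weights = np.exp(-(costs - costs.min()) / lam)
    weights /= weights.sum()
    u_nom = u_nom + np.tensordot(weights, noise, axes=1)
    radar_pos = radar_pos + u_nom[0]                    # execute first step, then re-plan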

Collision Avoidance Verification of Multiagent Systems with Learned Policies

1 code implementation5 Mar 2024 Zihao Dong, Shayegan Omidshafiei, Michael Everett

We demonstrate that the proposed algorithm can verify collision-free properties of a multi-agent neural feedback loop (MA-NFL) with agents trained to imitate a collision avoidance algorithm (Reciprocal Velocity Obstacles).

Collision Avoidance

Robust Survival Analysis with Adversarial Regularization

no code implementations26 Dec 2023 Michael Potter, Stefano Maxenti, Michael Everett

Evaluated over 10 SurvSet datasets, our method, Survival Analysis with Adversarial Regularization (SAWAR), consistently outperforms baseline adversarial training methods and state-of-the-art (SOTA) deep SA models across various covariate perturbations with respect to Negative Log Likelihood (NegLL), Integrated Brier Score (IBS), and Concordance Index (CI) metrics.

Adversarial Robustness, Survival Analysis
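
Of the metrics cited above, the concordance index is the easiest to show concretely. Below is a minimal pairwise implementation (ties in predicted risk counted as half-concordant) on made-up data; it illustrates the metric only, not the SAWAR method.

    # Minimal concordance index (CI): fraction of comparable pairs that the
    # predicted risks order correctly. Illustrative only; toy data.
    import numpy as np

    def concordance_index(event_times, predicted_risks, event_observed):
        concordant, comparable = 0.0, 0
        n = len(event_times)
        for i in range(n):
            for j in range(n):
                # Pair (i, j) is comparable if i had an observed event before time j
                if event_observed[i] and event_times[i] < event_times[j]:
                    comparable += 1
                    if predicted_risks[i] > predicted_risks[j]:
                        concordant += 1.0
                    elif predicted_risks[i] == predicted_risks[j]:
                        concordant += 0.5       # ties count as half-concordant
        return concordant / comparable

    times = np.array([5.0, 8.0, 3.0, 10.0])     # observed or censoring times
    risks = np.array([0.7, 0.4, 0.9, 0.1])      # higher risk should mean earlier event
    observed = np.array([1, 1, 1, 0])           # 0 = censored
    print(concordance_index(times, risks, observed))   # 1.0: perfectly ordered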

EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy

2 code implementations10 Nov 2023 Xiaoyi Cai, Siddharth Ancha, Lakshay Sharma, Philip R. Osteen, Bernadette Bucher, Stephen Phillips, Jiuguang Wang, Michael Everett, Nicholas Roy, Jonathan P. How

For uncertainty quantification, we efficiently model both aleatoric and epistemic uncertainty by learning discrete traction distributions and probability densities of the traction predictor's latent features.

Uncertainty Quantification
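
To make the aleatoric/epistemic split concrete, here is a generic decomposition for a Dirichlet distribution over discrete bins: total predictive entropy minus the expected entropy of the sampled categoricals. The concentration values are made up, and this only illustrates the general evidential idea, not EVORA's specific network or training loss.

    # Aleatoric vs. epistemic uncertainty for a Dirichlet over discrete bins
    # (generic evidential-style decomposition; illustrative values).
    import numpy as np
    from scipy.special import digamma

    def uncertainty_from_dirichlet(alpha):
        alpha0 = alpha.sum()
        p_mean = alpha / alpha0                                  # expected categorical
        total = -(p_mean * np.log(p_mean)).sum()                 # entropy of the mean
        # Expected entropy of categoricals drawn from the Dirichlet (aleatoric part)
        aleatoric = -(p_mean * (digamma(alpha + 1.0) - digamma(alpha0 + 1.0))).sum()
        epistemic = total - aleatoric                            # mutual information
        return aleatoric, epistemic

    confident = np.array([50.0, 2.0, 2.0, 2.0])   # lots of evidence -> low epistemic
    vague = np.array([1.1, 1.1, 1.1, 1.1])        # little evidence -> high epistemic
    print(uncertainty_from_dirichlet(confident))
    print(uncertainty_from_dirichlet(vague))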

DRIP: Domain Refinement Iteration with Polytopes for Backward Reachability Analysis of Neural Feedback Loops

1 code implementation9 Dec 2022 Michael Everett, Rudy Bunel, Shayegan Omidshafiei

To address this issue, we introduce DRIP, an algorithm with a refinement loop on the relaxation domain, which substantially tightens the BP set bounds.

Collision Avoidance

A Hybrid Partitioning Strategy for Backward Reachability of Neural Feedback Loops

no code implementations14 Oct 2022 Nicholas Rober, Michael Everett, Songan Zhang, Jonathan P. How

We introduce a hybrid partitioning method that uses both target set partitioning (TSP) and backreachable set partitioning (BRSP) to overcome a lower bound on estimation error that is present when using BRSP.

Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems

no code implementations28 Sep 2022 Nicholas Rober, Sydney M. Katz, Chelsea Sidrane, Esen Yel, Michael Everett, Mykel J. Kochenderfer, Jonathan P. How

As neural networks (NNs) become more prevalent in safety-critical applications such as control of vehicles, there is a growing need to certify that systems with NN components are safe.

Backward Reachability Analysis for Neural Feedback Loops

2 code implementations14 Apr 2022 Nicholas Rober, Michael Everett, Jonathan P. How

The increasing prevalence of neural networks (NNs) in safety-critical applications calls for methods to certify their behavior and guarantee safety.

Collision Avoidance

Risk-Aware Off-Road Navigation via a Learned Speed Distribution Map

1 code implementation25 Mar 2022 Xiaoyi Cai, Michael Everett, Jonathan Fink, Jonathan P. How

Motion planning in off-road environments requires reasoning about both the geometry and semantics of the scene (e.g., a robot may be able to drive through soft bushes but not a fallen log).

Motion Planning, Unity

Influencing Long-Term Behavior in Multiagent Reinforcement Learning

1 code implementation7 Mar 2022 Dong-Ki Kim, Matthew Riemer, Miao Liu, Jakob N. Foerster, Michael Everett, Chuangchuang Sun, Gerald Tesauro, Jonathan P. How

An effective approach that has recently emerged for addressing this non-stationarity is for each agent to anticipate the learning of other agents and influence the evolution of future policies towards desirable behavior for its own benefit.

Reinforcement Learning +1

Neural Network Verification in Control

1 code implementation30 Sep 2021 Michael Everett

Learning-based methods could provide solutions to many of the long-standing challenges in control.

Deep Reinforcement Learning, Reinforcement Learning (RL)

Demonstration-Efficient Guided Policy Search via Imitation of Robust Tube MPC

no code implementations21 Sep 2021 Andrea Tagliabue, Dong-Ki Kim, Michael Everett, Jonathan P. How

Our approach opens the possibility of zero-shot transfer from a single demonstration collected in a nominal domain, such as a simulation or a robot in a lab/controlled environment, to a domain with bounded model errors/perturbations.

Data Augmentation, Imitation Learning

Reachability Analysis of Neural Feedback Loops

1 code implementation9 Aug 2021 Michael Everett, Golnaz Habibi, Chuangchuang Sun, Jonathan P. How

While the solutions are less tight than previous (semidefinite-program-based) methods, they are substantially faster to compute, and some of those computational time savings can be used to refine the bounds through new input set partitioning techniques, which are shown to dramatically reduce the tightness gap.
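
As a minimal sketch of the input set partitioning idea mentioned above, the code below propagates interval bounds through a small randomly generated two-layer ReLU network, then splits the input box into a grid and takes the union of the per-cell bounds. The network, partition granularity, and input set are assumptions for illustration; this is not the paper's algorithm.

    # Interval-arithmetic output bounds for a tiny ReLU network, with uniform
    # input-set partitioning to tighten them (generic illustration only).
    import numpy as np

    rng = np.random.default_rng(2)
    W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
    W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

    def interval_bounds(lo, hi):
        # Propagate an axis-aligned box [lo, hi] through the two-layer ReLU net
        for W, b, relu in [(W1, b1, True), (W2, b2, False)]:
            center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
            c, r = W @ center + b, np.abs(W) @ radius
            lo, hi = c - r, c + r
            if relu:
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        return lo, hi

    def partitioned_bounds(lo, hi, splits=4):
        # Split the input box into a grid and union the per-cell output bounds
        xs = [np.linspace(l, h, splits + 1) for l, h in zip(lo, hi)]
        out_lo, out_hi = np.inf, -np.inf
        for i in range(splits):
            for j in range(splits):
                cell_lo = np.array([xs[0][i], xs[1][j]])
                cell_hi = np.array([xs[0][i + 1], xs[1][j + 1]])
                l, h = interval_bounds(cell_lo, cell_hi)
                out_lo, out_hi = min(out_lo, l[0]), max(out_hi, h[0])
        return out_lo, out_hi

    box_lo, box_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
    print("single box:  ", interval_bounds(box_lo, box_hi))
    print("partitioned: ", partitioned_bounds(box_lo, box_hi))

Because each cell is contained in the original box, the union of the per-cell bounds can only be tighter than the single-box bound; refining the partition tightens it further at the cost of more interval propagations.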

Where to go next: Learning a Subgoal Recommendation Policy for Navigation Among Pedestrians

no code implementations25 Feb 2021 Bruno Brito, Michael Everett, Jonathan P. How, Javier Alonso-Mora

Robotic navigation in environments shared with other robots or humans remains challenging because the intentions of the surrounding agents are not directly observable and the environment conditions are continuously changing.

Collision Avoidance, Deep Reinforcement Learning +2

Efficient Reachability Analysis of Closed-Loop Systems with Neural Network Controllers

1 code implementation5 Jan 2021 Michael Everett, Golnaz Habibi, Jonathan P. How

Neural Networks (NNs) can provide major empirical performance improvements for robotic systems, but they also introduce challenges in formally analyzing those systems' safety properties.

Robustness Analysis of Neural Networks via Efficient Partitioning with Applications in Control Systems

no code implementations1 Oct 2020 Michael Everett, Golnaz Habibi, Jonathan P. How

Recent works approximate the propagation of sets through nonlinear activations or partition the uncertainty set to provide a guaranteed outer bound on the set of possible NN outputs.

Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning

no code implementations11 Apr 2020 Michael Everett, Bjorn Lutjens, Jonathan P. How

Deep Neural Network-based systems are now the state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness.

Adversarial Robustness, Collision Avoidance +3

R-MADDPG for Partially Observable Environments and Limited Communication

1 code implementation16 Feb 2020 Rose E. Wang, Michael Everett, Jonathan P. How

There are several real-world tasks that would benefit from applying multiagent reinforcement learning (MARL) algorithms, including the coordination among self-driving cars.

Reinforcement Learning +2

Multi-agent Motion Planning for Dense and Dynamic Environments via Deep Reinforcement Learning

no code implementations18 Jan 2020 Samaneh Hosseini Semnani, Hugh Liu, Michael Everett, Anton de Ruiter, Jonathan P. How

This paper introduces a hybrid algorithm of deep reinforcement learning (RL) and force-based motion planning (FMP) to solve the distributed motion planning problem in dense and dynamic environments.

Deep Reinforcement Learning, Motion Planning +1

FASTER: Fast and Safe Trajectory Planner for Navigation in Unknown Environments

2 code implementations9 Jan 2020 Jesus Tordesillas, Brett T. Lopez, Michael Everett, Jonathan P. How

The standard approaches that ensure safety by enforcing a "stop" condition in the free-known space can severely limit the speed of the vehicle, especially in situations where much of the world is unknown.

Motion Planning, Trajectory Planning

Certified Adversarial Robustness for Deep Reinforcement Learning

no code implementations28 Oct 2019 Björn Lütjens, Michael Everett, Jonathan P. How

Deep Neural Network-based systems are now the state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness.

Adversarial Robustness, Collision Avoidance +3

Planning Beyond the Sensing Horizon Using a Learned Context

1 code implementation24 Aug 2019 Michael Everett, Justin Miller, Jonathan P. How

Context is key information about structured environments that could guide exploration toward the unknown goal location, but the abstract idea is difficult to quantify for use in a planning algorithm.

Image-to-Image Translation

Safe Reinforcement Learning with Model Uncertainty Estimates

no code implementations19 Oct 2018 Björn Lütjens, Michael Everett, Jonathan P. How

The importance of predictions that are robust to this distributional shift is evident for safety-critical applications, such as collision avoidance around pedestrians.

Collision Avoidance, model +4

Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning

6 code implementations4 May 2018 Michael Everett, Yu Fan Chen, Jonathan P. How

This work extends our previous approach to develop an algorithm that learns collision avoidance among a variety of types of dynamic agents without assuming they follow any particular behavior rules.

Collision Avoidance, Decision Making +5

Socially Aware Motion Planning with Deep Reinforcement Learning

2 code implementations26 Mar 2017 Yu Fan Chen, Michael Everett, Miao Liu, Jonathan P. How

For robotic vehicles to navigate safely and efficiently in pedestrian-rich environments, it is important to model subtle human behaviors and navigation rules (e.g., passing on the right).

Autonomous Navigation, Deep Reinforcement Learning +4

Decentralized Non-communicating Multiagent Collision Avoidance with Deep Reinforcement Learning

no code implementations26 Sep 2016 Yu Fan Chen, Miao Liu, Michael Everett, Jonathan P. How

Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent's intent (e.g., goal) is unobservable to the others.

Multiagent Systems
