Search Results for author: Rishi Veerapaneni

Found 6 papers, 1 paper with code

Scaling Lifelong Multi-Agent Path Finding to More Realistic Settings: Research Challenges and Opportunities

no code implementations • 24 Apr 2024 • He Jiang, Yulun Zhang, Rishi Veerapaneni, Jiaoyang Li

We present future directions such as developing more competitive rule-based and anytime MAPF algorithms and parallelizing state-of-the-art MAPF algorithms.

Improving Learnt Local MAPF Policies with Heuristic Search

no code implementations • 29 Mar 2024 • Rishi Veerapaneni, Qian Wang, Kevin Ren, Arthur Jakobsson, Jiaoyang Li, Maxim Likhachev

Multi-agent path finding (MAPF) is the problem of finding collision-free paths for a team of agents to reach their goal locations.

Multi-Agent Path Finding
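
As a minimal illustration of the MAPF setting described above, the sketch below checks a candidate solution for vertex and edge (swap) conflicts between agent paths on a grid. The function names, the wait-at-goal convention, and the coordinate representation are assumptions made for this sketch, not the paper's implementation.

```python
# Minimal sketch (illustrative only): validate a MAPF solution by checking
# that no two agents occupy the same cell or swap cells at the same timestep.
from itertools import combinations

def has_conflict(path_a, path_b):
    """Return True if two single-agent paths collide (vertex or edge conflict)."""
    horizon = max(len(path_a), len(path_b))
    pad = lambda p, t: p[t] if t < len(p) else p[-1]  # assume agents wait at goal
    for t in range(horizon):
        if pad(path_a, t) == pad(path_b, t):          # vertex conflict
            return True
        if t > 0 and pad(path_a, t) == pad(path_b, t - 1) \
                 and pad(path_a, t - 1) == pad(path_b, t):  # edge (swap) conflict
            return True
    return False

def is_valid_mapf_solution(paths):
    """A solution is collision-free if no pair of agents conflicts."""
    return not any(has_conflict(pa, pb) for pa, pb in combinations(paths, 2))

# Example: two agents, coordinates are (row, col) per timestep.
paths = [[(0, 0), (0, 1), (0, 2)],
         [(1, 2), (1, 1), (1, 0)]]
print(is_valid_mapf_solution(paths))  # True: they never share a cell or swap
```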

Bidirectional Temporal Plan Graph: Enabling Switchable Passing Orders for More Efficient Multi-Agent Path Finding Plan Execution

no code implementations • 30 Dec 2023 • Yifan Su, Rishi Veerapaneni, Jiaoyang Li

To overcome this issue, we introduce a new graphical representation called a Bidirectional Temporal Plan Graph (BTPG), which allows switching passing orders during execution to avoid unnecessary waiting time.

Multi-Agent Path Finding
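
A rough sketch of the switchable passing-order idea in the abstract above, assuming a networkx digraph whose inter-agent precedence edges may be reversed at execution time as long as the reversal keeps the graph acyclic. All names and the graph construction are hypothetical; this is not the paper's BTPG code.

```python
# Illustrative sketch: a temporal plan graph whose inter-agent "passing order"
# edges are marked switchable and may be reversed if execution stays acyclic.
import networkx as nx

def build_tpg(passing_orders, path_lengths):
    """Nodes are (agent, step); edges are precedence constraints."""
    g = nx.DiGraph()
    for agent, length in enumerate(path_lengths):
        for step in range(length - 1):                  # intra-agent order
            g.add_edge((agent, step), (agent, step + 1))
    for before, after in passing_orders:                # inter-agent order
        g.add_edge(before, after, switchable=True)
    return g

def try_switch(g, before, after):
    """Reverse one switchable passing-order edge if the graph stays acyclic."""
    if not g.has_edge(before, after) or not g.edges[before, after].get("switchable"):
        return False
    g.remove_edge(before, after)
    g.add_edge(after, before, switchable=True)
    if nx.is_directed_acyclic_graph(g):
        return True
    # revert: this switch would create a cyclic (deadlocked) execution order
    g.remove_edge(after, before)
    g.add_edge(before, after, switchable=True)
    return False

tpg = build_tpg(passing_orders=[((0, 2), (1, 1))], path_lengths=[4, 3])
print(try_switch(tpg, (0, 2), (1, 1)))  # True here: the other agent may pass first
```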

Non-Blocking Batch A* (Technical Report)

no code implementations • 15 Aug 2022 • Rishi Veerapaneni, Maxim Likhachev

We show how this subtle but important change can lead to substantial reductions in expansions compared to the current blocking alternative, and observe that the performance gain is related to the information difference between the batch-computed NN heuristic and the fast non-NN heuristic.

Blocking
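
A toy sketch of the non-blocking idea on a small empty grid: generated nodes enter the open list immediately with a cheap Manhattan heuristic, and a batched stand-in for the learned (NN) heuristic re-prioritizes them later, so the search never waits on the batch. The grid, both heuristics, and the batch size are assumptions for this sketch, not the paper's setup.

```python
# Illustrative sketch: A*-style search where a slow batched heuristic refines
# priorities asynchronously-in-spirit, instead of blocking each expansion.
import heapq

GOAL = (4, 4)
fast_h = lambda s: abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])   # Manhattan
slow_batch_h = lambda batch: [1.2 * fast_h(s) for s in batch]  # stand-in "NN"

def nonblocking_batch_search(start, batch_size=4):
    open_list = [(fast_h(start), 0, start)]        # (f, g, state)
    g_cost, pending, expansions = {start: 0}, [], 0
    while open_list:
        f, g, s = heapq.heappop(open_list)
        if s == GOAL:
            return g, expansions
        expansions += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (s[0] + dx, s[1] + dy)
            if not (0 <= n[0] <= 4 and 0 <= n[1] <= 4):
                continue
            if g + 1 < g_cost.get(n, float("inf")):
                g_cost[n] = g + 1
                heapq.heappush(open_list, (g + 1 + fast_h(n), g + 1, n))
                pending.append(n)                   # queued for the batch heuristic
        if len(pending) >= batch_size:              # batch ready: re-prioritize
            for n, h in zip(pending, slow_batch_h(pending)):
                heapq.heappush(open_list, (g_cost[n] + h, g_cost[n], n))
            pending.clear()
    return None

print(nonblocking_batch_search((0, 0)))  # (path cost, expansions) under this toy setup
```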

Effective Integration of Weighted Cost-to-go and Conflict Heuristic within Suboptimal CBS

no code implementations • 23 May 2022 • Rishi Veerapaneni, Tushar Kusnur, Maxim Likhachev

In this paper, we show that, contrary to prevailing CBS beliefs, a weighted cost-to-go heuristic can be used effectively alongside the conflict heuristic in two possible variants.

Multi-Agent Path Finding
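
A hedged illustration of how a suboptimal-CBS-style search might order nodes using a weighted cost-to-go heuristic together with a conflict heuristic. The weight, node fields, and the two priority rules below are assumptions made for this sketch; they are not necessarily the two variants proposed in the paper.

```python
# Illustrative sketch: two candidate ways to combine a weighted cost-to-go
# heuristic with a conflict heuristic when ordering search nodes.
from dataclasses import dataclass, field

W = 2.0  # assumed suboptimality weight on the cost-to-go heuristic

@dataclass(order=True)
class Node:
    sort_key: tuple = field(init=False)
    g: float = 0.0          # cost so far
    h: float = 0.0          # cost-to-go heuristic
    conflicts: int = 0      # conflict heuristic: collisions with other agents

    def __post_init__(self):
        # Variant A: weighted f-value first, conflicts break ties.
        self.sort_key = (self.g + W * self.h, self.conflicts)

def focal_order(nodes, w=W):
    """Variant B: focal-style ordering by conflicts within a w-bounded focal list."""
    f_min = min(n.g + n.h for n in nodes)
    focal = [n for n in nodes if n.g + n.h <= w * f_min]
    return sorted(focal, key=lambda n: (n.conflicts, n.g + n.h))

nodes = [Node(g=4, h=3, conflicts=2), Node(g=5, h=2, conflicts=0), Node(g=2, h=6, conflicts=1)]
print(sorted(nodes)[0])        # best node under Variant A
print(focal_order(nodes)[0])   # best node under Variant B
```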

Entity Abstraction in Visual Model-Based Reinforcement Learning

1 code implementation • 28 Oct 2019 • Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua B. Tenenbaum, Sergey Levine

This paper tests the hypothesis that modeling a scene in terms of entities and their local interactions, as opposed to modeling the scene globally, provides a significant benefit in generalizing to physical tasks in a combinatorial space the learner has not encountered before.

Model-based Reinforcement Learning • Object
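
A simplified sketch of the entity-factored modeling idea in the abstract above: the scene is a set of per-entity latent vectors, and one shared function predicts each entity's next state from itself plus its pairwise interactions with the others, rather than from a single global scene vector. The shapes and the linear "networks" below are placeholders, not OP3's actual architecture.

```python
# Illustrative sketch: an entity-factored transition model with shared
# per-entity and pairwise-interaction functions.
import numpy as np

rng = np.random.default_rng(0)
D = 8                                       # latent size per entity (assumed)
W_self = rng.normal(size=(D, D)) * 0.1      # shared per-entity transition weights
W_pair = rng.normal(size=(2 * D, D)) * 0.1  # shared pairwise-interaction weights

def step_entities(entities):
    """entities: (K, D) array of per-entity latents -> next latents, same shape."""
    K = entities.shape[0]
    nxt = np.empty_like(entities)
    for i in range(K):
        # aggregate interactions from every other entity with one shared function
        pair_msgs = [np.tanh(np.concatenate([entities[i], entities[j]]) @ W_pair)
                     for j in range(K) if j != i]
        interaction = np.sum(pair_msgs, axis=0) if pair_msgs else np.zeros(D)
        nxt[i] = np.tanh(entities[i] @ W_self + interaction)
    return nxt

scene = rng.normal(size=(3, D))    # three entities
print(step_entities(scene).shape)  # (3, 8): the model is factored per entity
```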
