Search Results for author: Nishanth Kumar

Found 17 papers, 5 papers with code

Seeing is Believing: Belief-Space Planning with Foundation Models as Uncertainty Estimators

no code implementations • 4 Apr 2025 • Linfeng Zhao, Willie McClinton, Aidan Curtis, Nishanth Kumar, Tom Silver, Leslie Pack Kaelbling, Lawson L. S. Wong

A promising approach to address these challenges involves planning with a library of parameterized skills, where a task planner sequences these skills to achieve goals specified in structured languages, such as logical expressions over symbolic facts.
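The skill-sequencing idea above can be sketched as a tiny symbolic task planner: skills expose preconditions and effects over symbolic facts, and the planner searches for a sequence of skills that makes the goal facts true. This is an illustrative sketch only; the skill and predicate names (`pick`, `place`, `on_table`, etc.) are hypothetical, not from the paper.

```python
# Minimal sketch of sequencing parameterized skills with a task planner.
# Skill and fact names are hypothetical; only the structure is the point.
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    name: str
    preconditions: frozenset  # symbolic facts required before execution
    add_effects: frozenset    # facts the skill makes true
    del_effects: frozenset    # facts the skill makes false

def plan(init, goal, skills, max_depth=10):
    """Breadth-first search over skill sequences until the goal facts hold."""
    frontier = deque([(init, [])])
    seen = {init}
    while frontier:
        state, seq = frontier.popleft()
        if goal <= state:
            return seq
        if len(seq) >= max_depth:
            continue
        for sk in skills:
            if sk.preconditions <= state:
                nxt = (state - sk.del_effects) | sk.add_effects
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, seq + [sk.name]))
    return None

skills = [
    Skill("pick(block)",
          frozenset({"on_table(block)", "hand_empty"}),
          frozenset({"holding(block)"}),
          frozenset({"on_table(block)", "hand_empty"})),
    Skill("place(block, shelf)",
          frozenset({"holding(block)"}),
          frozenset({"on(block, shelf)", "hand_empty"}),
          frozenset({"holding(block)"})),
]
print(plan(frozenset({"on_table(block)", "hand_empty"}),
           frozenset({"on(block, shelf)"}), skills))
# → ['pick(block)', 'place(block, shelf)']
```

In the paper's setting each skill is additionally parameterized (e.g. grasp pose), so each symbolic step must still be grounded with continuous parameters before execution.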

Predicate Invention from Pixels via Pretrained Vision-Language Models

no code implementations • 31 Dec 2024 • Ashay Athalye, Nishanth Kumar, Tom Silver, Yichao Liang, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Our aim is to learn to solve long-horizon decision-making problems in highly variable, combinatorially complex robotics domains given raw sensor input in the form of images.

Decision Making

VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning

no code implementations • 30 Oct 2024 • Yichao Liang, Nishanth Kumar, Hao Tang, Adrian Weller, Joshua B. Tenenbaum, Tom Silver, João F. Henriques, Kevin Ellis

Broadly intelligent agents should form task-specific abstractions that selectively expose the essential elements of a task, while abstracting away the complexity of the raw sensorimotor space.

Hierarchical Reinforcement Learning • Language Modeling • +2

Learning to Bridge the Gap: Efficient Novelty Recovery with Planning and Reinforcement Learning

no code implementations • 28 Sep 2024 • Alicia Li, Nishanth Kumar, Tomás Lozano-Pérez, Leslie Kaelbling

We introduce a simple formulation for such learning, where the RL problem is constructed with a special "CallPlanner" action that terminates the bridge policy and hands control of the agent back to the planner.
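The control-handoff construction can be sketched in a few lines: the bridge policy's action space is augmented with one extra symbol, and choosing it ends the RL episode and resumes symbolic planning. The environment, policy, and planner interfaces here are hypothetical stand-ins, not the paper's code.

```python
# Sketch of the "CallPlanner" handoff described above. All interfaces
# (env, bridge_policy, planner) are hypothetical; the point is that one
# special action terminates the bridge policy and returns control.
CALL_PLANNER = "CallPlanner"

def run_bridge_episode(env, bridge_policy, planner, max_steps=100):
    """Run the bridge policy until it emits CallPlanner (or times out),
    then hand the current state back to the task planner."""
    state = env.reset()
    for _ in range(max_steps):
        action = bridge_policy(state)
        if action == CALL_PLANNER:
            # Terminate the bridge policy; resume symbolic planning.
            return planner(state)
        state = env.step(action)
    return planner(state)  # fall back to the planner on timeout
```

During RL training, the reward for choosing CallPlanner would reflect whether the planner can actually make progress from the state reached, which is what lets the bridge policy learn *when* to hand control back.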

Reinforcement Learning (RL)

Trust the PRoC3S: Solving Long-Horizon Robotics Problems with LLMs and Constraint Satisfaction

no code implementations • 8 Jun 2024 • Aidan Curtis, Nishanth Kumar, Jing Cao, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Recent developments in pretrained large language models (LLMs) applied to robotics have demonstrated their capacity for sequencing a set of discrete skills to achieve open-ended goals in simple robotic tasks.

Practice Makes Perfect: Planning to Learn Skill Parameter Policies

no code implementations • 22 Feb 2024 • Nishanth Kumar, Tom Silver, Willie McClinton, Linfeng Zhao, Stephen Proulx, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Jennifer Barry

We consider a setting where a robot is initially equipped with (1) a library of parameterized skills, (2) an AI planner for sequencing together the skills given a goal, and (3) a very general prior distribution for selecting skill parameters.

Active Learning • Decision Making

Learning Efficient Abstract Planning Models that Choose What to Predict

1 code implementation • 16 Aug 2022 • Nishanth Kumar, Willie McClinton, Rohan Chitnis, Tom Silver, Tomás Lozano-Pérez, Leslie Pack Kaelbling

An effective approach to solving long-horizon tasks in robotics domains with continuous state and action spaces is bilevel planning, wherein a high-level search over an abstraction of an environment is used to guide low-level decision-making.
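The bilevel structure described above can be sketched as a nested loop: an outer search proposes abstract plan skeletons, and an inner refiner tries to turn each abstract step into continuous low-level actions. The helpers here (`abstract_plans`, `refine`) are hypothetical placeholders for the high-level planner and the low-level sampler/motion planner.

```python
# Illustrative bilevel planning loop. The abstract-plan generator and
# low-level refiner are hypothetical; only the structure is the point.
def bilevel_plan(abstract_plans, refine):
    """Search over abstract plan skeletons; for each, try to refine every
    abstract step into low-level actions. Return the first skeleton whose
    steps all refine successfully."""
    for skeleton in abstract_plans:
        refined = []
        for step in skeleton:
            low_level = refine(step)   # e.g. motion planning / sampling
            if low_level is None:      # refinement failed: the abstraction
                break                  # promised something infeasible
            refined.append(low_level)
        else:
            return refined             # full low-level plan found
    return None
```

The paper's contribution concerns *which* abstract model to learn so that the skeletons proposed by the outer loop rarely fail in the inner one.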

Decision Making • Operator Learning

Predicate Invention for Bilevel Planning

1 code implementation • 17 Mar 2022 • Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Joshua Tenenbaum

Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective.

PGMax: Factor Graphs for Discrete Probabilistic Graphical Models and Loopy Belief Propagation in JAX

2 code implementations • 8 Feb 2022 • Guangyao Zhou, Antoine Dedieu, Nishanth Kumar, Wolfgang Lehrach, Miguel Lázaro-Gredilla, Shrinu Kushagra, Dileep George

PGMax is an open-source Python package for (a) easily specifying discrete Probabilistic Graphical Models (PGMs) as factor graphs; and (b) automatically running efficient and scalable loopy belief propagation (LBP) in JAX.
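For intuition, the sum-product updates that belief propagation performs can be written from scratch for a two-variable model. This is *not* the PGMax API; it is a plain NumPy sketch of the message passing that PGMax runs at scale (and with loops) in JAX.

```python
# Not the PGMax API: a from-scratch NumPy sketch of sum-product message
# passing on a two-variable model p(x1, x2) ∝ u1(x1) * u2(x2) * f(x1, x2).
import numpy as np

def sum_product_marginal(unaries, pairwise):
    """Marginals via one round of message passing (exact for this
    tree-structured model; loopy BP iterates the same updates)."""
    u1, u2 = unaries
    # Message from x1 through the pairwise factor to x2, and vice versa.
    m1_to_2 = pairwise.T @ u1
    m2_to_1 = pairwise @ u2
    b1 = u1 * m2_to_1
    b2 = u2 * m1_to_2
    return b1 / b1.sum(), b2 / b2.sum()

u1 = np.array([0.9, 0.1])          # x1 strongly prefers state 0
u2 = np.array([0.5, 0.5])          # x2 is uninformative on its own
agree = np.array([[0.8, 0.2],      # pairwise factor favouring x1 == x2
                  [0.2, 0.8]])
b1, b2 = sum_product_marginal((u1, u2), agree)
print(b2)  # x2 inherits x1's preference for state 0
```

On graphs with cycles these updates are iterated to a fixed point ("loopy" BP), which is the regime PGMax parallelizes on accelerators.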

Just Label What You Need: Fine-Grained Active Selection for Perception and Prediction through Partially Labeled Scenes

no code implementations • 8 Apr 2021 • Sean Segal, Nishanth Kumar, Sergio Casas, Wenyuan Zeng, Mengye Ren, Jingkang Wang, Raquel Urtasun

As data collection is often significantly cheaper than labeling in this domain, the decision of which subset of examples to label can have a profound impact on model performance.

Active Learning

Task Scoping: Generating Task-Specific Abstractions for Planning in Open-Scope Models

no code implementations • 17 Oct 2020 • Michael Fishman, Nishanth Kumar, Cameron Allen, Natasha Danas, Michael Littman, Stefanie Tellex, George Konidaris

Unfortunately, planning to solve any specific task using an open-scope model is computationally intractable, even for state-of-the-art methods, due to the many states and actions that are necessarily present in the model but irrelevant to that problem.

Minecraft

Learning Deep Parameterized Skills from Demonstration for Re-targetable Visuomotor Control

2 code implementations • 23 Oct 2019 • Jonathan Chang, Nishanth Kumar, Sean Hastings, Aaron Gokaslan, Diego Romeres, Devesh Jha, Daniel Nikovski, George Konidaris, Stefanie Tellex

We demonstrate that our model trained on 33% of the possible goals is able to generalize to more than 90% of the targets in the scene for both simulation and robot experiments.

Rate of Change Analysis for Interestingness Measures

no code implementations • 14 Dec 2017 • Nandan Sudarsanam, Nishanth Kumar, Abhishek Sharma, Balaraman Ravindran

We present a comprehensive analysis of 50 interestingness measures and classify them in accordance with the two properties.

General Classification
