Search Results for author: Avi Singh

Found 18 papers, 4 papers with code

Improving Large Language Model Fine-tuning for Solving Math Problems

no code implementations16 Oct 2023 Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, Peter J. Liu

With these methods, we present a thorough empirical study on a series of PaLM 2 models and find: (1) The quality and style of the step-by-step solutions used for fine-tuning can make a significant impact on the model performance; (2) While solution re-ranking and majority voting are both effective for improving the model performance when used separately, they can also be used together for an even greater performance boost; (3) Multi-task fine-tuning that sequentially separates the solution generation and evaluation tasks can offer improved performance compared with the solution fine-tuning baseline.

Tasks: Language Modelling, Large Language Model, +2
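The re-ranking and majority-voting ideas mentioned in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the helper names and the sample answers are hypothetical, and a real system would sample solutions from the fine-tuned model and score them with a learned re-ranker.

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most frequent final answer among sampled solutions."""
    return Counter(answers).most_common(1)[0][0]

def weighted_vote(answers, scores):
    """Combine re-ranking with voting: sum re-ranker scores per answer,
    then pick the answer with the highest total score."""
    totals = {}
    for ans, score in zip(answers, scores):
        totals[ans] = totals.get(ans, 0.0) + score
    return max(totals, key=totals.get)

# Five sampled solutions to one problem, reduced to their final answers,
# plus hypothetical re-ranker scores for each sample.
samples = ["42", "41", "42", "42", "7"]
scores = [0.9, 0.2, 0.8, 0.7, 0.1]
print(majority_vote(samples))          # 42
print(weighted_vote(samples, scores))  # 42
```

Using the two together, as finding (2) suggests, amounts to weighting each vote by the re-ranker's confidence rather than counting all samples equally.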

Visual Backtracking Teleoperation: A Data Collection Protocol for Offline Image-Based Reinforcement Learning

no code implementations5 Oct 2022 David Brandfonbrener, Stephen Tu, Avi Singh, Stefan Welker, Chad Boodoo, Nikolai Matni, Jake Varley

We find that by adjusting the data collection process we improve the quality of both the learned value functions and policies over a variety of baseline methods for data collection.

Tasks: Continuous Control, Reinforcement Learning (RL)

Parrot: Data-Driven Behavioral Priors for Reinforcement Learning

no code implementations ICLR 2021 Avi Singh, Huihan Liu, Gaoyue Zhou, Albert Yu, Nicholas Rhinehart, Sergey Levine

Reinforcement learning provides a general framework for flexible decision making and control, but requires extensive data collection for each new task that an agent needs to learn.

Tasks: Decision Making, Reinforcement Learning, +1

COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning

1 code implementation27 Oct 2020 Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, Sergey Levine

Reinforcement learning has been applied to a wide variety of robotics problems, but most of such applications involve collecting data from scratch for each new task.

Tasks: Reinforcement Learning (RL)

The Ingredients of Real World Robotic Reinforcement Learning

no code implementations ICLR 2020 Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning.

Tasks: Reinforcement Learning (RL)

The Ingredients of Real-World Robotic Reinforcement Learning

no code implementations27 Apr 2020 Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

In this work, we discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.

Tasks: Reinforcement Learning (RL)

Scalable Multi-Task Imitation Learning with Autonomous Improvement

no code implementations25 Feb 2020 Avi Singh, Eric Jang, Alexander Irpan, Daniel Kappler, Murtaza Dalal, Sergey Levine, Mohi Khansari, Chelsea Finn

In this work, we target this challenge, aiming to build an imitation learning system that can continuously improve through autonomous data collection, while simultaneously avoiding the explicit use of reinforcement learning, to maintain the stability, simplicity, and scalability of supervised imitation.

Tasks: Imitation Learning, Reinforcement Learning, +1

End-to-End Robotic Reinforcement Learning without Reward Engineering

3 code implementations16 Apr 2019 Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, Sergey Levine

In this paper, we propose an approach for removing the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries, where the robot shows the user a state and asks for a label to determine whether that state represents successful completion of the task.

Tasks: Reinforcement Learning (RL)
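The core idea in this abstract, replacing a hand-engineered reward with a learned success classifier queried on the robot's states, can be sketched as follows. Everything here is a stand-in: the scalar "states", the training routine, and the function names are illustrative, not the paper's actual method or code.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_success_classifier(success_states, other_states, steps=2000, lr=0.1):
    """Toy logistic regression on scalar states: label 1 = user-provided
    success example, label 0 = a state from the robot's own experience.
    Returns a function mapping a state to p(success | state)."""
    w, b = 0.0, 0.0
    data = [(s, 1.0) for s in success_states] + [(s, 0.0) for s in other_states]
    for _ in range(steps):
        s, y = random.choice(data)
        p = sigmoid(w * s + b)
        w += lr * (y - p) * s
        b += lr * (y - p)
    return lambda state: sigmoid(w * state + b)

random.seed(0)
# The classifier's probability serves as the reward signal for RL.
reward_fn = train_success_classifier(
    success_states=[0.9, 1.0, 1.1],   # states the user labeled successful
    other_states=[-1.0, -0.8, 0.0],   # states visited during training
)
print(reward_fn(1.0) > reward_fn(-1.0))  # True: success-like states score higher
```

The "actively solicited queries" in the abstract correspond to periodically showing the user a new state, adding its label to `data`, and retraining, so the classifier stays calibrated as the policy explores.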

Few-Shot Goal Inference for Visuomotor Learning and Planning

no code implementations30 Sep 2018 Annie Xie, Avi Singh, Sergey Levine, Chelsea Finn

To that end, we formulate the few-shot objective learning problem, where the goal is to learn a task objective from only a few example images of successful end states for that task.

Tasks: Reinforcement Learning (RL), +1

Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition

no code implementations NeurIPS 2018 Justin Fu, Avi Singh, Dibya Ghosh, Larry Yang, Sergey Levine

We propose variational inverse control with events (VICE), which generalizes inverse reinforcement learning methods to cases where full demonstrations are not needed, such as when only samples of desired goal states are available.

Tasks: Continuous Control, Reinforcement Learning, +1

Divide-and-Conquer Reinforcement Learning

1 code implementation ICLR 2018 Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine

In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice.

Tasks: Policy Gradient Methods, Reinforcement Learning, +1
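The "slices" idea from this abstract, partition the initial state space and train one policy per slice, can be sketched as a dispatch mechanism. The equal-width binning over 1-D states and the stub policies below are illustrative stand-ins; the paper's actual partitioning and policy optimization are more involved.

```python
def make_slicer(initial_states, n_slices):
    """Bin 1-D initial states into equal-width slices; returns a function
    mapping a state to its slice index."""
    lo, hi = min(initial_states), max(initial_states)
    width = (hi - lo) / n_slices
    def slice_id(state):
        return min(int((state - lo) / width), n_slices - 1)
    return slice_id

def dispatch(policies, slice_id, state):
    """Route a start state to the policy trained on its slice of the
    initial state space."""
    return policies[slice_id(state)](state)

slice_id = make_slicer([0.0, 0.25, 0.5, 0.75, 1.0], n_slices=2)
# Stub policies standing in for the ensemble members trained per slice.
policies = {0: lambda s: "policy-left", 1: lambda s: "policy-right"}
print(dispatch(policies, slice_id, 0.1))  # policy-left
print(dispatch(policies, slice_id, 0.9))  # policy-right
```

The point of the ensemble is that each policy only has to master a narrow distribution of starts, which is an easier optimization problem than one policy covering all of them.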

GPLAC: Generalizing Vision-Based Robotic Skills using Weakly Labeled Images

no code implementations ICCV 2017 Avi Singh, Larry Yang, Sergey Levine

We show that pairing interaction data from just a single environment with a diverse dataset of weakly labeled data results in greatly improved generalization to unseen environments, and show that this generalization depends on both the auxiliary objective and the attentional architecture that we propose.

Tasks: Binary Classification, Domain Adaptation

Visual Dialog

11 code implementations CVPR 2017 Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra

We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content.

Tasks: Chatbot, Retrieval, +1

Brain4Cars: Car That Knows Before You Do via Sensory-Fusion Deep Learning Architecture

no code implementations5 Jan 2016 Ashesh Jain, Hema S. Koppula, Shane Soh, Bharad Raghavan, Avi Singh, Ashutosh Saxena

We introduce a diverse data set with 1180 miles of natural freeway and city driving, and show that we can anticipate maneuvers 3.5 seconds before they occur in real time, with a precision and recall of 90.5% and 87.4%, respectively.
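As a quick reminder of what the reported precision and recall measure, here each anticipated maneuver is either a true positive (correctly anticipated), a false positive (predicted but did not occur), or a false negative (occurred but missed). The counts below are hypothetical, chosen only to reproduce numbers close to the abstract's; they are not from the paper.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts yielding values near the abstract's 90.5% / 87.4%.
p, r = precision_recall(tp=181, fp=19, fn=26)
print(round(p, 3), round(r, 3))  # 0.905 0.874
```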

Recurrent Neural Networks for Driver Activity Anticipation via Sensory-Fusion Architecture

no code implementations16 Sep 2015 Ashesh Jain, Avi Singh, Hema S. Koppula, Shane Soh, Ashutosh Saxena

We introduce a sensory-fusion architecture which jointly learns to anticipate and fuse information from multiple sensory streams.
