Search Results for author: Viraj Mehta

Found 9 papers, 1 paper with code

BATS: Best Action Trajectory Stitching

no code implementations · 26 Apr 2022 · Ian Char, Viraj Mehta, Adam Villaflor, John M. Dolan, Jeff Schneider

Past efforts to develop algorithms in this area have revolved around adding constraints to online reinforcement learning algorithms so that the learned policy's actions stay close to the logged data.

Reinforcement Learning

Variational autoencoders in the presence of low-dimensional data: landscape and implicit bias

1 code implementation · ICLR 2022 · Frederic Koehler, Viraj Mehta, Chenghui Zhou, Andrej Risteski

Recent work by Dai and Wipf (2020) proposes a two-stage training algorithm for VAEs, based on a conjecture that in standard VAE training the generator will converge to a solution with 0 variance which is correctly supported on the ground truth manifold.

An Experimental Design Perspective on Model-Based Reinforcement Learning

no code implementations · 9 Dec 2021 · Viraj Mehta, Biswajit Paria, Jeff Schneider, Stefano Ermon, Willie Neiswanger

In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning.

Continuous Control · Experimental Design · +2
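The acquisition idea mentioned in the abstract can be illustrated with a toy sketch. This is an assumption-laden stand-in, not the paper's method: it uses ensemble disagreement (predictive variance) as a common proxy for the expected information gain that Bayesian optimal experimental design would compute; the ensemble, weights, and candidate state-action pairs are all placeholders.

```python
import numpy as np

# Toy setup: 5 candidate state-action queries, each a 2-d vector,
# and an ensemble of 3 linear "dynamics models" (placeholder weights).
rng = np.random.default_rng(0)
candidates = rng.normal(size=(5, 2))
ensemble = [lambda q, w=w: q @ w for w in rng.normal(size=(3, 2))]

# Predictions of every model on every candidate, shape (3, 5).
preds = np.stack([[m(q) for q in candidates] for m in ensemble])

# Disagreement across the ensemble, used here as a proxy for
# informativeness; query the point the models disagree on most.
variances = preds.var(axis=0)
query = candidates[int(np.argmax(variances))]
```

Selecting the highest-variance query is only one cheap surrogate; the paper's actual criterion is derived from Bayesian experimental design.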

An Experimental Design Perspective on Exploration in Reinforcement Learning

no code implementations · ICLR 2022 · Viraj Mehta, Biswajit Paria, Jeff Schneider, Willie Neiswanger, Stefano Ermon

In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning.

Continuous Control · Experimental Design · +1

Representational aspects of depth and conditioning in normalizing flows

no code implementations · 2 Oct 2020 · Frederic Koehler, Viraj Mehta, Andrej Risteski

Normalizing flows are among the most popular paradigms in generative modeling, especially for images, primarily because we can efficiently evaluate the likelihood of a data point.
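The efficient likelihood evaluation the abstract refers to comes from the change-of-variables formula: log p(x) = log p_base(z) + log |det dz/dx|. As a minimal sketch (a single scalar affine flow, an assumption for illustration rather than anything from the paper):

```python
import numpy as np

def affine_flow_logpdf(x, a, b):
    """Exact log-likelihood under the flow z = (x - b) / a with a
    standard normal base distribution, via change of variables."""
    z = (x - b) / a                                 # invert the flow
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard normal log-density
    log_det = -np.log(np.abs(a))                    # log |dz/dx| for the affine map
    return log_base + log_det
```

For a = 1, b = 0 this reduces to the standard normal log-density at x; deeper flows compose such invertible maps and sum their log-determinant terms.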

Neural Dynamical Systems: Balancing Structure and Flexibility in Physical Prediction

no code implementations · 23 Jun 2020 · Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider

We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models in various gray-box settings which incorporates prior knowledge in the form of systems of ordinary differential equations.
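The gray-box idea described above can be sketched in a few lines. Everything here is a hypothetical placeholder rather than the NDS architecture: a known ODE term (linear damping, dx/dt = -k·x) supplies the prior knowledge, a learned residual would supply the flexibility, and a simple Euler integrator rolls the state forward.

```python
import numpy as np

def gray_box_step(x, dt, k, residual):
    """One Euler step combining a known ODE term with a learned correction."""
    known = -k * x                      # prior knowledge: known part of the ODE
    return x + dt * (known + residual(x))

def rollout(x0, dt, k, residual, steps):
    """Roll the gray-box dynamics forward from x0 for `steps` Euler steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(gray_box_step(xs[-1], dt, k, residual))
    return np.array(xs)

# With a zero residual, the rollout follows the known damped dynamics alone;
# in a trained model the residual would be a neural network fit to data.
traj = rollout(1.0, 0.1, 0.5, lambda x: 0.0, 10)
```

In practice the residual network's parameters are trained by backpropagating through the integrator; the Euler scheme here is only the simplest choice.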

Neural Dynamical Systems

no code implementations · ICLR Workshop DeepDiffEq 2019 · Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider

We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models which incorporates prior knowledge in the form of systems of ordinary differential equations.

Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision

no code implementations · 25 Jun 2018 · Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kurenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese

We perform both simulated and real-world experiments on two tool-based manipulation tasks: sweeping and hammering.

DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

no code implementations · 11 Aug 2017 · Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak, Christopher Choy, Silvio Savarese

We evaluate our approach on the ShapeNet dataset and show that: (a) the Free-Form Deformation layer is a powerful new building block for deep learning models that manipulate 3D data; (b) DeformNet combines this FFD layer with shape retrieval to produce smooth, detail-preserving 3D reconstructions of qualitatively plausible point clouds from a single query image; and (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches or outperforms their benchmarks by significant margins.

3D Reconstruction · 3D Shape Reconstruction
