Search Results for author: Dmitry Kalashnikov

Found 13 papers, 5 papers with code

Hybrid Random Features

1 code implementation • ICLR 2022 • Krzysztof Choromanski, Haoxian Chen, Han Lin, Yuanzhe Ma, Arijit Sehanobish, Deepali Jain, Michael S. Ryoo, Jake Varley, Andy Zeng, Valerii Likhosherstov, Dmitry Kalashnikov, Vikas Sindhwani, Adrian Weller

We propose a new class of random feature methods for linearizing softmax and Gaussian kernels called hybrid random features (HRFs) that automatically adapt the quality of kernel estimation to provide the most accurate approximation in the defined regions of interest.

Tasks: Benchmarking
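
The paper's hybrid estimators combine several base random-feature mechanisms; as a point of reference only, the sketch below shows the classical random Fourier feature approximation of the Gaussian kernel, not the paper's HRF construction. The function names, dimensions, and bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_rff(x, y, num_features=4096, sigma=1.0, seed=0):
    """Classical random Fourier feature estimate of the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    # Sample frequencies from the kernel's spectral density and random phases.
    w = rng.normal(scale=1.0 / sigma, size=(num_features, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)

    def phi(v):
        return np.sqrt(2.0 / num_features) * np.cos(w @ v + b)

    return float(phi(x) @ phi(y))

x, y = np.ones(8), np.zeros(8)
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)
print(exact, gaussian_rff(x, y))   # the estimate should be close to the exact value
```

Increasing num_features tightens this estimate uniformly; per the abstract, HRFs instead adapt the estimator so that accuracy is concentrated in the regions of interest.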

MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale

no code implementations • 16 Apr 2021 • Dmitry Kalashnikov, Jacob Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman

In this paper, we study how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously, sharing exploration, experience, and representations across tasks.

Tasks: reinforcement-learning, Reinforcement Learning (RL)
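
MT-Opt is a large distributed data-collection and training system; as a hedged sketch of the experience-sharing idea only, the snippet below conditions a single Q-function on a one-hot task ID and trains it on transitions pooled across tasks. The linear Q-function, shapes, and update rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sizes: 3 tasks, observations of dimension 4, 2 discrete actions.
NUM_TASKS, OBS_DIM, NUM_ACTIONS, GAMMA = 3, 4, 2, 0.99
rng = np.random.default_rng(0)

def q_values(params, obs, task_id):
    """Linear Q-function conditioned on a one-hot task ID (illustrative only)."""
    features = np.concatenate([obs, np.eye(NUM_TASKS)[task_id]])
    return features @ params                              # -> (NUM_ACTIONS,)

def td_target(params, reward, next_obs, task_id, done):
    """Standard Q-learning target; the task ID routes the shared Q-function."""
    bootstrap = 0.0 if done else np.max(q_values(params, next_obs, task_id))
    return reward + GAMMA * bootstrap

params = np.zeros((OBS_DIM + NUM_TASKS, NUM_ACTIONS))
# Transitions from *different* tasks live in one pooled replay buffer.
replay = [(rng.normal(size=OBS_DIM), 0, 1.0, rng.normal(size=OBS_DIM), 1, False),
          (rng.normal(size=OBS_DIM), 1, 0.0, rng.normal(size=OBS_DIM), 0, True)]
for obs, action, reward, next_obs, task_id, done in replay:
    error = td_target(params, reward, next_obs, task_id, done) \
        - q_values(params, obs, task_id)[action]
    grad_features = np.concatenate([obs, np.eye(NUM_TASKS)[task_id]])
    params[:, action] += 0.1 * error * grad_features      # one SGD step on the TD error
```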

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills

no code implementations • 15 Apr 2021 • Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine

We consider the problem of learning useful robotic skills from previously collected offline data without access to manually specified rewards or additional online exploration, a setting that is becoming increasingly important for scaling robot learning by reusing past robotic data.

Tasks: Q-Learning, reinforcement-learning, +1
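
One common ingredient for learning goal-conditioned skills from reward-free offline data is hindsight goal relabeling; the sketch below illustrates that idea on a toy state space with assumed names, and is not the paper's actual objective.

```python
import numpy as np

def relabel_with_hindsight(trajectory, rng, goal_tolerance=1e-3):
    """Turn reward-free logged transitions into goal-conditioned examples by
    treating a state reached later in the same trajectory as the goal.
    (Illustrative sketch only; not the paper's exact procedure.)"""
    examples = []
    for t, (obs, action, next_obs) in enumerate(trajectory):
        # Pick a future state from the same trajectory as the relabeled goal.
        future = rng.integers(t, len(trajectory))
        goal = trajectory[future][2]                      # next_obs at the future step
        reached = np.linalg.norm(next_obs - goal) < goal_tolerance
        reward = 1.0 if reached else 0.0                  # sparse goal-reaching reward
        examples.append((obs, action, goal, reward, next_obs))
    return examples

rng = np.random.default_rng(0)
states = [np.array([float(i)]) for i in range(5)]
trajectory = [(states[i], 0, states[i + 1]) for i in range(4)]
print(relabel_with_hindsight(trajectory, rng)[0])
```

A goal-conditioned Q-function trained on such relabeled examples never needs manually specified rewards or online exploration, which matches the setting described in the abstract.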

Visionary: Vision architecture discovery for robot learning

no code implementations • 26 Mar 2021 • Iretiayo Akinola, Anelia Angelova, Yao Lu, Yevgen Chebotar, Dmitry Kalashnikov, Jacob Varley, Julian Ibarz, Michael S. Ryoo

We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low-dimensional action inputs and high-dimensional visual inputs.

Tasks: Neural Architecture Search, Robot Manipulation
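
As a loose illustration of searching over how low-dimensional actions interact with high-dimensional visual features, the sketch below defines two hypothetical fusion operations and picks one with a placeholder score; the candidate set, scoring, and names are all assumptions rather than the paper's search space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate operations for fusing a low-dimensional action vector
# with high-dimensional visual features.
def fuse_concat(visual, action, proj):
    return np.concatenate([visual, action])

def fuse_modulate(visual, action, proj):
    # Project the action up to the visual feature size and gate the features.
    return visual * np.tanh(proj @ action)

def evaluate(candidate):
    """Stand-in for training a policy with this fusion op and scoring it;
    a real search would measure task success, not this placeholder."""
    visual, action = rng.normal(size=64), rng.normal(size=4)
    proj = rng.normal(size=(64, 4))
    return -float(np.var(candidate(visual, action, proj)))

# A trivial "architecture search": keep the best-scoring fusion operation.
best = max([fuse_concat, fuse_modulate], key=evaluate)
print(best.__name__)
```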

Disentangled Planning and Control in Vision Based Robotics via Reward Machines

no code implementations • 28 Dec 2020 • Alberto Camacho, Jacob Varley, Deepali Jain, Atil Iscen, Dmitry Kalashnikov

In this work we augment a Deep Q-Learning agent with a Reward Machine (DQRM) to increase the speed of learning vision-based policies for robot tasks, and to overcome some of the limitations of DQN that prevent it from converging to good-quality policies.

Tasks: Q-Learning
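
A reward machine is a finite-state machine over high-level events whose state and reward supplement the raw observation; the sketch below is a minimal, made-up example of that structure (the events, rewards, and states are hypothetical, not the paper's).

```python
# Minimal sketch of a reward machine: a finite-state machine over high-level
# events whose state is given to the Q-learning agent alongside the image
# observation.  Events and rewards below are invented for illustration.
class RewardMachine:
    def __init__(self):
        # (rm_state, event) -> (next_rm_state, reward)
        self.transitions = {
            ("start", "grasped_object"): ("holding", 0.1),
            ("holding", "object_in_bin"): ("done", 1.0),
        }
        self.state = "start"

    def step(self, events):
        """Advance on any detected event; unmatched events leave the state alone."""
        reward = 0.0
        for event in events:
            key = (self.state, event)
            if key in self.transitions:
                self.state, reward = self.transitions[key]
        return self.state, reward

rm = RewardMachine()
print(rm.step({"grasped_object"}))   # ('holding', 0.1)
print(rm.step({"object_in_bin"}))    # ('done', 1.0)
```

In a DQRM-style agent, the Q-network would then condition on the pair (image observation, reward-machine state), which is what lets planning over the machine's states be disentangled from low-level control.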

Thinking While Moving: Deep Reinforcement Learning with Concurrent Control

no code implementations • ICLR 2020 • Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog

We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action.

Tasks: reinforcement-learning, Reinforcement Learning (RL), +1
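
A common way to handle concurrent control is to let the policy see the action that is still executing and how stale its observation is; the sketch below augments the observation accordingly. The field names and the exact augmentation are assumptions, not the paper's formulation.

```python
import numpy as np

def concurrent_observation(obs, previous_action, time_since_obs):
    """Augment the observation with the action still being executed and how
    long ago the observation was captured, so the policy can account for the
    system continuing to evolve while it selects the next action.
    (Illustrative sketch only.)"""
    return np.concatenate([obs, previous_action, [time_since_obs]])

obs = np.zeros(4)                          # hypothetical robot state
previous_action = np.array([0.2, -0.1])    # command currently being executed
print(concurrent_observation(obs, previous_action, time_since_obs=0.05).shape)  # (7,)
```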

Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras

no code implementations • 21 Feb 2020 • Iretiayo Akinola, Jacob Varley, Dmitry Kalashnikov

In this work, we present an effective multi-view approach to closed-loop end-to-end learning of precise manipulation tasks that are 3D in nature.

Tasks: Camera Calibration
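
As a rough sketch of fusing several uncalibrated camera views in feature space (so no explicit camera calibration is required), the snippet below encodes each view independently and concatenates the results; the stand-in encoders and shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_view(image, weights):
    """Stand-in per-camera encoder (a real system would use a CNN)."""
    return np.tanh(weights @ image.ravel())

def multi_view_features(images, weights_per_view):
    # Fuse features from several uncalibrated cameras by concatenation;
    # no camera extrinsics are needed because fusion happens in feature space.
    return np.concatenate([encode_view(img, w)
                           for img, w in zip(images, weights_per_view)])

images = [rng.normal(size=(8, 8)) for _ in range(3)]        # three camera views
weights = [rng.normal(size=(16, 64)) for _ in range(3)]     # hypothetical encoders
print(multi_view_features(images, weights).shape)           # (48,)
```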
