Search Results for author: Dmitrii Krylov

Found 4 papers, 2 papers with code

Moonwalk: Inverse-Forward Differentiation

no code implementations · 22 Feb 2024 · Dmitrii Krylov, Armin Karamzade, Roy Fox

Our method, Moonwalk, has a time complexity linear in the depth of the network, unlike the quadratic time complexity of naïve forward differentiation, and empirically reduces computation time by several orders of magnitude without allocating more memory.
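The snippet contrasts Moonwalk's linear-in-depth cost with the quadratic cost of naïve forward-mode differentiation. The following is a minimal JAX sketch of that naïve baseline only (one Jacobian-vector product per parameter direction); it is not the Moonwalk method, and the toy network and names are illustrative assumptions.

```python
# Minimal sketch (not the Moonwalk algorithm): naive forward-mode gradient
# computation, the costly baseline the abstract contrasts against.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def mlp(params, x):
    # Toy deep network; its depth is what drives the cost discussed above.
    for w in params:
        x = jnp.tanh(x @ w)
    return jnp.sum(x)

def naive_forward_grad(params, x):
    flat, unravel = ravel_pytree(params)
    f = lambda theta: mlp(unravel(theta), x)
    # One JVP per parameter coordinate: each JVP costs about one forward
    # pass, so total work scales with (#parameters x depth), i.e. roughly
    # quadratically in depth when parameter count grows with depth.
    basis = jnp.eye(flat.size)
    grads = jnp.stack([jax.jvp(f, (flat,), (v,))[1] for v in basis])
    return unravel(grads)

key = jax.random.PRNGKey(0)
params = [0.1 * jax.random.normal(k, (4, 4)) for k in jax.random.split(key, 3)]
grad_tree = naive_forward_grad(params, jnp.ones((2, 4)))
```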

Align Your Intents: Offline Imitation Learning via Optimal Transport

no code implementations · 20 Feb 2024 · Maksim Bobrin, Nazar Buzun, Dmitrii Krylov, Dmitry V. Dylov

We report that AILOT outperforms state-of-the-art offline imitation learning algorithms on D4RL benchmarks and improves the performance of other offline RL algorithms on sparse-reward tasks.

D4RL · Imitation Learning · +2
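The entry describes aligning the agent's intents with the expert's via optimal transport. As a generic illustration of the kind of alignment step such methods build on, and not the AILOT algorithm itself, here is a minimal entropic-OT sketch using Sinkhorn iterations over hypothetical agent/expert state features.

```python
# Generic entropic optimal-transport coupling via Sinkhorn iterations.
# Shown only to illustrate the alignment primitive used by OT-based
# imitation methods; this is not the AILOT algorithm.
import jax
import jax.numpy as jnp

def sinkhorn_coupling(agent_feats, expert_feats, eps=0.05, n_iters=200):
    # Squared-Euclidean cost between agent and expert feature vectors.
    diff = agent_feats[:, None, :] - expert_feats[None, :, :]
    cost = jnp.sum(diff ** 2, axis=-1)
    n, m = cost.shape
    a, b = jnp.full(n, 1.0 / n), jnp.full(m, 1.0 / m)  # uniform marginals
    K = jnp.exp(-cost / eps)                           # Gibbs kernel
    u, v = jnp.ones(n), jnp.ones(m)
    for _ in range(n_iters):                           # alternating scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                 # transport plan

# Example: couple random 8-dimensional agent/expert state embeddings.
key_a, key_e = jax.random.split(jax.random.PRNGKey(1))
plan = sinkhorn_coupling(jax.random.normal(key_a, (32, 8)),
                         jax.random.normal(key_e, (64, 8)))
```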

Learning to Design Analog Circuits to Meet Threshold Specifications

1 code implementation · 25 Jul 2023 · Dmitrii Krylov, Pooya Khajeh, Junhan Ouyang, Thomas Reeves, Tongkai Liu, Hiba Ajmal, Hamidreza Aghasi, Roy Fox

In this work, we propose a method that generates, from simulation data, a dataset on which a system can be trained via supervised learning to design circuits that meet threshold specifications.
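The abstract describes building a dataset from circuit simulations and training a model via supervised learning to map threshold specifications to circuit designs. Below is a hedged JAX sketch of that generic supervised setup; the shapes, the random stand-in data, and names such as `specs` and `targets` are illustrative assumptions, not the paper's pipeline or dataset.

```python
# Hedged sketch of a generic supervised setup: fit a model that maps
# threshold specifications to circuit design parameters. Data and names
# are placeholders, not the paper's dataset or architecture.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(3, 64, 64, 4)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(0.1 * jax.random.normal(k, (i, o)), jnp.zeros(o))
            for k, i, o in zip(keys, sizes[:-1], sizes[1:])]

def predict(params, specs):
    h = specs
    for w, b in params[:-1]:
        h = jax.nn.relu(h @ w + b)
    w, b = params[-1]
    return h @ w + b                      # predicted circuit parameters

def loss(params, specs, targets):
    return jnp.mean((predict(params, specs) - targets) ** 2)

@jax.jit
def train_step(params, specs, targets, lr=1e-3):
    grads = jax.grad(loss)(params, specs, targets)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Stand-in data: 3 specification thresholds -> 4 circuit parameters.
key = jax.random.PRNGKey(0)
specs = jax.random.uniform(key, (256, 3))
targets = jax.random.uniform(jax.random.split(key)[0], (256, 4))
params = init_params(jax.random.PRNGKey(42))
for _ in range(100):
    params = train_step(params, specs, targets)
```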

Reinforcement Learning Framework for Deep Brain Stimulation Study

1 code implementation · 22 Feb 2020 · Dmitrii Krylov, Remi Tachet, Romain Laroche, Michael Rosenblum, Dmitry V. Dylov

Malfunctioning neurons in the brain sometimes operate synchronously, reportedly causing many neurological diseases, e.g. Parkinson's.

Reinforcement Learning (RL)
