Search Results for author: Ilya Kuzovkin

Found 7 papers, 5 papers with code

Addressing Sample Complexity in Visual Tasks Using HER and Hallucinatory GANs

2 code implementations • NeurIPS 2019 • Himanshu Sahni, Toby Buckley, Pieter Abbeel, Ilya Kuzovkin

In this work, we show how visual trajectories can be hallucinated to appear successful by altering agent observations using a generative model trained on relatively few snapshots of the goal.

Reinforcement Learning (RL)
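
A minimal sketch of the idea described in this abstract, not the authors' implementation: a pretrained generative model (the hypothetical `hallucinate(obs)` below) edits the final observation of a failed trajectory so it appears to show the goal, letting a sparse-reward agent learn from "successful" experience. The replay-buffer API is likewise assumed for illustration.

```python
def relabel_trajectory(trajectory, hallucinate, buffer):
    """trajectory: time-ordered list of (obs, action, reward, next_obs) tuples;
    hallucinate: generative model that renders the goal into an observation (assumed);
    buffer: any replay buffer exposing add(...) (assumed)."""
    for t, (obs, action, reward, next_obs) in enumerate(trajectory):
        is_last = (t == len(trajectory) - 1)
        if is_last:
            # Edit the final observation so it appears to show the goal state
            # and assign the sparse success reward.
            next_obs = hallucinate(next_obs)
            reward = 1.0
        buffer.add(obs, action, reward, next_obs, done=is_last)
```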

Multiagent Cooperation and Competition with Deep Reinforcement Learning

4 code implementations • 27 Nov 2015 • Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru, Raul Vicente

In the present work we extend the Deep Q-Learning Network architecture proposed by Google DeepMind to multiagent environments and investigate how two agents controlled by independent Deep Q-Networks interact in the classic videogame Pong.

Q-Learning • reinforcement-learning • +1
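
The loop below is an illustrative sketch of the setup described above, not the paper's code: two fully independent DQN agents acting in a shared two-player environment such as Pong. `agent_left`, `agent_right`, and the environment interface are assumed stand-ins.

```python
def run_episode(env, agent_left, agent_right):
    # Two-player environment returning one observation per agent (assumed API).
    obs_left, obs_right = env.reset()
    done = False
    while not done:
        a_left = agent_left.act(obs_left)        # each agent sees its own observation
        a_right = agent_right.act(obs_right)     # and picks its action independently
        (next_left, next_right), (r_left, r_right), done = env.step(a_left, a_right)
        # Each agent trains only on its own reward; cooperation vs. competition
        # is determined entirely by how the rewarding scheme is defined.
        agent_left.remember(obs_left, a_left, r_left, next_left, done)
        agent_right.remember(obs_right, a_right, r_right, next_right, done)
        agent_left.learn()
        agent_right.learn()
        obs_left, obs_right = next_left, next_right
```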

Combining Static and Dynamic Features for Multivariate Sequence Classification

1 code implementation • 20 Dec 2017 • Anna Leontjeva, Ilya Kuzovkin

In real-life scenarios, however, it is often the case that both static and dynamic features are present, or can be extracted from the data.

Classification • General Classification
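
A hedged sketch of the general recipe, not necessarily the exact method in the paper: summarise each multivariate sequence into fixed-length dynamic features, concatenate them with the static features, and train an ordinary classifier on the combined representation. The summary statistics chosen here are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dynamic_summary(sequence):
    """sequence: array of shape (timesteps, channels) -> per-channel statistics."""
    return np.concatenate([sequence.mean(axis=0),
                           sequence.std(axis=0),
                           sequence[-1] - sequence[0]])  # crude trend feature

def fit_combined(sequences, static_features, labels):
    dynamic = np.stack([dynamic_summary(s) for s in sequences])
    X = np.hstack([static_features, dynamic])    # combined static + dynamic matrix
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```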

Understanding Information Processing in Human Brain by Interpreting Machine Learning Models

1 code implementation • 17 Oct 2020 • Ilya Kuzovkin

Combined with interpretability techniques, machine learning could replace the human modeler and shift the focus of human effort to extracting knowledge from the ready-made models and articulating that knowledge into intuitive descriptions of reality.

BIG-bench Machine Learning • Dimensionality Reduction • +1
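
One concrete example of the workflow hinted at in the abstract, offered purely as an illustration rather than the thesis' actual pipeline: train a model on recorded data, then apply a model-agnostic interpretability technique, here scikit-learn's permutation feature importance, to read back which inputs the ready-made model relies on.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

def extract_knowledge(X, y, feature_names):
    model = GradientBoostingClassifier().fit(X, y)   # the "ready-made" model
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    # Rank features by how much shuffling them degrades the model's score.
    return sorted(zip(feature_names, result.importances_mean),
                  key=lambda kv: kv[1], reverse=True)
```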

Direct information transfer rate optimisation for SSVEP-based BCI

1 code implementation • 19 Jul 2019 • Anti Ingel, Ilya Kuzovkin, Raul Vicente

The proposed method shows good performance in classifying targets of a BCI, outperforming previously reported results on the same dataset by a factor of 2 in terms of ITR.

General Classification • SSVEP
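
For context, the standard (Wolpaw) information transfer rate that SSVEP-BCI studies typically report; the snippet above does not state whether the paper optimises exactly this quantity, so treat the function below as the conventional definition rather than the authors' objective.

```python
import math

def itr_bits_per_minute(n_targets, accuracy, seconds_per_selection):
    """ITR = [log2 N + P log2 P + (1-P) log2((1-P)/(N-1))] * 60 / T, in bits/min."""
    n, p, t = n_targets, accuracy, seconds_per_selection
    if p >= 1.0:
        bits = math.log2(n)          # perfect accuracy
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / t

# Example: 4 targets, 90% accuracy, 2 s per selection -> roughly 41 bits/min.
```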

Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling

no code implementations • 16 Dec 2022 • Ashish Kumar, Ilya Kuzovkin

Although offline learning techniques can learn from data generated by a sub-optimal behavior agent, there is still an opportunity to improve the sample complexity of existing offline reinforcement learning algorithms by strategically introducing human demonstration data into the training process.

Q-Learning • reinforcement-learning • +1
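
An illustrative mixing scheme in the spirit of the title, not necessarily the paper's algorithm: when a critic ensemble disagrees strongly on agent-generated data (high epistemic uncertainty), draw a larger share of each training batch from the human demonstration buffer. Buffer and critic APIs are assumed.

```python
import numpy as np

def sample_mixed_batch(agent_buffer, expert_buffer, q_ensemble,
                       batch_size=256, max_expert_frac=0.5):
    probe = agent_buffer.sample(batch_size)                      # probe agent data
    q_values = np.stack([q(probe) for q in q_ensemble])          # (n_critics, batch)
    uncertainty = q_values.std(axis=0).mean()                    # ensemble disagreement
    expert_frac = max_expert_frac * float(np.tanh(uncertainty))  # squash into [0, max)
    n_expert = int(batch_size * expert_frac)
    # Mix human demonstrations with agent experience for this training step.
    return (expert_buffer.sample(n_expert)
            + agent_buffer.sample(batch_size - n_expert))
```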
