no code implementations • ICLR 2019 • Alex Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, Danilo J. Rezende
We present a soft, spatial, sequential, top-down attention model (S3TA).
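The listing gives no code; as a rough illustration of what a soft spatial attention read could look like, here is a minimal sketch (all names and shapes are assumptions, not the S3TA implementation):

```python
# Minimal sketch of a soft spatial attention read (not the authors' code).
# A query vector is compared against per-location keys; a softmax over all
# spatial positions gives weights used to pool the per-location values.
import numpy as np

def soft_spatial_read(keys, values, query):
    """keys: (H, W, Dk), values: (H, W, Dv), query: (Dk,) -> read (Dv,), map (H, W)."""
    h, w, dk = keys.shape
    logits = keys.reshape(h * w, dk) @ query / np.sqrt(dk)   # (H*W,)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                                  # softmax over space
    read = weights @ values.reshape(h * w, -1)                # attention-weighted average
    return read, weights.reshape(h, w)

# toy usage with random features standing in for a conv feature map
rng = np.random.default_rng(0)
read, attn_map = soft_spatial_read(rng.normal(size=(8, 8, 16)),
                                   rng.normal(size=(8, 8, 32)),
                                   rng.normal(size=16))
```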
no code implementations • 6 Oct 2020 • Yogesh Balaji, Mehrdad Farajtabar, Dong Yin, Alex Mott, Ang Li
However, experience replay (ER) shows degraded performance when the replay memory is small.
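For context, a hedged sketch of what an experience replay memory in continual learning might look like, using reservoir sampling into a small fixed-size buffer (illustrative only; `ReplayMemory` and its sizes are not taken from the paper):

```python
# Sketch of experience replay (ER) for continual learning: a small fixed-size
# memory of past examples is mixed into each new-task training batch.
import random

class ReplayMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling keeps a uniform sample of everything seen so far.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

# During training on a new task, each minibatch would be augmented with
# memory.sample(batch_size) so the model keeps rehearsing old data.
memory = ReplayMemory(capacity=100)   # a "small memory" regime
```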
no code implementations • 19 Jun 2020 • Dong Yin, Mehrdad Farajtabar, Ang Li, Nir Levine, Alex Mott
This problem is often referred to as catastrophic forgetting, a key challenge in continual learning of neural networks.
no code implementations • CVPR 2020 • Daniel Zoran, Mike Chrzanowski, Po-Sen Huang, Sven Gowal, Alex Mott, Pushmeet Kohli
In this paper we propose to augment a modern neural-network architecture with an attention model inspired by human perception.
no code implementations • 15 Oct 2019 • Mehrdad Farajtabar, Navid Azizan, Alex Mott, Ang Li
In this paper, we propose to address this issue from a parameter space perspective and study an approach to restrict the direction of the gradient updates to avoid forgetting previously-learned data.
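As an illustration of restricting the direction of gradient updates, the following sketch projects a new-task gradient onto the subspace orthogonal to directions retained from earlier tasks (a hypothetical reconstruction of the idea, not the authors' reference code):

```python
# Illustrative sketch: remove from the new gradient any component along
# stored directions from previous tasks, so the update (to first order)
# leaves earlier behaviour unchanged.
import numpy as np

def project_orthogonal(grad, basis):
    """grad: (D,), basis: list of orthonormal (D,) vectors from past tasks."""
    g = grad.copy()
    for v in basis:
        g -= np.dot(g, v) * v      # strip the component along each stored direction
    return g

def extend_basis(basis, new_vec):
    """Gram-Schmidt step: keep only the part of new_vec not already spanned."""
    v = project_orthogonal(new_vec, basis)
    norm = np.linalg.norm(v)
    if norm > 1e-8:
        basis.append(v / norm)
    return basis

# usage: the basis grows as tasks finish; later updates are projected onto
# the orthogonal complement before being applied.
basis = extend_basis([], np.random.randn(10))
update = project_orthogonal(np.random.randn(10), basis)
```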
1 code implementation • 13 Aug 2019 • Alexander Zlokapa, Alex Mott, Joshua Job, Jean-Roch Vlimant, Daniel Lidar, Maria Spiropulu
The significant improvement of quantum annealing algorithms for machine learning and the use of a discrete quantum algorithm on a continuous optimization problem together open a new class of problems that can be solved by quantum annealers, and suggest that near-term quantum machine learning is approaching classical benchmarks in performance.
1 code implementation • NeurIPS 2019 • Alex Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, Danilo J. Rezende
Inspired by recent work in attention models for image captioning and question answering, we present a soft attention model for the reinforcement learning domain.
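To illustrate the sequential, top-down aspect of such an agent, the sketch below lets a recurrent state emit the attention query at every environment step, so what is attended to is driven by the agent's internal state rather than by the image alone; all shapes, weights, and names are illustrative assumptions:

```python
# Sketch of top-down, sequential attention in an agent loop (assumed shapes).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d_state, d_key, d_val = 64, 16, 32
W_query = rng.normal(scale=0.1, size=(d_state, d_key))   # top-down query head
W_in = rng.normal(scale=0.1, size=(d_val, d_state))      # feeds the read back in
state = np.zeros(d_state)                                 # recurrent core state

for t in range(5):                                         # a few agent steps
    keys = rng.normal(size=(64, d_key))                    # stand-in conv keys (H*W, Dk)
    vals = rng.normal(size=(64, d_val))                    # stand-in conv values
    query = state @ W_query                                # query comes from internal state
    weights = softmax(keys @ query / np.sqrt(d_key))       # soft attention over space
    read = weights @ vals                                  # attended summary vector
    state = np.tanh(read @ W_in + state)                   # update the core
    # a policy/value head would consume `state` to choose the next action
```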