Search Results for author: Max Schwarzer

Found 11 papers, 7 papers with code

Improving Human Text Simplification with Sentence Fusion

no code implementations NAACL (TextGraphs) 2021 Max Schwarzer, Teerapaun Tanprasert, David Kauchak

The quality of fully automated text simplification systems is not good enough for use in real-world settings; instead, human simplifications are used.

Sentence Fusion Text Simplification

Beyond Tabula Rasa: Reincarnating Reinforcement Learning

1 code implementation 3 Jun 2022 Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare

To address these issues, we present reincarnating RL as an alternative workflow, where prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another.

Atari Games reinforcement-learning
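One simple form of reusing prior computational work is distilling a previously trained teacher policy into a new student agent before further RL training. The sketch below is an illustrative assumption, not the paper's exact algorithm (which explores several transfer mechanisms); `distill_policy` is a hypothetical helper computing a KL distillation loss over action logits.

```python
import numpy as np

def distill_policy(teacher_logits, student_logits):
    """KL divergence from the student's to the teacher's action
    distribution -- a minimal policy-distillation loss (hypothetical
    sketch, not the paper's exact transfer mechanism)."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    p_t = softmax(np.asarray(teacher_logits, dtype=float))
    p_s = softmax(np.asarray(student_logits, dtype=float))
    # KL(p_t || p_s), averaged over the batch; zero when the student
    # already matches the teacher exactly.
    return (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
```

Minimizing this loss warm-starts the student from the teacher's behavior, after which ordinary RL fine-tuning can take over.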

The Primacy Bias in Deep Reinforcement Learning

1 code implementation 16 May 2022 Evgenii Nikishin, Max Schwarzer, Pierluca D'Oro, Pierre-Luc Bacon, Aaron Courville

This work identifies a common flaw of deep reinforcement learning (RL) algorithms: a tendency to rely on early interactions and ignore useful evidence encountered later.

Atari Games 100k reinforcement-learning
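A remedy the paper studies is periodically re-initializing network parameters while retaining the replay buffer, so the agent can relearn from later evidence. The snippet below is a minimal sketch under that assumption; `maybe_reset` is a hypothetical helper, and real agents typically reset only the final layers.

```python
import numpy as np

def maybe_reset(params, rng, step, reset_every=1000):
    """Every `reset_every` steps, re-initialize all parameters
    (a simplified sketch of the resetting mechanism; the replay
    buffer is kept, only the network weights are replaced)."""
    if step > 0 and step % reset_every == 0:
        return {k: rng.standard_normal(v.shape) * 0.01
                for k, v in params.items()}
    return params  # no reset this step
```

Keeping the replay buffer while resetting the weights lets the freshly initialized network train on the full mixture of early and late experience.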

Simplicial Embeddings in Self-Supervised Learning and Downstream Classification

1 code implementation 1 Apr 2022 Samuel Lavoie, Christos Tsirigotis, Max Schwarzer, Kenji Kawaguchi, Ankit Vani, Aaron Courville

Specifically, we show that the temperature $\tau$ of the Softmax operation controls the SEM representation's expressivity, allowing us to derive a tighter downstream classifier generalization bound than that for classifiers using unnormalized representations.

Classification Self-Supervised Learning
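A simplicial embedding can be sketched as splitting a representation into $L$ groups and applying a temperature-scaled softmax to each group, so every group lies on a probability simplex. This is a minimal sketch under that reading of the abstract; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def simplicial_embedding(z, num_groups, temperature):
    """Reshape a (batch, L*V) representation into L groups of V
    dimensions and softmax each group with temperature tau.
    Lower temperature gives sharper, more discrete group
    distributions (illustrative sketch, not the paper's code)."""
    b, d = z.shape
    v = d // num_groups
    logits = z.reshape(b, num_groups, v) / temperature
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Each of the $L$ groups sums to one, and lowering $\tau$ concentrates each group's mass on fewer coordinates.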

Deep Reinforcement Learning at the Edge of the Statistical Precipice

1 code implementation NeurIPS 2021 Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare

Most published results on deep RL benchmarks compare point estimates of aggregate performance such as mean and median scores across tasks, ignoring the statistical uncertainty implied by the use of a finite number of training runs.
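Two of the alternatives the paper advocates are robust aggregate statistics such as the interquartile mean (IQM) and bootstrap confidence intervals over runs. The sketch below assumes a flat array of per-run scores and uses a simple percentile bootstrap; the paper's released `rliable` library implements stratified variants.

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: the mean of the middle 50% of scores,
    more robust to outlier runs than the plain mean."""
    s = np.sort(np.asarray(scores, dtype=float).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

def bootstrap_ci(scores, stat=iqm, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an aggregate
    statistic over training runs (simplified, unstratified sketch)."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    stats = [stat(rng.choice(scores, size=len(scores), replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

Reporting the interval alongside the point estimate makes the uncertainty from a finite number of runs explicit.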
Iterated learning for emergent systematicity in VQA

no code implementations ICLR 2021 Ankit Vani, Max Schwarzer, Yuchen Lu, Eeshan Dhekane, Aaron Courville

Although neural module networks have an architectural bias towards compositionality, they require gold standard layouts to generalize systematically in practice.

Question Answering Systematic Generalization +1

Data-Efficient Reinforcement Learning with Self-Predictive Representations

1 code implementation ICLR 2021 Max Schwarzer, Ankesh Anand, Rishab Goel, R. Devon Hjelm, Aaron Courville, Philip Bachman

We further improve performance by adding data augmentation to the future prediction loss, which forces the agent's representations to be consistent across multiple views of an observation.

Atari Games 100k Data Augmentation +4
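The consistency objective described above can be sketched as a negative cosine similarity between the agent's predicted future latents (from one augmented view) and target latents (from another view). This is a minimal illustration of that loss shape, not the paper's full SPR implementation; in practice the target comes from a separate, detached target network.

```python
import numpy as np

def spr_consistency_loss(pred, target):
    """Negative cosine similarity between predicted and target
    latent vectors, averaged over the batch (sketch of a
    self-predictive consistency objective)."""
    pred = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + 1e-8)
    target = target / (np.linalg.norm(target, axis=-1, keepdims=True) + 1e-8)
    # -1 when predictions and targets align perfectly, 0 when orthogonal.
    return -(pred * target).sum(axis=-1).mean()
```

Minimizing this loss on augmented views pushes the representation to agree across multiple views of the same observation.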

GAIT: A Geometric Approach to Information Theory

1 code implementation 19 Jun 2019 Jose Gallego, Ankit Vani, Max Schwarzer, Simon Lacoste-Julien

We advocate the use of a notion of entropy that reflects the relative abundances of the symbols in an alphabet, as well as the similarities between them.
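A similarity-sensitive entropy of this kind can be written as $H(p; K) = -\sum_i p_i \log (Kp)_i$, where $K$ is a symbol-similarity matrix; with $K = I$ it reduces to ordinary Shannon entropy. The sketch below assumes this standard formulation (due to Leinster and Cobbold) as a reading of the abstract.

```python
import numpy as np

def similarity_entropy(p, K):
    """Entropy of distribution p under similarity matrix K:
    H(p; K) = -sum_i p_i * log((K p)_i).
    K = identity recovers Shannon entropy; off-diagonal similarity
    between symbols lowers the entropy (illustrative sketch)."""
    p = np.asarray(p, dtype=float)
    Kp = K @ p
    return -(p * np.log(Kp)).sum()
```

For example, if all symbols are maximally similar (K all ones), every distribution has zero entropy, since the alphabet effectively contains one symbol.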

Learning to fail: Predicting fracture evolution in brittle material models using recurrent graph convolutional neural networks

no code implementations 14 Oct 2018 Max Schwarzer, Bryce Rogan, Yadong Ruan, Zhengming Song, Diana Y. Lee, Allon G. Percus, Viet T. Chau, Bryan A. Moore, Esteban Rougier, Hari S. Viswanathan, Gowri Srinivasan

Our methods use deep learning and train on simulation data from high-fidelity models, emulating the results of these models while avoiding the overwhelming computational demands associated with running a statistically significant sample of simulations.

Data Augmentation
