Physics-based Deep Learning

thunil/Physics-Based-Deep-Learning • 11 Sep 2021

This digital book provides a practical and comprehensive introduction to deep learning in the context of physical simulations.

Physical Simulations

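For a flavor of the book's territory, here is a minimal sketch (not taken from the book) of a physics-informed loss for the 1D heat equation u_t = ν·u_xx; the choice of PDE, network size, and collocation scheme are all assumptions for illustration:

```python
# Minimal sketch, assuming PyTorch: penalize the residual of u_t = nu * u_xx
# at random collocation points. Input columns are (x, t).
import torch

nu = 0.1
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

def pde_residual(xt):
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - nu * u_xx  # ~0 wherever the network satisfies the PDE

xt = torch.rand(256, 2)                # random collocation points (x, t)
loss = pde_residual(xt).pow(2).mean()  # physics loss; real setups add data/BC terms
loss.backward()
```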

Conservative Data Sharing for Multi-Task Offline Reinforcement Learning

no code yet • 16 Sep 2021

We argue that a natural use case for offline RL is pooling large amounts of data collected across varied scenarios and tasks, and using all of it to learn behaviors for every task more effectively than training on each task in isolation.

Offline RL

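As a rough illustration of the pooling idea (a hypothetical sketch, not the paper's code; all names here are illustrative): transitions logged for one task can be relabeled with another task's reward and added to that task's dataset. The paper's contribution, CDS, shares such data selectively rather than pooling naively.

```python
# Hypothetical sketch: relabel every logged transition with the target
# task's reward so all data can train that task's policy.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Transition:
    obs: tuple
    action: int
    reward: float
    next_obs: tuple
    task_id: int

def pool_for_task(datasets, target_task, reward_fns):
    """datasets: dict mapping task_id -> list of Transition."""
    pooled = []
    for transitions in datasets.values():
        for t in transitions:
            r = reward_fns[target_task](t.obs, t.action, t.next_obs)
            pooled.append(replace(t, reward=r, task_id=target_task))
    return pooled  # CDS additionally filters shared data conservatively
```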

Rationales for Sequential Predictions

keyonvafa/sequential-rationales • 14 Sep 2021

Compared to existing baselines, greedy rationalization is best at optimizing the combinatorial objective and provides the most faithful rationales.

Combinatorial Optimization • Language Modelling • +1

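A hedged sketch of the greedy procedure the abstract refers to: grow the rationale one context position at a time, always adding the position that most increases the model's probability of the predicted token, and stop once the rationale suffices. The callables `gain` and `is_sufficient` are assumed interfaces, not the repository's API.

```python
# Greedy rationalization sketch: gain(S) scores the target token's
# probability when the model sees only positions in S; is_sufficient(S)
# is True once the target is the model's top prediction under S.
def greedy_rationale(context_len, gain, is_sufficient):
    rationale = set()
    while not is_sufficient(rationale) and len(rationale) < context_len:
        best = max(
            (i for i in range(context_len) if i not in rationale),
            key=lambda i: gain(rationale | {i}),
        )
        rationale.add(best)
    return rationale
```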

Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?

no code yet • 12 Sep 2021

Specifically, we replace the MLP module in the token-mixing step with a novel sparse MLP (sMLP) module.

Image Classification

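A hedged sketch of what sparse token mixing can look like: rather than one dense MLP over all H×W tokens, mix only along rows and only along columns with two small linear maps, then fuse the branches. The shapes and concatenation-based fusion are assumptions for illustration, not the paper's exact module.

```python
# Sparse token-mixing sketch, assuming PyTorch and tokens on an H x W grid.
import torch
import torch.nn as nn

class SparseTokenMix(nn.Module):
    def __init__(self, h, w, c):
        super().__init__()
        self.mix_w = nn.Linear(w, w)   # mixes tokens within each row
        self.mix_h = nn.Linear(h, h)   # mixes tokens within each column
        self.proj = nn.Linear(3 * c, c)

    def forward(self, x):  # x: (batch, h, w, c)
        row = self.mix_w(x.transpose(2, 3)).transpose(2, 3)           # along w
        col = self.mix_h(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)   # along h
        return self.proj(torch.cat([x, row, col], dim=-1))
```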

LM-Critic: Language Models for Unsupervised Grammatical Error Correction

michiyasunaga/LM-Critic • 14 Sep 2021

Training a model for grammatical error correction (GEC) requires a set of labeled ungrammatical / grammatical sentence pairs, but manually annotating such pairs can be expensive.

Grammatical Error Correction • Language Modelling

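One way a language model can act as a critic for grammaticality, sketched under assumptions (the helper names are hypothetical, not the paper's API): accept a sentence if the model scores it at least as well as every sentence in a small neighborhood of local edits.

```python
# Hedged sketch of an LM-based grammaticality critic: a sentence passes if
# it is a local optimum of the LM's log-probability among nearby edits.
# `lm_logprob` and `local_perturbations` are assumed helpers.
def lm_critic(sentence, lm_logprob, local_perturbations):
    score = lm_logprob(sentence)
    return all(score >= lm_logprob(s) for s in local_perturbations(sentence))
```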

Geographic Difference-in-Discontinuities

no code yet • 15 Sep 2021

A recent econometric literature has critiqued the use of regression discontinuities where administrative borders serve as the 'cutoff'.

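For context, a common difference-in-discontinuities specification looks roughly like the following (notation mine, not the paper's): the interaction coefficient τ is the estimate of interest, with the before/after difference netting out time-invariant confounds at the border.

```latex
% d_i: signed distance to the border; t_0: policy date; f: smooth control.
Y_{it} = \alpha
       + \beta\,\mathbf{1}[d_i > 0]
       + \gamma\,\mathbf{1}[t \ge t_0]
       + \tau\,\mathbf{1}[d_i > 0]\cdot\mathbf{1}[t \ge t_0]
       + f(d_i) + \varepsilon_{it}
```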

Illuminating Diverse Neural Cellular Automata for Level Generation

smearle/gym-pcgrl • 12 Sep 2021

We present a method of generating a collection of neural cellular automata (NCA) to design video game levels.

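A hedged sketch of the core building block, a neural cellular automaton: every cell updates from its local neighborhood via a small shared network applied repeatedly. Channel counts and step count here are illustrative, not values from the repository.

```python
# NCA update sketch, assuming PyTorch: a shared 3x3 rule applied residually.
import torch
import torch.nn as nn

class NCA(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.rule = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, channels, kernel_size=1),
        )

    def forward(self, grid, steps=30):     # grid: (batch, channels, h, w)
        for _ in range(steps):
            grid = grid + self.rule(grid)  # local, residual update per cell
        return grid  # an argmax over channels could decode tile types
```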

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

google-research/google-research • 13 Sep 2021

Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available.

Few-Shot Learning

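A hedged sketch of the self-training half of the recipe (helper names are assumed, and STraTA additionally augments training with a synthetic auxiliary task): fine-tune on the few labels, pseudo-label a broad unlabeled pool, keep confident predictions, and retrain.

```python
# Generic self-training loop; `finetune` and `predict_with_confidence`
# are assumed helpers, not the repository's API.
def self_train(labeled, unlabeled, finetune, predict_with_confidence,
               threshold=0.9, rounds=3):
    data = list(labeled)
    model = finetune(data)
    for _ in range(rounds):
        pseudo = [(x, y) for x in unlabeled
                  for y, conf in [predict_with_confidence(model, x)]
                  if conf >= threshold]     # keep only confident pseudo-labels
        model = finetune(data + pseudo)
    return model
```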

Learning Mathematical Properties of Integers

no code yet • 15 Sep 2021

Embedding words in high-dimensional vector spaces has proven valuable in many natural language applications.


Challenges in Detoxifying Language Models

no code yet • 15 Sep 2021

Large language models (LMs) generate remarkably fluent text and can be efficiently adapted across NLP tasks.
