ICLR 2020

WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia

ICLR 2020 facebookresearch/LASER

We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 85 languages, including several dialects or low-resource languages.

SENTENCE EMBEDDINGS
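
For WikiMatrix, parallel sentences are found by comparing multilingual sentence embeddings across languages. The sketch below assumes embeddings have already been computed (e.g., with LASER) and scores candidate pairs with a margin criterion over cosine similarities; the function names, k, and the threshold are illustrative rather than the repository's exact settings.

```python
# Minimal sketch of mining parallel sentences from precomputed multilingual
# sentence embeddings. A pair's cosine similarity is divided by the average
# similarity of each sentence's k nearest neighbours (ratio margin), so that
# "hub" sentences that are similar to everything are not over-selected.
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def margin_scores(src_emb, tgt_emb, k=4):
    """Score all source/target pairs with a ratio-margin criterion."""
    src, tgt = normalize(src_emb), normalize(tgt_emb)
    sim = src @ tgt.T                                # cosine similarities
    fwd = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # avg of k-NN sims per source
    bwd = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # avg of k-NN sims per target
    return sim / ((fwd[:, None] + bwd[None, :]) / 2)

def mine_pairs(src_emb, tgt_emb, threshold=1.05):
    scores = margin_scores(src_emb, tgt_emb)
    tgt_idx = scores.argmax(axis=1)                  # best target per source
    best = scores[np.arange(len(scores)), tgt_idx]
    return [(i, j, s) for i, (j, s) in enumerate(zip(tgt_idx, best)) if s >= threshold]

# toy usage with random "embeddings"
rng = np.random.default_rng(0)
pairs = mine_pairs(rng.normal(size=(100, 1024)), rng.normal(size=(120, 1024)))
```
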

AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion

ICLR 2020 liusongxiang/StarGAN-Voice-Conversion

In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in training (i.e., the zero-shot learning case).

STYLE TRANSFER VOICE CONVERSION ZERO-SHOT LEARNING

FNNP: Fast Neural Network Pruning Using Adaptive Batch Normalization

ICLR 2020 anonymous47823493/FNNP

In experiments pruning MobileNet V1 and ResNet-50, FNNP outperforms all compared methods by up to 3.8%.

NETWORK PRUNING
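
As the FNNP title suggests, the adaptive batch normalization step re-estimates BatchNorm statistics on a small amount of training data so that pruned candidates can be evaluated and ranked quickly. The PyTorch sketch below illustrates that idea under that assumption; `adapt_bn`, the loaders, and `eval_top1` are illustrative names, not the repository's API.

```python
# Hedged sketch: re-estimate BatchNorm running statistics of a pruned
# sub-network on a few training batches before evaluating it, so candidate
# rankings are not distorted by stale statistics inherited from the unpruned
# parent network.
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn(model, loader, num_batches=50, device="cuda"):
    # reset running stats, then re-accumulate them in train mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()
            m.momentum = None            # use a cumulative moving average
    model.train()
    for i, (x, _) in enumerate(loader):
        if i >= num_batches:
            break
        model(x.to(device))
    model.eval()
    return model

# typical use when ranking pruning candidates (eval_top1 is a hypothetical helper):
# for candidate in pruned_candidates:
#     adapt_bn(candidate, train_loader)
#     score = eval_top1(candidate, small_val_loader)
```
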

Network Deconvolution

ICLR 2020 deconvolutionpaper/deconvolution

Filtering with such kernels results in a sparse representation, a desired property that has been missing in the training of neural networks.
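
Network deconvolution removes pixel-wise and channel-wise correlations before data enters a layer. The sketch below illustrates that idea as a ZCA-style channel whitening of a feature map; this is an assumption-level illustration of the decorrelation step, not the repository's implementation.

```python
# Sketch of channel-wise whitening ("deconvolution" in the sense of removing
# correlations) applied to a feature map before a convolution layer.
import torch

def channel_whiten(x, eps=1e-5):
    """x: (N, C, H, W) -> decorrelated features of the same shape."""
    n, c, h, w = x.shape
    flat = x.permute(1, 0, 2, 3).reshape(c, -1)        # (C, N*H*W)
    flat = flat - flat.mean(dim=1, keepdim=True)
    cov = flat @ flat.T / flat.shape[1] + eps * torch.eye(c)
    # inverse square root of the covariance via eigendecomposition
    eigval, eigvec = torch.linalg.eigh(cov)
    cov_inv_sqrt = eigvec @ torch.diag(eigval.clamp_min(eps).rsqrt()) @ eigvec.T
    white = cov_inv_sqrt @ flat                        # decorrelated channels
    return white.reshape(c, n, h, w).permute(1, 0, 2, 3)

x = torch.randn(8, 16, 32, 32)
print(channel_whiten(x).shape)   # torch.Size([8, 16, 32, 32])
```
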

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge

ICLR 2020 laura-rieger/deep-explanation-penalization

For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective.
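
The title describes adding a penalty on model explanations to the training loss so that they agree with prior knowledge. The paper penalizes contextual-decomposition explanations; the sketch below deliberately swaps in a simpler input-gradient penalty on features a user has marked irrelevant, purely to illustrate the "task loss + explanation penalty" structure.

```python
# Illustration of the general recipe "task loss + explanation penalty".
# The penalty here is a simple input-gradient penalty on irrelevant features
# (mask == 1), standing in for the paper's contextual-decomposition penalty.
import torch
import torch.nn.functional as F

def penalized_loss(model, x, y, irrelevant_mask, lam=1.0):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # gradient of the task loss w.r.t. the input, kept in the graph so the
    # penalty itself can be backpropagated through
    grads, = torch.autograd.grad(task_loss, x, create_graph=True)
    explanation_penalty = (grads * irrelevant_mask).pow(2).sum()
    return task_loss + lam * explanation_penalty
```
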

GraphSAINT: Graph Sampling Based Inductive Learning Method

ICLR 2020 GraphSAINT/GraphSAINT

Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs.
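
GraphSAINT trains GCNs on sampled subgraphs rather than on the full graph. The sketch below shows a simple node sampler that builds the induced subgraph for one minibatch and runs a mean-aggregation GCN layer on it; the paper's edge and random-walk samplers and its bias-correction normalization coefficients are omitted here.

```python
# Minimal sketch of subgraph-sampled GCN training: draw a node set, build the
# induced subgraph, and run one GCN layer only on that subgraph.
import numpy as np

def sample_induced_subgraph(adj, num_nodes):
    """adj: dense (N, N) adjacency; returns node ids and induced adjacency."""
    nodes = np.random.choice(adj.shape[0], size=num_nodes, replace=False)
    return nodes, adj[np.ix_(nodes, nodes)]

def gcn_layer(adj_sub, feats_sub, weight):
    """One mean-aggregation GCN layer on the sampled subgraph."""
    deg = adj_sub.sum(axis=1, keepdims=True) + 1e-8
    return np.maximum((adj_sub / deg) @ feats_sub @ weight, 0.0)  # ReLU

# toy usage
N, F_in, F_out = 1000, 32, 16
adj = (np.random.rand(N, N) < 0.01).astype(float)
feats = np.random.randn(N, F_in)
W = np.random.randn(F_in, F_out) * 0.1
nodes, adj_sub = sample_induced_subgraph(adj, num_nodes=128)
h = gcn_layer(adj_sub, feats[nodes], W)   # (128, 16) minibatch representations
```
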

MUSE: Multi-Scale Attention Model for Sequence to Sequence Learning

ICLR 2020 lancopku/MUSE

Transformers have achieved state-of-the-art results on a variety of natural language processing tasks.

MACHINE TRANSLATION

Understanding and Robustifying Differentiable Architecture Search

ICLR 2020 MetaAnonym/RobustDARTS

Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem.

DISPARITY ESTIMATION IMAGE CLASSIFICATION LANGUAGE MODELLING REGRESSION
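
The continuous relaxation mentioned in the DARTS snippet replaces the discrete choice of operation on each edge with a softmax-weighted mixture over candidate operations, so architecture weights can be optimized by gradient descent inside a bi-level scheme. The PyTorch sketch below shows that mixed operation; the candidate set is illustrative, not the full DARTS search space.

```python
# The continuous relaxation at the heart of DARTS: each edge computes a
# softmax-weighted sum over candidate operations, so architecture weights
# (alpha) can be trained alongside the network weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                             # skip connection
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),   # 3x3 conv
            nn.AvgPool2d(3, stride=1, padding=1),                      # 3x3 avg pool
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))          # architecture weights

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(channels=16)
out = edge(torch.randn(2, 16, 8, 8))   # same shape as the input
# After search, the discrete architecture keeps argmax(alpha) on each edge.
```
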

Black-box Adversarial Attacks with Bayesian Optimization

ICLR 2020 snu-mllab/parsimonious-blackbox-attack

We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples using information limited to loss function evaluations of input-output pairs.
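
In this query-only threat model the attacker observes nothing but loss values returned for chosen inputs. The paper searches for perturbations with Bayesian optimization over a low-dimensional space; the sketch below substitutes a plain random search purely to make the loss-query-only interface explicit.

```python
# Illustration of the black-box setting: the only access to the target model
# is a loss oracle, loss_query(x_adv) -> scalar. Random search stands in for
# the paper's Bayesian-optimization search.
import numpy as np

def black_box_attack(loss_query, x, eps=0.05, num_queries=500, seed=0):
    rng = np.random.default_rng(seed)
    best_delta = np.zeros_like(x)
    best_loss = loss_query(x)
    for _ in range(num_queries):
        delta = rng.uniform(-eps, eps, size=x.shape)        # candidate perturbation
        candidate_loss = loss_query(np.clip(x + delta, 0.0, 1.0))
        if candidate_loss > best_loss:                      # untargeted: maximize loss
            best_loss, best_delta = candidate_loss, delta
    return np.clip(x + best_delta, 0.0, 1.0), best_loss

# toy usage with a stand-in loss oracle
oracle = lambda x_adv: float(np.abs(x_adv - 0.5).mean())
x_adv, loss = black_box_attack(oracle, x=np.full((28, 28), 0.5))
```
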

Good Semi-supervised VAE Requires Tighter Evidence Lower Bound

ICLR 2020 PaperCodeSubmission/OSPOT-VAE

Good semi-supervised learning results and good generative performance cannot be obtained at the same time.
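
One standard way to tighten the evidence lower bound is importance weighting over multiple latent samples (the IWAE bound), where K = 1 recovers the usual ELBO. The sketch below computes such a bound for a generic Gaussian-latent VAE; the `encoder` and `decoder` callables are assumed, and this is not necessarily the specific bound proposed in the paper.

```python
# Importance-weighted ELBO for a Gaussian-latent VAE: average K importance
# weights inside the log, which gives a bound at least as tight as the ELBO.
import torch

def iw_elbo(x, encoder, decoder, K=10):
    mu, log_var = encoder(x)                           # q(z|x) parameters, shape (B, D)
    std = (0.5 * log_var).exp()
    z = mu.unsqueeze(0) + std.unsqueeze(0) * torch.randn(K, *mu.shape)   # K samples
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)      # log q(z|x), (K, B)
    log_p_z = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)   # log p(z), (K, B)
    log_p_x = decoder(z, x)                            # log p(x|z), assumed shape (K, B)
    log_w = log_p_x + log_p_z - log_q                  # importance weights
    return (torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(K)))).mean()
```
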