We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 85 languages, including several dialects or low-resource languages.
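For concreteness, here is a minimal NumPy sketch of how embedding-based mining of parallel sentences typically works: candidate pairs are retrieved by nearest-neighbor search over multilingual sentence embeddings and kept when a margin-normalized cosine score exceeds a threshold. The function names, the neighborhood size `k`, and the threshold value are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def margin_score(x, y, x_knn_sims, y_knn_sims):
    """Ratio margin: cosine(x, y) divided by the average similarity of x and y
    to their k nearest neighbors (embeddings assumed L2-normalized)."""
    return (x @ y) / ((x_knn_sims.mean() + y_knn_sims.mean()) / 2.0)

def mine_pairs(src_emb, tgt_emb, k=4, threshold=1.04):
    """Return candidate parallel pairs (i, j, score) whose margin score
    exceeds the threshold. src_emb: (n_src, d), tgt_emb: (n_tgt, d)."""
    sims = src_emb @ tgt_emb.T                     # all cosine similarities
    src_knn = np.sort(sims, axis=1)[:, -k:]        # k best targets per source
    tgt_knn = np.sort(sims, axis=0)[-k:, :].T      # k best sources per target
    pairs = []
    for i in range(sims.shape[0]):
        j = int(np.argmax(sims[i]))                # best target candidate
        score = margin_score(src_emb[i], tgt_emb[j], src_knn[i], tgt_knn[j])
        if score > threshold:
            pairs.append((i, j, score))
    return pairs
```

The margin normalization is what makes this kind of mining robust: raw cosine similarity is scale-sensitive across languages, while the ratio to each sentence's own neighborhood compensates for that.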
In this paper, we propose a novel style transfer architecture, which can also be extended to generate voices even for target speakers whose data were not used in training (i.e., the zero-shot learning setting).
In experiments pruning MobileNet V1 and ResNet-50, FNNP outperforms all compared methods by up to 3.8%.
For an explanation of a deep learning model to be effective, it must both provide insight into the model and suggest a corresponding action that achieves some objective.
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs.
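As a reference point, a minimal NumPy sketch of the standard GCN propagation rule, H' = sigma(D^{-1/2}(A + I)D^{-1/2} H W); the ReLU nonlinearity, the array shapes, and the toy graph are assumptions for illustration, not the specific model discussed here.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: add self-loops, symmetrically normalize the adjacency
    matrix, then apply a linear transform and ReLU.
    A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])                 # self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))       # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU

# Example: a 3-node path graph with 2-dimensional node features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, H, W).shape)                    # (3, 4)
```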
Transformers have achieved state-of-the-art results on a variety of natural language processing tasks.
Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem.
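A minimal sketch of the continuous relaxation behind DARTS: each edge outputs a softmax-weighted mixture of candidate operations over learnable architecture parameters alpha, and the discrete architecture is read off by keeping the highest-weighted operation. The candidate operations below are toy stand-ins, and the bi-level part (alternating network-weight updates on the training loss with alpha updates on the validation loss) is omitted.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Illustrative candidate operations on one edge (real DARTS uses convs, pooling, etc.)
candidate_ops = [
    lambda x: x,                        # identity / skip connection
    lambda x: np.zeros_like(x),         # zero (no connection)
    lambda x: np.maximum(x, 0.0),       # stand-in for a parametric op
]

def mixed_op(x, alpha):
    """Continuous relaxation: the edge output is a softmax-weighted sum of all
    candidate operations, so the search is differentiable w.r.t. alpha."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, candidate_ops))

alpha = np.array([0.2, -1.0, 0.9])      # learnable architecture parameters
x = np.random.randn(4)
print(mixed_op(x, alpha))
print("selected op:", int(np.argmax(softmax(alpha))))   # discretization step
```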
We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples using information limited to loss function evaluations of input-output pairs.
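A minimal sketch of a query-only attack in this setting: random sign perturbations are proposed and kept only when they increase the queried loss, while the iterate stays inside an L-infinity ball around the original input. The step rule, acceptance criterion, and function name are illustrative assumptions rather than a specific published attack.

```python
import numpy as np

def query_attack(x, loss_fn, eps=0.05, steps=500, seed=0):
    """Black-box attack sketch using only loss-function queries: accept a
    random sign perturbation whenever it increases the loss, projecting back
    into the eps-ball around x and into the valid input range [0, 1]."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = loss_fn(x_adv)                          # one query per evaluation
    for _ in range(steps):
        delta = eps * rng.choice([-1.0, 1.0], size=x.shape)
        cand = np.clip(x_adv + delta, x - eps, x + eps)   # stay in the eps-ball
        cand = np.clip(cand, 0.0, 1.0)                    # valid input range
        val = loss_fn(cand)
        if val > best:                             # keep only improving moves
            x_adv, best = cand, val
    return x_adv
```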