Search Results for author: Luyu Wang

Found 9 papers, 3 papers with code

WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset

1 code implementation NAACL (TextGraphs) 2021 Luyu Wang, Yujia Li, Ozlem Aslan, Oriol Vinyals

We present a new dataset of Wikipedia articles, each paired with a knowledge graph, to facilitate research in conditional text generation, graph generation, and graph representation learning.

Conditional Text Generation Graph Generation +2
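
Each WikiGraphs example couples an article's text with a graph of entities and relations. The sketch below only illustrates that pairing; the class and field names are hypothetical, not the dataset's released API or file format, so consult the official release for the real loader.

```python
# Illustrative sketch of a text / knowledge-graph pair in the spirit of
# WikiGraphs. The class and field names are hypothetical, not the dataset's
# actual API or file format.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class GraphTextPair:
    """One example: a Wikipedia article paired with a knowledge graph."""
    title: str                                   # article title
    text: str                                    # article body
    nodes: List[str] = field(default_factory=list)                   # entities
    edges: List[Tuple[str, str, str]] = field(default_factory=list)  # (head, relation, tail)


example = GraphTextPair(
    title="Alan Turing",
    text="Alan Mathison Turing was an English mathematician ...",
    nodes=["Alan_Turing", "England", "Mathematician"],
    edges=[
        ("Alan_Turing", "nationality", "England"),
        ("Alan_Turing", "profession", "Mathematician"),
    ],
)
print(len(example.nodes), "nodes,", len(example.edges), "edges")
```

In this pairing, a conditional text generator would condition on the graph and decode the text, while a graph-generation model would do the reverse.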

Multi-Format Contrastive Learning of Audio Representations

no code implementations 11 Mar 2021 Luyu Wang, Aaron van den Oord

Recent advances suggest the advantage of multi-modal training over single-modal methods.

Ranked #8 on Audio Classification on ESC-50 (using extra training data)

Audio Classification Contrastive Learning
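
The entry above describes contrasting different formats of the same audio clip (for example, a raw-waveform view and a spectrogram view). Below is a minimal NumPy sketch of a symmetric InfoNCE-style objective between two batches of embeddings; the temperature, batch size, and random embeddings are placeholder assumptions, not the paper's configuration.

```python
import numpy as np


def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss between two views of the same batch.

    z_a, z_b: (batch, dim) embeddings of the same clips in two formats
    (e.g., one from a waveform encoder, one from a spectrogram encoder).
    """
    # L2-normalise so the dot product is a cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    logits = z_a @ z_b.T / temperature          # (batch, batch) similarity matrix
    labels = np.arange(len(z_a))                # positives lie on the diagonal

    def cross_entropy(logits, labels):
        logits = logits - logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # Contrast in both directions (format A -> format B and back).
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))


rng = np.random.default_rng(0)
z_wave, z_spec = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
print(info_nce(z_wave, z_spec))
```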

Learning Robust and Multilingual Speech Representations

no code implementations Findings of the Association for Computational Linguistics 2020 Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, Aaron van den Oord

Unsupervised speech representation learning has shown remarkable success at finding representations that correlate with phonetic structures and improve downstream speech recognition performance.

Representation Learning Speech Recognition

Unsupervised Learning of Efficient and Robust Speech Representations

no code implementations 25 Sep 2019 Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, Aaron van den Oord

We present an unsupervised method for learning speech representations based on bidirectional contrastive predictive coding, which implicitly discovers phonetic structure from large-scale corpora of unlabelled raw audio signals.

Speech Recognition
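
Contrastive predictive coding trains a context network to pick out the true future latent frame from negatives; a bidirectional variant applies the same objective in both time directions. The toy NumPy sketch below uses random features in place of learned encoder and context networks and treats the other frames of the same utterance as negatives; it illustrates the scoring step only, not the paper's full model or training setup.

```python
import numpy as np


def cpc_loss(context, latents, W, k=2):
    """Toy contrastive-predictive-coding loss for one utterance.

    context: (T, d_c) context vectors (e.g. from a forward RNN)
    latents: (T, d_z) per-frame latent features
    W:       (d_c, d_z) bilinear prediction matrix for step k
    Each context c_t must pick out z_{t+k} among all frames of the
    utterance (the other frames act as negatives).
    """
    T = len(latents)
    scores = context[:T - k] @ W @ latents.T        # (T-k, T) scores against all frames
    targets = np.arange(k, T)                       # indices of the true future frames
    scores = scores - scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(T - k), targets].mean()


rng = np.random.default_rng(0)
T, d_c, d_z = 50, 16, 16
z = rng.normal(size=(T, d_z))
c_fwd, c_bwd = rng.normal(size=(T, d_c)), rng.normal(size=(T, d_c))
W_fwd, W_bwd = rng.normal(size=(d_c, d_z)), rng.normal(size=(d_c, d_z))

# Bidirectional variant: predict future frames from the forward context and
# past frames from the backward context (approximated here by reversing time).
loss = cpc_loss(c_fwd, z, W_fwd) + cpc_loss(c_bwd[::-1], z[::-1], W_bwd)
print(loss)
```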

On the Sensitivity of Adversarial Robustness to Input Data Distributions

no code implementations ICLR 2019 Gavin Weiguang Ding, Kry Yik Chau Lui, Xiaomeng Jin, Luyu Wang, Ruitong Huang

Even a semantics-preserving transformation of the input data distribution can cause significantly different robustness for an adversarially trained model that is both trained and evaluated on the new distribution.

Adversarial Robustness
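
As a concrete illustration of the setup described above, the sketch below applies a semantics-preserving, monotone remapping of intensities to a toy dataset, fits a very simple linear classifier on each resulting distribution, and compares robust accuracy under an FGSM-style worst-case perturbation. The transformation, classifier, and attack are stand-ins chosen for brevity; the paper's experiments use adversarially trained deep networks and different transformations.

```python
import numpy as np


def gamma_transform(x, gamma=2.0):
    """A semantics-preserving, invertible remapping of intensities: monotone,
    so content is preserved while the input distribution changes."""
    return np.clip(x, 0.0, 1.0) ** gamma


def fit_linear(x, y):
    """A deliberately simple linear classifier: difference of class means."""
    mu_pos, mu_neg = x[y == 1].mean(axis=0), x[y == -1].mean(axis=0)
    w = mu_pos - mu_neg
    b = -0.5 * (mu_pos + mu_neg) @ w
    return w, b


def fgsm_accuracy(w, b, x, y, eps=0.1):
    """Accuracy of sign(x @ w + b) under the worst-case L_inf perturbation of
    size eps (exact for a linear model; labels y are in {-1, +1})."""
    x_adv = x - eps * y[:, None] * np.sign(w)[None, :]
    return ((y * (x_adv @ w + b)) > 0).mean()


# Toy data: two blobs in [0, 1]^d standing in for images of two classes.
rng = np.random.default_rng(0)
d, n = 20, 2000
y = rng.choice([-1, 1], size=n)
x = np.clip(0.5 + 0.15 * y[:, None] + 0.1 * rng.normal(size=(n, d)), 0.0, 1.0)

for name, data in [("original", x), ("transformed", gamma_transform(x))]:
    w, b = fit_linear(data, y)              # train on this distribution
    acc = fgsm_accuracy(w, b, data, y)      # and evaluate robustness on it too
    print(f"{name:>11}: robust accuracy = {acc:.3f}")
```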

Automatic Selection of t-SNE Perplexity

no code implementations 10 Aug 2017 Yanshuai Cao, Luyu Wang

t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the most widely used dimensionality reduction methods for data visualization, but it has a perplexity hyperparameter that requires manual selection.

Data Visualization Dimensionality Reduction +1
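
A minimal scikit-learn sketch of automatic perplexity selection in the spirit of the paper: sweep candidate perplexities and keep the one minimising a BIC-style score that trades the embedding's KL divergence off against a perplexity penalty. The specific score below, 2·KL + log(n)·perplexity/n, is an assumed form of the paper's pseudo-BIC criterion and should be checked against the paper; the candidate grid is arbitrary.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE


def select_perplexity(x, candidates):
    """Run t-SNE for each candidate perplexity and pick the one minimising
    a BIC-style score (assumed form: 2 * KL(P || Q) + log(n) * perp / n)."""
    n = len(x)
    scores = {}
    for perp in candidates:
        tsne = TSNE(perplexity=perp, init="pca", random_state=0)
        tsne.fit_transform(x)
        scores[perp] = 2.0 * tsne.kl_divergence_ + np.log(n) * perp / n
    return min(scores, key=scores.get), scores


x, _ = load_digits(return_X_y=True)
best, scores = select_perplexity(x[:500], candidates=[5, 15, 30, 50, 100])
print("selected perplexity:", best)
print(scores)
```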
