Search Results for author: Yanshuai Cao

Found 21 papers, 9 papers with code

Hierarchical Neural Data Synthesis for Semantic Parsing

no code implementations · 4 Dec 2021 · Wei Yang, Peng Xu, Yanshuai Cao

Moreover, even the questions pertinent to a given domain, which form the input to a semantic parsing system, might not be readily available, especially in cross-domain semantic parsing.

Tasks: Data Augmentation, Semantic Parsing, +1

Code Generation from Natural Language with Less Prior Knowledge and More Monolingual Data

1 code implementation · ACL 2021 · Sajad Norouzi, Keyi Tang, Yanshuai Cao

Training datasets for semantic parsing are typically small due to the higher expertise required for annotation than most other NLP tasks.

Tasks: Code Generation, Inductive Bias, +1

A Globally Normalized Neural Model for Semantic Parsing

no code implementations · ACL (spnlp) 2021 · Chenyang Huang, Wei Yang, Yanshuai Cao, Osmar Zaïane, Lili Mou

In this paper, we propose a globally normalized model for context-free grammar (CFG)-based semantic parsing.

Tasks: Semantic Parsing

Code Generation from Natural Language with Less Prior and More Monolingual Data

1 code implementation · 1 Jan 2021 · Sajad Norouzi, Keyi Tang, Yanshuai Cao

Training datasets for semantic parsing are typically small due to the higher expertise required for annotation than most other NLP tasks.

Ranked #1 on Code Generation on Django (using extra training data)

Tasks: Code Generation, Inductive Bias, +1

Optimizing Deeper Transformers on Small Datasets

1 code implementation · ACL 2021 · Peng Xu, Dhruv Kumar, Wei Yang, Wenjie Zi, Keyi Tang, Chenyang Huang, Jackie Chi Kit Cheung, Simon J. D. Prince, Yanshuai Cao

This work challenges the common belief that very deep transformers require large training datasets: with proper initialization and optimization, their benefits can carry over to challenging tasks with small datasets, including Text-to-SQL semantic parsing and logical reading comprehension.

Tasks: Reading Comprehension, Semantic Parsing, +2

Evaluating Lossy Compression Rates of Deep Generative Models

2 code implementations · ICML 2020 · Sicong Huang, Alireza Makhzani, Yanshuai Cao, Roger Grosse

The field of deep generative modeling has succeeded in producing astonishingly realistic-seeming images and audio, but quantitative evaluation remains a challenge.

Variational Hyper RNN for Sequence Modeling

no code implementations · 24 Feb 2020 · Ruizhi Deng, Yanshuai Cao, Bo Chang, Leonid Sigal, Greg Mori, Marcus A. Brubaker

In this work, we propose a novel probabilistic sequence model that excels at capturing high variability in time series data, both across sequences and within an individual sequence.

Tasks: Time Series

On Posterior Collapse and Encoder Feature Dispersion in Sequence VAEs

no code implementations · 10 Nov 2019 · Teng Long, Yanshuai Cao, Jackie Chi Kit Cheung

Variational autoencoders (VAEs) hold great potential for modelling text, as they could in theory separate high-level semantic and syntactic properties from local regularities of natural language.

Tasks: Language Modelling

On Variational Learning of Controllable Representations for Text without Supervision

1 code implementation · ICML 2020 · Peng Xu, Jackie Chi Kit Cheung, Yanshuai Cao

The variational autoencoder (VAE) can learn the manifold of natural images on certain datasets, as evidenced by meaningful interpolation and extrapolation in the continuous latent space.

Tasks: Style Transfer, Text Style Transfer

Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer

1 code implementation · 28 May 2019 · Yanshuai Cao, Peng Xu

In this work, we develop a novel regularizer to improve the learning of long-range dependency of sequence data.

Tasks: General Classification, Inductive Bias, +2

Few-Shot Self Reminder to Overcome Catastrophic Forgetting

no code implementations · 3 Dec 2018 · Junfeng Wen, Yanshuai Cao, Ruitong Huang

We demonstrate that our method outperforms previous ones in two different continual learning settings on popular benchmarks, as well as on a new continual learning problem where tasks are designed to be more dissimilar.

Tasks: Continual Learning

Implicit Manifold Learning on Generative Adversarial Networks

no code implementations · 30 Oct 2017 · Kry Yik Chau Lui, Yanshuai Cao, Maxime Gazeau, Kelvin Shuangjian Zhang

This paper develops an implicit manifold learning perspective on Generative Adversarial Networks (GANs), studying how the support of the learned distribution, modelled as a submanifold $\mathcal{M}_{\theta}$, can perfectly match $\mathcal{M}_{r}$, the support of the real data distribution.

Automatic Selection of t-SNE Perplexity

no code implementations · 10 Aug 2017 · Yanshuai Cao, Luyu Wang

t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the most widely used dimensionality reduction methods for data visualization, but it has a perplexity hyperparameter that requires manual selection.

Tasks: Data Visualization, Dimensionality Reduction, +1
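The perplexity hyperparameter this paper automates has a standard definition in t-SNE: for each point, the conditional neighbour distribution's Shannon entropy $H$ yields a perplexity of $2^H$, an effective neighbour count. A minimal sketch of that definition in NumPy (the helper name `perplexity_of_row` is illustrative, not from the paper):

```python
import numpy as np

def perplexity_of_row(sq_dists, sigma):
    # t-SNE conditional probabilities p_{j|i} for one point i,
    # given squared distances to its neighbours and bandwidth sigma_i
    p = np.exp(-sq_dists / (2.0 * sigma ** 2))
    p /= p.sum()
    h = -np.sum(p * np.log2(p))   # Shannon entropy in bits
    return 2.0 ** h               # perplexity = 2^H

# equal distances to 4 neighbours -> uniform distribution -> perplexity 4
print(perplexity_of_row(np.array([1.0, 1.0, 1.0, 1.0]), sigma=1.0))
```

Standard t-SNE binary-searches each `sigma` to hit a user-chosen perplexity; the paper instead proposes selecting that target value automatically.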

Transductive Log Opinion Pool of Gaussian Process Experts

no code implementations · 24 Nov 2015 · Yanshuai Cao, David J. Fleet

We introduce a framework for analyzing transductive combination of Gaussian process (GP) experts, where independently trained GP experts are combined in a way that depends on test point location, in order to scale GPs to big data.

Adversarial Manipulation of Deep Representations

2 code implementations · 16 Nov 2015 · Sara Sabour, Yanshuai Cao, Fartash Faghri, David J. Fleet

We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image.

Generalized Product of Experts for Automatic and Principled Fusion of Gaussian Process Predictions

no code implementations · 28 Oct 2014 · Yanshuai Cao, David J. Fleet

In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models.

Tasks: Gaussian Processes
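For Gaussian predictions, a weighted product of experts has a closed form: the fused precision is the weighted sum of expert precisions, and the fused mean is the precision-weighted average. A minimal sketch under that standard identity (the weights `weights` here are fixed for illustration; the paper derives them automatically, e.g. from predictive uncertainty):

```python
import numpy as np

def gpoe_fuse(means, variances, weights):
    # Weighted product of independent Gaussian experts:
    # each expert's precision 1/sigma_i^2 is scaled by its weight alpha_i
    prec = weights / variances            # alpha_i / sigma_i^2
    fused_var = 1.0 / prec.sum()          # precisions add under the product
    fused_mean = fused_var * (prec * means).sum()
    return fused_mean, fused_var

# two equally weighted experts predicting 0.0 and 2.0 with unit variance
m, v = gpoe_fuse(np.array([0.0, 2.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5]))
print(m, v)   # fused mean 1.0, fused variance 1.0
```

Note that a confident expert (small variance, hence large precision) dominates the fused mean, which is the intuition behind entropy- or uncertainty-based weighting.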

Efficient Optimization for Sparse Gaussian Process Regression

no code implementations · NeurIPS 2013 · Yanshuai Cao, Marcus A. Brubaker, David J. Fleet, Aaron Hertzmann

We propose an efficient optimization algorithm for selecting a subset of training data to induce sparsity for Gaussian process regression.
