Search Results for author: Shuyang Gao

Found 22 papers, 12 papers with code

Context-Situated Pun Generation

1 code implementation • 24 Oct 2022 • Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, Nanyun Peng

In this work, we propose a new task, context-situated pun generation, where a specific context represented by a set of keywords is provided, and the task is to first identify pun words appropriate for the context and then generate puns based on the context keywords and the identified pun words.

Retrieval
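
A purely illustrative sketch of the two-stage setup described in the abstract. Both stages are hypothetical placeholders (a real system would use retrieval and a trained neural generator), not code released with the paper.

```python
# Purely illustrative two-stage pipeline; retrieve_pun_words and generate_pun
# are hypothetical placeholders, not components released with the paper.
from typing import List

def retrieve_pun_words(context_keywords: List[str], pun_lexicon: List[str]) -> List[str]:
    """Stage 1 (placeholder): rank candidate pun words by crude keyword overlap."""
    scored = [(sum(kw in w or w in kw for kw in context_keywords), w) for w in pun_lexicon]
    return [w for score, w in sorted(scored, reverse=True) if score > 0]

def generate_pun(context_keywords: List[str], pun_word: str) -> str:
    """Stage 2 (placeholder): a real system would condition a trained generator
    on the context keywords and the chosen pun word."""
    return f"<pun about {', '.join(context_keywords)} built around '{pun_word}'>"

keywords = ["bank", "river"]
for pun_word in retrieve_pun_words(keywords, ["bank", "note", "current"]):
    print(generate_pun(keywords, pun_word))
```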

Towards Textual Out-of-Domain Detection without In-Domain Labels

no code implementations • 22 Mar 2022 • Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur

In many real-world settings, machine learning models need to identify user inputs that are out-of-domain (OOD) so as to avoid performing wrong actions.

Contrastive Learning, intent-classification +4

MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension

2 code implementations • 1 Oct 2019 • Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, Dilek Hakkani-Tur

Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language.

Logical Reasoning, Machine Reading Comprehension +3

Dialog State Tracking: A Neural Reading Comprehension Approach

no code implementations • WS 2019 • Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, Dilek Hakkani-Tur

In contrast to traditional state tracking methods where the dialog state is often predicted as a distribution over a closed set of all the possible slot values within an ontology, our method uses a simple attention-based neural network to point to the slot values within the conversation.

dialog state tracking, Machine Reading Comprehension +2
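
A minimal PyTorch sketch of the pointing idea in this abstract, using my own simplified architecture (a BiGRU dialogue encoder, one learned query vector per slot, and attention heads that score each token position as a candidate value-span start or end). It is an illustration, not the paper's released model.

```python
# Simplified sketch: attention over dialogue tokens "points" to the slot value
# span instead of classifying over a fixed ontology of slot values.
import torch
import torch.nn as nn

class SlotPointer(nn.Module):
    def __init__(self, vocab_size, num_slots=10, hidden=128):   # num_slots is an assumed toy value
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.slot_query = nn.Embedding(num_slots, 2 * hidden)   # one learned query per slot
        self.start_head = nn.Linear(2 * hidden, 2 * hidden)
        self.end_head = nn.Linear(2 * hidden, 2 * hidden)

    def forward(self, dialogue_ids, slot_id):
        h, _ = self.encoder(self.embed(dialogue_ids))      # (B, T, 2H) token encodings
        q = self.slot_query(slot_id).unsqueeze(1)           # (B, 1, 2H) slot query
        start_logits = (self.start_head(h) * q).sum(-1)     # (B, T) attention scores = pointer logits
        end_logits = (self.end_head(h) * q).sum(-1)         # (B, T)
        return start_logits, end_logits                     # argmax over T points to the value span

model = SlotPointer(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 40))    # toy batch: 2 dialogues, 40 tokens each
slots = torch.tensor([3, 7])                # toy slot indices (e.g. "restaurant-area")
start, end = model(tokens, slots)
```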

Invariant Representations without Adversarial Training

1 code implementation • NeurIPS 2018 • Daniel Moyer, Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, Aram Galstyan

Representations of data that are invariant to changes in specified factors are useful for a wide range of problems: removing potential biases in prediction problems, controlling the effects of covariates, and disentangling meaningful factors of variation.

Representation Learning

Modeling Psychotherapy Dialogues with Kernelized Hashcode Representations: A Nonparametric Information-Theoretic Approach

no code implementations • 26 Apr 2018 • Sahil Garg, Irina Rish, Guillermo Cecchi, Palash Goyal, Sarik Ghazarian, Shuyang Gao, Greg Ver Steeg, Aram Galstyan

We also derive a novel lower bound on mutual information, used as a model-selection criterion favoring representations with better alignment between the utterances of participants in a collaborative dialogue setting, as well as higher predictability of the generated responses.

Computational Efficiency, Dialogue Generation +1
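
The abstract mentions a novel lower bound on mutual information used as a model-selection criterion. For background only, the classic Barber-Agakov variational bound shows the general shape such bounds take; it is not the specific bound derived in this paper.

```latex
% Classic variational (Barber--Agakov) lower bound on mutual information,
% shown for background; not the specific bound derived in the paper.
\[
  I(X;Y) \;=\; H(X) - H(X \mid Y)
         \;\ge\; H(X) + \mathbb{E}_{p(x,y)}\!\left[\log q(x \mid y)\right],
\]
% which holds for any variational decoder $q(x \mid y)$, with equality when
% $q(x \mid y) = p(x \mid y)$.
```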

Auto-Encoding Total Correlation Explanation

no code implementations • 16 Feb 2018 • Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, Aram Galstyan

Advances in unsupervised learning enable reconstruction and generation of samples from complex distributions, but this success is marred by the inscrutability of the representations learned.

Disentanglement
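
For reference, the quantity named in the title, total correlation, has a standard definition, and CorEx-style methods in this line of work seek a representation Y that drives the conditional version toward zero. These are textbook definitions, not equations copied from the paper.

```latex
% Standard definition of total correlation and its conditional form given a
% learned representation Y (textbook definitions, not copied from the paper).
\[
  TC(X) = \sum_{i=1}^{n} H(X_i) - H(X_1,\dots,X_n),
  \qquad
  TC(X \mid Y) = \sum_{i=1}^{n} H(X_i \mid Y) - H(X_1,\dots,X_n \mid Y).
\]
% TC(X) = 0 exactly when the X_i are mutually independent, so a representation
% Y with small TC(X | Y) explains most of the dependence among the X_i.
```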

Kernelized Hashcode Representations for Relation Extraction

1 code implementation • 10 Nov 2017 • Sahil Garg, Aram Galstyan, Greg Ver Steeg, Irina Rish, Guillermo Cecchi, Shuyang Gao

Here we propose to use random subspaces of KLSH codes for efficiently constructing an explicit representation of NLP structures suitable for general classification methods.

General Classification, Relation +1
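
The sketch below illustrates the general flavor of the approach under simplifying assumptions: binary codes derived from kernel similarities to randomly chosen reference examples, with a random subspace of the code bits used as explicit features for an off-the-shelf classifier. The hashing scheme here is my own stand-in, not the KLSH construction used in the paper.

```python
# Simplified, generic illustration of kernelized hashcodes + random subspaces.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                       # stand-in features for NLP structures
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # toy labels

ref = X[rng.choice(len(X), size=64, replace=False)]  # random reference examples
K = rbf_kernel(X, ref, gamma=0.1)                    # (N, 64) kernel similarities
codes = (K > np.median(K, axis=0)).astype(int)       # one binary code bit per reference

subspace = rng.choice(codes.shape[1], size=16, replace=False)   # random subspace of bits
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(codes[:, subspace], y)
print("training accuracy:", clf.score(codes[:, subspace], y))
```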

Variational Information Maximization for Feature Selection

1 code implementation • NeurIPS 2016 • Shuyang Gao, Greg Ver Steeg, Aram Galstyan

We demonstrate that approximations made by existing methods are based on unrealistic assumptions.

feature selection
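
For context, a generic mutual-information filter for feature selection looks like the following, using scikit-learn's kNN-based MI estimates. It illustrates the MI-for-feature-selection setting the paper studies, not the variational method the paper proposes.

```python
# Generic mutual-information filter using scikit-learn's kNN-based MI estimates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=1000, n_features=30, n_informative=5, random_state=0)
mi = mutual_info_classif(X, y, n_neighbors=3, random_state=0)   # one MI estimate per feature
top = np.argsort(mi)[::-1][:5]                                  # keep the 5 highest-MI features
print("selected features:", top)
print("their MI estimates:", mi[top].round(3))
```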

Sifting Common Information from Many Variables

1 code implementation • 7 Jun 2016 • Greg Ver Steeg, Shuyang Gao, Kyle Reing, Aram Galstyan

Measuring the relationship between any pair of variables is a rich and active area of research that is central to scientific practice.

blind source separation, Dimensionality Reduction

The DARPA Twitter Bot Challenge

no code implementations • 20 Jan 2016 • V. S. Subrahmanian, Amos Azaria, Skylar Durst, Vadim Kagan, Aram Galstyan, Kristina Lerman, Linhong Zhu, Emilio Ferrara, Alessandro Flammini, Filippo Menczer, Andrew Stevens, Alexander Dekhtyar, Shuyang Gao, Tad Hogg, Farshad Kooti, Yan Liu, Onur Varol, Prashant Shiralkar, Vinod Vydiswaran, Qiaozhu Mei, Tim Hwang

A number of organizations ranging from terrorist groups such as ISIS to politicians and nation states reportedly conduct explicit campaigns to influence opinion on social media, posing a risk to democratic processes.

Understanding confounding effects in linguistic coordination: an information-theoretic approach

no code implementations • 1 Dec 2014 • Shuyang Gao, Greg Ver Steeg, Aram Galstyan

We revisit some of the previous studies that reported strong signatures of stylistic accommodation, and find that a significant part of the observed coordination can be attributed to a simple confounding effect: length coordination.

Efficient Estimation of Mutual Information for Strongly Dependent Variables

4 code implementations • 7 Nov 2014 • Shuyang Gao, Greg Ver Steeg, Aram Galstyan

We demonstrate that a popular class of nonparametric mutual information (MI) estimators based on k-nearest-neighbor graphs requires a number of samples that scales exponentially with the true MI.
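
For reference, a minimal implementation of the standard Kraskov-Stogbauer-Grassberger (KSG) kNN estimator, representative of the "popular class" analyzed here, is sketched below; the corrected estimator proposed in the paper is not reproduced.

```python
# Minimal sketch of the standard KSG kNN mutual information estimator.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=5):
    """Estimate I(X;Y) in nats from paired samples x, y of shape (N, d)."""
    x, y = x.reshape(len(x), -1), y.reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])

    # Distance to the k-th nearest neighbour in the joint space (max-norm).
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, k]

    # Count neighbours strictly within eps in each marginal space (self excluded).
    tree_x, tree_y = cKDTree(x), cKDTree(y)
    nx = np.array([len(tree_x.query_ball_point(x[i], eps[i] - 1e-12, p=np.inf)) - 1 for i in range(n)])
    ny = np.array([len(tree_y.query_ball_point(y[i], eps[i] - 1e-12, p=np.inf)) - 1 for i in range(n)])

    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Strongly dependent toy data: true MI = 0.5*log(1 + 1/sigma^2) ~ 11.5 nats,
# far above what the kNN estimate reaches at this sample size (the paper's point).
rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 1))
y = x + 1e-5 * rng.normal(size=(2000, 1))
print(ksg_mutual_information(x, y))
```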
