Search Results for author: Ruiqi Zhong

Found 17 papers, 11 papers with code

Active Programming by Example with a Natural Language Prior

no code implementations • 25 May 2022 • Ruiqi Zhong, Charlie Snell, Dan Klein, Jason Eisner

We introduce APEL, a new framework that enables non-programmers to indirectly annotate natural language utterances with executable meaning representations, such as SQL programs.

Bayesian Inference
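
A minimal sketch of the Bayesian-inference flavor of this setup, under assumptions the snippet above does not spell out: a seed parser proposes candidate SQL programs with prior weights, and a non-programmer's choice of the correct output on a small example database is used to update a posterior over the candidates. The candidate programs, the database path, and the run_sql helper are all hypothetical.

```python
import sqlite3

def run_sql(query, db_path):
    """Execute a SQL query on a SQLite database and return its result set (hypothetical helper)."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(query).fetchall()

def update_posterior(candidates, prior, db_path, chosen_output):
    """Bayesian update: shift probability mass toward candidates whose denotation
    on this database matches the output the annotator selected."""
    weights = {}
    for program, p in zip(candidates, prior):
        likelihood = 1.0 if run_sql(program, db_path) == chosen_output else 1e-6
        weights[program] = p * likelihood
    total = sum(weights.values())
    return {program: w / total for program, w in weights.items()}

# Hypothetical usage: two candidate SQL programs from a seed parser, and the
# output the annotator picked when shown results on a small example database.
candidates = [
    "SELECT name FROM singer WHERE age > 30",
    "SELECT name FROM singer WHERE age >= 30",
]
prior = [0.6, 0.4]
# posterior = update_posterior(candidates, prior, "toy.db", [("Alice",)])
```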

InCoder: A Generative Model for Code Infilling and Synthesis

1 code implementation • 12 Apr 2022 • Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis

Our model is the first generative model that is able to directly perform zero-shot code infilling, which we evaluate on challenging tasks such as type inference, comment generation, and variable re-naming.

Program Synthesis
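
The released InCoder checkpoints are on the Hugging Face hub (e.g. facebook/incoder-1B); below is a minimal left-to-right completion sketch with the standard transformers API. The zero-shot infilling described above additionally relies on the model's mask sentinel tokens, and that prompt format is not reproduced here, so treat anything beyond plain completion as an assumption and consult the official repository.

```python
# Plain left-to-right completion with the public 1B checkpoint. The zero-shot
# infilling evaluated in the paper uses the model's mask sentinel tokens; see
# the official repository for that recipe (not reproduced here).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/incoder-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = 'def count_words(filename):\n    """Count the number of words in the file."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```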

Describing Differences between Text Distributions with Natural Language

1 code implementation • 28 Jan 2022 • Ruiqi Zhong, Charlie Snell, Dan Klein, Jacob Steinhardt

We then re-rank the descriptions by checking how often they hold on a larger set of samples with a learned verifier.

Re-Ranking
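
A minimal sketch of that re-ranking step, under assumptions not in the snippet: verifier_prob is a hypothetical learned verifier that returns the probability that a natural-language description holds on a single text sample, and candidate descriptions are re-ranked by how much more often they hold on samples from one distribution than the other.

```python
def rerank_descriptions(candidates, samples_a, samples_b, verifier_prob):
    """Re-rank candidate descriptions using a larger sample set.

    verifier_prob(description, text) is a hypothetical learned verifier that
    returns P(description holds on text). A good description of how A differs
    from B should hold more often on samples from A than on samples from B.
    """
    scored = []
    for description in candidates:
        score_a = sum(verifier_prob(description, t) for t in samples_a) / len(samples_a)
        score_b = sum(verifier_prob(description, t) for t in samples_b) / len(samples_b)
        scored.append((score_a - score_b, description))
    return [d for _, d in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```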

The Effect of Model Size on Worst-Group Generalization

no code implementations • 8 Dec 2021 • Alan Pham, Eunice Chan, Vikranth Srivatsa, Dhruba Ghosh, Yaoqing Yang, Yaodong Yu, Ruiqi Zhong, Joseph E. Gonzalez, Jacob Steinhardt

Overparameterization is shown to result in poor test accuracy on rare subgroups under a variety of settings where subgroup information is known.

Natural Language Processing

Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

1 code implementation • Findings (ACL) 2021 • Ruiqi Zhong, Dhruba Ghosh, Dan Klein, Jacob Steinhardt

We develop statistically rigorous methods to address this, and after accounting for pretraining and finetuning noise, we find that our BERT-Large is worse than BERT-Mini on at least 1-4% of instances across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of 2-10%.

Pretrained Language Models
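
A deliberately simplified sketch of an instance-level comparison, not the paper's statistically rigorous procedure: with per-instance correctness recorded from several finetuning runs of each model (to average out finetuning noise), count the instances where the larger model's mean accuracy falls below the smaller model's. The array names and toy data are hypothetical.

```python
import numpy as np

def instance_level_regressions(correct_large, correct_mini):
    """correct_large, correct_mini: 0/1 arrays of shape (num_runs, num_instances)
    from multiple finetuning seeds of each model. Returns the fraction of
    instances where the larger model is worse on average -- a naive estimate
    that omits the pretraining/finetuning noise corrections used in the paper."""
    mean_large = correct_large.mean(axis=0)
    mean_mini = correct_mini.mean(axis=0)
    return float((mean_large < mean_mini).mean())

# Hypothetical toy data: 5 finetuning seeds, 1000 evaluation instances per model.
rng = np.random.default_rng(0)
large = (rng.random((5, 1000)) < 0.90).astype(int)
mini = (rng.random((5, 1000)) < 0.82).astype(int)
print(instance_level_regressions(large, mini))
```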

Approximating How Single Head Attention Learns

1 code implementation • 13 Mar 2021 • Charlie Snell, Ruiqi Zhong, Dan Klein, Jacob Steinhardt

Our approximation explains why models sometimes attend to salient words, and inspires a toy example where a multi-head attention model can overcome the above hard training distribution by improving learning dynamics rather than expressiveness.

Semantic Evaluation for Text-to-SQL with Distilled Test Suites

3 code implementations • EMNLP 2020 • Ruiqi Zhong, Tao Yu, Dan Klein

We propose test suite accuracy to approximate semantic accuracy for Text-to-SQL models.

Text-to-SQL

Understanding Attention Training via Output Relevance

no code implementations • 16 Aug 2020 • Charlie Snell, Ruiqi Zhong, Jacob Steinhardt, Dan Klein

If we ablate attention by fixing it to uniform, the output relevance still correlates with the attention of a normally trained model; but if we instead ablate output relevance, attention cannot be learned.

Translation
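
A toy PyTorch sketch of the uniform-attention ablation mentioned above, with everything beyond the snippet treated as an assumption: a minimal single-head attention layer with a flag that replaces the learned attention distribution by a uniform one, so only the value/output pathway can carry signal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySingleHeadAttention(nn.Module):
    """Minimal single-head attention with an option to ablate the attention
    distribution to uniform (a sketch of the ablation, not the paper's code)."""

    def __init__(self, dim, uniform_attention=False):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.uniform_attention = uniform_attention

    def forward(self, x):  # x: (batch, seq, dim)
        scores = self.q(x) @ self.k(x).transpose(-2, -1) / x.size(-1) ** 0.5
        if self.uniform_attention:
            # Ablation: ignore the learned scores and attend uniformly.
            attn = torch.full_like(scores, 1.0 / scores.size(-1))
        else:
            attn = F.softmax(scores, dim=-1)
        return attn @ self.v(x)

# Usage: the ablated layer still mixes values, but its attention is frozen at uniform.
layer = ToySingleHeadAttention(dim=16, uniform_attention=True)
out = layer(torch.randn(2, 5, 16))
```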

Semantic Evaluation for Text-to-SQL with Distilled Test Suites

no code implementations • 2 Jul 2020 • Ruiqi Zhong, Tao Yu, Dan Klein

We propose test suite accuracy to approximate semantic accuracy for Text-to-SQL models, where a predicted query is semantically correct if its denotation is the same as the gold for every possible database.

Semantic Parsing • Text-to-SQL
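
A minimal sketch of that criterion, assuming a set of SQLite database files standing in for the distilled test suite (the paths are hypothetical): a predicted query counts as correct only if its result matches the gold query's result on every database; result ordering and duplicate handling are simplified here.

```python
import sqlite3

def denotation(query, db_path):
    """Execute `query` on the SQLite database at `db_path` and return its
    result sorted, so the comparison ignores row order."""
    with sqlite3.connect(db_path) as conn:
        return sorted(conn.execute(query).fetchall())

def test_suite_correct(predicted, gold, db_paths):
    """Judge the predicted query semantically correct only if its denotation
    matches the gold query's denotation on every database in the suite."""
    return all(denotation(predicted, db) == denotation(gold, db) for db in db_paths)

# Hypothetical usage with a small suite of database instances:
# test_suite_correct("SELECT name FROM singer WHERE age > 30",
#                    "SELECT name FROM singer WHERE age >= 30",
#                    ["suite/db_0.sqlite", "suite/db_1.sqlite"])
```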

Semantic Scaffolds for Pseudocode-to-Code Generation

1 code implementation • ACL 2020 • Ruiqi Zhong, Mitchell Stern, Dan Klein

We propose a method for program generation based on semantic scaffolds, lightweight structures representing the high-level semantic and syntactic composition of a program.

Code Generation
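
A rough sketch of how such a scaffold might be used, with the representation treated as an assumption rather than the paper's exact definition: each program line is abstracted to a high-level label (its syntactic construct and the variables it declares), and full-program candidates are pruned so that only those belonging to the highest-scoring scaffolds are kept.

```python
from collections import defaultdict

def scaffold_of(program_lines):
    """Hypothetical scaffold: per-line high-level labels (syntactic construct
    plus declared variables), with low-level token details stripped.
    `program_lines` is a list of (construct, declared_vars, code) triples."""
    return tuple((construct, frozenset(declared)) for construct, declared, _ in program_lines)

def prune_by_scaffold(candidates, top_k=5):
    """Keep only candidate programs whose scaffold ranks among the top_k
    scaffolds by best member score -- a sketch of searching over scaffolds
    before committing to full programs, not the paper's exact algorithm.
    `candidates` is a list of (program_lines, score) pairs."""
    best = defaultdict(float)
    for lines, score in candidates:
        best[scaffold_of(lines)] = max(best[scaffold_of(lines)], score)
    keep = set(sorted(best, key=best.get, reverse=True)[:top_k])
    return [(lines, s) for lines, s in candidates if scaffold_of(lines) in keep]
```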

Detecting and Reducing Bias in a High Stakes Domain

1 code implementation • IJCNLP 2019 • Ruiqi Zhong, Yanda Chen, Desmond Patton, Charlotte Selous, Kathy McKeown

Gang-involved youth in cities such as Chicago sometimes post on social media to express their aggression towards rival gangs, and previous research has demonstrated that a deep learning approach can predict aggression and loss in posts.

Fine-grained Sentiment Analysis with Faithful Attention

no code implementations • 19 Aug 2019 • Ruiqi Zhong, Steven Shao, Kathleen McKeown

While the general task of textual sentiment classification has been widely studied, much less research looks specifically at sentiment between a specified source and target.

Relation Extraction • Sentiment Analysis

Detecting Gang-Involved Escalation on Social Media Using Context

1 code implementation • EMNLP 2018 • Serina Chang, Ruiqi Zhong, Ethan Adams, Fei-Tzin Lee, Siddharth Varia, Desmond Patton, William Frey, Chris Kedzie, Kathleen McKeown

Gang-involved youth in cities such as Chicago have increasingly turned to social media to post about their experiences and intents online.

Subspace Embedding and Linear Regression with Orlicz Norm

no code implementations • ICML 2018 • Alexandr Andoni, Chengyu Lin, Ying Sheng, Peilin Zhong, Ruiqi Zhong

An Orlicz norm is parameterized by a non-negative convex function $G:\mathbb{R}_+\rightarrow\mathbb{R}_+$ with $G(0)=0$: the Orlicz norm of a vector $x\in\mathbb{R}^n$ is defined as $\|x\|_G=\inf\left\{\alpha>0 \,\middle|\, \sum_{i=1}^n G(|x_i|/\alpha)\leq 1\right\}$.
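
A small numeric sketch of this definition: since $\sum_i G(|x_i|/\alpha)$ is non-increasing in $\alpha$, the infimum can be approximated by binary search. The function name and tolerance below are mine, and $G$ is a sample choice rather than anything fixed by the definition; with $G(t)=t^2$ the Orlicz norm reduces to the Euclidean norm, which gives a quick sanity check.

```python
import numpy as np

def orlicz_norm(x, G, tol=1e-10):
    """Approximate ||x||_G = inf{alpha > 0 : sum_i G(|x_i|/alpha) <= 1}
    by binary search, using that the sum is non-increasing in alpha."""
    x = np.abs(np.asarray(x, dtype=float))
    if not x.any():
        return 0.0
    # Grow an upper bound until the constraint holds (G(t) -> 0 as t -> 0+).
    hi = float(x.max())
    while np.sum(G(x / hi)) > 1:
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        if np.sum(G(x / mid)) <= 1:
            hi = mid
        else:
            lo = mid
    return hi

# Sanity check: with G(t) = t**2, the Orlicz norm is the Euclidean norm.
v = [3.0, 4.0]
print(orlicz_norm(v, lambda t: t ** 2))  # ~5.0
print(np.linalg.norm(v))                 # 5.0
```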
