Search Results for author: Yi Chern Tan

Found 7 papers, 7 papers with code

GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing

1 code implementation · ICLR 2021 · Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin Wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, Richard Socher, Caiming Xiong

We present GraPPa, an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data.

Inductive Bias · Language Modelling · +3
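GraPPa builds its pre-training corpus by sampling synthetic question-SQL pairs from a synchronous grammar over tables. The sketch below illustrates that data-generation idea only; the template grammar and schema are toy stand-ins, not the paper's actual induced SCFG:

```python
import random

# Toy synchronous templates pairing a question pattern with a SQL pattern.
# These are illustrative stand-ins, not GraPPa's induced grammar.
TEMPLATES = [
    ("show all {col} of {table}", "SELECT {col} FROM {table}"),
    ("how many rows are in {table}", "SELECT COUNT(*) FROM {table}"),
    ("show {col} of {table} sorted by {col2}",
     "SELECT {col} FROM {table} ORDER BY {col2}"),
]

# A hypothetical schema to instantiate the templates against.
SCHEMA = {"singer": ["name", "age", "country"],
          "concert": ["venue", "year"]}

def sample_pair(rng: random.Random):
    """Sample one synthetic (question, SQL) pre-training example."""
    question_tpl, sql_tpl = rng.choice(TEMPLATES)
    table = rng.choice(list(SCHEMA))
    col, col2 = rng.choice(SCHEMA[table]), rng.choice(SCHEMA[table])
    slots = {"table": table, "col": col, "col2": col2}
    return question_tpl.format(**slots), sql_tpl.format(**slots)

rng = random.Random(0)
for _ in range(3):
    q, s = sample_pair(rng)
    print(f"{q!r:45} -> {s}")
```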

ESPRIT: Explaining Solutions to Physical Reasoning Tasks

2 code implementations · ACL 2020 · Nazneen Fatema Rajani, Rui Zhang, Yi Chern Tan, Stephan Zheng, Jeremy Weiss, Aadit Vyas, Abhijit Gupta, Caiming Xiong, Richard Socher, Dragomir Radev

Our framework learns to generate explanations of how the physical simulation will causally evolve so that an agent or a human can easily reason about a solution using those interpretable descriptions.

Assessing Social and Intersectional Biases in Contextualized Word Representations

1 code implementation · NeurIPS 2019 · Yi Chern Tan, L. Elisa Celis

In this paper, we analyze the extent to which state-of-the-art models for contextual word representations, such as BERT and GPT-2, encode biases with respect to gender, race, and intersectional identities.

Fairness · Sentence · +1
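As a rough illustration of what "encoding bias in contextual representations" looks like operationally, the sketch below pulls a word's BERT embedding in context and compares cosine similarity against pronoun embeddings. This toy probe is an assumption for illustration, not the paper's actual bias measures:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical minimal probe; the paper defines its own association tests.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Contextual embedding of `word`'s first subtoken within `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    target_id = tokenizer.encode(word, add_special_tokens=False)[0]
    pos = enc["input_ids"][0].tolist().index(target_id)
    return hidden[pos]

# Compare how "engineer" in context associates with two pronouns.
v = word_vector("the engineer finished the design", "engineer")
for pronoun in ("he", "she"):
    p = word_vector(f"{pronoun} finished the design", pronoun)
    sim = torch.cosine_similarity(v, p, dim=0).item()
    print(pronoun, round(sim, 3))
```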

SParC: Cross-Domain Semantic Parsing in Context

4 code implementations · ACL 2019 · Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, Dragomir Radev

The best model obtains an exact match accuracy of 20.2% over all questions and less than 10% over all interaction sequences, indicating that the cross-domain setting and the contextual phenomena of the dataset present significant challenges for future research.

Semantic Parsing · Text-To-SQL
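The two reported numbers are scored at different granularities: per question, and per full interaction, which counts only if every question in it is matched. A schematic of that scoring over hypothetical (predicted, gold) pairs, with plain string equality standing in for SParC's canonicalized SQL comparison:

```python
def exact_match_scores(interactions):
    """
    `interactions`: list of interactions; each interaction is a list of
    (predicted_sql, gold_sql) pairs, one per question turn. The real
    evaluation canonicalizes SQL before comparison; string equality
    here is a simplification.
    """
    q_hits = q_total = i_hits = 0
    for turns in interactions:
        matches = [pred == gold for pred, gold in turns]
        q_hits += sum(matches)
        q_total += len(matches)
        i_hits += all(matches)  # interaction counts only if every turn matches
    return q_hits / q_total, i_hits / len(interactions)

# Toy example: two interactions, one fully correct.
interactions = [
    [("SELECT name FROM singer", "SELECT name FROM singer"),
     ("SELECT name FROM singer WHERE age > 30",
      "SELECT name FROM singer WHERE age > 30")],
    [("SELECT * FROM concert", "SELECT * FROM concert"),
     ("SELECT venue FROM concert", "SELECT venue, year FROM concert")],
]
q_acc, i_acc = exact_match_scores(interactions)
print(f"question EM: {q_acc:.0%}, interaction EM: {i_acc:.0%}")
```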

Open Sesame: Getting Inside BERT's Linguistic Knowledge

1 code implementation · WS 2019 · Yongjie Lin, Yi Chern Tan, Robert Frank

How and to what extent does BERT encode syntactically-sensitive hierarchical information or positionally-sensitive linear information?
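One lightweight way to probe this question is to inspect where BERT's attention heads point for an agreement-style stimulus. The sketch below is illustrative only; the paper's diagnostics and stimuli are more controlled:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True).eval()

# Agreement stimulus: the verb "are" must agree with "keys", not "cabinet".
sentence = "the keys to the cabinet are on the table"
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    attentions = model(**enc).attentions  # per layer: (1, heads, seq, seq)

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
verb_pos = tokens.index("are")

# For each layer, average over heads and report which token the
# agreeing verb attends to most strongly.
for layer, attn in enumerate(attentions):
    row = attn[0].mean(dim=0)[verb_pos]  # attention out of "are", (seq_len,)
    top = row.argmax().item()
    print(f"layer {layer:2d}: 'are' attends most to {tokens[top]!r}")
```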
