no code implementations • ACL 2022 • Chen Zhao, Yu Su, Adam Pauls, Emmanouil Antonios Platanios
Text-to-SQL parsers map natural language questions to programs that are executable over tables to generate answers, and are typically evaluated on large-scale datasets like Spider (Yu et al., 2018).
1 code implementation • 26 May 2023 • Tianshu Zhang, Changchang Liu, Wei-Han Lee, Yu Su, Huan Sun
By leveraging data from multiple clients, the FL paradigm can be especially beneficial for clients that have little training data to develop a data-hungry neural semantic parser on their own.
no code implementations • 23 May 2023 • Shijie Chen, Ziru Chen, Huan Sun, Yu Su
Despite remarkable progress in text-to-SQL semantic parsing in recent years, the performance of existing parsers is still far from perfect.
1 code implementation • 22 May 2023 • Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu Su
By providing external information to large language models (LLMs), tool augmentation (including retrieval augmentation) has emerged as a promising solution for addressing the limitations of LLMs' static parametric memory.
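As a rough illustration of the general idea (not this paper's system), retrieval augmentation amounts to fetching relevant evidence and prepending it to the prompt so the model can answer from external text rather than its static parametric memory. The toy corpus and keyword-overlap retriever below are hypothetical stand-ins for a real search component:

```python
def retrieve(query, corpus, k=2):
    """Naive keyword-overlap retriever standing in for a real search component."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:k]

corpus = [
    "The 2023 final was won by Team A.",
    "Team B won the 2019 final.",
    "The stadium seats 60,000 people.",
]
question = "Who won the 2023 final?"
evidence = "\n".join(retrieve(question, corpus))
prompt = f"Context:\n{evidence}\n\nQuestion: {question}\nAnswer:"
# `prompt` is what gets sent to the LLM in place of the bare question.
```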
1 code implementation • 22 May 2023 • Ziru Chen, Shijie Chen, Michael White, Raymond Mooney, Ali Payani, Jayanth Srinivasa, Yu Su, Huan Sun
Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code.
1 code implementation • 18 May 2023 • Kai Zhang, Bernal Jiménez Gutiérrez, Yu Su
Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting.
Ranked #2 on Relation Extraction on TACRED-Revisited
1 code implementation • 15 May 2023 • Samuel Stevens, Yu Su
Over-parameterized neural language models (LMs) can memorize and recite long sequences of training data.
1 code implementation • 10 May 2023 • Xiang Yue, Boshi Wang, Kai Zhang, Ziru Chen, Yu Su, Huan Sun
To facilitate the evaluation, we manually curate a set of test examples covering 12 domains from a generative search engine, New Bing.
1 code implementation • 7 May 2023 • Agus Sudjianto, Aijun Zhang, Zebin Yang, Yu Su, Ningzhou Zeng
PiML (read $\pi$-ML, /ˈpaɪ.ˈem.ˈel/) is an integrated and open-access Python toolbox for interpretable machine learning model development and model diagnostics.
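For orientation, a minimal sketch of the toolbox's low-code workflow follows; the method and dataset names are recalled from PiML's public README and may differ across versions:

```python
from piml import Experiment

exp = Experiment()
exp.data_loader(data="BikeSharing")  # load a built-in demo dataset (name assumed)
exp.data_prepare()                   # train/test split and preprocessing
exp.model_train()                    # fit the built-in interpretable models
exp.model_explain()                  # global/local explanation panels
exp.model_diagnose()                 # accuracy, robustness, and resilience checks
```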
1 code implementation • 2 May 2023 • Tianle Li, Xueguang Ma, Alex Zhuang, Yu Gu, Yu Su, Wenhu Chen
On GrailQA and WebQSP, our model is also on par with other fully-trained models.
no code implementations • 5 Apr 2023 • Shuanghong Shen, Enhong Chen, Bihan Xu, Qi Liu, Zhenya Huang, Linbo Zhu, Yu Su
In this paper, we present the Quiz-based Knowledge Tracing (QKT) model to monitor students' knowledge states according to their quiz-based learning interactions.
no code implementations • 20 Dec 2022 • FatemehSadat Mireshghallah, Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner
Task-oriented dialogue systems often assist users with personal or confidential matters.
no code implementations • 19 Dec 2022 • Vardaan Pahuja, Boshi Wang, Hugo Latapie, Jayanth Srinivasa, Yu Su
To address the limitations of existing KG link prediction frameworks, we propose a novel retrieve-and-read framework, which first retrieves a relevant subgraph context for the query and then jointly reasons over the context and the query with a high-capacity reader.
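A toy sketch of the retrieve-and-read pattern (illustrative names, not the paper's code): gather a k-hop subgraph around the query entity, then let a reader score candidates against the query plus that context:

```python
from collections import defaultdict

# Tiny KG as (head, relation, tail) triples; adjacency includes inverse edges.
edges = [("columbus", "capital_of", "ohio"),
         ("columbus", "located_in", "ohio"),
         ("ohio", "located_in", "usa")]
adj = defaultdict(list)
for h, r, t in edges:
    adj[h].append((r, t))
    adj[t].append((f"inv_{r}", h))

def retrieve_subgraph(entity, hops=2):
    """Collect all edges within `hops` of the query entity as reader context."""
    frontier, seen, context = {entity}, {entity}, []
    for _ in range(hops):
        nxt = set()
        for e in frontier:
            for r, t in adj[e]:
                context.append((e, r, t))
                nxt.add(t)
        frontier = nxt - seen
        seen |= frontier
    return context

def read(query, context):
    """Toy 'reader': return tails whose context edge matches the queried relation."""
    head, rel = query
    return {t for h, r, t in context if h == head and r == rel}

print(read(("columbus", "located_in"), retrieve_subgraph("columbus")))  # {'ohio'}
```

In the actual framework the reader is a high-capacity neural model that jointly attends over the query and the retrieved context; the set comprehension above merely marks where that scoring happens.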
no code implementations • 19 Dec 2022 • Yu Gu, Xiang Deng, Yu Su
Most existing work for grounded language understanding uses LMs to directly generate plans that can be executed in the environment to achieve the desired effects.
no code implementations • 8 Dec 2022 • Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, Yu Su
In this work, we propose a novel method, LLM-Planner, that harnesses the power of large language models to do few-shot planning for embodied agents.
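The gist can be sketched as few-shot prompting: pair example instructions with subgoal sequences and ask the model to continue the pattern. Everything below (the example pairs, the `call_llm` stub, and the output format) is hypothetical, not the paper's code:

```python
EXAMPLES = [
    ("Put a washed apple on the table",
     "find apple, pick up apple, wash apple, put apple on table"),
    ("Throw away the mug",
     "find mug, pick up mug, put mug in trash can"),
]

def build_plan_prompt(instruction):
    """Serialize in-context examples, then leave the new task's plan to the LLM."""
    shots = "\n\n".join(f"Task: {t}\nPlan: {p}" for t, p in EXAMPLES)
    return f"{shots}\n\nTask: {instruction}\nPlan:"

def call_llm(prompt):
    """Placeholder for a real LLM API call."""
    return "find bread, pick up knife, slice bread"

plan = call_llm(build_plan_prompt("Slice the bread"))
subgoals = [s.strip() for s in plan.split(",")]  # handed to a low-level controller
```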
no code implementations • 25 Sep 2022 • Chenglong Li, Qiwen Zhu, Tubiao Liu, Jin Tang, Yu Su
To address this issue, we design a multi-stage convolution-transformer network for step segmentation.
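One stage of such a network might pair temporal convolutions (local motion cues) with self-attention (long-range step context); this PyTorch sketch is a guess at the general shape under those assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class ConvTransformerStage(nn.Module):
    """One stage: temporal conv for local cues, self-attention for long range."""
    def __init__(self, dim=64, num_steps=10):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv1d(dim, num_steps, kernel_size=1)

    def forward(self, x):                      # x: (batch, dim, time)
        h = torch.relu(self.conv(x))
        h = self.attn(h.transpose(1, 2)).transpose(1, 2)
        return self.head(h)                    # per-frame step logits

# A multi-stage variant would stack stages, each refining the previous output.
```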
no code implementations • 12 Sep 2022 • Yu Gu, Vardaan Pahuja, Gong Cheng, Yu Su
In this survey, we situate KBQA in the broader literature of semantic parsing and give a comprehensive account of how existing KBQA approaches attempt to address the unique challenges.
no code implementations • 11 Jul 2022 • Shijie Chen, Ziru Chen, Xiang Deng, Ashley Lewis, Lingbo Mo, Samuel Stevens, Zhen Wang, Xiang Yue, Tianshu Zhang, Yu Su, Huan Sun
We present TacoBot, a task-oriented dialogue system built for the inaugural Alexa Prize TaskBot Challenge, which assists users in completing multi-step cooking and home improvement tasks.
1 code implementation • 24 May 2022 • Elias Stengel-Eskin, Emmanouil Antonios Platanios, Adam Pauls, Sam Thomson, Hao Fang, Benjamin Van Durme, Jason Eisner, Yu Su
Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows.
1 code implementation • COLING 2022 • Yu Gu, Yu Su
Question answering on knowledge bases (KBQA) poses a unique challenge for semantic parsing research due to two intertwined difficulties: a large search space and ambiguity in schema linking.
1 code implementation • 16 Mar 2022 • Bernal Jiménez Gutiérrez, Nikolas McNeal, Clay Washington, You Chen, Lang Li, Huan Sun, Yu Su
In this paper, we present the first systematic and comprehensive study comparing the few-shot performance of GPT-3 in-context learning against fine-tuning smaller (i.e., BERT-sized) PLMs on two highly representative biomedical information extraction tasks: named entity recognition and relation extraction.
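In-context learning for such tasks amounts to serializing a handful of labeled examples into the prompt; the format below is one common choice (the helper and examples are hypothetical, and the completion would be parsed back into entities):

```python
def build_ner_prompt(shots, text):
    """Serialize labeled examples for few-shot biomedical NER in-context learning."""
    demo = "\n\n".join(
        f"Sentence: {s}\nDrug entities: {', '.join(ents) or 'none'}"
        for s, ents in shots
    )
    return f"{demo}\n\nSentence: {text}\nDrug entities:"

shots = [
    ("Aspirin reduced the fever.", ["Aspirin"]),
    ("The patient rested overnight.", []),
]
prompt = build_ner_prompt(shots, "Ibuprofen was administered twice daily.")
# `prompt` would be sent to GPT-3; fine-tuning a BERT-sized PLM instead requires
# gradient updates on the full labeled training set.
```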
1 code implementation • CVPR 2022 • Chan Hee Song, Jihyung Kil, Tai-Yu Pan, Brian M. Sadler, Wei-Lun Chao, Yu Su
We study the problem of developing autonomous agents that can follow human instructions to infer and perform a sequence of actions to complete the underlying task.
no code implementations • 9 Dec 2021 • Saghar Hosseini, Ahmed Hassan Awadallah, Yu Su
We define new compositional generalization tasks for NL2API which explore the models' ability to extrapolate from simple API calls in the training set to new and more complex API calls in the inference phase.
1 code implementation • EMNLP 2021 • Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, Huan Sun
We present ReasonBERT, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts.
Ranked #1 on Semantic Parsing on GraphQuestions
no code implementations • SEMEVAL 2021 • Genyu Zhang, Yu Su, Changhong He, Lei Lin, Chengjie Sun, Lili Shan
This paper describes the winning system in the End-to-end Pipeline phase for the NLPContributionGraph task.
1 code implementation • ACL 2021 • Vardaan Pahuja, Yu Gu, Wenhu Chen, Mehdi Bahrami, Lei Liu, Wei-Peng Chen, Yu Su
Knowledge bases (KBs) and text often contain complementary knowledge: KBs store structured knowledge that can support long range reasoning, while text stores more comprehensive and timely knowledge in an unstructured way.
no code implementations • NAACL 2021 • Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, Jacob Andreas
We describe a span-level supervised attention loss that improves compositional generalization in semantic parsers.
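A hedged sketch of span-level attention supervision (an illustration of the idea, not the paper's exact loss): encourage the decoder's attention at each target step to place its mass on the annotated source span, alongside the usual token cross-entropy:

```python
import torch

def attention_supervision_loss(attn, span_mask, eps=1e-8):
    """attn: (batch, tgt_len, src_len) decoder attention weights.
    span_mask: same shape, 1.0 on source tokens aligned to each target step.
    Assumes every target step has an annotated span."""
    mass_on_span = (attn * span_mask).sum(dim=-1)   # prob. mass on the gold span
    return -torch.log(mass_on_span + eps).mean()    # maximize that mass

# total_loss = token_nll + lambda_attn * attention_supervision_loss(attn, span_mask)
```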
no code implementations • 15 Jan 2021 • Haoyang Bi, Haiping Ma, Zhenya Huang, Yu Yin, Qi Liu, Enhong Chen, Yu Su, Shijin Wang
In this paper, we study a novel model-agnostic CAT problem, where we aim to propose a flexible framework that can adapt to different cognitive models.
no code implementations • 15 Dec 2020 • Yifeng Guo, Yu Su, Zebin Yang, Aijun Zhang
In this paper, we propose an explainable recommendation system based on a generalized additive model with manifest and latent interactions (GAMMLI).
2 code implementations • EMNLP (BlackboxNLP) 2021 • Samuel Stevens, Yu Su
Pre-trained language models (PLMs) like BERT are being used for almost all language-related tasks, but interpreting their behavior remains a significant challenge, and many important questions are largely unanswered.
1 code implementation • 16 Nov 2020 • Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, Yu Su
To facilitate the development of KBQA models with stronger generalization, we construct and release a new large-scale, high-quality dataset with 64,331 questions, GrailQA, and provide evaluation settings for all three levels of generalization.
no code implementations • 21 Oct 2020 • Shutang You, Yilu Liu, Hongyu Li, Shengyuan Liu, Kaiqi Sun, Yinfeng Zhao, Huangqing Xiao, Jiaojiao Dong, Yu Su, Weikang Wang, Yi Cui
Power grid data are growing rapidly in volume with the deployment of various sensors.
no code implementations • 10 Oct 2020 • Yao Wang, Yu Su, Rui-Xue Xu, Xiao Zheng, YiJing Yan
In this work, on the basis of a thermodynamic analysis of solvation potentials, we reexamine Marcus' formula with respect to Rice-Ramsperger-Kassel-Marcus (RRKM) theory.
1 code implementation • EMNLP 2020 • Wenhu Chen, Yu Su, Xifeng Yan, William Yang Wang
We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model for generating knowledge-enriched text, and 2) a pre-training paradigm on a massive knowledge-grounded text corpus crawled from the web.
Ranked #9 on KG-to-Text Generation on WebNLG 2.0 (Unconstrained)
1 code implementation • 24 Sep 2020 • Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, Alexander Zotov
We describe an approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph.
1 code implementation • NLP-COVID19 (ACL) 2020 • Bernal Jiménez Gutiérrez, Juncheng Zeng, Dong-dong Zhang, Ping Zhang, Yu Su
The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields.
1 code implementation • EMNLP 2020 • Ziyu Yao, Yiqi Tang, Wen-tau Yih, Huan Sun, Yu Su
Despite widely successful applications, bootstrapping and fine-tuning semantic parsers remain a tedious process, with challenges such as costly data annotation and privacy risks.
1 code implementation • ACL 2020 • Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, William Yang Wang
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), featuring a wide range of logical/symbolic inferences, as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t. logical inference.
no code implementations • 27 Nov 2019 • Keke Tang, Peng Song, Yuexin Ma, Zhaoquan Gu, Yu Su, Zhihong Tian, Wenping Wang
High-level (e.g., semantic) features encoded in the latter layers of convolutional neural networks are extensively exploited for image classification, leaving low-level (e.g., color) features in the early layers underexplored.
2 code implementations • IJCNLP 2019 • Ziyu Yao, Yu Su, Huan Sun, Wen-tau Yih
As a promising paradigm, interactive semantic parsing has been shown to improve both semantic parsing accuracy and user confidence in the results.
1 code implementation • 7 Jun 2019 • Qi Liu, Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, Guoping Hu
In EERNN, we summarize each student's state into an integrated vector and trace it with a recurrent neural network, designing a bidirectional LSTM to learn an encoding of each exercise's content.
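A simplified PyTorch sketch of that two-network structure (illustrative shapes and names, not the authors' code): a bidirectional LSTM encodes each exercise's text, and a second recurrent network traces the student's integrated knowledge state from exercise encodings plus observed scores:

```python
import torch
import torch.nn as nn

class EERNNSketch(nn.Module):
    def __init__(self, vocab=5000, emb=64, hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.exercise_enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.student_rnn = nn.LSTM(2 * hid + 1, hid, batch_first=True)
        self.predict = nn.Linear(hid, 1)

    def forward(self, exercise_tokens, scores):
        # exercise_tokens: (batch, seq, words); scores: (batch, seq, 1) in {0, 1}
        b, s, w = exercise_tokens.shape
        words = self.embed(exercise_tokens.view(b * s, w))
        enc, _ = self.exercise_enc(words)              # encode exercise text
        ex = enc.mean(dim=1).view(b, s, -1)            # one vector per exercise
        state, _ = self.student_rnn(torch.cat([ex, scores], dim=-1))
        return torch.sigmoid(self.predict(state))      # next-response probability
```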
1 code implementation • ACL 2019 • Zhiyu Chen, Hanwen Zha, Honglei Liu, Wenhu Chen, Xifeng Yan, Yu Su
Pre-trained embeddings such as word embeddings and sentence embeddings are fundamental tools facilitating a wide range of downstream NLP tasks.
no code implementations • 27 May 2019 • Yu Yin, Qi Liu, Zhenya Huang, Enhong Chen, Wei Tong, Shijin Wang, Yu Su
Then we propose a two-level hierarchical pre-training algorithm to learn a better understanding of test questions in an unsupervised way.
1 code implementation • NAACL 2019 • Wenhu Chen, Yu Su, Yilin Shen, Zhiyu Chen, Xifeng Yan, William Wang
For deep neural networks, a pre-defined vocabulary is required to vectorize text inputs.
no code implementations • 7 Nov 2018 • Xin Wang, Jiawei Wu, Da Zhang, Yu Su, William Yang Wang
Although promising results have been achieved in video captioning, existing models are limited to the fixed inventory of activities in the training corpus, and do not generalize to open vocabulary scenarios.
no code implementations • EMNLP 2018 • Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan
The SQL queries in WikiSQL are simple: Each involves one relation and does not have any join operation.
1 code implementation • EMNLP 2018 • Wenhu Chen, Jianshu Chen, Yu Su, Xin Wang, Dong Yu, Xifeng Yan, William Yang Wang
Then, we pre-train a state tracker for the source language as a teacher, which is able to exploit easy-to-access parallel data.
no code implementations • ACL 2018 • Izzeddin Gur, Semih Yavuz, Yu Su, Xifeng Yan
The recent advance in deep learning and semantic parsing has significantly improved the translation accuracy of natural language questions to structured queries.
no code implementations • 1 Jan 2018 • Farzin Ghorban, Javier Marín, Yu Su, Alessandro Colombo, Anton Kummert
Convolutional neural networks (CNNs) have demonstrated their superiority in numerous computer vision tasks, yet their computational cost is prohibitive for many real-time applications such as pedestrian detection, which is usually performed on low-power hardware.
no code implementations • EMNLP 2017 • Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan
The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes.
no code implementations • EMNLP 2017 • Jie Zhao, Yu Su, Ziyu Guan, Huan Sun
Given a question and a set of answer candidates, answer triggering determines whether the candidate set contains any correct answers.
1 code implementation • EMNLP 2017 • Yu Su, Xifeng Yan
Existing studies on semantic parsing mainly focus on the in-domain setting.
1 code implementation • NAACL 2018 • Yu Su, Honglei Liu, Semih Yavuz, Izzeddin Gur, Huan Sun, Xifeng Yan
We study the problem of textual relation embedding with distant supervision.