no code implementations • ACL (RepL4NLP) 2021 • Raghuveer Thirukovalluru, Mukund Sridhar, Dung Thai, Shruti Chanumolu, Nicholas Monath, Sankaranarayanan Ananthakrishnan, Andrew McCallum
Specifically, neural semantic parsers (NSPs) effectively translate natural questions into logical forms, which execute on the KB and return the desired answers.
no code implementations • ACL (RepL4NLP) 2021 • Dung Thai, Raghuveer Thirukovalluru, Trapit Bansal, Andrew McCallum
In this work, we aim at directly learning text representations which leverage structured knowledge about entities mentioned in the text.
no code implementations • 24 May 2023 • Dung Thai, Dhruv Agarwal, Mudit Chaudhary, Wenlong Zhao, Rajarshi Das, Manzil Zaheer, Jay-Yoon Lee, Hannaneh Hajishirzi, Andrew McCallum
Given a test question, CBR-MRC first retrieves a set of similar cases from a nonparametric memory and then predicts an answer by selecting the span in the test context that is most similar to the contextualized representations of answers in the retrieved cases.
no code implementations • 18 Apr 2022 • Dung Thai, Srinivas Ravishankar, Ibrahim Abdelaziz, Mudit Chaudhary, Nandana Mihindukulasooriya, Tahira Naseem, Rajarshi Das, Pavan Kapanipathi, Achille Fokoue, Andrew McCallum
Yet, in many question answering applications coupled with knowledge bases, the sparse nature of KBs is often overlooked.
2 code implementations • NAACL 2021 • Hiroshi Iida, Dung Thai, Varun Manjunatha, Mohit Iyyer
Existing work on tabular representation learning jointly models tables and associated text using self-supervised objective functions derived from pretrained language models such as BERT.
Ranked #1 on Column Type Annotation on VizNet-Sato-Full (Weighted-F1 metric)
no code implementations • EMNLP 2021 • Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay-Yoon Lee, Lizhen Tan, Lazaros Polymenakos, Andrew McCallum
It is often challenging to solve a complex problem from scratch, but much easier if we can access other similar problems with their solutions -- a paradigm known as case-based reasoning (CBR).
Knowledge Base Question Answering · Natural Language Queries
1 code implementation • AKBC 2020 • Dung Thai, Zhiyang Xu, Nicholas Monath, Boris Veytsman, Andrew McCallum
In this paper, we describe a technique for using BibTeX to automatically generate a large-scale labeled dataset (41M labeled strings) that is four orders of magnitude larger than the current largest CFE dataset, namely the UMass Citation Field Extraction dataset [Anzaroot and McCallum, 2013].
no code implementations • CoNLL 2018 • Dung Thai, Sree Harsha Ramesh, Shikhar Murty, Luke Vilnis, Andrew McCallum
Complex textual information extraction tasks are often posed as sequence labeling or \emph{shallow parsing}, where fields are extracted using local labels made consistent through probabilistic inference in a graphical model with constrained transitions.
no code implementations • ICLR 2018 • Mikhail Yurochkin, Dung Thai, Hung Hai Bui, XuanLong Nguyen
In this work we propose a novel approach for learning graph representation of the data using gradients obtained via backpropagation.
no code implementations • 2 Aug 2017 • Dung Thai, Shikhar Murty, Trapit Bansal, Luke Vilnis, David Belanger, Andrew McCallum
In textual information extraction and other sequence labeling tasks, it is now common to use recurrent neural networks (such as LSTMs) to form rich embedded representations of long-term input co-occurrence patterns.