Search Results for author: Dung Thai

Found 10 papers, 2 papers with code

Machine Reading Comprehension using Case-based Reasoning

no code implementations 24 May 2023 Dung Thai, Dhruv Agarwal, Mudit Chaudhary, Wenlong Zhao, Rajarshi Das, Manzil Zaheer, Jay-Yoon Lee, Hannaneh Hajishirzi, Andrew McCallum

Given a test question, CBR-MRC first retrieves a set of similar cases from a nonparametric memory and then predicts an answer by selecting the span in the test context that is most similar to the contextualized representations of answers in the retrieved cases.
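The retrieve-then-select step described in the snippet can be sketched as nearest-neighbor span scoring. This is a toy illustration only: the mean-pooled vectors and cosine similarity below are assumptions, not CBR-MRC's actual encoder or similarity function.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_span(context_vecs, case_answer_vecs, max_len=3):
    """Pick the context span whose mean vector is most similar to the
    mean of the retrieved cases' answer representations (toy sketch)."""
    target = case_answer_vecs.mean(axis=0)
    best, best_score = None, -np.inf
    n = len(context_vecs)
    for i in range(n):
        for j in range(i + 1, min(i + 1 + max_len, n + 1)):
            span_vec = context_vecs[i:j].mean(axis=0)
            score = cosine(span_vec, target)
            if score > best_score:
                best, best_score = (i, j), score
    return best  # (start, end) token offsets of the chosen span
```

The point of the sketch is the nonparametric flavor: no span classifier is trained; the answer is whatever stretch of the test context best matches answers from similar retrieved cases.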

Tasks: Attribute, Machine Reading Comprehension

TABBIE: Pretrained Representations of Tabular Data

2 code implementations NAACL 2021 Hiroshi Iida, Dung Thai, Varun Manjunatha, Mohit Iyyer

Existing work on tabular representation learning jointly models tables and associated text using self-supervised objective functions derived from pretrained language models such as BERT.
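As a toy illustration of contextualizing table cells jointly along rows and columns, the sketch below replaces learned encoders with plain averages. This is a hypothetical simplification for intuition only, not TABBIE's actual architecture or pretraining objective.

```python
import numpy as np

def cell_representations(cell_embs):
    """Combine per-row and per-column context for each cell:
    a cell's representation is the average of its row-context mean
    and its column-context mean (a stand-in for learned row/column
    encoders). cell_embs has shape (rows, cols, dim)."""
    row_ctx = cell_embs.mean(axis=1, keepdims=True)   # (R, 1, D)
    col_ctx = cell_embs.mean(axis=0, keepdims=True)   # (1, C, D)
    return (np.broadcast_to(row_ctx, cell_embs.shape) +
            np.broadcast_to(col_ctx, cell_embs.shape)) / 2
```

The design point the toy captures: every cell's representation depends on both the row it sits in and the column it sits in, which is what makes tabular structure different from flat text.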

 Ranked #1 on Column Type Annotation on VizNet-Sato-Full (Weighted-F1 metric)

Tasks: Cell Detection, Column Type Annotation (+1 more)

Using BibTeX to Automatically Generate Labeled Data for Citation Field Extraction

1 code implementation AKBC 2020 Dung Thai, Zhiyang Xu, Nicholas Monath, Boris Veytsman, Andrew McCallum

In this paper, we describe a technique for using BibTeX to automatically generate a large-scale labeled dataset (41M labeled strings) that is four orders of magnitude larger than the current largest CFE dataset, the UMass Citation Field Extraction dataset [Anzaroot and McCallum, 2013].
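The generation idea can be sketched in a few lines: render each BibTeX entry into a citation string while remembering which field every token came from, so token-level labels come "for free". The template and field names below are illustrative, not the paper's actual rendering pipeline.

```python
# Hypothetical minimal renderer: turn a parsed BibTeX entry into a
# token sequence plus a per-token field-label sequence for CFE training.
def render_labeled(entry, template=("author", "title", "year")):
    tokens, labels = [], []
    for field in template:
        for tok in str(entry[field]).split():
            tokens.append(tok)
            labels.append(field)  # the source field IS the label
    return tokens, labels

entry = {"author": "D. Thai", "title": "Citation Field Extraction", "year": "2020"}
tokens, labels = render_labeled(entry)
```

Because every token's label is known at generation time, no human annotation is needed, which is what makes the four-orders-of-magnitude scale-up possible.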

Tasks: Management

Embedded-State Latent Conditional Random Fields for Sequence Labeling

no code implementations CoNLL 2018 Dung Thai, Sree Harsha Ramesh, Shikhar Murty, Luke Vilnis, Andrew McCallum

Complex textual information extraction tasks are often posed as sequence labeling or "shallow parsing", where fields are extracted using local labels made consistent through probabilistic inference in a graphical model with constrained transitions.
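The "local labels made consistent through constrained transitions" part can be illustrated with a plain Viterbi decoder in which disallowed label transitions carry a large negative score. This is a generic sketch of constrained decoding, not the paper's embedded-state latent CRF.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Max-scoring label path under per-token emission scores (T x K)
    and pairwise transition scores (K x K); forbidden transitions
    get a score of -1e9 so the decoded path never uses them."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)   # best predecessor per label
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Note how a locally attractive label can be overridden: if reaching it requires a forbidden transition, the globally best path routes around it.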

UPS: optimizing Undirected Positive Sparse graph for neural graph filtering

no code implementations ICLR 2018 Mikhail Yurochkin, Dung Thai, Hung Hai Bui, XuanLong Nguyen

In this work we propose a novel approach for learning graph representation of the data using gradients obtained via backpropagation.
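A minimal sketch of learning a graph via gradients, under loud simplifying assumptions: plain gradient descent on a least-squares graph-filtering loss, with the undirected and positive constraints from the title enforced by symmetrizing a parameter matrix and squaring it elementwise. None of this is the paper's actual formulation.

```python
import numpy as np

def learn_graph(x, y, n, steps=2000, lr=0.05):
    """Toy sketch: learn a symmetric, nonnegative adjacency A so that
    the graph filter A @ x matches a target signal y, by backprop
    through the parameterization A = S*S with S = (W + W.T)/2."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=(n, n))
    for _ in range(steps):
        s = (w + w.T) / 2            # undirected (symmetric)
        a = s * s                    # elementwise square -> positive
        r = a @ x - y                # filtering residual
        grad_a = np.outer(r, x)      # d/dA of 0.5 * ||A x - y||^2
        grad_s = grad_a * 2 * s      # chain rule through a = s^2
        grad_w = (grad_s + grad_s.T) / 2  # chain rule through symmetrization
        w -= lr * grad_w
    s = (w + w.T) / 2
    return s * s
```

The design point: constraints (undirectedness, positivity) are baked into the parameterization, so unconstrained gradient descent on the raw parameters always yields a valid graph.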

Low-Rank Hidden State Embeddings for Viterbi Sequence Labeling

no code implementations 2 Aug 2017 Dung Thai, Shikhar Murty, Trapit Bansal, Luke Vilnis, David Belanger, Andrew McCallum

In textual information extraction and other sequence labeling tasks it is now common to use recurrent neural networks (such as LSTM) to form rich embedded representations of long-term input co-occurrence patterns.
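To illustrate why low-rank structure helps when the label space is large, here is a generic low-rank factorization sketch applied to a K x K transition score matrix. This is an illustration of the parameter-saving idea only; the paper applies low-rank embeddings to hidden states, and the sizes below are made up.

```python
import numpy as np

# Generic sketch: replace a dense K x K score matrix with a rank-r
# factorization U @ V.T, shrinking parameters from K^2 to 2*K*r
# while Viterbi decoding over the reconstructed scores is unchanged.
K, r = 1000, 16
rng = np.random.default_rng(0)
U = rng.normal(size=(K, r))
V = rng.normal(size=(K, r))
transitions = U @ V.T                 # dense scores, reconstructed on the fly
params_full, params_lowrank = K * K, 2 * K * r
```

At K = 1000 and r = 16 the factorized form stores 32K parameters instead of 1M, which is the kind of saving that makes large label spaces tractable.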

Tasks: Named Entity Recognition (+1 more)
