no code implementations • ACL 2022 • Chen Yu, Daniel Gildea
AMR parsing is the task of automatically mapping a sentence to an AMR semantic graph.
no code implementations • ACL 2022 • Lisa Jin, Daniel Gildea
A common way to combat exposure bias is by applying scores from evaluation metrics as rewards in reinforcement learning (RL).
no code implementations • CL (ACL) 2020 • Daniel Gildea
Weighted deduction systems provide a framework for describing parsing algorithms that can be used with a variety of operations for combining the values of partial derivations.
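The core idea, that one set of deduction rules can be reused with different value-combining operations, can be sketched as a CKY-style chart parser parameterized by a semiring. This is a minimal illustration of the framework, not the paper's implementation, and the toy grammar below is invented for the example.

```python
# Minimal sketch: the same CKY-style deduction rules evaluated under two
# different semirings (plus, times, zero, one).
from collections import defaultdict

INSIDE = (lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)   # total weight of all derivations
VITERBI = (max, lambda a, b: a * b, 0.0, 1.0)                 # weight of the best derivation

def cky(words, lexicon, rules, semiring):
    """Chart parsing where partial-derivation values are combined via the semiring.
    lexicon: {(nonterminal, word): weight}; rules: {(A, B, C): weight} for A -> B C.
    The start symbol is assumed to be 'S'."""
    plus, times, zero, one = semiring
    n = len(words)
    chart = defaultdict(lambda: zero)
    for i, w in enumerate(words):
        for (nt, word), wt in lexicon.items():
            if word == w:
                chart[i, i + 1, nt] = plus(chart[i, i + 1, nt], wt)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for (a, b, c), wt in rules.items():
                    val = times(wt, times(chart[i, j, b], chart[j, k, c]))
                    chart[i, k, a] = plus(chart[i, k, a], val)
    return chart[0, n, 'S']
```

With an ambiguous grammar (S -> X X and X -> X X, each weight 0.5; X -> 'a', weight 1.0), the string "a a a" has two derivations of weight 0.25 each, so the inside semiring returns 0.5 while the Viterbi semiring returns 0.25 — only the choice of `plus` differs.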
no code implementations • 8 Nov 2022 • Chen Yu, Daniel Gildea
AMR parsing is the task of automatically mapping a sentence to an AMR semantic graph.
1 code implementation • 22 Jun 2022 • Lisa Jin, Linfeng Song, Lifeng Jin, Dong Yu, Daniel Gildea
HCT (i) tags the source string with token-level edit actions and slotted rules and (ii) fills in the resulting rule slots with spans from the dialogue context.
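The two-stage tag-then-fill idea can be sketched as follows; the action names (`KEEP`/`DELETE`) and the `SLOT` placeholder syntax are hypothetical stand-ins for the paper's actual label set, and the example utterance is invented.

```python
# Illustrative sketch of a two-stage rewrite: (i) token-level edit actions
# with slotted rules, (ii) slots filled with spans from the dialogue context.

def apply_edits(tokens, actions):
    """actions[i] is ('KEEP',), ('DELETE',), or ('KEEP', rule), where rule is
    a string inserted before the token and may contain SLOT placeholders."""
    out = []
    for tok, act in zip(tokens, actions):
        if act[0] == 'DELETE':
            continue
        if len(act) > 1:              # a slotted rule attached to this token
            out.extend(act[1].split())
        out.append(tok)
    return out

def fill_slots(tokens, context_spans):
    """Replace each SLOT placeholder, in order, with a span from the context."""
    out, spans = [], list(context_spans)
    for tok in tokens:
        if tok == 'SLOT' and spans:
            out.extend(spans.pop(0))
        else:
            out.append(tok)
    return out
```

For instance, rewriting the incomplete utterance "did she win" against a context mentioning "Serena Williams": delete "she", attach a `SLOT` rule to "win", then fill the slot with the context span.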
no code implementations • 27 Aug 2021 • Lisa Jin, Daniel Gildea
Graph encoders in AMR-to-text generation models often rely on neighborhood convolutions or global vertex attention.
no code implementations • 27 Aug 2021 • Lisa Jin, Daniel Gildea
Text generation from AMR requires mapping a semantic graph to a string that it annotates.
no code implementations • NAACL 2021 • Parker Riley, Daniel Gildea
We show that a general algorithm for efficient computation of outside values under the minimum of superior functions framework proposed by Knuth (1977) would yield a sub-exponential time algorithm for SAT, violating the Strong Exponential Time Hypothesis (SETH).
no code implementations • COLING 2020 • Lisa Jin, Daniel Gildea
Instead of feeding shortest paths to the vertex self-attention module, we train a model to learn them using generalized shortest-paths algorithms.
no code implementations • WS 2020 • Esma Balkir, Daniel Gildea, Shay Cohen
Semiring parsing is an elegant framework for describing parsers by using semiring weighted logic programs.
no code implementations • 31 Jan 2020 • Parker Riley, Daniel Gildea
Recent embedding-based methods in unsupervised bilingual lexicon induction have shown good results, but generally have not leveraged orthographic (spelling) information, which can be helpful for pairs of related languages.
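An orthographic feature of the kind described can be sketched as a normalized edit-distance similarity between candidate translation pairs; this is a generic illustration, not the paper's exact feature set.

```python
# Sketch of an orthographic (spelling) similarity feature for candidate
# translation pairs in related languages, based on Levenshtein distance.

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming (rolling array)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def orth_sim(a, b):
    """Normalized similarity in [0, 1]; 1.0 for identical spellings."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

For a related-language pair such as Spanish "noche" and Italian "notte", the similarity is 1 - 2/5 = 0.6, whereas unrelated spellings score near zero.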
no code implementations • 3 Dec 2019 • Lisa Jin, Daniel Gildea
To enforce a sentence-aligned graph traversal and provide local graph context, we predict transition-based parser actions in addition to English words.
1 code implementation • IJCNLP 2019 • Linfeng Song, Yue Zhang, Daniel Gildea, Mo Yu, Zhiguo Wang, Jinsong Su
Medical relation extraction discovers relations between entity mentions in text, such as research articles.
no code implementations • CL 2019 • Daniel Gildea, Giorgio Satta, Xiaochang Peng
Our algorithms are based on finding a tree decomposition of smallest width, relative to the vertex order, and then extracting one rule for each node in this structure.
2 code implementations • ACL 2019 • Linfeng Song, Daniel Gildea
Evaluating AMR parsing accuracy involves comparing pairs of AMR graphs.
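Graph comparison in this setting is typically scored as F1 over matched relation triples, maximized over mappings between the two graphs' variables. The sketch below computes the triple-overlap F1 for a *given* variable mapping, skipping the mapping search that full metrics such as Smatch perform; the example graphs are invented.

```python
# Simplified sketch of triple-overlap scoring between two AMR graphs.
# Full metrics search over variable mappings; here the mapping is given.

def triple_f1(gold, pred, mapping):
    """gold, pred: sets of (source, relation, target) triples over variable
    names; mapping: dict from pred variables to gold variables."""
    renamed = {(mapping.get(s, s), r, mapping.get(t, t)) for s, r, t in pred}
    matched = len(renamed & gold)
    if matched == 0:
        return 0.0
    p = matched / len(pred)
    r = matched / len(gold)
    return 2 * p * r / (p + r)
```

With gold triples for "the boy wants" and a predicted graph that gets the concept `boy` wrong, 2 of 3 triples match under the best mapping, giving F1 = 2/3.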
Ranked #3 on Graph Matching on RARE.
no code implementations • 21 May 2019 • Md. Iftekhar Tanveer, Md. Kamrul Hasan, Daniel Gildea, M. Ehsan Hoque
Automated prediction of public speaking performance enables novel systems for tutoring public speaking skills.
no code implementations • 21 May 2019 • Md. Iftekhar Tanveer, Md Kamrul Hassan, Daniel Gildea, M. Ehsan Hoque
We use TED Talks, the largest open repository of public speaking, to predict the ratings of online viewers.
1 code implementation • TACL 2019 • Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, Jinsong Su
It is intuitive that semantic representations can be useful for machine translation, mainly because they help enforce meaning preservation and handle the data sparsity (many sentences corresponding to one meaning) of machine translation models.
no code implementations • WS 2018 • Linfeng Song, Yue Zhang, Daniel Gildea
The task of linearization is to find a grammatical order given a set of words.
no code implementations • EMNLP 2018 • Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea
Cross-sentence $n$-ary relation extraction detects relations among $n$ entities across multiple sentences.
no code implementations • 6 Sep 2018 • Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, Daniel Gildea
Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question.
Ranked #2 on Question Answering on COMPLEXQUESTIONS.
no code implementations • CL 2018 • Iftekhar Naim, Parker Riley, Daniel Gildea
The existing decipherment models, however, are not well suited for exploiting these orthographic similarities.
2 code implementations • 28 Aug 2018 • Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea
Cross-sentence $n$-ary relation extraction detects relations among $n$ entities across multiple sentences.
1 code implementation • ACL 2018 • Xiaochang Peng, Linfeng Song, Daniel Gildea, Giorgio Satta
In this paper, we present a sequence-to-sequence based approach for mapping natural language sentences to AMR semantic graphs.
no code implementations • ACL 2018 • Parker Riley, Daniel Gildea
Recent embedding-based methods in bilingual lexicon induction show good results, but do not take advantage of orthographic features, such as edit distance, which can be helpful for pairs of related languages.
1 code implementation • NAACL 2018 • Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, Daniel Gildea
The task of natural question generation is to generate a corresponding question given the input passage (fact) and answer.
Ranked #11 on Question Generation on SQuAD1.1.
1 code implementation • ACL 2018 • Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea
The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph.
Ranked #1 on Graph-to-Sequence on LDC2015E86 (using extra training data).
no code implementations • CL 2018 • David Chiang, Frank Drewes, Daniel Gildea, Adam Lopez, Giorgio Satta
Graphs have a variety of uses in natural language processing, particularly as representations of linguistic meaning.
no code implementations • CL 2018 • Mehdi Manshadi, Daniel Gildea, James F. Allen
The general problem of finding satisfying solutions to constraint-based underspecified representations of quantifier scope is NP-complete.
no code implementations • CL 2018 • Daniel Gildea, Giorgio Satta, Xiaochang Peng
Motivated by the task of semantic parsing, we describe a transition system that generalizes standard transition-based dependency parsing techniques to generate a graph rather than a tree.
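The key contrast with tree parsing can be sketched in a toy transition system: if creating an arc does not remove the dependent from consideration, a node can receive several incoming edges, producing the reentrancies that semantic graphs need. The action inventory below is an arc-eager-style illustration, not the paper's actual transition system.

```python
# Toy sketch of a transition system whose output is a graph rather than a
# tree: LEFT/RIGHT create arcs between the stack top and the buffer front
# without popping the dependent, so a node may get multiple incoming arcs.

def run(words, actions):
    buffer, stack, edges = list(words), [], []
    for act in actions:
        if act == 'SHIFT':
            stack.append(buffer.pop(0))
        elif act == 'LEFT':      # arc from buffer front to stack top
            edges.append((buffer[0], stack[-1]))
        elif act == 'RIGHT':     # arc from stack top to buffer front
            edges.append((stack[-1], buffer[0]))
        elif act == 'REDUCE':
            stack.pop()
    return edges
```

For "the boy wants to sleep", the AMR-style analysis needs `boy` as an argument of both `want` and `sleep`; the sequence SHIFT, LEFT, SHIFT, RIGHT, REDUCE, LEFT derives exactly those three edges, giving `boy` two incoming arcs.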
no code implementations • EACL 2017 • Xiaochang Peng, Chuan Wang, Daniel Gildea, Nianwen Xue
Neural attention models have achieved great success in different NLP tasks.
no code implementations • ACL 2017 • Linfeng Song, Xiaochang Peng, Yue Zhang, Zhiguo Wang, Daniel Gildea
This paper addresses the task of AMR-to-text generation by leveraging synchronous node replacement grammar.
no code implementations • EMNLP 2016 • Linfeng Song, Yue Zhang, Xiaochang Peng, Zhiguo Wang, Daniel Gildea
The task of AMR-to-text generation is to generate grammatical text that preserves the semantic meaning of a given AMR graph.
no code implementations • 21 Jul 2016 • Xiaochang Peng, Daniel Gildea
In this paper, we introduce a variation of the skip-gram model which jointly learns distributed word vector representations and their way of composing to form phrase embeddings.
no code implementations • SEMEVAL 2016 • Linfeng Song, Zhiguo Wang, Haitao Mi, Daniel Gildea
In the training stage, our method induces several sense centroids (embedding) for each polysemous word.
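The sense-centroid idea can be sketched as an online assign-and-update step: each occurrence of a polysemous word is assigned to the nearest sense centroid by cosine similarity, and that centroid is nudged toward the occurrence's context vector. The dimensions, learning rate, and initialization below are illustrative, not the paper's settings.

```python
# Sketch of multi-sense induction: assign each context vector to its nearest
# sense centroid (cosine similarity), then update that centroid online.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

def assign_and_update(centroids, context_vec, lr=0.1):
    """Pick the closest sense centroid, nudge it toward the context vector."""
    best = max(range(len(centroids)),
               key=lambda k: cosine(centroids[k], context_vec))
    centroids[best] = [c + lr * (x - c)
                       for c, x in zip(centroids[best], context_vec)]
    return best
```

For example, with two centroids for "bank" (finance vs. river), a context vector close to the finance direction is assigned to the finance sense and pulls that centroid slightly toward it.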
Ranked #4 on Word Sense Induction on SemEval 2010 WSI.
no code implementations • 9 Oct 2015 • Daniel Gildea, T. Florian Jaeger
Most languages use the relative order between words to encode meaning relations.
no code implementations • 10 Aug 2015 • Iftekhar Naim, Daniel Gildea
Our results show that the proposed log-linear model with contrastive divergence scales to large vocabularies and outperforms the existing generative decipherment models by exploiting the orthographic features.
no code implementations • CL 2016 • Shay B. Cohen, Daniel Gildea
Our result provides another proof of the best known bound for parsing mildly context-sensitive formalisms such as combinatory categorial grammars, head grammars, linear indexed grammars, and tree adjoining grammars, which can be parsed in time $O(n^{4.76})$.
1 code implementation • 14 Apr 2015 • Iftekhar Naim, M. Iftekhar Tanveer, Daniel Gildea, Mohammed Ehsan Hoque
We present a computational framework for automatically quantifying verbal and nonverbal behaviors in the context of job interviews.
no code implementations • 25 Nov 2013 • Pierluigi Crescenzi, Daniel Gildea, Andrea Marino, Gianluca Rossi, Giorgio Satta
Synchronous Context-Free Grammars (SCFGs), also known as syntax-directed translation schemata, are unlike context-free grammars in that they do not have a binary normal form.
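The absence of a binary normal form can be made concrete: a rank-4 SCFG rule can be split into binary rules only if its target-side permutation can be reduced by repeatedly merging adjacent elements that span a contiguous range of values, and the permutation (2, 4, 1, 3) admits no such merge. The following check sketches this standard reduction test; it is an illustration, not code from the paper.

```python
# Binarizability test for an SCFG rule's target-side permutation: merge
# adjacent blocks covering a contiguous value range until one block remains.
# Permutations like (2, 4, 1, 3) never allow a merge, so a rank-4 rule
# carrying them cannot be split into binary rules.

def binarizable(perm):
    spans = [(v, v) for v in perm]   # each block covers a range of values
    changed = True
    while changed and len(spans) > 1:
        changed = False
        for i in range(len(spans) - 1):
            (a1, b1), (a2, b2) = spans[i], spans[i + 1]
            lo, hi = min(a1, a2), max(b1, b2)
            if hi - lo + 1 == (b1 - a1 + 1) + (b2 - a2 + 1):  # contiguous range
                spans[i:i + 2] = [(lo, hi)]
                changed = True
                break
    return len(spans) == 1
```

The monotone permutation (1, 2, 3, 4) and the pairwise swap (2, 1, 4, 3) both reduce to a single block, while (2, 4, 1, 3) and (3, 1, 4, 2) do not, which is exactly why SCFGs, unlike CFGs, have no binary normal form.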