no code implementations • ACL 2022 • Alex Murphy, Bernd Bohnet, Ryan McDonald, Uta Noppeney
This work explores techniques to predict Part-of-Speech (PoS) tags from neural signals measured at millisecond resolution with electroencephalography (EEG) during text reading.
no code implementations • 16 Nov 2023 • Ramya Ramakrishnan, Ethan Elenberg, Hashan Narangodage, Ryan McDonald
In task-oriented dialogue, a system often needs to follow a sequence of actions, called a workflow, that complies with a set of guidelines in order to complete a task.
no code implementations • 5 Oct 2023 • Paloma Sodhi, S. R. K. Branavan, Ryan McDonald
Large language models (LLMs) have demonstrated remarkable capabilities in performing a range of instruction following tasks in few and zero-shot settings.
1 code implementation • 23 Jul 2023 • Paloma Sodhi, Felix Wu, Ethan R. Elenberg, Kilian Q. Weinberger, Ryan McDonald
A common training technique for language models is teacher forcing (TF).
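For context, a minimal sketch of teacher forcing, assuming a PyTorch-style autoregressive decoder (the toy model and sizes below are illustrative, not the paper's setup):

```python
import torch
import torch.nn as nn

vocab_size, hidden = 100, 32
embed = nn.Embedding(vocab_size, hidden)
decoder = nn.GRU(hidden, hidden, batch_first=True)
proj = nn.Linear(hidden, vocab_size)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (4, 16))  # (batch, seq_len)

# Teacher forcing: condition each step on the gold previous token,
# not on the model's own earlier predictions.
inputs, targets = tokens[:, :-1], tokens[:, 1:]
states, _ = decoder(embed(inputs))
logits = proj(states)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
```

At inference time an autoregressive model instead conditions on its own previous outputs, a well-known train/test mismatch (exposure bias).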
1 code implementation • NAACL 2022 • Ramya Ramakrishnan, Hashan Buddhika Narangodage, Mauro Schilman, Kilian Q. Weinberger, Ryan McDonald
This setting requires a model to not only consider the generation of these control words in the immediate context, but also produce utterances that will encourage the generation of the words at some time in the (possibly distant) future.
1 code implementation • 2 May 2022 • Felix Wu, Kwangyoun Kim, Shinji Watanabe, Kyu Han, Ryan McDonald, Kilian Q. Weinberger, Yoav Artzi
We introduce Wav2Seq, the first self-supervised approach to pre-train both parts of encoder-decoder models for speech data.
Ranked #3 on Named Entity Recognition (NER) on SLUE • Tasks: Automatic Speech Recognition (ASR), +6 more
no code implementations • ACL 2021 • Rami Aly, Andreas Vlachos, Ryan McDonald
We address a challenge specific to zero-shot NERC: the not-an-entity class is not well defined, since different entity classes are considered at training and test time.
no code implementations • ACL 2021 • Rahul Aralikatte, Shashi Narayan, Joshua Maynez, Sascha Rothe, Ryan McDonald
Professional summaries are written with document-level information, such as the theme of the document, in mind.
no code implementations • 15 Apr 2021 • Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simões, Vitaly Nikolaev, Ryan McDonald
Moreover, we demonstrate empirically that planning with entity chains provides a mechanism to control hallucinations in abstractive summaries.
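A toy sketch of the entity-chain idea: the training target is prefixed with the ordered entities the summary mentions, so the decoder first emits a content plan and then a summary conditioned on it (the marker tokens and pre-extracted entity list below are assumptions, not the paper's exact format):

```python
# Illustrative only: marker tokens and the entity list are assumptions.
def build_target(summary: str, entities: list[str]) -> str:
    chain = " | ".join(entities)
    return f"[ENTITYCHAIN] {chain} [SUMMARY] {summary}"

print(build_target("Frost visited Boston on Monday.",
                   ["Frost", "Boston", "Monday"]))
# [ENTITYCHAIN] Frost | Boston | Monday [SUMMARY] Frost visited Boston on Monday.
```

Conditioning generation on an explicit plan makes it possible to inspect, and constrain, which entities a summary will mention.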
1 code implementation • EMNLP 2020 • Shashi Narayan, Joshua Maynez, Jakub Adamek, Daniele Pighin, Blaž Bratanič, Ryan McDonald
We propose encoder-centric stepwise models for extractive summarization using structured transformers: HiBERT and Extended Transformers.
no code implementations • 1 Oct 2020 • Michael Bendersky, Honglei Zhuang, Ji Ma, Shuguang Han, Keith Hall, Ryan McDonald
In this paper, we report the results of our participation in the TREC-COVID challenge.
1 code implementation • WS 2020 • Petros Stavropoulos, Dimitris Pappas, Ion Androutsopoulos, Ryan McDonald
Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better.
2 code implementations • ACL 2020 • Joshua Maynez, Shashi Narayan, Bernd Bohnet, Ryan McDonald
It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation.
no code implementations • EACL 2021 • Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, Ryan McDonald
The question generation system is trained on general domain data, but is applied to documents in the targeted domain.
no code implementations • 23 Apr 2020 • Shashi Narayan, Gonçalo Simões, Ji Ma, Hannah Craighead, Ryan McDonald
Recent trends in natural language processing have shifted focus towards pretraining and fine-tuning approaches for text generation.
no code implementations • 12 Sep 2019 • Stefan Hosein, Daniel Andor, Ryan McDonald
Our systems are built around BERT QA models, specifically the model of Alberti et al. (2019).
1 code implementation • WS 2019 • Sotiris Kotitsas, Dimitris Pappas, Ion Androutsopoulos, Ryan McDonald, Marianna Apidianaki
Many existing network embedding (NE) methods rely only on network structure, overlooking other information associated with the nodes, e.g., text describing the nodes.
1 code implementation • WS 2018 • Georgios-Ioannis Brokos, Polyvios Liosis, Ryan McDonald, Dimitris Pappas, Ion Androutsopoulos
We present AUEB's submissions to the BioASQ 6 document and snippet retrieval tasks (parts of Task 6b, Phase A).
1 code implementation • EMNLP 2018 • Ryan McDonald, Georgios-Ioannis Brokos, Ion Androutsopoulos
We explore several new models for document relevance ranking, building upon the Deep Relevance Matching Model (DRMM) of Guo et al. (2016).
Ranked #7 on Ad-Hoc Information Retrieval on TREC Robust04
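A rough sketch of the matching histograms at the core of DRMM (Guo et al., 2016): cosine similarities between each query term and all document terms are bucketed into a fixed-size histogram that a small scoring network then consumes (embeddings and bin count below are illustrative):

```python
import numpy as np

def matching_histograms(q_vecs, d_vecs, bins=5):
    q = q_vecs / np.linalg.norm(q_vecs, axis=1, keepdims=True)
    d = d_vecs / np.linalg.norm(d_vecs, axis=1, keepdims=True)
    sims = q @ d.T  # cosine similarities, (query_terms, doc_terms)
    edges = np.linspace(-1.0, 1.0, bins + 1)
    # One log-count histogram per query term.
    return np.log1p(np.stack(
        [np.histogram(row, bins=edges)[0] for row in sims]))

rng = np.random.default_rng(0)
print(matching_histograms(rng.normal(size=(3, 8)),    # 3 query terms
                          rng.normal(size=(20, 8))))  # 20 document terms
```

Bucketing makes the scoring network's input fixed-size regardless of document length, unlike approaches that consume the raw similarity matrix.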
2 code implementations • ACL 2018 • Bernd Bohnet, Ryan McDonald, Gonçalo Simões, Daniel Andor, Emily Pitler, Joshua Maynez
In this paper, we investigate models that use recurrent neural networks with sentence-level context for initial character and word-based representations.
Ranked #2 on Part-Of-Speech Tagging on Penn Treebank
1 code implementation • EMNLP 2017 • Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, Slav Petrov
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models.
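A toy version of the recipe, assuming hashed character n-gram features pooled into a small MLP tagger (bucket counts, sizes, and the hashing scheme are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

NUM_BUCKETS, EMB, HIDDEN, NUM_TAGS = 1000, 16, 32, 12

def char_ngram_ids(word: str, n: int = 3) -> torch.Tensor:
    # Python's built-in hash is run-salted; a stable hash would be
    # used in practice.
    grams = [word[i:i + n] for i in range(max(1, len(word) - n + 1))]
    return torch.tensor([hash(g) % NUM_BUCKETS for g in grams])

embed = nn.EmbeddingBag(NUM_BUCKETS, EMB)  # mean-pools n-gram embeddings
tagger = nn.Sequential(nn.Linear(EMB, HIDDEN), nn.ReLU(),
                       nn.Linear(HIDDEN, NUM_TAGS))

logits = tagger(embed(char_ngram_ids("parsing").unsqueeze(0)))
print(logits.shape)  # torch.Size([1, 12])
```

Hashing keeps the embedding table small and avoids storing an explicit n-gram vocabulary, which is where most of the memory savings in this sketch come from.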
no code implementations • LREC 2016 • Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, Daniel Zeman
Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments.
no code implementations • 21 Mar 2016 • Bernd Bohnet, Miguel Ballesteros, Ryan McDonald, Joakim Nivre
Experiments on five languages show that feature selection can result in more compact models as well as higher accuracy under all conditions, but also that a dynamic ordering works better than a static ordering and that joint systems benefit more than standalone taggers.
no code implementations • TACL 2016 • Manaal Faruqui, Ryan McDonald, Radu Soricut
Morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language.
no code implementations • ACL 2013 • Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, Jungmee Lee
no code implementations • TACL 2013 • Oscar Täckström, Dipanjan Das, Slav Petrov, Ryan McDonald, Joakim Nivre
We consider the construction of part-of-speech taggers for resource-poor languages.
1 code implementation • LREC 2012 • Slav Petrov, Dipanjan Das, Ryan McDonald
To facilitate future research in unsupervised induction of syntactic structure and to standardize best-practices, we propose a tagset that consists of twelve universal part-of-speech categories.
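For reference, the twelve categories are NOUN, VERB, ADJ, ADV, PRON, DET, ADP, NUM, CONJ, PRT, '.' (punctuation), and X; fine-grained tagsets are mapped onto them, for example (a partial Penn Treebank mapping, shown for illustration):

```python
UNIVERSAL_TAGS = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "DET",
                  "ADP", "NUM", "CONJ", "PRT", ".", "X"]

# Partial, illustrative mapping from Penn Treebank tags.
PTB_TO_UNIVERSAL = {"NN": "NOUN", "VBD": "VERB", "JJ": "ADJ", "RB": "ADV",
                    "IN": "ADP", "CD": "NUM", "CC": "CONJ", "RP": "PRT"}
```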
no code implementations • NeurIPS 2009 • Ryan McDonald, Mehryar Mohri, Nathan Silberman, Dan Walker, Gideon S. Mann
Training conditional maximum entropy models on massive data requires significant time and computational resources.
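A toy sketch of the parameter-mixing strategy studied in this line of work: train a model independently on each data shard, then average the weights (binary logistic regression stands in for a conditional maxent model; data and hyperparameters are illustrative):

```python
import numpy as np

def train_shard(w, X, y, lr=0.1, epochs=20):
    # Gradient ascent on the conditional log-likelihood of a
    # binary logistic (maxent) model.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w + lr * X.T @ (y - p) / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(float)
shards = np.array_split(np.arange(1000), 4)

# Parameter mixing: average the per-shard weight vectors.
w = np.mean([train_shard(np.zeros(10), X[i], y[i]) for i in shards], axis=0)
```

Averaging once keeps communication to a single round, in contrast with distributed gradient methods that synchronize on every update.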