Search Results for author: Ryan McDonald

Found 35 papers, 12 papers with code

Decoding Part-of-Speech from Human EEG Signals

no code implementations ACL 2022 Alex Murphy, Bernd Bohnet, Ryan McDonald, Uta Noppeney

This work explores techniques to predict Part-of-Speech (PoS) tags from neural signals measured at millisecond resolution with electroencephalography (EEG) during text reading.

Data Augmentation • EEG +1

Long-term Control for Dialogue Generation: Methods and Evaluation

1 code implementation • 15 May 2022 • Ramya Ramakrishnan, Hashan Buddhika Narangodage, Mauro Schilman, Kilian Q. Weinberger, Ryan McDonald

This setting requires a model to not only consider the generation of these control words in the immediate context, but also produce utterances that will encourage the generation of the words at some time in the (possibly distant) future.

Dialogue Generation • Response Generation
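The long-term control setting above can be illustrated with a simple decoding-time heuristic: boost the scores of control words that have not yet been generated, so the model is nudged toward producing them later in the conversation. This is an illustrative simplification, not the paper's actual method; `boost_control_logits` and the dict-based token scores are hypothetical.

```python
def boost_control_logits(logits, control_words, generated, alpha=2.0):
    """Raise the next-token score of each control word not yet produced.

    Illustrative sketch only: real decoders operate on logit tensors
    over a subword vocabulary, not word-keyed dicts.
    """
    out = dict(logits)
    for word in control_words:
        if word not in generated and word in out:
            out[word] += alpha  # constant additive boost (toy choice)
    return out

# Toy next-token scores; "paris" is a control word we still owe the dialogue.
logits = {"hello": 1.0, "paris": 0.5, "bye": 0.2}
boosted = boost_control_logits(logits, ["paris"], generated=["hello"])
# boosted["paris"] → 2.5; all other scores unchanged
```

Once a control word has appeared in `generated`, the boost is dropped, so the heuristic only pushes toward words still outstanding.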

Leveraging Type Descriptions for Zero-shot Named Entity Recognition and Classification

no code implementations ACL 2021 Rami Aly, Andreas Vlachos, Ryan McDonald

We address the zero-shot NERC-specific challenge that the not-an-entity class is not well defined, as different entity classes are considered in training and testing.

Machine Reading Comprehension • named-entity-recognition +3

Planning with Learned Entity Prompts for Abstractive Summarization

no code implementations • 15 Apr 2021 • Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simoes, Vitaly Nikolaev, Ryan McDonald

Moreover, we demonstrate empirically that planning with entity chains provides a mechanism to control hallucinations in abstractive summaries.

Abstractive Text Summarization • Text Generation
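The entity-chain planning described above can be pictured as a target-side preprocessing step: the model learns to emit a plan (the chain of entities) before the summary itself, and at inference time the chain can be edited to steer which entities appear. The marker tokens and `build_target` helper below are illustrative, not the paper's exact format.

```python
def build_target(entities, summary):
    """Prepend an entity-chain plan to the summary target string.

    Sketch of the planning idea: the model first generates the chain,
    then the summary conditioned on it. Marker tokens are invented here.
    """
    chain = " | ".join(entities)
    return f"[ENTITYCHAIN] {chain} [SUMMARY] {summary}"

target = build_target(["Frost", "Antarctica"],
                      "Frost completed the Antarctica expedition.")
# → "[ENTITYCHAIN] Frost | Antarctica [SUMMARY] Frost completed the Antarctica expedition."
```

Because the summary is conditioned on an explicit entity plan, dropping an entity from the chain gives a direct handle for suppressing hallucinated entities.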

BIOMRC: A Dataset for Biomedical Machine Reading Comprehension

1 code implementation WS 2020 Petros Stavropoulos, Dimitris Pappas, Ion Androutsopoulos, Ryan McDonald

Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better.

Machine Reading Comprehension

On Faithfulness and Factuality in Abstractive Summarization

2 code implementations ACL 2020 Joshua Maynez, Shashi Narayan, Bernd Bohnet, Ryan McDonald

It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation.

Abstractive Text Summarization • Document Summarization +3

QURIOUS: Question Generation Pretraining for Text Generation

no code implementations • 23 Apr 2020 • Shashi Narayan, Gonçalo Simoes, Ji Ma, Hannah Craighead, Ryan McDonald

Recent trends in natural language processing using pretraining have shifted focus towards pretraining and fine-tuning approaches for text generation.

Abstractive Text Summarization • Language Modelling +3

Measuring Domain Portability and Error Propagation in Biomedical QA

no code implementations • 12 Sep 2019 • Stefan Hosein, Daniel Andor, Ryan McDonald

The core of our systems is based on BERT QA models, specifically the model of Alberti et al. (2019).

Question Answering

AUEB at BioASQ 6: Document and Snippet Retrieval

1 code implementation WS 2018 Georgios-Ioannis Brokos, Polyvios Liosis, Ryan McDonald, Dimitris Pappas, Ion Androutsopoulos

We present AUEB's submissions to the BioASQ 6 document and snippet retrieval tasks (parts of Task 6b, Phase A).

Deep Relevance Ranking Using Enhanced Document-Query Interactions

1 code implementation EMNLP 2018 Ryan McDonald, Georgios-Ioannis Brokos, Ion Androutsopoulos

We explore several new models for document relevance ranking, building upon the Deep Relevance Matching Model (DRMM) of Guo et al. (2016).

Ad-Hoc Information Retrieval • Question Answering

Natural Language Processing with Small Feed-Forward Networks

1 code implementation EMNLP 2017 Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, Slav Petrov

We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models.

Natural Language Processing
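As a rough illustration of the small-model recipe above, here is a minimal NumPy sketch of a feed-forward tagger over hashed character n-gram features. The weights are random and untrained, and all sizes (`VOCAB_BUCKETS`, `EMB_DIM`, `HID_DIM`, `N_TAGS`) are invented for the example; the point is only the shape of the architecture: hashed sparse features, a small embedding, one hidden layer.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_BUCKETS = 1000          # hashed feature buckets (illustrative size)
EMB_DIM, HID_DIM, N_TAGS = 16, 32, 12

# Randomly initialised parameters; a real model would be trained.
emb = rng.normal(size=(VOCAB_BUCKETS, EMB_DIM))
W1 = rng.normal(size=(EMB_DIM, HID_DIM))
W2 = rng.normal(size=(HID_DIM, N_TAGS))

def featurize(word):
    # Hash character trigrams into a fixed number of buckets.
    grams = [word[i:i + 3] for i in range(max(1, len(word) - 2))]
    return [hash(g) % VOCAB_BUCKETS for g in grams]

def predict(word):
    x = emb[featurize(word)].mean(axis=0)   # pool bucket embeddings
    h = np.maximum(0, x @ W1)               # single ReLU hidden layer
    return int(np.argmax(h @ W2))           # predicted tag id

tag = predict("running")
```

The entire parameter budget here is a few tens of thousands of floats, which is the kind of footprint that makes such models far cheaper than deep recurrent alternatives.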

Static and Dynamic Feature Selection in Morphosyntactic Analyzers

no code implementations • 21 Mar 2016 • Bernd Bohnet, Miguel Ballesteros, Ryan McDonald, Joakim Nivre

Experiments on five languages show that feature selection can result in more compact models as well as higher accuracy under all conditions, but also that a dynamic ordering works better than a static ordering and that joint systems benefit more than standalone taggers.

feature selection

A Universal Part-of-Speech Tagset

1 code implementation LREC 2012 Slav Petrov, Dipanjan Das, Ryan McDonald

To facilitate future research in unsupervised induction of syntactic structure and to standardize best-practices, we propose a tagset that consists of twelve universal part-of-speech categories.
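The twelve universal categories proposed in this paper are NOUN, VERB, ADJ, ADV, PRON, DET, ADP, NUM, CONJ, PRT, "." (punctuation), and X (other). In practice the tagset is used via mappings from language-specific tagsets; the sketch below covers only a handful of Penn Treebank tags for illustration, with unknown tags falling back to the catch-all X.

```python
# The twelve universal categories of Petrov, Das & McDonald (2012).
UNIVERSAL_TAGS = {"NOUN", "VERB", "ADJ", "ADV", "PRON", "DET",
                  "ADP", "NUM", "CONJ", "PRT", ".", "X"}

# Partial, illustrative fine-to-universal mapping for Penn Treebank tags.
PTB_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN", "VB": "VERB", "VBZ": "VERB",
    "JJ": "ADJ", "RB": "ADV", "DT": "DET", "IN": "ADP",
    "CD": "NUM", "CC": "CONJ", "RP": "PRT", ",": ".",
}

def to_universal(ptb_tag):
    # Tags outside the mapping collapse to the catch-all category.
    return PTB_TO_UNIVERSAL.get(ptb_tag, "X")

assert to_universal("NNS") == "NOUN"   # plural noun → NOUN
```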

Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models

no code implementations NeurIPS 2009 Ryan Mcdonald, Mehryar Mohri, Nathan Silberman, Dan Walker, Gideon S. Mann

Training conditional maximum entropy models on massive data requires significant time and computational resources.
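One family of strategies analysed in this line of work is parameter mixing: train a conditional maximum entropy model independently on each data shard, then combine the workers' weight vectors by averaging. The uniform average below is a simplified sketch (the paper also considers weighted mixtures and distributed-gradient alternatives).

```python
import numpy as np

def mixture_average(worker_weights):
    """Combine per-shard models by uniform parameter averaging.

    Simplified sketch of parameter mixing; real systems may weight
    workers, e.g. by shard size or held-out performance.
    """
    return np.mean(worker_weights, axis=0)

# Three workers, each holding a 4-dimensional weight vector
# learned from its own shard of the data.
workers = [np.array([1.0, 0.0, 2.0, 0.0]),
           np.array([0.0, 1.0, 2.0, 0.0]),
           np.array([2.0, 2.0, 2.0, 3.0])]
avg = mixture_average(workers)  # → array([1., 1., 2., 1.])
```

The appeal is communication cost: workers exchange parameters once (or rarely) instead of shipping gradients on every update, which is what makes the approach attractive for massive datasets.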
