Search Results for author: Chris Quirk

Found 38 papers, 3 papers with code

Creating generalizable downstream graph models with random projections

no code implementations · 17 Feb 2023 Anton Amirov, Chris Quirk, Jennifer Neville

We investigate graph representation learning approaches that enable models to generalize across graphs: given a model trained on representations from one graph, our goal is to run inference with the same model parameters on representations computed over a new graph, unseen during training, with minimal degradation in accuracy.

Computational Efficiency · Graph Representation Learning
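
The listing gives no implementation details, but the title points at a standard trick: push graph-specific features through a fixed random matrix so every graph lands in the same embedding space, letting one set of downstream model parameters serve many graphs. A minimal NumPy sketch of that idea; the function name, feature shapes, and the single hop of neighborhood mixing are illustrative assumptions, not the paper's method:

```python
import numpy as np

def random_projection_features(adj, node_feats, dim=64, seed=0):
    """Map graph-specific node features into a shared low-dimensional space
    via a fixed Gaussian random projection (Johnson-Lindenstrauss style),
    then mix in one hop of neighborhood information. Because the projection
    is fixed by the seed (and assumes a common input feature width), a model
    trained on one graph's outputs can be applied to another graph's."""
    rng = np.random.default_rng(seed)
    proj = rng.normal(0.0, 1.0 / np.sqrt(dim), size=(node_feats.shape[1], dim))
    h = adj @ (node_feats @ proj)  # project, then aggregate over neighbors
    return h / np.linalg.norm(h, axis=1, keepdims=True).clip(min=1e-9)
```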

Probing Factually Grounded Content Transfer with Factual Ablation

no code implementations Findings (ACL) 2022 Peter West, Chris Quirk, Michel Galley, Yejin Choi

In particular, this domain allows us to introduce the notion of factual ablation for automatically measuring factual consistency: this captures the intuition that the model should be less likely to produce an output given a less relevant grounding document.
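
In other words, factual ablation compares the likelihood a model assigns to the same output under a relevant versus a deliberately degraded grounding document. A minimal sketch of that comparison, assuming some conditional scorer log_likelihood(output, grounding) returning log P(output | grounding); the names here are hypothetical:

```python
def factual_ablation_margin(log_likelihood, output, grounding, ablated_grounding):
    """Positive margin: the model prefers the relevant grounding, i.e. its
    output probability actually depends on the facts it was given; near-zero
    or negative margins flag outputs that ignore the grounding."""
    return log_likelihood(output, grounding) - log_likelihood(output, ablated_grounding)
```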

When does text prediction benefit from additional context? An exploration of contextual signals for chat and email messages

no code implementations NAACL 2021 Stojan Trajanovski, Chad Atalla, Kunho Kim, Vipul Agarwal, Milad Shokouhi, Chris Quirk

We compare contextual text prediction in chat and email messages from two of the largest commercial platforms, Microsoft Teams and Outlook, and find that contextual signals contribute to performance differently between these scenarios.

Text Editing by Command

no code implementations NAACL 2021 Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, Bill Dolan

A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.

Sentence · Text Generation

Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation

no code implementations WS 2020 Liqun Shao, Sahitya Mantravadi, Tom Manzini, Alejandro Buendia, Manon Knoertzer, Soundar Srinivasan, Chris Quirk

In this paper, we detail novel strategies for interpolating personalized language models, together with methods for handling out-of-vocabulary (OOV) tokens, to improve personalized language modeling.

Language Modelling
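
The snippet does not spell out the interpolation, but the baseline form such strategies build on is a linear mixture, P(w) = λ·P_personal(w) + (1 − λ)·P_background(w). A small sketch with dict-based distributions; the function name and the <unk>-fallback OOV handling are one simple choice among those the paper compares:

```python
def interpolate_next_word(p_personal, p_background, lam=0.5, oov="<unk>"):
    """Linearly interpolate two next-word distributions (dicts mapping
    token -> probability). Tokens unseen by the personal model fall back
    to its <unk> mass before mixing; the result is renormalized."""
    vocab = set(p_personal) | set(p_background)
    mixed = {}
    for w in vocab:
        pp = p_personal.get(w, p_personal.get(oov, 0.0))
        mixed[w] = lam * pp + (1.0 - lam) * p_background.get(w, 0.0)
    z = sum(mixed.values()) or 1.0
    return {w: p / z for w, p in mixed.items()}
```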

A Controllable Model of Grounded Response Generation

1 code implementation · 1 May 2020 Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, Bill Dolan

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses.

Informativeness · Response Generation

Novel positional encodings to enable tree-based transformers

1 code implementation NeurIPS 2019 Vighnesh Shiv, Chris Quirk

Neural models optimized for tree-based problems are of great value in tasks like SQL query extraction and program synthesis.

Program Synthesis · Semantic Parsing · +1
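
The listing omits details, but the key idea is that a tree node's position can be encoded as the path of child indices from the root, so encodings compose like tree paths rather than flat sequence indices. A sketch under assumptions about the exact layout (fixed maximum depth and branching factor, one-hot per level):

```python
def tree_positions(tree, max_depth=8, max_width=4):
    """Encode each node's position as the sequence of child indices on its
    path from the root, one-hot per level, padded to a fixed depth. A child's
    encoding extends its parent's, which is the compositional property a
    tree transformer's attention needs. Assumes branching factor <= max_width.
    Trees are nested dicts with a "children" list."""
    positions = {}

    def walk(node, path):
        vec = [0] * (max_depth * max_width)
        for depth, child_idx in enumerate(path[:max_depth]):
            vec[depth * max_width + child_idx] = 1
        positions[id(node)] = vec
        for i, child in enumerate(node.get("children", [])):
            walk(child, path + [i])

    walk(tree, [])
    return positions
```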

Multilingual Whispers: Generating Paraphrases with Translation

no code implementations WS 2019 Christian Federmann, Oussama Elachqar, Chris Quirk

Naturally occurring paraphrase data, such as multiple news stories about the same event, is a useful but rare resource.

Machine Translation · Translation
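
The title's "whispers" refers to generating paraphrases by pivoting sentences through other languages and back. A rough approximation using off-the-shelf Helsinki-NLP Marian checkpoints from Hugging Face; these are not the paper's systems, and the paper's multilingual, human-in-the-loop pipeline is considerably richer than this single round trip:

```python
from transformers import pipeline

# English -> French -> English round trip as a cheap paraphraser.
to_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def paraphrase(sentence: str) -> str:
    pivot = to_fr(sentence)[0]["translation_text"]
    return to_en(pivot)[0]["translation_text"]

print(paraphrase("Naturally occurring paraphrase data is a useful but rare resource."))
```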

Towards Content Transfer through Grounded Text Generation

no code implementations NAACL 2019 Shrimai Prabhumoye, Chris Quirk, Michel Galley

Recent work in neural generation has attracted significant interest in controlling the form of text, such as style, persona, and politeness.

Sentence · Text Generation

Assigning people to tasks identified in email: The EPA dataset for addressee tagging for detected task intent

no code implementations WS 2018 Revanth Rameshkumar, Peter Bailey, Abhishek Jha, Chris Quirk

We describe the Enron People Assignment (EPA) dataset, in which tasks that are described in emails are associated with the person(s) responsible for carrying out these tasks.

Novel positional encodings to enable tree-structured transformers

no code implementations27 Sep 2018 Vighnesh Leonardo Shiv, Chris Quirk

With interest in program synthesis and similarly flavored problems rapidly increasing, neural models optimized for tree-domain problems are of great value.

Program Synthesis · Semantic Parsing · +1

Confidence Modeling for Neural Semantic Parsing

1 code implementation ACL 2018 Li Dong, Chris Quirk, Mirella Lapata

In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models.

Semantic Parsing
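
As a reference point, the simplest confidence signal for a sequence-to-sequence parser is the length-normalized likelihood of its own decoded output; the paper's contribution lies in showing where this posterior is miscalibrated and adding richer uncertainty signals. A sketch of that baseline only, with an assumed input format:

```python
import math

def sequence_confidence(token_logprobs):
    """Length-normalized likelihood of a decoded sequence, given the
    per-token log-probabilities from the decoder. Higher values mean the
    model was more certain of its parse."""
    return math.exp(sum(token_logprobs) / max(len(token_logprobs), 1))
```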

NLP for Precision Medicine

no code implementations ACL 2017 Hoifung Poon, Chris Quirk, Kristina Toutanova, Wen-tau Yih

We will introduce precision medicine and showcase the vast opportunities for NLP in this burgeoning field with great societal impact.

Decision Making Entity Linking +2

Distant Supervision for Relation Extraction beyond the Sentence Boundary

no code implementations EACL 2017 Chris Quirk, Hoifung Poon

At the core of our approach is a graph representation that can incorporate both standard dependencies and discourse relations, thus providing a unifying way to model relations within and across sentences.

Relation · Relation Extraction · +1
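
A minimal sketch of such a document graph, with nodes keyed by (sentence, token) so that dependency edges stay within sentences while discourse edges cross them; the edge formats and edge inventory here are assumptions for illustration, not the paper's exact set:

```python
from collections import defaultdict

def build_document_graph(dep_edges, discourse_edges):
    """Adjacency list over (sentence_idx, token_idx) nodes. Dependency edges
    link tokens within a sentence; discourse edges (adjacent-sentence roots,
    coreferent mentions, ...) link tokens across sentences, so extraction
    paths between entities can span sentence boundaries."""
    graph = defaultdict(list)
    for head, dep, label in dep_edges:       # head/dep are (sent, tok) pairs
        graph[head].append((dep, label))
    for src, dst, label in discourse_edges:  # src/dst are (sent, tok) pairs
        graph[src].append((dst, label))
    return graph
```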

deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets

no code implementations IJCNLP 2015 Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, Bill Dolan

We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs.

Sentence
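
The core idea is that each reference carries a human quality rating in [−1, 1], and a matched n-gram is credited with the best rating among the references containing it, so matches against good references help and matches against bad ones can hurt. A toy unigram reduction of that weighting; the real metric operates over n-grams up to length 4 with clipped counts and a brevity penalty:

```python
from collections import Counter

def delta_bleu_unigram(hypothesis, rated_refs):
    """Toy unigram version of the deltaBLEU weighting. `rated_refs` is a
    list of (reference_string, rating) pairs with ratings in [-1, 1]; each
    hypothesis word that appears in some reference earns the maximum rating
    among the references containing it."""
    hyp = Counter(hypothesis.split())
    score = 0.0
    for word, count in hyp.items():
        ratings = [r for ref, r in rated_refs if word in ref.split()]
        if ratings:
            score += count * max(ratings)
    return score / max(sum(hyp.values()), 1)
```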
