no code implementations • 17 Feb 2023 • Anton Amirov, Chris Quirk, Jennifer Neville
We investigate graph representation learning approaches that enable models to generalize across graphs: given a model trained on representations from one graph, our goal is to run inference with those same model parameters on representations computed over a new graph, unseen during model training, with minimal degradation in inference accuracy.
no code implementations • Findings (ACL) 2022 • Peter West, Chris Quirk, Michel Galley, Yejin Choi
In particular, this domain allows us to introduce the notion of factual ablation for automatically measuring factual consistency: it captures the intuition that a model should be less likely to produce an output when given a less relevant grounding document.
no code implementations • NAACL 2021 • Stojan Trajanovski, Chad Atalla, Kunho Kim, Vipul Agarwal, Milad Shokouhi, Chris Quirk
We compare contextual text prediction in chat and email messages from two of the largest commercial platforms, Microsoft Teams and Outlook, finding that contextual signals contribute to performance differently in these two scenarios.
no code implementations • NAACL 2021 • Felix Faltings, Michel Galley, Gerold Hintz, Chris Brockett, Chris Quirk, Jianfeng Gao, Bill Dolan
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.
no code implementations • WS 2020 • Liqun Shao, Sahitya Mantravadi, Tom Manzini, Alejandro Buendia, Manon Knoertzer, Soundar Srinivasan, Chris Quirk
In this paper, we detail novel strategies for interpolating personalized language models, along with methods for handling out-of-vocabulary (OOV) tokens, to improve personalized language modeling.
1 code implementation • 1 May 2020 • Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, Bill Dolan
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses.
1 code implementation • NeurIPS 2019 • Vighnesh Shiv, Chris Quirk
Neural models optimized for tree-based problems are of great value in tasks like SQL query extraction and program synthesis.
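A common ingredient in such tree-aware models is a positional encoding that describes where a node sits in the tree rather than where a token sits in a sequence. As a hedged illustration (a minimal sketch of the general idea, not necessarily the exact scheme of the cited paper), one can encode each node by the sequence of child indices on its root-to-node path, one-hot per step:

```python
def tree_positional_encoding(path, max_depth, max_branch):
    """Encode a root-to-node path (a list of child indices) as a flat vector.

    Each step along the path contributes a one-hot block of size `max_branch`;
    the result is zero-padded out to `max_depth` blocks. `path`, `max_depth`,
    and `max_branch` are illustrative names, not from the paper.
    """
    vec = [0.0] * (max_depth * max_branch)
    for depth, child_idx in enumerate(path[:max_depth]):
        vec[depth * max_branch + child_idx] = 1.0
    return vec

# The node reached by "first child, then second child" in a binary tree:
encoding = tree_positional_encoding([0, 1], max_depth=3, max_branch=2)
```

Because the encoding is built from path steps, two nodes share a prefix of their encodings exactly when they share an ancestor path, which is the property sequence-style positional encodings lack.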
no code implementations • WS 2019 • Christian Federmann, Oussama Elachqar, Chris Quirk
Naturally occurring paraphrase data, such as multiple news stories about the same event, is a useful but rare resource.
no code implementations • ACL 2019 • Vighnesh Leonardo Shiv, Chris Quirk, Anshuman Suri, Xiang Gao, Khuram Shahid, Nithya Govindarajan, Yizhe Zhang, Jianfeng Gao, Michel Galley, Chris Brockett, Tulasi Menon, Bill Dolan
The Intelligent Conversation Engine: Code and Pre-trained Systems (Microsoft Icecaps) is an upcoming open-source natural language processing repository.
no code implementations • NAACL 2019 • Shrimai Prabhumoye, Chris Quirk, Michel Galley
Recent work in neural generation has attracted significant interest in controlling the form of text, such as style, persona, and politeness.
no code implementations • WS 2018 • Revanth Rameshkumar, Peter Bailey, Abhishek Jha, Chris Quirk
We describe the Enron People Assignment (EPA) dataset, in which tasks that are described in emails are associated with the person(s) responsible for carrying out these tasks.
no code implementations • 27 Sep 2018 • Vighnesh Leonardo Shiv, Chris Quirk
With interest in program synthesis and similarly flavored problems rapidly increasing, neural models optimized for tree-domain problems are of great value.
1 code implementation • ACL 2018 • Li Dong, Chris Quirk, Mirella Lapata
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models.
no code implementations • 14 Sep 2017 • Alex Renda, Harrison Goldstein, Sarah Bird, Chris Quirk, Adrian Sampson
We propose to treat these challenges as language-design problems.
no code implementations • TACL 2017 • Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, Wen-tau Yih
Past work in relation extraction has focused on binary relations in single sentences.
no code implementations • ACL 2017 • Hoifung Poon, Chris Quirk, Kristina Toutanova, Wen-tau Yih
We will introduce precision medicine and showcase the vast opportunities for NLP in this burgeoning field, which carries great societal impact.
no code implementations • EACL 2017 • Chris Quirk, Hoifung Poon
At the core of our approach is a graph representation that can incorporate both standard dependencies and discourse relations, thus providing a unifying way to model relations within and across sentences.
no code implementations • IJCNLP 2015 • Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, Bill Dolan
We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs.
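The core idea behind rating-weighted metrics of this kind is to credit n-gram matches against multiple references in proportion to each reference's human quality rating (which may be negative). The toy sketch below illustrates that idea with unigrams only; the actual deltaBLEU definition differs in its details (clipping, higher-order n-grams, corpus-level aggregation), and all function and variable names here are illustrative:

```python
from collections import Counter

def weighted_unigram_precision(hypothesis, rated_refs):
    """Toy rating-weighted unigram precision (a simplified sketch,
    not the official deltaBLEU formula).

    hypothesis: list of tokens.
    rated_refs: list of (reference_tokens, rating) pairs; ratings may be
    negative, so matching a poorly rated reference can lower the score.
    Each hypothesis token is credited with the highest rating among the
    references that contain it.
    """
    hyp_counts = Counter(hypothesis)
    score = 0.0
    for token, count in hyp_counts.items():
        ratings = [rating for ref, rating in rated_refs if token in ref]
        if ratings:
            score += count * max(ratings)
    return score / max(len(hypothesis), 1)

refs = [(["good", "morning"], 1.0), (["bad", "day"], -0.5)]
score = weighted_unigram_precision(["good", "day"], refs)
```

The negative rating on the second reference penalizes overlap with low-quality responses, which is what lets such a metric discriminate among diverse plausible outputs rather than rewarding any overlap equally.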