1 code implementation • NAACL (DaSH) 2021 • Rebecca Iglesias-Flores, Megha Mishra, Ajay Patel, Akanksha Malhotra, Reno Kriz, Martha Palmer, Chris Callison-Burch
Acquiring training data for natural language processing systems can be expensive and time-consuming.
1 code implementation • 16 Feb 2024 • Ajay Patel, Colin Raffel, Chris Callison-Burch
The rapid rise to prominence of these models, and the unique challenges they pose, have had immediate adverse impacts on open science and on the reproducibility of work that uses them.
1 code implementation • 29 Aug 2023 • Zachary Horvitz, Ajay Patel, Chris Callison-Burch, Zhou Yu, Kathleen McKeown
Our parameter-efficient approach, ParaGuide, leverages paraphrase-conditioned diffusion models alongside gradient-based guidance from both off-the-shelf classifiers and strong existing style embedders to transform the style of text while preserving semantic information.
no code implementations • 22 May 2023 • Ajay Patel, Delip Rao, Ansh Kothary, Kathleen McKeown, Chris Callison-Burch
Style representation learning builds content-independent representations of author style in text.
no code implementations • 13 May 2023 • Luke Friedman, Sameer Ahuja, David Allen, Zhenning Tan, Hakim Sidahmed, Changbo Long, Jun Xie, Gabriel Schubiner, Ajay Patel, Harsh Lara, Brian Chu, Zexi Chen, Manoj Tiwari
A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue.
no code implementations • 18 Dec 2022 • Ajay Patel, Nicholas Andrews, Chris Callison-Burch
Existing unsupervised approaches like STRAP have largely focused on style transfer to target authors with many examples of their writing style in books, speeches, or other published works.
no code implementations • 29 Sep 2022 • Ajay Patel, Bryan Li, Mohammad Sadegh Rasooli, Noah Constant, Colin Raffel, Chris Callison-Burch
An arbitrary task can be reformulated as a natural language prompt, and a language model can be asked to generate the completion, indirectly performing the task in a paradigm known as prompt-based learning.
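The prompt-based learning paradigm described above can be illustrated with a minimal sketch. Here `toy_lm` is a hypothetical stand-in for a real language model's completion function, not part of the paper; it only shows how a classification task is recast as prompt completion.

```python
def toy_lm(prompt: str) -> str:
    # Hypothetical stand-in: a real language model would generate
    # this completion from the prompt text.
    return "positive" if "loved" in prompt else "negative"

def classify_sentiment(review: str) -> str:
    # Reformulate the classification task as a natural language prompt,
    # then read the task label off the model's completion.
    prompt = f"Review: {review}\nSentiment (positive or negative):"
    return toy_lm(prompt)

print(classify_sentiment("I loved this movie."))      # positive
print(classify_sentiment("A dull, plodding film."))   # negative
```

No task-specific training is needed: the same completion interface serves any task that can be phrased as a prompt.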
1 code implementation • 6 Sep 2022 • Bryan Li, Mohammad Sadegh Rasooli, Ajay Patel, Chris Callison-Burch
We propose a two-stage approach for training a single NMT model to translate unseen languages both to and from English.
1 code implementation • EMNLP 2018 • Ajay Patel, Alexander Sands, Chris Callison-Burch, Marianna Apidianaki
Vector space embedding models like word2vec, GloVe, fastText, and ELMo produce representations that are extremely popular in natural language processing (NLP) applications.
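A minimal sketch of what such embeddings provide: words become dense vectors, and semantic similarity is measured geometrically, typically by cosine similarity. The three-dimensional vectors below are toy values for illustration; real word2vec or GloVe vectors have hundreds of dimensions learned from large corpora.

```python
import math

# Toy "word vectors" (illustrative values only).
vectors = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words get higher similarity than unrelated ones.
print(cosine(vectors["king"], vectors["queen"]))  # close to 1
print(cosine(vectors["king"], vectors["apple"]))  # much lower
```

Downstream NLP systems consume these vectors as features, which is what makes the choice of embedding model consequential.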