Search Results for author: Ajay Patel

Found 9 papers, 5 papers with code

DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM Workflows

1 code implementation • 16 Feb 2024 • Ajay Patel, Colin Raffel, Chris Callison-Burch

The rapid rise to prominence of large language models, and the unique challenges they pose, has had immediate adverse impacts on open science and on the reproducibility of work that uses them.

Synthetic Data Generation

ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer

1 code implementation • 29 Aug 2023 • Zachary Horvitz, Ajay Patel, Chris Callison-Burch, Zhou Yu, Kathleen McKeown

Our parameter-efficient approach, ParaGuide, leverages paraphrase-conditioned diffusion models alongside gradient-based guidance from both off-the-shelf classifiers and strong existing style embedders to transform the style of text while preserving semantic information.

Style Transfer
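The gradient-based guidance idea described above can be illustrated with a toy sketch. This is not ParaGuide's implementation (which guides a diffusion model over text embeddings); it is a minimal NumPy stand-in in which a "latent" vector is nudged along the gradient of a toy logistic style classifier's log-probability, the same basic move that classifier guidance performs at each denoising step. All names and values here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def guide_toward_style(x, w, step_size=0.5, n_steps=10):
    """Nudge latent vector x toward a target style.

    Toy version of classifier guidance: at each step, move x along
    the gradient of log p(style | x) under a logistic classifier
    with weights w (a stand-in for an off-the-shelf style classifier).
    """
    for _ in range(n_steps):
        p = sigmoid(w @ x)       # classifier's target-style probability
        grad = (1.0 - p) * w     # d/dx log sigmoid(w @ x)
        x = x + step_size * grad
    return x

# Made-up latent and classifier weights for demonstration.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x0 = rng.normal(size=8)

x_guided = guide_toward_style(x0, w)
print(sigmoid(w @ x0), sigmoid(w @ x_guided))
```

Each guidance step increases the classifier's target-style probability, which is the mechanism by which guidance steers generation without fine-tuning the generator itself.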

Leveraging Large Language Models in Conversational Recommender Systems

no code implementations • 13 May 2023 • Luke Friedman, Sameer Ahuja, David Allen, Zhenning Tan, Hakim Sidahmed, Changbo Long, Jun Xie, Gabriel Schubiner, Ajay Patel, Harsh Lara, Brian Chu, Zexi Chen, Manoj Tiwari

A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue.

Common Sense Reasoning • Dialogue Management • +3

Low-Resource Authorship Style Transfer: Can Non-Famous Authors Be Imitated?

no code implementations • 18 Dec 2022 • Ajay Patel, Nicholas Andrews, Chris Callison-Burch

Existing unsupervised approaches like STRAP have largely focused on style transfer to target authors with many examples of their writing style in books, speeches, or other published works.

In-Context Learning • Style Transfer

Bidirectional Language Models Are Also Few-shot Learners

no code implementations • 29 Sep 2022 • Ajay Patel, Bryan Li, Mohammad Sadegh Rasooli, Noah Constant, Colin Raffel, Chris Callison-Burch

An arbitrary task can be reformulated as a natural language prompt, and a language model can be asked to generate the completion, indirectly performing the task in a paradigm known as prompt-based learning.

Denoising • Language Modelling • +4
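The prompt-based learning paradigm described in the abstract can be sketched in a few lines: a classification task is reformulated as a natural-language prompt, and the model's completion indirectly yields the label. The `toy_language_model` below is a hypothetical keyword heuristic standing in for a real language model, so the example stays self-contained.

```python
def make_sentiment_prompt(review: str) -> str:
    # Reformulate the classification task as a natural-language prompt
    # whose completion reveals the label.
    return (f'Review: "{review}"\n'
            'Question: Is this review positive or negative?\n'
            'Answer:')

def toy_language_model(prompt: str) -> str:
    # Stand-in for a real LM: a trivial keyword heuristic, used here
    # only so the example runs without any model weights.
    positive_cues = ("great", "loved", "excellent")
    return " positive" if any(c in prompt.lower() for c in positive_cues) else " negative"

prompt = make_sentiment_prompt("I loved this film, the acting was excellent.")
print(toy_language_model(prompt))  # → " positive"
```

With a real language model in place of the heuristic, the same prompt template turns an arbitrary labeled task into a text-completion problem, which is what lets a single pretrained model perform many tasks without task-specific heads.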

Magnitude: A Fast, Efficient Universal Vector Embedding Utility Package

1 code implementation • EMNLP 2018 • Ajay Patel, Alexander Sands, Chris Callison-Burch, Marianna Apidianaki

Vector space embedding models like word2vec, GloVe, fastText, and ELMo are extremely popular representations in natural language processing (NLP) applications.

Word Embeddings
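The core query an embedding utility like Magnitude serves — "which stored words are nearest to this one?" — reduces to cosine similarity over vectors. The sketch below uses a toy in-memory table with made-up vector values (not Magnitude's on-disk format or API) to show the computation such a package performs efficiently at scale.

```python
import numpy as np

# Toy embedding table; the values are invented for illustration.
EMBEDDINGS = {
    "cat": np.array([0.9, 0.1, 0.3]),
    "dog": np.array([0.8, 0.2, 0.35]),
    "car": np.array([0.1, 0.9, 0.7]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(word: str, k: int = 1) -> list[tuple[str, float]]:
    # Rank every other stored word by cosine similarity to the query.
    query = EMBEDDINGS[word]
    scored = [(other, cosine_similarity(query, vec))
              for other, vec in EMBEDDINGS.items() if other != word]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

print(most_similar("cat"))
```

A production utility adds memory-mapped storage, lazy loading, and out-of-vocabulary handling on top of this basic nearest-neighbor query, which is where most of the engineering effort in a package like Magnitude goes.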
