Search Results for author: Harsh Jhamtani

Found 31 papers, 23 papers with code

Truth-Conditional Captions for Time Series Data

1 code implementation EMNLP 2021 Harsh Jhamtani, Taylor Berg-Kirkpatrick

In this paper, we explore the task of automatically generating natural language descriptions of salient patterns in a time series, such as stock prices of a company over a week.

Time Series · Time Series Analysis · +1

Formulating Neural Sentence Ordering as the Asymmetric Traveling Salesman Problem

1 code implementation INLG (ACL) 2021 Vishal Keswani, Harsh Jhamtani

However, such an approach has major limitations: it cannot handle cycles in the resulting graphs, and it considers only the binary presence or absence of edges rather than a more granular score.

Combinatorial Optimization · Sentence · +2

Towards Robust Evaluation of Unlearning in LLMs via Data Transformations

1 code implementation · 23 Nov 2024 · Abhinav Joshi, Shaswati Saha, Divyaksh Shukla, Sriram Vema, Harsh Jhamtani, Manas Gaur, Ashutosh Modi

Large Language Models (LLMs) have proven to be a great success in a wide range of applications, ranging from regular NLP-based use cases to AI agents.

Machine Unlearning

Steering Large Language Models between Code Execution and Textual Reasoning

1 code implementation · 4 Oct 2024 · Yongchao Chen, Harsh Jhamtani, Srinagesh Sharma, Chuchu Fan, Chi Wang

Textual reasoning has inherent limitations in solving tasks that involve math, logic, optimization, and search, which are unlikely to be overcome simply by scaling up model and data size.

Code Generation · Math · +1

Learning to Retrieve Iteratively for In-Context Learning

no code implementations · 20 Jun 2024 · Yunmo Chen, Tongfei Chen, Harsh Jhamtani, Patrick Xia, Richard Shin, Jason Eisner, Benjamin Van Durme

We introduce iterative retrieval, a novel framework that empowers retrievers to make iterative decisions through policy optimization.

Combinatorial Optimization · In-Context Learning · +2

Interpreting User Requests in the Context of Natural Language Standing Instructions

1 code implementation · 16 Nov 2023 · Nikita Moghe, Patrick Xia, Jacob Andreas, Jason Eisner, Benjamin Van Durme, Harsh Jhamtani

Users of natural language interfaces, generally powered by Large Language Models (LLMs), often must repeat their preferences each time they make a similar request.

Natural Language Decomposition and Interpretation of Complex Utterances

no code implementations · 15 May 2023 · Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, Ben Van Durme

Designing natural language interfaces has historically required collecting supervised data to translate user requests into carefully designed intent representations.

Language Modeling · Language Modelling

Target-Guided Dialogue Response Generation Using Commonsense and Data Augmentation

no code implementations Findings (NAACL) 2022 Prakhar Gupta, Harsh Jhamtani, Jeffrey P. Bigham

Target-guided response generation enables dialogue systems to smoothly transition a conversation from a dialogue context toward a target sentence.

Data Augmentation · Response Generation · +1

Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection

1 code implementation ACL 2022 Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Julian McAuley

In this paper, we propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model.

Informativeness · Specificity

Truth-Conditional Captioning of Time Series Data

1 code implementation · 5 Oct 2021 · Harsh Jhamtani, Taylor Berg-Kirkpatrick

In this paper, we explore the task of automatically generating natural language descriptions of salient patterns in a time series, such as stock prices of a company over a week.

Time Series · Time Series Analysis · +1

Unsupervised Enrichment of Persona-grounded Dialog with Background Stories

1 code implementation ACL 2021 Bodhisattwa Prasad Majumder, Taylor Berg-Kirkpatrick, Julian McAuley, Harsh Jhamtani

Humans often refer to personal narratives, life experiences, and events to make a conversation more engaging and rich.

Like hiking? You probably enjoy nature: Persona-grounded Dialog with Commonsense Expansions

1 code implementation EMNLP 2020 Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Julian McAuley

Existing persona-grounded dialog models often fail to capture simple implications of given persona descriptions, something which humans are able to do seamlessly.

Diversity

Learning Rhyming Constraints using Structured Adversaries

1 code implementation IJCNLP 2019 Harsh Jhamtani, Sanket Vaibhav Mehta, Jaime Carbonell, Taylor Berg-Kirkpatrick

Existing recurrent neural language models often fail to capture higher-level structure present in text: for example, rhyming patterns present in poetry.

A Sociolinguistic Study of Online Echo Chambers on Twitter

no code implementations WS 2019 Nikita Duseja, Harsh Jhamtani

Online social media platforms such as Facebook and Twitter are increasingly facing criticism for polarization of users.

Learning to Describe Differences Between Pairs of Similar Images

1 code implementation EMNLP 2018 Harsh Jhamtani, Taylor Berg-Kirkpatrick

We propose a model that captures visual salience by using a latent variable to align clusters of differing pixels with output sentences.

Sentence

SPINE: SParse Interpretable Neural Embeddings

2 code implementations · 23 Nov 2017 · Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Eduard Hovy

We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec.

Denoising · Word Embeddings

Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models

1 code implementation WS 2017 Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg

Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.

Shakespearizing Modern Language Using Copy-Enriched Sequence-to-Sequence Models

2 code implementations · 4 Jul 2017 · Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg

Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.

Generating Appealing Brand Names

no code implementations · 28 Jun 2017 · Gaurush Hiranandani, Pranav Maneriker, Harsh Jhamtani

Providing appealing brand names to newly launched products, newly formed companies, or companies being renamed is highly important, as the name can play a crucial role in their success or failure.
