Search Results for author: Raymond Mooney

Found 33 papers, 8 papers with code

Distilling Algorithmic Reasoning from LLMs via Explaining Solution Programs

no code implementations 11 Apr 2024 Jierui Li, Raymond Mooney

More specifically, we employ an LLM to generate explanations for a set of <problem, solution-program> pairs, then use <problem, explanation> pairs to fine-tune a smaller language model, which we refer to as the Reasoner, to learn algorithmic reasoning that can generate "how-to-solve" hints for unseen problems.

Language Modelling

When is Tree Search Useful for LLM Planning? It Depends on the Discriminator

1 code implementation 16 Feb 2024 Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, Huan Sun

In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method.

Mathematical Reasoning Re-Ranking +2

Sparse Meets Dense: A Hybrid Approach to Enhance Scientific Document Retrieval

no code implementations 8 Jan 2024 Priyanka Mandikal, Raymond Mooney

Traditional information retrieval is based on sparse bag-of-words vector representations of documents and queries.

Information Retrieval Language Modelling +2
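The sparse-versus-dense contrast described in the snippet above can be illustrated with a minimal hybrid scorer. This is an illustrative sketch only, not the paper's implementation: the "dense" encoder here is a toy hashed bag-of-words stand-in for a learned neural embedding, and the interpolation weight `alpha` is an assumed free parameter.

```python
import math
from collections import Counter

def sparse_score(query: str, doc: str) -> float:
    # Classic sparse retrieval: bag-of-words cosine similarity
    # over whitespace-tokenized, lowercased text.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def dense_score(query: str, doc: str, dim: int = 64) -> float:
    # Toy stand-in for a learned dense encoder: hash tokens into a
    # fixed-size vector and compare by cosine. A real system would
    # use a neural text encoder instead.
    def embed(text):
        v = [0.0] * dim
        for t in text.lower().split():
            v[hash(t) % dim] += 1.0
        return v
    a, b = embed(query), embed(doc)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    # Linear interpolation of the sparse and dense signals; alpha
    # is an assumed tuning knob, not a value from the paper.
    return alpha * sparse_score(query, doc) + (1 - alpha) * dense_score(query, doc)

query = "bag of words"
docs = ["sparse bag of words retrieval", "video caption generation"]
ranked = sorted(docs, key=lambda d: hybrid_score(query, d), reverse=True)
print(ranked[0])  # the lexically matching document ranks first
```

In practice the dense component contributes when the query and document share meaning but few exact tokens, which is precisely where pure bag-of-words scoring fails.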

What is the Best Automated Metric for Text to Motion Generation?

no code implementations 19 Sep 2023 Jordan Voas, Yili Wang, QiXing Huang, Raymond Mooney

Our findings indicate that none of the metrics currently used for this task show even a moderate correlation with human judgments on a sample level.

Explaining Competitive-Level Programming Solutions using LLMs

no code implementations 11 Jul 2023 Jierui Li, Szymon Tworkowski, Yingying Wu, Raymond Mooney

In this paper, we approach competitive-level programming problem-solving as a composite task of reasoning and code generation.

Code Generation Explanation Generation

Text-to-SQL Error Correction with Language Models of Code

1 code implementation 22 May 2023 Ziru Chen, Shijie Chen, Michael White, Raymond Mooney, Ali Payani, Jayanth Srinivasa, Yu Su, Huan Sun

Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code.

SQL Parsing Text-To-SQL

TellMeWhy: A Dataset for Answering Why-Questions in Narratives

1 code implementation Findings (ACL) 2021 Yash Kumar Lal, Nathanael Chambers, Raymond Mooney, Niranjan Balasubramanian

Models perform especially poorly on questions whose answers are external to the narrative, thus providing a challenge for future QA and narrative understanding research.

Captioning Images with Diverse Objects

1 code implementation CVPR 2017 Subhashini Venugopalan, Lisa Anne Hendricks, Marcus Rohrbach, Raymond Mooney, Trevor Darrell, Kate Saenko

We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets.

Object Recognition
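The distributional-embedding transfer idea in the entry above can be sketched in miniature: describe a novel object by borrowing the caption of the most similar seen object in embedding space. This is a toy illustration, not the paper's model; the hand-crafted three-dimensional vectors and the `caption_novel` helper are invented for this example (a real system uses learned word embeddings and a neural captioner).

```python
import math

# Hand-crafted toy embeddings; in the paper these would be
# distributional semantic vectors learned from large text corpora.
emb = {
    "zebra": [0.9, 0.1, 0.0],
    "horse": [0.8, 0.2, 0.1],
    "pizza": [0.0, 0.9, 0.8],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def caption_novel(novel: str, seen_captions: dict) -> str:
    # Find the seen object closest to the novel one in embedding
    # space and reuse its caption with the object word swapped in.
    best = max(seen_captions, key=lambda w: cosine(emb[novel], emb[w]))
    return seen_captions[best].replace(best, novel)

seen = {"horse": "a horse grazing in a field",
        "pizza": "a pizza on a table"}
print(caption_novel("zebra", seen))  # -> "a zebra grazing in a field"
```

The point of the sketch is the transfer mechanism: semantic similarity between "zebra" and "horse" lets a description learned for one object generalize to the other without paired image-caption data for the novel class.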

Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text

3 code implementations EMNLP 2016 Subhashini Venugopalan, Lisa Anne Hendricks, Raymond Mooney, Kate Saenko

This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos.

Descriptive Language Modelling +1

Sequence to Sequence - Video to Text

4 code implementations ICCV 2015 Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko

Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip.

Caption Generation Language Modelling +1

Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data

1 code implementation CVPR 2016 Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Trevor Darrell

Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet.

Image Captioning Novel Concepts +3

