Search Results for author: Tenghao Huang

Found 12 papers, 10 papers with code

R2D2: Remembering, Reflecting and Dynamic Decision Making for Web Agents

no code implementations • 21 Jan 2025 • Tenghao Huang, Kinjal Basu, Ibrahim Abdelaziz, Pavan Kapanipathi, Jonathan May, Muhao Chen

The proliferation of web agents necessitates advanced navigation and interaction strategies within complex web environments.

Decision Making

FoodPuzzle: Developing Large Language Model Agents as Flavor Scientists

no code implementations • 19 Sep 2024 • Tenghao Huang, Donghee Lee, John Sweeney, Jiatong Shi, Emily Steliotes, Matthew Lange, Jonathan May, Muhao Chen

Flavor development in the food industry is increasingly challenged by the need for rapid innovation and precise flavor profile creation.

In-Context Learning Language Modeling +3

Familiarity-Aware Evidence Compression for Retrieval-Augmented Generation

1 code implementation • 19 Sep 2024 • Dongwon Jung, Qin Liu, Tenghao Huang, Ben Zhou, Muhao Chen

We propose FaviComp (Familiarity-Aware Evidence Compression), a novel training-free evidence compression technique that makes retrieved evidence more familiar to the target model, while seamlessly integrating parametric knowledge from the model.

RAG Retrieval
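The snippet above describes ensembling the compression model with the target model so the compressed evidence stays familiar to the target. A minimal sketch of that idea, assuming a token-level log-probability interpolation with a mixing weight `alpha` (the paper's exact formulation may differ):

```python
def ensemble_next_token(comp_logprobs, target_logprobs, alpha=0.5):
    """Pick the next token by interpolating log-probabilities from a
    compression model and the target model. `alpha` is a hypothetical
    mixing weight; higher alpha trusts the compressor more."""
    combined = [alpha * c + (1 - alpha) * t
                for c, t in zip(comp_logprobs, target_logprobs)]
    # Greedy decode: return the index of the highest combined score.
    return combined.index(max(combined))
```

With equal weighting, a token the target model finds familiar can win even when the compressor alone would choose differently.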

Are Large Language Models Capable of Generating Human-Level Narratives?

1 code implementation • 18 Jul 2024 • Yufei Tian, Tenghao Huang, Miri Liu, Derek Jiang, Alexander Spangher, Muhao Chen, Jonathan May, Nanyun Peng

This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.

Diversity

Red Teaming Language Models for Processing Contradictory Dialogues

1 code implementation • 16 May 2024 • Xiaofei Wen, Bangzheng Li, Tenghao Huang, Muhao Chen

To mitigate this issue, this study explores a novel contradictory dialogue processing task that aims to detect and modify contradictory statements in a conversation.

Red Teaming

Planning and Editing What You Retrieve for Enhanced Tool Learning

1 code implementation • 30 Mar 2024 • Tenghao Huang, Dongwon Jung, Muhao Chen

Recent advancements in integrating external tools with Large Language Models (LLMs) have opened new frontiers, with applications in mathematical reasoning, code generation, and smart assistants.

Mathematical Reasoning Retrieval

Affective and Dynamic Beam Search for Story Generation

1 code implementation • 23 Oct 2023 • Tenghao Huang, Ehsan Qasemi, Bangzheng Li, He Wang, Faeze Brahman, Muhao Chen, Snigdha Chaturvedi

Storytelling's captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies.

Sentence Story Generation

Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach

1 code implementation Findings (NAACL) 2022 Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi

In particular, we hypothesize that the order of the input concepts can affect the PTM's ability to utilize its commonsense knowledge.

Sentence Text Generation
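The hypothesis above is that the order in which concepts are presented affects how well a pre-trained model uses its commonsense knowledge. A minimal sketch of pre-ordering, assuming a hypothetical plausibility scorer that ranks candidate orderings before generation (not the paper's actual model):

```python
from itertools import permutations

def best_concept_order(concepts, score_fn):
    """Try every ordering of the input concepts and keep the one the
    scorer prefers. score_fn stands in for a learned plausibility model."""
    return max(permutations(concepts), key=score_fn)

# Toy scorer that rewards matching a preferred event order (illustrative).
preferred = ("throw", "dog", "ball")
toy_score = lambda order: sum(a == b for a, b in zip(order, preferred))
```

Exhaustive permutation search is only feasible for small concept sets; a learned ordering model would replace it in practice.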

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning

2 code implementations • 11 May 2022 • Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel

ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.

Few-Shot Text Classification In-Context Learning +1
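The cost argument above can be made concrete with simple token accounting: in-context learning re-encodes every demonstration on each prediction, while a fine-tuned model encodes only the query. A back-of-the-envelope sketch (illustrative accounting only, not the paper's measurements):

```python
def tokens_per_prediction_icl(num_demos, demo_len, query_len):
    """Tokens an ICL model must process for ONE prediction:
    all k demonstrations plus the query, every single time."""
    return num_demos * demo_len + query_len

def tokens_per_prediction_finetuned(query_len):
    """A fine-tuned model processes only the query itself."""
    return query_len

# With 32 demos of 100 tokens each and a 100-token query, ICL
# processes 3300 tokens per prediction versus 100 for fine-tuning.
```

This is why per-prediction compute and memory grow linearly with the number of in-context examples, independent of any accuracy differences.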

Read Top News First: A Document Reordering Approach for Multi-Document News Summarization

1 code implementation Findings (ACL) 2022 Chao Zhao, Tenghao Huang, Somnath Basu Roy Chowdhury, Muthu Kumar Chandrasekaran, Kathleen McKeown, Snigdha Chaturvedi

A common method for extractive multi-document news summarization is to re-formulate it as a single-document summarization problem by concatenating all documents as a single meta-document.

Document Summarization News Summarization
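The reformulation above concatenates all documents into one meta-document, and the paper's key move is to order them by importance first. A minimal sketch, using a simple overlap-with-corpus salience heuristic as a stand-in for the paper's learned importance ordering:

```python
from collections import Counter

def reorder_and_concatenate(docs):
    """Order documents most-central-first before building the meta-document.
    Salience here is a crude corpus-overlap heuristic, purely illustrative."""
    corpus_counts = Counter(w for d in docs for w in d.lower().split())

    def salience(doc):
        words = doc.lower().split()
        # Average corpus frequency of the document's words.
        return sum(corpus_counts[w] for w in words) / max(len(words), 1)

    ordered = sorted(docs, key=salience, reverse=True)
    return "\n".join(ordered)
```

Putting the most salient document first matters because downstream extractive summarizers tend to favor content near the start of the meta-document.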

Uncovering Implicit Gender Bias in Narratives through Commonsense Inference

1 code implementation • Findings (EMNLP) 2021 • Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi

Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation.
