no code implementations • 21 Jan 2025 • Tenghao Huang, Kinjal Basu, Ibrahim Abdelaziz, Pavan Kapanipathi, Jonathan May, Muhao Chen
The proliferation of web agents necessitates advanced navigation and interaction strategies within complex web environments.
no code implementations • 19 Sep 2024 • Tenghao Huang, Donghee Lee, John Sweeney, Jiatong Shi, Emily Steliotes, Matthew Lange, Jonathan May, Muhao Chen
Flavor development in the food industry is increasingly challenged by the need for rapid innovation and precise flavor profile creation.
1 code implementation • 19 Sep 2024 • Dongwon Jung, Qin Liu, Tenghao Huang, Ben Zhou, Muhao Chen
We propose FaviComp (Familiarity-Aware Evidence Compression), a novel training-free evidence compression technique that makes retrieved evidence more familiar to the target model, while seamlessly integrating parametric knowledge from the model.
1 code implementation • 18 Jul 2024 • Yufei Tian, Tenghao Huang, Miri Liu, Derek Jiang, Alexander Spangher, Muhao Chen, Jonathan May, Nanyun Peng
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
1 code implementation • 16 May 2024 • Xiaofei Wen, Bangzheng Li, Tenghao Huang, Muhao Chen
To mitigate this issue, this study explores a novel contradictory dialogue processing task that aims to detect and modify contradictory statements in a conversation.
1 code implementation • 30 Mar 2024 • Tenghao Huang, Dongwon Jung, Muhao Chen
Recent advancements in integrating external tools with Large Language Models (LLMs) have opened new frontiers, with applications in mathematical reasoning, code generation, and smart assistants.
1 code implementation • 23 Oct 2023 • Tenghao Huang, Ehsan Qasemi, Bangzheng Li, He Wang, Faeze Brahman, Muhao Chen, Snigdha Chaturvedi
Storytelling's captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies.
1 code implementation • 7 Jun 2023 • Nikhil Kandpal, Brian Lester, Mohammed Muqeeth, Anisha Mascarenhas, Monty Evans, Vishal Baskaran, Tenghao Huang, Haokun Liu, Colin Raffel
Currently, most machine learning models are trained by centralized teams and are rarely updated.
1 code implementation • Findings (NAACL) 2022 • Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi
In particular, we hypothesize that the order of the input concepts can affect the PTM's ability to utilize its commonsense knowledge.
2 code implementations • 11 May 2022 • Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel
ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.
Ranked #1 on Few-Shot Text Classification on RAFT
1 code implementation • Findings (ACL) 2022 • Chao Zhao, Tenghao Huang, Somnath Basu Roy Chowdhury, Muthu Kumar Chandrasekaran, Kathleen McKeown, Snigdha Chaturvedi
A common method for extractive multi-document news summarization is to re-formulate it as a single-document summarization problem by concatenating all documents as a single meta-document.
1 code implementation • Findings (EMNLP) 2021 • Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi
Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation.