1 code implementation • 23 Oct 2023 • Tenghao Huang, Ehsan Qasemi, Bangzheng Li, He Wang, Faeze Brahman, Muhao Chen, Snigdha Chaturvedi
Storytelling's captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies.
1 code implementation • 7 Jun 2023 • Nikhil Kandpal, Brian Lester, Mohammed Muqeeth, Anisha Mascarenhas, Monty Evans, Vishal Baskaran, Tenghao Huang, Haokun Liu, Colin Raffel
Currently, most machine learning models are trained by centralized teams and are rarely updated.
1 code implementation • Findings (NAACL) 2022 • Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi
In particular, we hypothesize that the order of the input concepts can affect the PTM's ability to utilize its commonsense knowledge.
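The hypothesis above can be sketched minimally: each permutation of a concept set yields a different prompt, and the model's generations may vary in plausibility across orderings. The concept set and prompt format here are hypothetical illustrations, not the paper's actual inputs.

```python
from itertools import permutations

# Hypothetical CommonGen-style concept set; the actual inputs used in
# the paper may differ.
concepts = ["dog", "frisbee", "catch"]

# Each ordering produces a distinct prompt for the pre-trained model;
# the hypothesis is that some orders elicit commonsense knowledge better.
prompts = [" ".join(order) for order in permutations(concepts)]
print(len(prompts))  # 6 orderings of 3 concepts
```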
2 code implementations • 11 May 2022 • Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel
ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.
Ranked #1 on Few-Shot Text Classification on RAFT
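The cost argument can be made concrete with a back-of-the-envelope sketch: with in-context learning, every one of the k training examples is re-encoded in the prompt on every prediction, whereas a fine-tuned model encodes only the query. The token counts below are illustrative assumptions, not measurements from the paper.

```python
# Rough per-prediction cost comparison (token counts are assumptions).
def icl_tokens_per_prediction(n_examples, tokens_per_example, query_tokens):
    # ICL prepends all k shots to the prompt on every single call.
    return n_examples * tokens_per_example + query_tokens

def finetuned_tokens_per_prediction(query_tokens):
    # A fine-tuned model processes only the query itself.
    return query_tokens

icl = icl_tokens_per_prediction(n_examples=32, tokens_per_example=100,
                                query_tokens=50)
ft = finetuned_tokens_per_prediction(query_tokens=50)
print(icl, ft)  # 3250 50 -- ICL processes 65x more tokens per prediction
```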
1 code implementation • Findings (ACL) 2022 • Chao Zhao, Tenghao Huang, Somnath Basu Roy Chowdhury, Muthu Kumar Chandrasekaran, Kathleen McKeown, Snigdha Chaturvedi
A common method for extractive multi-document news summarization is to re-formulate it as a single-document summarization problem by concatenating all documents as a single meta-document.
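The re-formulation described above can be sketched in a few lines: all source documents are concatenated into one meta-document, which is then handed to a single-document summarizer. The separator and document text are hypothetical; the paper's actual concatenation details may differ.

```python
def build_meta_document(documents, separator="\n\n"):
    # Join all source documents into one meta-document so a
    # single-document extractive summarizer can be applied directly.
    # The separator choice here is an assumption.
    return separator.join(doc.strip() for doc in documents)

docs = [
    "First article describing the event.",
    "Second article covering the same event.",
]
meta = build_meta_document(docs)
print(meta.count("\n\n") + 1)  # 2 documents joined
```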
1 code implementation • Findings (EMNLP) 2021 • Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi
Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation.