1 code implementation • 16 Sep 2023 • Rajasekhar Reddy Mekala, Yasaman Razeghi, Sameer Singh
On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks.
no code implementations • 21 Jul 2023 • Kolby Nottingham, Yasaman Razeghi, KyungMin Kim, JB Lanier, Pierre Baldi, Roy Fox, Sameer Singh
Large language models (LLMs) are being applied as actors for sequential decision-making tasks in domains such as robotics and games, utilizing their general world knowledge and planning abilities.
no code implementations • 23 Mar 2022 • Henrique Santos, Ke Shen, Alice M. Mulvehill, Yasaman Razeghi, Deborah L. McGuinness, Mayank Kejriwal
Preliminary results suggest that the benchmark is challenging even for advanced language representation models designed for discriminative CSR question answering tasks.
no code implementations • 15 Feb 2022 • Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, Sameer Singh
Pretrained Language Models (LMs) have demonstrated the ability to perform numerical reasoning by extrapolating from a few examples in few-shot settings.
3 code implementations • EMNLP 2020 • Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining.