no code implementations • 26 Mar 2025 • Raj Sanjay Shah, Lei Xu, Qianchu Liu, Jon Burnsky, Drew Bertagnolli, Chaitanya Shivade
To address this gap, we collaborated with licensed therapists to design a comprehensive rubric for evaluating therapy notes across key dimensions: completeness, conciseness, and faithfulness.
no code implementations • 15 Feb 2025 • Lucas Charpentier, Leshem Choshen, Ryan Cotterell, Mustafa Omer Gul, Michael Hu, Jaap Jumelet, Tal Linzen, Jing Liu, Aaron Mueller, Candace Ross, Raj Sanjay Shah, Alex Warstadt, Ethan Wilcox, Adina Williams
We also call for papers outside the competition in any relevant areas.
no code implementations • 22 Jan 2025 • Raj Sanjay Shah, Sashank Varma
Many studies have evaluated the cognitive alignment of Pre-trained Language Models (PLMs), i.e., their correspondence to adult performance across a range of cognitive domains.
no code implementations • 31 Oct 2024 • Grace Guo, Jenna Jiayi Kang, Raj Sanjay Shah, Hanspeter Pfister, Sashank Varma
Vision Language Models (VLMs) have been successful at many chart comprehension tasks that require attending to both the images of charts and their accompanying textual descriptions.
no code implementations • 1 Jul 2024 • Raj Sanjay Shah, Khushi Bhardwaj, Sashank Varma
The increasing cognitive alignment of these models has made them candidates for cognitive science theories.
1 code implementation • 24 Jun 2024 • Jiangshu Du, Yibo Wang, Wenting Zhao, Zhongfen Deng, Shuaiqi Liu, Renze Lou, Henry Peng Zou, Pranav Narayanan Venkit, Nan Zhang, Mukund Srinath, Haoran Ranran Zhang, Vipul Gupta, Yinghui Li, Tao Li, Fei Wang, Qin Liu, Tianlin Liu, Pengzhi Gao, Congying Xia, Chen Xing, Jiayang Cheng, Zhaowei Wang, Ying Su, Raj Sanjay Shah, Ruohao Guo, Jing Gu, Haoran Li, Kangda Wei, ZiHao Wang, Lu Cheng, Surangika Ranathunga, Meng Fang, Jie Fu, Fei Liu, Ruihong Huang, Eduardo Blanco, Yixin Cao, Rui Zhang, Philip S. Yu, Wenpeng Yin
This study focuses on the topic of LLMs assisting NLP researchers, particularly examining how effectively LLMs can assist with paper (meta-)reviewing and how recognizable LLM-generated reviews are.
no code implementations • 17 Jun 2024 • Harsh Nishant Lalai, Aashish Anantha Ramakrishnan, Raj Sanjay Shah, Dongwon Lee
With the rapid growth of Large Language Models (LLMs), safeguarding textual content against unauthorized use is crucial.
no code implementations • 25 May 2024 • Siddhartha K. Vemuri, Raj Sanjay Shah, Sashank Varma
How well do representations learned by ML models align with those of humans?
no code implementations • 25 May 2024 • Andrew Li, Xianle Feng, Siddhant Narang, Austin Peng, Tianle Cai, Raj Sanjay Shah, Sashank Varma
The overall goal is to evaluate whether humans and LLMs are aligned in their processing of garden-path sentences and in their lingering misinterpretations past the point of disambiguation, especially when extra-syntactic information (e.g., a comma delimiting a clause boundary) is present to guide processing.
no code implementations • 21 Mar 2024 • Alicja Chaszczewicz, Raj Sanjay Shah, Ryan Louie, Bruce A. Arnow, Robert Kraut, Diyi Yang
We further design a self-improvement method on top of large language models to enhance the automatic generation of feedback.
no code implementations • 18 Jan 2024 • Atith Gandhi, Raj Sanjay Shah, Vijay Marupudi, Sashank Varma
The benefits of this environment include its simplicity, rehearsal that is agnostic to both tasks and models, and no need for extra neural circuitry.
no code implementations • 8 Nov 2023 • Khushi Bhardwaj, Raj Sanjay Shah, Sashank Varma
Pre-trained Large Language Models (LLMs) have shown success in a diverse set of language inference and understanding tasks.
no code implementations • 18 May 2023 • Raj Sanjay Shah, Vijay Marupudi, Reba Koenen, Khushi Bhardwaj, Sashank Varma
This research shows the utility of understanding LLMs using behavioral benchmarks and points the way to future work on the number representations of LLMs and their cognitive plausibility.
no code implementations • 15 May 2023 • Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, Zahra Ashktorab, Casey Dugan, Werner Geyer, Diyi Yang
Millions of users come to online peer counseling platforms to seek support.
no code implementations • 9 Nov 2022 • Raj Sanjay Shah, Faye Holt, Shirley Anugrah Hayati, Aastha Agarwal, Yi-Chia Wang, Robert E. Kraut, Diyi Yang
This work provides a deeper understanding of the use of motivational interviewing techniques on peer-to-peer counseling platforms and sheds light on how to build better training programs for volunteer counselors on online platforms.
1 code implementation • 31 Oct 2022 • Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang
To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain.