no code implementations • 3 Apr 2024 • Ashima Suvarna, Harshita Khandelwal, Nanyun Peng
To this end, we present PhonologyBench, a novel benchmark consisting of three diagnostic tasks designed to explicitly test the phonological skills of LLMs in English: grapheme-to-phoneme conversion, syllable counting, and rhyme word generation.
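One of the three tasks, syllable counting, is often approximated in software with a vowel-group heuristic. The sketch below is a hypothetical illustration of that baseline approach, not PhonologyBench's evaluation code; the benchmark tests whether LLMs can do this phonologically, which a spelling-based heuristic cannot do reliably:

```python
import re

def count_syllables(word: str) -> int:
    """Naive spelling-based heuristic: count runs of consecutive
    vowel letters, subtracting one for a trailing silent 'e'.
    English orthography makes this approximation error-prone,
    which is exactly why syllable counting is a useful probe."""
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    # A final 'e' is usually silent (e.g. "phoneme"), except in "-le" endings.
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)
```

For example, `count_syllables("syllable")` returns 3 and `count_syllables("phoneme")` returns 2, but the heuristic fails on many irregular spellings, motivating pronunciation-aware evaluation.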
no code implementations • 1 Apr 2024 • Yixin Wan, Arjun Subramonian, Anaelia Ovalle, Zongyu Lin, Ashima Suvarna, Christina Chance, Hritik Bansal, Rebecca Pattichis, Kai-Wei Chang
In this survey, we review prior studies on dimensions of bias: Gender, Skintone, and Geo-Culture.
1 code implementation • 31 Mar 2024 • Hritik Bansal, Ashima Suvarna, Gantavya Bhatt, Nanyun Peng, Kai-Wei Chang, Aditya Grover
A common technique for aligning large language models (LLMs) relies on acquiring human preferences by comparing multiple generations conditioned on a fixed context.
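Pairwise comparisons like these are commonly scored with a Bradley-Terry preference model when training a reward model for alignment. The following is a minimal illustrative sketch of that standard objective, not the paper's implementation:

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood that the preferred (chosen) response
    beats the rejected one, given scalar reward scores for each.
    Minimizing this loss pushes the reward margin r_chosen - r_rejected up."""
    margin = r_chosen - r_rejected
    # sigmoid(margin) = P(chosen preferred over rejected) under Bradley-Terry
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With equal rewards the loss is log 2 (the model is indifferent), and it shrinks toward zero as the chosen response's reward pulls ahead.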
no code implementations • 5 Mar 2024 • Zefan Cai, Po-Nien Kung, Ashima Suvarna, Mingyu Derek Ma, Hritik Bansal, Baobao Chang, P. Jeffrey Brantingham, Wei Wang, Nanyun Peng
We hypothesize that a diverse set of event types and definitions is the key for models to learn to follow event definitions, whereas existing event extraction datasets focus on annotating many high-quality examples for a few event types.
no code implementations • ACL 2020 • Ashima Suvarna, Grusha Bhalla
The recent surge in online forums and movements supporting sexual assault survivors has led to the emergence of a 'virtual bubble' where survivors can recount their stories.
Tasks: Cultural Vocal Bursts Intensity Prediction, Transfer Learning
no code implementations • LREC 2020 • Jeremie Boudreau, Akankshya Patra, Ashima Suvarna, Paul Cook
In this paper we consider a range of n-gram and RNN language models for Mi'kmaq.
no code implementations • 25 Jul 2018 • Nishtha Madaan, Sameep Mehta, Shravika Mittal, Ashima Suvarna
The presence of gender stereotypes in many aspects of society is a well-known phenomenon.