1 code implementation • 1 Jul 2024 • Yiyuan Li, Shichao Sun, PengFei Liu
Fuzzy reasoning is vital due to the frequent use of imprecise information in daily contexts.
1 code implementation • 18 Jun 2024 • Zhen Huang, Zengzhi Wang, Shijie Xia, Xuefeng Li, Haoyang Zou, Ruijie Xu, Run-Ze Fan, Lyumanshan Ye, Ethan Chern, Yixin Ye, Yikai Zhang, Yuqing Yang, Ting Wu, Binjie Wang, Shichao Sun, Yang Xiao, Yiyuan Li, Fan Zhou, Steffi Chern, Yiwei Qin, Yan Ma, Jiadi Su, Yixiu Liu, Yuxiang Zheng, Shaoting Zhang, Dahua Lin, Yu Qiao, PengFei Liu
We delve into the models' cognitive reasoning abilities, their performance across different modalities, and their outcomes in process-level evaluations, which are vital for tasks requiring complex reasoning with lengthy solutions.
1 code implementation • 5 Feb 2024 • Can Jin, Tong Che, Hongwu Peng, Yiyuan Li, Dimitris N. Metaxas, Marco Pavone
The student learners are trained by the main model and, in turn, provide feedback to help the main model capture more generalizable and imitable correlations.
1 code implementation • 8 Nov 2023 • Yiyuan Li, Rakesh R. Menon, Sayan Ghosh, Shashank Srivastava
Generalized quantifiers (e.g., few, most) are used to indicate the proportion to which a predicate is satisfied (for example, "some apples are red").
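The proportion-based view of quantifiers can be illustrated with a minimal sketch. The thresholds and function below are hypothetical, chosen only for illustration, and are not taken from the paper:

```python
def quantifier_for(items, predicate):
    """Map the fraction of items satisfying `predicate` to a quantifier word.
    Threshold boundaries here are illustrative, not from the paper."""
    if not items:
        return "none"
    frac = sum(1 for x in items if predicate(x)) / len(items)
    if frac == 0.0:
        return "none"
    if frac < 0.25:
        return "few"
    if frac < 0.5:
        return "some"
    if frac < 1.0:
        return "most"
    return "all"

apples = ["red", "red", "green", "red"]
print(quantifier_for(apples, lambda c: c == "red"))  # 3/4 are red -> "most"
```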
no code implementations • 14 Nov 2022 • Yiyuan Li, Tong Che, Yezhen Wang, Zhengbao Jiang, Caiming Xiong, Snigdha Chaturvedi
In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for factual probing in PLMs that leverages the symmetry of the task by constructing symmetrical prompts for subject and object prediction.
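The symmetry idea can be sketched with paired fill-in-the-blank prompts that probe the same fact from both directions. This is a hypothetical text-template illustration only; the actual SPE method learns continuous (soft) prompts rather than string templates:

```python
def symmetrical_prompts(subject, relation_template, obj):
    """Build a (subject-prediction, object-prediction) prompt pair from a
    relation template with {subj} and {obj} placeholders. Illustrative only."""
    object_prompt = relation_template.format(subj=subject, obj="[MASK]")
    subject_prompt = relation_template.format(subj="[MASK]", obj=obj)
    return subject_prompt, object_prompt

template = "{subj} is the capital of {obj}."
subj_p, obj_p = symmetrical_prompts("Paris", template, "France")
print(obj_p)   # probe the object:  "Paris is the capital of [MASK]."
print(subj_p)  # probe the subject: "[MASK] is the capital of France."
```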
1 code implementation • EMNLP 2021 • Somnath Basu Roy Chowdhury, Sayan Ghosh, Yiyuan Li, Junier B. Oliva, Shashank Srivastava, Snigdha Chaturvedi
Contextual representations learned by language models can often encode undesirable attributes, like demographic associations of the users, while being trained for an unrelated target task.
no code implementations • 20 Oct 2020 • Yiyuan Li, Antonios Anastasopoulos, Alan W Black
In this work, we design a knowledge-base and prediction model embedded system for spelling correction in low-resource languages.
no code implementations • LREC 2020 • Graham Neubig, Shruti Rijhwani, Alexis Palmer, Jordan MacKenzie, Hilaria Cruz, Xinjian Li, Matthew Lee, Aditi Chaudhary, Luke Gessler, Steven Abney, Shirley Anugrah Hayati, Antonios Anastasopoulos, Olga Zamaraeva, Emily Prud'hommeaux, Jennette Child, Sara Child, Rebecca Knowles, Sarah Moeller, Jeffrey Micher, Yiyuan Li, Sydney Zink, Mengzhou Xia, Roshan S Sharma, Patrick Littell
Despite recent advances in natural language processing and other language technology, the application of such technology to language documentation and conservation has been limited.
no code implementations • 10 Jan 2020 • Yiyuan Li, Antonios Anastasopoulos, Alan W. Black
Current grammatical error correction (GEC) models typically treat the task as sequence generation, which requires large amounts of annotated data and limits their applicability in data-limited settings.