no code implementations • 16 Jan 2024 • Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, Jacob Andreas
Given a collection of seed documents, deductive closure training (DCT) prompts LMs to generate additional text implied by these documents, reason globally about the correctness of this generated text, and finally fine-tune on the text inferred to be correct.
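The loop described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `generate_implied`, `score_consistency`, and `fine_tune` are hypothetical callables standing in for the LM prompting, global consistency reasoning, and training steps, and the threshold-based acceptance rule is an assumption.

```python
def dct_round(seed_docs, generate_implied, score_consistency,
              fine_tune, threshold=0.5):
    """One round: generate text implied by seed documents, keep the
    statements judged globally consistent, fine-tune on the accepted set."""
    # Step 1: prompt the model for statements implied by each seed document.
    candidates = []
    for doc in seed_docs:
        candidates.extend(generate_implied(doc))
    # Step 2: reason globally -- score each candidate against the seeds
    # plus all other generated candidates, not each seed in isolation.
    context = seed_docs + candidates
    accepted = [c for c in candidates
                if score_consistency(c, context) >= threshold]
    # Step 3: fine-tune only on the text inferred to be correct.
    fine_tune(accepted)
    return accepted
```

With stub functions in place of the LM, a single round reduces to generate, filter, train on the survivors.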
1 code implementation • 27 Nov 2023 • Afra Feyza Akyürek, Eric Pan, Garry Kuwanto, Derry Wijaya
In this study, we broaden the scope of the model-editing problem to include an array of editing cases, such as debiasing and rectifying reasoning errors, and we define an edit as any natural language expression that solicits a change in the model's outputs.
1 code implementation • 15 May 2023 • Afra Feyza Akyürek, Ekin Akyürek, Aman Madaan, Ashwin Kalyan, Peter Clark, Derry Wijaya, Niket Tandon
Despite their unprecedented success, even the largest language models make mistakes.
1 code implementation • Findings (NAACL) 2022 • Afra Feyza Akyürek, Sejin Paik, Muhammed Yusuf Kocyigit, Seda Akbiyik, Şerife Leman Runyun, Derry Wijaya
Large language models trained on a mixture of NLP tasks, converted into a text-to-text format using prompts, can generalize to novel forms of language and handle novel tasks.
1 code implementation • NAACL (GeBNLP) 2022 • Afra Feyza Akyürek, Muhammed Yusuf Kocyigit, Sejin Paik, Derry Wijaya
Researchers have devised numerous ways to quantify the social biases embedded in pretrained language models.
1 code implementation • ICLR 2022 • Afra Feyza Akyürek, Ekin Akyürek, Derry Tanti Wijaya, Jacob Andreas
The key to this approach is a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes.
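One member of this family can be sketched numerically. The sketch below is an assumption-laden illustration of the idea, not the paper's code: it penalizes the squared distance of a new class's weight vector from the subspace spanned by the existing class weights, and the function name, the SVD-based projection, and the coefficient `lam` are all illustrative choices.

```python
import numpy as np

def subspace_penalty(W_old, w_new, lam=0.1):
    """Penalty encouraging a new class weight vector `w_new` (shape (d,))
    to lie near the subspace spanned by the existing class weight
    vectors, the rows of `W_old` (shape (k, d))."""
    # Orthonormal basis for span{rows of W_old}: the columns of U from
    # the thin SVD of W_old^T (shape d x k).
    U, _, _ = np.linalg.svd(W_old.T, full_matrices=False)
    # Project w_new onto the subspace and penalize the residual.
    proj = U @ (U.T @ w_new)
    residual = w_new - proj
    return lam * float(residual @ residual)
```

A vector already in the span incurs (numerically) zero penalty, while a vector orthogonal to every existing class weight pays the full regularization cost, which is exactly the pressure toward the existing subspace that the scheme describes.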
no code implementations • 24 Mar 2021 • Garry Kuwanto, Afra Feyza Akyürek, Isidora Chara Tourni, Siyang Li, Alexander Gregory Jones, Derry Wijaya
We conduct an empirical study of neural machine translation (NMT) for truly low-resource languages and propose a training curriculum suited to settings where both parallel training data and compute resources are scarce, reflecting the reality of most of the world's languages and of the researchers working on them.
1 code implementation • ICLR 2021 • Ekin Akyürek, Afra Feyza Akyürek, Jacob Andreas
Flexible neural sequence models outperform grammar- and automaton-based counterparts on a variety of tasks.