Search Results for author: Afra Feyza Akyürek

Found 9 papers, 7 papers with code

Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability

no code implementations • 16 Jan 2024 • Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, Jacob Andreas

Given a collection of seed documents, DCT prompts LMs to generate additional text implied by these documents, reason globally about the correctness of this generated text, and finally fine-tune on text inferred to be correct.

Fact Verification • Text Generation
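The abstract above describes a three-step loop: generate text implied by seed documents, reason about its correctness, and fine-tune on what survives. A minimal sketch of that loop is below; the function names, the `ToyLM` stand-in, and the threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the DCT data-construction loop; all names here
# (generate_implications, score_consistency, ToyLM) are illustrative
# stand-ins, not the paper's actual code.

def generate_implications(lm, seed_docs, n_per_doc=2):
    """Step 1: prompt the LM for additional statements implied by each seed."""
    return [imp for doc in seed_docs for imp in lm.imply(doc, n_per_doc)]

def score_consistency(lm, statements):
    """Step 2: reason about correctness -- here reduced to a per-statement
    truthfulness score from the same model (a simplification)."""
    return {s: lm.p_true(s) for s in statements}

def deductive_closure_training_set(lm, seed_docs, threshold=0.5):
    """Step 3: keep seeds plus generated statements inferred to be correct;
    these would then be used for fine-tuning."""
    candidates = seed_docs + generate_implications(lm, seed_docs)
    scores = score_consistency(lm, candidates)
    return [s for s in candidates if scores[s] >= threshold]

class ToyLM:
    """Stand-in language model, for illustration only."""
    def imply(self, doc, n):
        return [f"{doc} (implication {i})" for i in range(n)]
    def p_true(self, statement):
        return 0.9 if "implication" not in statement else 0.6

train_set = deductive_closure_training_set(ToyLM(), ["Paris is in France."])
print(len(train_set))  # seeds plus generated implications above threshold
```

In the real method the correctness step is global (statements are checked jointly for coherence), whereas this sketch scores each statement independently.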

DUnE: Dataset for Unified Editing

1 code implementation • 27 Nov 2023 • Afra Feyza Akyürek, Eric Pan, Garry Kuwanto, Derry Wijaya

In this study, we broaden the scope of the editing problem to include an array of editing cases, such as debiasing and rectifying reasoning errors, and define an edit as any natural language expression that solicits a change in the model's outputs.

Language Modelling • Model Editing

On Measuring Social Biases in Prompt-Based Multi-Task Learning

1 code implementation • Findings (NAACL) 2022 • Afra Feyza Akyürek, Sejin Paik, Muhammed Yusuf Kocyigit, Seda Akbiyik, Şerife Leman Runyun, Derry Wijaya

Large language models trained on a mixture of NLP tasks converted into a text-to-text format using prompts can generalize to novel forms of language and handle novel tasks.

Language Modelling • Multi-Task Learning +3

Subspace Regularizers for Few-Shot Class Incremental Learning

1 code implementation • ICLR 2022 • Afra Feyza Akyürek, Ekin Akyürek, Derry Tanti Wijaya, Jacob Andreas

The key to this approach is a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes.

Few-Shot Class-Incremental Learning • Image Classification +2
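The sentence above describes a penalty that keeps new-class weight vectors close to the subspace spanned by existing class weights. A minimal numeric sketch of one such penalty is below; the projection-based form, the rank tolerance, and the function name are assumptions made for illustration, not the paper's exact regularizer.

```python
import numpy as np

def subspace_penalty(W_old, w_new):
    """Hypothetical subspace regularizer sketch: squared distance from w_new
    to span(rows of W_old), i.e. ||w - Pw||^2 where P is the orthogonal
    projector onto the subspace spanned by the old-class weight vectors."""
    # Orthonormal basis for the row space of W_old via SVD.
    _, s, vt = np.linalg.svd(W_old, full_matrices=False)
    basis = vt[s > 1e-10]              # rows: orthonormal basis vectors
    proj = basis.T @ (basis @ w_new)   # projection of w_new onto the span
    return float(np.sum((w_new - proj) ** 2))

# Old classes span the xy-plane; a new weight vector with a z-component
# is penalized by exactly that out-of-subspace mass.
W_old = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
print(subspace_penalty(W_old, np.array([1.0, 1.0, 1.0])))  # 1.0
print(subspace_penalty(W_old, np.array([2.0, 3.0, 0.0])))  # 0.0
```

In training, a term like `lam * subspace_penalty(W_old, w_new)` would be added to the loss for each new class, pulling its weights toward the old classes' span while the task loss pulls them to fit the new data.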

Low-Resource Machine Translation Training Curriculum Fit for Low-Resource Languages

no code implementations • 24 Mar 2021 • Garry Kuwanto, Afra Feyza Akyürek, Isidora Chara Tourni, Siyang Li, Alexander Gregory Jones, Derry Wijaya

We conduct an empirical study of neural machine translation (NMT) for truly low-resource languages, and propose a training curriculum fit for cases when both parallel training data and compute resources are lacking, reflecting the reality of most of the world's languages and of the researchers working on them.

Cross-Lingual Bitext Mining • Language Modelling +3
