Search Results for author: Gözde Gül Şahin

Found 15 papers, 12 papers with code

GECTurk: Grammatical Error Correction and Detection Dataset for Turkish

1 code implementation • 20 Sep 2023 • Atakan Kara, Farrin Marouf Sofian, Andrew Bond, Gözde Gül Şahin

To encourage further research on Turkish GEC, we release our datasets, baseline models, and the synthetic data generation pipeline at https://github.com/GGLAB-KU/gecturk.

Grammatical Error Detection · Machine Translation +2

Benchmarking Procedural Language Understanding for Low-Resource Languages: A Case Study on Turkish

1 code implementation • 13 Sep 2023 • Arda Uzunoğlu, Gözde Gül Şahin

To tackle these tasks, we implement strong baseline models via fine-tuning large language-specific models such as TR-BART and BERTurk, as well as multilingual models such as mBART, mT5, and XLM.

Benchmarking · Translation

Metric-Based In-context Learning: A Case Study in Text Simplification

1 code implementation • 27 Jul 2023 • Subha Vadlamannati, Gözde Gül Şahin

However, determining the best method to select examples for ICL is nontrivial as the results can vary greatly depending on the quality, quantity, and order of examples used.

Text Simplification

Transformers on Multilingual Clause-Level Morphology

1 code implementation • 3 Nov 2022 • Emre Can Acikgoz, Tilek Chubakov, Müge Kural, Gözde Gül Şahin, Deniz Yuret

While transformer architectures with data augmentation achieved the most promising results for the inflection and reinflection tasks, prefix-tuning on mGPT achieved the highest results for the analysis task.

Data Augmentation · Language Modelling +3

UKP-SQUARE: An Online Platform for Question Answering Research

1 code implementation • ACL 2022 • Tim Baumgärtner, Kexin Wang, Rachneet Sachdeva, Max Eichler, Gregor Geigle, Clifton Poth, Hannah Sterz, Haritz Puerto, Leonardo F. R. Ribeiro, Jonas Pfeiffer, Nils Reimers, Gözde Gül Şahin, Iryna Gurevych

Recent advances in NLP and information retrieval have given rise to a diverse set of question answering tasks that are of different formats (e.g., extractive, abstractive), require different model architectures (e.g., generative, discriminative), and setups (e.g., with or without retrieval).

Explainable Models · Information Retrieval +2

MetaQA: Combining Expert Agents for Multi-Skill Question Answering

1 code implementation • 3 Dec 2021 • Haritz Puerto, Gözde Gül Şahin, Iryna Gurevych

The recent explosion of question answering (QA) datasets and models has increased the interest in the generalization of models across multiple domains and formats by either training on multiple datasets or by combining multiple models.

Question Answering

To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource NLP

no code implementations • CL (ACL) 2022 • Gözde Gül Şahin

Although NLP has recently witnessed a wealth of textual augmentation techniques, the field still lacks a systematic performance analysis on a diverse set of languages and sequence tagging tasks.

Dependency Parsing · Part-Of-Speech Tagging +2

PuzzLing Machines: A Challenge on Learning From Small Data

no code implementations • ACL 2020 • Gözde Gül Şahin, Yova Kementchedjhieva, Phillip Rust, Iryna Gurevych

To expose this problem in a new light, we introduce a challenge on learning from small data, PuzzLing Machines, which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students.

Small Data Image Classification

Two Birds with One Stone: Investigating Invertible Neural Networks for Inverse Problems in Morphology

no code implementations • 11 Dec 2019 • Gözde Gül Şahin, Iryna Gurevych

We show that they are able to recover the morphological input parameters, i.e., predicting the lemma (e.g., cat) or the morphological tags (e.g., Plural) when run in the reverse direction, without any significant performance drop in the forward direction, i.e., predicting the surface form (e.g., cats).


Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems

1 code implementation • NAACL 2019 • Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych

Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"), among other scenarios.

Adversarial Attack

Character-Level Models versus Morphology in Semantic Role Labeling

1 code implementation • ACL 2018 • Gözde Gül Şahin, Mark Steedman

Character-level models have become a popular approach, especially for their accessibility and ability to handle unseen data.

Semantic Role Labeling
