1 code implementation • 19 Sep 2024 • Furkan Şahinuç, Thy Thy Tran, Yulia Grishina, Yufang Hou, Bei Chen, Iryna Gurevych
Building on this dataset, we propose three experimental settings that simulate real-world scenarios where TDM triples are fully defined, partially defined, or undefined during leaderboard construction.
1 code implementation • 4 Jul 2024 • Furkan Şahinuç, Ilia Kuznetsov, Yufang Hou, Iryna Gurevych
Large language models (LLMs) bring unprecedented flexibility in defining and executing complex, creative natural language generation (NLG) tasks.
2 code implementations • 11 Oct 2022 • Cagri Toraman, Oguzhan Ozcelik, Furkan Şahinuç, Fazli Can
The rapid dissemination of misinformation through online social networks poses a pressing threat to human health, public safety, democracy, and the economy; urgent action is therefore required to address this problem.
no code implementations • 26 Sep 2022 • Nurullah Sevim, Ege Ozan Özyedek, Furkan Şahinuç, Aykut Koç
FNet achieves performance competitive with the original Transformer encoder while accelerating training by removing the computational burden of the attention mechanism.
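The core idea behind FNet is to replace the self-attention sublayer with a parameter-free Fourier transform that mixes information across tokens. As a minimal sketch (not the authors' implementation), the mixing step applies a 2D discrete Fourier transform over the sequence and hidden dimensions and keeps only the real part:

```python
import numpy as np

def fourier_mixing(x):
    """FNet-style token mixing: a 2D DFT over the sequence and hidden
    dimensions, keeping only the real part. No learned parameters,
    which is where the training speedup over attention comes from."""
    return np.fft.fft2(x).real

# Toy input: sequence length 4, hidden size 8 (single example).
x = np.random.randn(4, 8)
mixed = fourier_mixing(x)
assert mixed.shape == x.shape  # mixing preserves the input shape
```

In the full model this mixing sublayer is followed by the usual feed-forward sublayer and residual connections, just as in a standard Transformer encoder block.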
no code implementations • 19 Apr 2022 • Cagri Toraman, Eyup Halit Yilmaz, Furkan Şahinuç, Oguzhan Ozcelik
Furthermore, we find that increasing the vocabulary size improves the performance of Morphological and Word-level tokenizers more than that of de facto tokenizers.
1 code implementation • LREC 2022 • Cagri Toraman, Furkan Şahinuç, Eyup Halit Yilmaz
The experimental results supported by statistical tests show that Transformer-based language models outperform conventional bag-of-words and neural models by at least 5% in English and 10% in Turkish for large-scale hate speech detection.
2 code implementations • 19 Jul 2018 • Lutfi Kerem Senel, Ihsan Utlu, Furkan Şahinuç, Haldun M. Ozaktas, Aykut Koç
In other words, we align words that are already known to be related along predefined concepts.