no code implementations • SEMEVAL 2021 • Abdullatif Köksal, Yusuf Yüksel, Bekir Yıldırım, Arzucan Özgür
We observe that joint learning improves the F1 scores on the SemTabFacts and TabFact test sets by 3.31% and 0.77%, respectively.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Abdullatif Köksal, Arzucan Özgür
Relation classification is one of the key tasks in information extraction; it can be used to construct knowledge bases or to provide useful information for question answering.
no code implementations • LREC 2020 • Berfu Büyüköz, Ali Hürriyetoğlu, Arzucan Özgür
This study evaluates the robustness of two state-of-the-art deep contextual language representations, ELMo and DistilBERT, on supervised learning of binary protest news classification (PC) and sentiment analysis (SA) of product reviews.
no code implementations • WS 2019 • İlknur Karadeniz, Ömer Faruk Tuna, Arzucan Özgür
Our participation includes two systems for the two subtasks of the Bacteria Biotope Task: the normalization of entities (BB-norm) and the identification of relations between entities in a given biomedical text (BB-rel).
Ranked #3 on Medical Concept Normalization on BB-norm-phenotype
no code implementations • RANLP 2019 • Atıf Emre Yüksel, Yaşar Alim Türkmen, Arzucan Özgür, Berna Altınel
Short-text classification is a challenging task, due to the sparsity and high dimensionality of the feature space.
no code implementations • WS 2019 • Utku Türk, Furkan Atmaca, Şaziye Betül Özateş, Abdullatif Köksal, Balkiz Ozturk Basaran, Tunga Gungor, Arzucan Özgür
In addition to the treebanks, we have also constructed a custom annotation software with advanced filtering and morphological editing options.
1 code implementation • CONLL 2018 • Şaziye Betül Özateş, Arzucan Özgür, Tunga Güngör, Balkız Öztürk
We propose two word representation models for agglutinative languages that better capture the similarities between words which play similar roles in sentences.
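As a rough illustration of the general idea (not the paper's actual models), the sketch below composes a word vector from root and suffix embeddings, so that words sharing inflectional suffixes land close together in vector space; the segmentations, the suffix lemmas, and the embedding table are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical embedding table over roots and suffix lemmas.
EMB = {m: rng.standard_normal(8) for m in ["ev", "kitap", "+lAr", "+DA"]}

def word_vector(morphemes: list[str]) -> np.ndarray:
    """Compose a word vector as the average of its morpheme embeddings."""
    return np.mean([EMB[m] for m in morphemes], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "evlerde" (in the houses) vs. "kitaplarda" (in the books): different
# roots, but the same plural and locative suffixes (written here as
# harmony-neutral lemmas +lAr and +DA).
v_evlerde = word_vector(["ev", "+lAr", "+DA"])
v_kitaplarda = word_vector(["kitap", "+lAr", "+DA"])
print(cosine(v_evlerde, v_kitaplarda))  # high: the shared suffixes dominate
```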
no code implementations • LREC 2016 • Şaziye Betül Özateş, Arzucan Özgür, Dragomir Radev
We introduce an approach that uses the dependency grammar representations of sentences to compute sentence similarity for extractive multi-document summarization.
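One plausible way to realize such a measure, assuming a dependency parser has already produced (head, relation, dependent) triples, is Jaccard overlap between the triple sets of two sentences. This is a generic sketch, not the paper's exact measure, and the hand-written triples below stand in for parser output.

```python
def dep_similarity(triples_a: set[tuple[str, str, str]],
                   triples_b: set[tuple[str, str, str]]) -> float:
    """Jaccard similarity over (head, relation, dependent) triples."""
    if not triples_a and not triples_b:
        return 0.0
    return len(triples_a & triples_b) / len(triples_a | triples_b)

# "The cat sat on the mat." vs. "A cat sat on a rug."
s1 = {("sat", "nsubj", "cat"), ("sat", "obl", "mat"), ("mat", "case", "on")}
s2 = {("sat", "nsubj", "cat"), ("sat", "obl", "rug"), ("rug", "case", "on")}
print(dep_similarity(s1, s2))  # 0.2 — one shared triple out of five distinct
```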
no code implementations • LREC 2016 • Arda Çelebi, Arzucan Özgür
Our approach is unsupervised in the sense that, instead of using manually segmented hashtags to train the machine learning classifiers, we create the training data automatically from tweets and from hashtag segmentations extracted from a large corpus.
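For intuition, here is a minimal, generic sketch of dictionary-driven hashtag segmentation by dynamic programming under a unigram language model; the word-frequency table is a hypothetical stand-in for counts extracted from a large corpus, and this is not the authors' classifier-based system.

```python
from math import log

# Hypothetical corpus-derived word counts.
WORD_FREQ = {"no": 500, "app": 120, "apology": 40, "ology": 5}
TOTAL = sum(WORD_FREQ.values())

def score(word: str) -> float:
    """Log-probability under a unigram model; unseen words are penalized."""
    return log(WORD_FREQ.get(word, 0.5) / TOTAL)

def segment(hashtag: str) -> list[str]:
    """Find the highest-scoring segmentation by dynamic programming.

    best[i] holds (score, segmentation) for the prefix text[:i].
    """
    text = hashtag.lstrip("#").lower()
    best: list[tuple[float, list[str]]] = [(0.0, [])]
    for i in range(1, len(text) + 1):
        candidates = (
            (best[j][0] + score(text[j:i]), best[j][1] + [text[j:i]])
            for j in range(i)
        )
        best.append(max(candidates, key=lambda c: c[0]))
    return best[-1][1]

print(segment("#noapology"))  # ['no', 'apology']
```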
no code implementations • LREC 2014 • Arda Çelebi, Arzucan Özgür
In this study, we tackle the problem of self-training a feature-rich discriminative constituency parser.
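Self-training itself follows a standard recipe: train on gold trees, parse unlabeled text, keep the confident parses, and retrain. The sketch below shows that loop; `train`, `parse`, and `confidence` are hypothetical stubs (provided only so the example runs), not the paper's parser.

```python
def train(examples):                    # stub: the "model" is its training set
    return list(examples)

def parse(model, sentence):             # stub: pretend to produce a parse tree
    return ("S", sentence)

def confidence(model, sentence, tree):  # stub: fixed reliability score
    return 0.95

def self_train(labeled, unlabeled, rounds=3, threshold=0.9):
    """Iteratively grow the training set with confident automatic parses."""
    model = train(labeled)
    for _ in range(rounds):
        # Parse the unlabeled sentences with the current model.
        auto = [(s, parse(model, s)) for s in unlabeled]
        # Keep only parses the model itself scores as reliable.
        confident = [(s, t) for s, t in auto if confidence(model, s, t) >= threshold]
        # Retrain on the gold trees plus the confident auto-parses.
        model = train(labeled + confident)
    return model

model = self_train([("a sentence", ("S", "a sentence"))], ["another sentence"])
print(len(model))  # gold examples plus confident auto-parses
```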