Search Results for author: Deniz Yuret

Found 23 papers, 8 papers with code

Language Controls More Than Top-Down Attention: Modulating Bottom-Up Visual Processing with Referring Expressions

no code implementations 1 Jan 2021 Ozan Arkan Can, Ilker Kesen, Deniz Yuret

How to best integrate linguistic and perceptual processing in multimodal tasks is an important open problem.

KUISAIL at SemEval-2020 Task 12: BERT-CNN for Offensive Speech Identification in Social Media

no code implementations SEMEVAL 2020 Ali Safaya, Moutasem Abdullatif, Deniz Yuret

In this paper, we describe our approach of combining pre-trained BERT models with Convolutional Neural Networks for sub-task A of the Multilingual Offensive Language Identification shared task (OffensEval 2020), which is part of SemEval 2020.

Abuse Detection · Language Identification
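The entry above pairs a pre-trained BERT encoder with a convolutional classification head. Below is a minimal sketch of that kind of BERT-CNN setup, assuming PyTorch and the Hugging Face transformers library; the class name, model checkpoint, filter sizes, and classifier layout are illustrative assumptions, not the paper's released configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNNClassifier(nn.Module):
    """Hypothetical BERT-CNN head: convolutions over BERT's contextual token
    embeddings, max-pooled into a single offensive/not-offensive prediction."""

    def __init__(self, model_name="bert-base-multilingual-cased",
                 num_filters=100, kernel_sizes=(2, 3, 4), num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes]
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) contextual token embeddings from BERT
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = states.transpose(1, 2)  # (batch, hidden, seq_len) for Conv1d
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertCNNClassifier()
batch = tokenizer(["example tweet"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

In this sketch the convolution filters scan the contextual token embeddings for local n-gram patterns, and max-pooling collapses them into a fixed-size vector for the final classifier.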

KU_ai at MEDIQA 2019: Domain-specific Pre-training and Transfer Learning for Medical NLI

no code implementations WS 2019 Cemil Cengiz, Ulaş Sert, Deniz Yuret

In this paper, we describe our system and results submitted for the Natural Language Inference (NLI) track of the MEDIQA 2019 Shared Task.

De-identification · Language Modelling · +2

Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations

no code implementations WS 2019 Ozan Arkan Can, Pedro Zuidberg Dos Martires, Andreas Persson, Julian Gaal, Amy Loutfi, Luc De Raedt, Deniz Yuret, Alessandro Saffiotti

Therefore, we further propose Bayesian learning to resolve such inconsistencies between the natural language grounding and a robot's world representation by exploiting spatio-relational information that is implicitly present in instructions given by a human.

Human robot interaction

SParse: Koç University Graph-Based Parsing System for the CoNLL 2018 Shared Task

no code implementations CONLL 2018 Berkay Önder, Can Gümeli, Deniz Yuret

We present SParse, our Graph-Based Parsing model submitted for the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies (Zeman et al., 2018).

Dependency Parsing · Language Modelling · +2

A new dataset and model for learning to understand navigational instructions

1 code implementation 21 May 2018 Ozan Arkan Can, Deniz Yuret

Our goal is to develop a model that can learn to follow new instructions given prior instruction-perception-action examples.

Grounded language learning

Morphological analysis using a sequence decoder

2 code implementations TACL 2019 Ekin Akyürek, Erenay Dayanik, Deniz Yuret

Our Morse implementation and the TrMor2018 dataset are available online to support future research: see https://github.com/ai-ku/Morse.jl for a Morse implementation in Julia/Knet (Yuret, 2016) and https://github.com/ai-ku/TrMor2018 for the new Turkish dataset.

Morphological Analysis · Transfer Learning

Parsing with Context Embeddings

no code implementations CONLL 2017 Ömer Kırnap, Berkay Furkan Önder, Deniz Yuret

We introduce context embeddings, dense vectors derived from a language model that represent the left/right context of a word instance, and demonstrate that context embeddings significantly improve the accuracy of our transition-based parser.

Language Modelling · Word Embeddings · +1
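To make the idea in this entry concrete, the rough sketch below derives a left-context and a right-context vector for every word from two unidirectional LSTMs and concatenates them into per-token features that could feed a parser. The layer sizes, zero-padding scheme, and use of PyTorch are assumptions for illustration, not the authors' actual language model.

```python
import torch
import torch.nn as nn

class ContextEmbedder(nn.Module):
    """Illustrative context embeddings: the forward LSTM state just before a
    word encodes its left context; the backward LSTM state just after the word
    encodes its right context."""

    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.fwd = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bwd = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):                        # (batch, seq_len)
        emb = self.embed(token_ids)
        left, _ = self.fwd(emb)                          # state i covers tokens 0..i
        right, _ = self.bwd(emb.flip(1))                 # run right-to-left
        right = right.flip(1)                            # state i covers tokens i..n-1
        # Left context of word i = forward state at i-1; right context = backward state at i+1.
        pad = torch.zeros_like(left[:, :1])
        left_ctx = torch.cat([pad, left[:, :-1]], dim=1)
        right_ctx = torch.cat([right[:, 1:], pad], dim=1)
        return torch.cat([left_ctx, right_ctx], dim=-1)  # (batch, seq_len, 2*hidden_dim)

embedder = ContextEmbedder(vocab_size=10000)
features = embedder(torch.randint(0, 10000, (2, 7)))    # per-token parser features
```

In this sketch the concatenated vector summarizes everything to the left and to the right of a word while excluding the word itself, so it can stand in for the word's context in a parser's feature set.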

Transfer Learning for Low-Resource Neural Machine Translation

1 code implementation EMNLP 2016 Barret Zoph, Deniz Yuret, Jonathan May, Kevin Knight

Ensembling and unknown word replacement add another 2 BLEU points, bringing NMT performance on low-resource machine translation close to that of a strong syntax-based machine translation (SBMT) system and exceeding it on one language pair.

Low-Resource Neural Machine Translation · Transfer Learning

Substitute Based SCODE Word Embeddings in Supervised NLP Tasks

1 code implementation 25 Jul 2014 Volkan Cirik, Deniz Yuret

The results show that the proposed method performs as well as or better than the other word embeddings on the tasks we investigate.

Chunking · Dependency Parsing · +3

FASTSUBS: An Efficient and Exact Procedure for Finding the Most Likely Lexical Substitutes Based on an N-gram Language Model

1 code implementation 24 May 2012 Deniz Yuret

Lexical substitutes have found use in areas such as paraphrasing, text simplification, machine translation, word sense disambiguation, and part-of-speech induction.

Language Modelling · Machine Translation · +2
