no code implementations • 10 Feb 2023 • Marco Farina, Duccio Pappadopulo, Anant Gupta, Leslie Huang, Ozan İrsoy, Thamar Solorio
Driven by encouraging results on a wide range of tasks, the field of NLP is experiencing an accelerated race to develop bigger language models.
no code implementations • 25 Jan 2023 • Adrian Benton, Tianze Shi, Ozan İrsoy, Igor Malioutov
English news headlines form a register with unique syntactic properties that have been documented in linguistics literature since the 1930s.
no code implementations • 1 Oct 2022 • Ozan İrsoy, Ethem Alpaydin
Explainability is becoming an increasingly important topic for deep neural networks.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Duccio Pappadopulo, Lisa Bauer, Marco Farina, Ozan İrsoy, Mohit Bansal
In this paper, we apply DAG-LSTMs to the conversation disentanglement task.
1 code implementation • NAACL 2021 • Tianze Shi, Adrian Benton, Igor Malioutov, Ozan İrsoy
While the predictive performance of modern statistical dependency parsers relies heavily on the availability of expensive expert-annotated treebank data, not all annotations contribute equally to the training of the parsers.
1 code implementation • NAACL 2021 • Tianze Shi, Ozan İrsoy, Igor Malioutov, Lillian Lee
Naturally-occurring bracketings, such as answer fragments to natural language questions and hyperlinks on webpages, can reflect human syntactic intuition regarding phrasal boundaries.
1 code implementation • EMNLP (insights) 2021 • Ozan İrsoy, Adrian Benton, Karl Stratos
Mikolov et al. (2013a) observed that continuous bag-of-words (CBOW) word embeddings tend to underperform Skip-gram (SG) embeddings, and this finding has been reported in subsequent works.
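The two objectives this snippet contrasts differ only in prediction direction: CBOW averages the context embeddings to predict the center word, while Skip-gram uses the center embedding to predict each context word. A toy numpy sketch of the two forward passes (hypothetical random embeddings, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 4                      # toy vocabulary size and embedding dim
W_in = rng.normal(size=(V, d))    # input (word) embeddings
W_out = rng.normal(size=(V, d))   # output (context/prediction) embeddings

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cbow_probs(context_ids):
    """CBOW: average the context embeddings, then score the center word."""
    h = W_in[context_ids].mean(axis=0)
    return softmax(W_out @ h)

def sg_probs(center_id):
    """Skip-gram: the center embedding alone scores each context word."""
    h = W_in[center_id]
    return softmax(W_out @ h)

p_cbow = cbow_probs([1, 3, 5, 7])  # distribution over center words
p_sg = sg_probs(4)                 # distribution over context words
```

The averaging step in CBOW is the usual suspect for its underperformance, which is the gap this paper revisits.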
1 code implementation • EMNLP 2020 • Tianze Shi, Igor Malioutov, Ozan İrsoy
We reduce the task of (span-based) PropBank-style semantic role labeling (SRL) to syntactic dependency parsing.
no code implementations • 2 Aug 2019 • Ozan İrsoy, Rakesh Gosangi, Haimin Zhang, Mu-Hsin Wei, Peter Lund, Duccio Pappadopulo, Brendan Fahy, Neophytos Nephytou, Camilo Ortiz
In this paper, we introduce a new model architecture, directed-acyclic-graph LSTM (DAG-LSTM) for DA classification.
no code implementations • 1 May 2019 • Ozan İrsoy
We empirically investigate the (negative) expected accuracy as an alternative loss function to cross entropy (negative log likelihood) for classification tasks.
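The contrast between the two losses is easy to state concretely: cross entropy penalizes the negative *log* probability of the true class, while (negative) expected accuracy penalizes minus the probability itself, giving a bounded, linear loss. A minimal numpy sketch of both (my own illustrative formulation, not code from the paper):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, y):
    """Negative log likelihood of the true class (unbounded above)."""
    p = softmax(logits)
    return -np.log(p[np.arange(len(y)), y]).mean()

def neg_expected_accuracy(logits, y):
    """Negative expected accuracy: minus the probability mass on the
    true class. Bounded in [-1, 0], so no single example can dominate."""
    p = softmax(logits)
    return -p[np.arange(len(y)), y].mean()
```

Because the expected-accuracy loss is bounded, a badly misclassified outlier contributes at most 1 to the loss, whereas its cross-entropy contribution can grow without bound.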
no code implementations • 25 Dec 2018 • Ozan İrsoy, Ethem Alpaydin
Dropout is a very effective method for preventing overfitting and has become the go-to regularizer for multi-layer neural networks in recent years.
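For reference, the standard "inverted dropout" formulation that this line of work builds on can be sketched in a few lines of numpy (a generic sketch of dropout itself, not of this paper's contribution):

```python
import numpy as np

def dropout(x, p_drop, rng, train=True):
    """Inverted dropout: during training, zero each unit with probability
    p_drop and scale the survivors by 1/(1 - p_drop), so the expected
    activation matches test time and no rescaling is needed at inference."""
    if not train or p_drop == 0.0:
        return x
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)
```

At test time the function is the identity, which is exactly why the training-time rescaling is applied.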
no code implementations • 7 Apr 2018 • Ozan İrsoy, Ethem Alpaydin
Traditionally, deep learning algorithms update only the network weights, while the network architecture is chosen manually through trial and error.
no code implementations • 20 Dec 2014 • Ozan İrsoy, Claire Cardie
We present the multiplicative recurrent neural network as a general model for compositional meaning in language, and evaluate it on the task of fine-grained sentiment analysis.
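The core idea of a multiplicative recurrent unit is that the input modulates the hidden-to-hidden transition itself, rather than being added to it, via a factored three-way interaction. A toy numpy sketch of one such step (a generic factored form in the spirit of multiplicative RNNs; the paper's exact parameterization may differ):

```python
import numpy as np

def mrnn_step(x, h_prev, params):
    """One multiplicative RNN step: factors f are an elementwise product
    of an input projection and a hidden projection, so the effective
    recurrent transformation depends on the current input."""
    W_hx, W_fx, W_fh, W_hf, b = params
    f = (W_fx @ x) * (W_fh @ h_prev)      # input-gated factors
    return np.tanh(W_hx @ x + W_hf @ f + b)
```

This multiplicative coupling lets a word like "not" rescale or flip the contribution of the preceding context, which is why the model suits compositional tasks such as fine-grained sentiment.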
no code implementations • 19 Dec 2014 • Ozan İrsoy, Ethem Alpaydin
The recently proposed budding tree is a decision tree algorithm in which every node is part internal node and part leaf.
no code implementations • 26 Sep 2014 • Ozan İrsoy, Ethem Alpaydin
We also see that the autoencoder tree captures hierarchical representations of the data at different granularities on its different levels, and that the leaves capture localities in the input space.
no code implementations • 2 Dec 2013 • Ozan İrsoy, Claire Cardie
Recently, deep architectures such as recurrent and recursive neural networks have been successfully applied to various natural language processing tasks.