Search Results for author: Joakim Nivre

Found 104 papers, 15 papers with code

Fine-Grained Controllable Text Generation Using Non-Residual Prompting

1 code implementation ACL 2022 Fredrik Carlsson, Joey Öhman, Fangyu Liu, Severine Verlinden, Joakim Nivre, Magnus Sahlgren

We propose a resource-efficient method for converting a pre-trained CLM into this architecture, and demonstrate its potential on various experiments, including the novel task of contextualized word inclusion.

Text Generation

Universal Dependencies for Albanian

no code implementations UDW (COLING) 2020 Marsida Toska, Joakim Nivre, Daniel Zeman

In this paper, we introduce the first Universal Dependencies (UD) treebank for standard Albanian, consisting of 60 sentences collected from the Albanian Wikipedia, annotated with lemmas, universal part-of-speech tags, morphological features and syntactic dependencies.

Continual Learning Under Language Shift

no code implementations 2 Nov 2023 Evangelia Gogoulou, Timothée Lesort, Magnus Boman, Joakim Nivre

The recent increase in data and model scale for language model pre-training has led to huge training costs.

Continual Learning, Language Modelling

Schrödinger's Tree -- On Syntax and Neural Language Models

no code implementations 17 Oct 2021 Artur Kulmizev, Joakim Nivre

In the last half-decade, the field of natural language processing (NLP) has undergone two major transitions: the switch to neural networks as the primary modeling paradigm and the homogenization of the training regime (pre-train, then fine-tune).

Transfer Learning

Revisiting Negation in Neural Machine Translation

1 code implementation 26 Jul 2021 Gongbo Tang, Philipp Rönchen, Rico Sennrich, Joakim Nivre

In this paper, we evaluate the translation of negation both automatically and manually, in English--German (EN--DE) and English--Chinese (EN--ZH).

Machine Translation, Negation +2

Syntactic Nuclei in Dependency Parsing -- A Multilingual Exploration

no code implementations EACL 2021 Ali Basirat, Joakim Nivre

Standard models for syntactic dependency parsing take words to be the elementary units that enter into dependency relations.

Dependency Parsing

Attention Can Reflect Syntactic Structure (If You Let It)

no code implementations EACL 2021 Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, Joakim Nivre

Since the popularization of the Transformer as a general-purpose feature encoder for NLP, many studies have attempted to decode linguistic structure from its novel multi-head attention mechanism.

Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English

no code implementations COLING 2020 Gongbo Tang, Rico Sennrich, Joakim Nivre

The attention distribution pattern shows that separators attract a lot of attention and we explore a sparse word-level attention to enforce character hidden states to capture the full word-level information.

Machine Translation, NMT +1

Principal Word Vectors

no code implementations 9 Jul 2020 Ali Basirat, Christian Hardmeier, Joakim Nivre

The effect of these generalizations on the word vectors is intrinsically studied with regard to the spread and the discriminability of the word vectors.

Dependency Parsing, Word Similarity

Køpsala: Transition-Based Graph Parsing via Efficient Training and Effective Encoding

1 code implementation 25 May 2020 Daniel Hershcovich, Miryam de Lhoneux, Artur Kulmizev, Elham Pejhan, Joakim Nivre

We present Køpsala, the Copenhagen-Uppsala system for the Enhanced Universal Dependencies Shared Task at IWPT 2020.

Sentence

A Tale of Three Parsers: Towards Diagnostic Evaluation for Meaning Representation Parsing

no code implementations LREC 2020 Maja Buljan, Joakim Nivre, Stephan Oepen, Lilja Øvrelid

We discuss methodological choices in contrastive and diagnostic evaluation in meaning representation parsing, i.e. mapping from natural language utterances to graph-based encodings of their semantic structure.

Dependency Parsing

Do Neural Language Models Show Preferences for Syntactic Formalisms?

no code implementations ACL 2020 Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, Joakim Nivre

Recent work on the interpretability of deep neural language models has concluded that many properties of natural language syntax are encoded in their representational spaces.

Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection

no code implementations LREC 2020 Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, Daniel Zeman

Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework.

Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing - A Tale of Two Parsers Revisited

no code implementations IJCNLP 2019 Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, Joakim Nivre

Transition-based and graph-based dependency parsers have previously been shown to have complementary strengths and weaknesses: transition-based parsers exploit rich structural features but suffer from error propagation, while graph-based parsers benefit from global optimization but have restricted feature scope.

Dependency Parsing, Sentence +1

Encoders Help You Disambiguate Word Senses in Neural Machine Translation

no code implementations IJCNLP 2019 Gongbo Tang, Rico Sennrich, Joakim Nivre

We find that encoder hidden states significantly outperform word embeddings, which indicates that encoders adequately encode relevant information for disambiguation into hidden states.

Machine Translation, NMT +2

Understanding Neural Machine Translation by Simplification: The Case of Encoder-free Models

no code implementations RANLP 2019 Gongbo Tang, Rico Sennrich, Joakim Nivre

In this paper, we try to understand neural machine translation (NMT) via simplifying NMT architectures and training encoder-free NMT models.

Machine Translation, NMT +2

What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?

1 code implementation CL (ACL) 2020 Miryam de Lhoneux, Sara Stymne, Joakim Nivre

We find that the parser learns different information about AVCs and FMVs if only sequential models (BiLSTMs) are used in the architecture but similar information when a recursive layer is used.

Dependency Parsing, Open-Ended Question Answering

Recursive Subtree Composition in LSTM-Based Dependency Parsing

1 code implementation NAACL 2019 Miryam de Lhoneux, Miguel Ballesteros, Joakim Nivre

When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information.

Dependency Parsing

Expletives in Universal Dependency Treebanks

1 code implementation WS 2018 Gosse Bouma, Jan Hajič, Dag Haug, Joakim Nivre, Per Erik Solberg, Lilja Øvrelid

Although treebanks annotated according to the guidelines of Universal Dependencies (UD) now exist for many languages, the goal of annotating the same phenomena in a cross-linguistically consistent fashion is not always met.

Coreference Resolution, Question Answering

Enhancing Universal Dependency Treebanks: A Case Study

no code implementations WS 2018 Joakim Nivre, Paola Marongiu, Filip Ginter, Jenna Kanerva, Simonetta Montemagni, Sebastian Schuster, Maria Simi

We evaluate two cross-lingual techniques for adding enhanced dependencies to existing treebanks in Universal Dependencies.

An Analysis of Attention Mechanisms: The Case of Word Sense Disambiguation in Neural Machine Translation

no code implementations WS 2018 Gongbo Tang, Rico Sennrich, Joakim Nivre

Recent work has shown that the encoder-decoder attention mechanisms in neural machine translation (NMT) are different from the word alignment in statistical machine translation.

Machine Translation, NMT +3

CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies

no code implementations CONLL 2018 Daniel Zeman, Jan Hajič, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, Slav Petrov

Every year, the Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets.

Dependency Parsing, Morphological Analysis +1

An Investigation of the Interactions Between Pre-Trained Word Embeddings, Character Models and POS Tags in Dependency Parsing

no code implementations EMNLP 2018 Aaron Smith, Miryam de Lhoneux, Sara Stymne, Joakim Nivre

We provide a comprehensive analysis of the interactions between pre-trained word embeddings, character models and POS tags in a transition-based dependency parser.

Dependency Parsing, POS +2

Universal Word Segmentation: Implementation and Interpretation

1 code implementation TACL 2018 Yan Shao, Christian Hardmeier, Joakim Nivre

Word segmentation is a low-level NLP task that is non-trivial for a considerable number of languages.

Segmentation

An Evaluation of Neural Machine Translation Models on Historical Spelling Normalization

1 code implementation COLING 2018 Gongbo Tang, Fabienne Cap, Eva Pettersson, Joakim Nivre

In this paper, we apply different NMT models to the problem of historical spelling normalization for five languages: English, German, Hungarian, Icelandic, and Swedish.

Machine Translation, NMT +1

Parser Training with Heterogeneous Treebanks

1 code implementation ACL 2018 Sara Stymne, Miryam de Lhoneux, Aaron Smith, Joakim Nivre

How to make the most of multiple heterogeneous treebanks when training a monolingual dependency parser is an open question.

Open-Ended Question Answering

Sentences with Gapping: Parsing and Reconstructing Elided Predicates

2 code implementations NAACL 2018 Sebastian Schuster, Joakim Nivre, Christopher D. Manning

Sentences with gapping, such as Paul likes coffee and Mary tea, lack an overt predicate to indicate the relation between two or more arguments.

Natural Language Understanding, Relation +1

Recall is the Proper Evaluation Metric for Word Segmentation

no code implementations IJCNLP 2017 Yan Shao, Christian Hardmeier, Joakim Nivre

We extensively analyse the correlations and drawbacks of conventionally employed evaluation metrics for word segmentation.

Information Retrieval, Machine Translation +3
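
The metric the paper argues for is easy to make concrete: word-segmentation recall is the fraction of gold-standard words whose exact character span is reproduced in the predicted segmentation. A minimal sketch (function names are our own, not from the paper):

```python
def spans(tokens):
    """Convert a token sequence into (start, end) character offsets."""
    out, pos = [], 0
    for t in tokens:
        out.append((pos, pos + len(t)))
        pos += len(t)
    return out

def seg_recall(gold_tokens, pred_tokens):
    """Fraction of gold words whose exact span appears in the prediction."""
    gold, pred = spans(gold_tokens), set(spans(pred_tokens))
    return sum(1 for s in gold if s in pred) / len(gold)

# Example: gold segmentation vs. an over-segmented prediction.
gold = ["the", "cats", "sat"]
pred = ["the", "cat", "s", "sat"]
print(seg_recall(gold, pred))  # 2 of 3 gold words recovered -> 0.666...
```

Precision would be computed symmetrically over the predicted spans; the paper's claim concerns which of the two better reflects segmentation quality for downstream tasks.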

Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle

1 code implementation WS 2017 Miryam de Lhoneux, Sara Stymne, Joakim Nivre

In this paper, we extend the arc-hybrid system for transition-based parsing with a swap transition that enables reordering of the words and construction of non-projective trees.

Dependency Parsing
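
For readers unfamiliar with the system, the arc-hybrid transitions plus the swap extension described above can be sketched as operations on a stack, a buffer, and an arc set. This is an illustrative, simplified sketch (unlabeled arcs, no precondition checks), not the authors' implementation:

```python
# Arc-hybrid transition system extended with SWAP (illustrative sketch).
# A configuration is (stack, buffer, arcs); words are positions 1..n, 0 = ROOT.

def shift(stack, buf, arcs):
    stack.append(buf.pop(0))         # move the front of the buffer onto the stack

def left_arc(stack, buf, arcs):
    arcs.add((buf[0], stack.pop()))  # head = front of buffer, dependent = stack top

def right_arc(stack, buf, arcs):
    dep = stack.pop()
    arcs.add((stack[-1], dep))       # head = new stack top, dependent = old top

def swap(stack, buf, arcs):
    buf.insert(0, stack.pop(-2))     # move the second stack item back to the
                                     # buffer, allowing words to be reordered

def parse(n, transitions):
    """Run a transition sequence over an n-word sentence; return the arc set."""
    stack, buf, arcs = [0], list(range(1, n + 1)), set()
    for t in transitions:
        t(stack, buf, arcs)
    return arcs

# A 3-word non-projective tree: arcs 3 -> 1, 0 -> 2, 2 -> 3 (the arc 3 -> 1
# crosses the arc from ROOT to 2). With SWAP it becomes derivable:
seq = [shift, shift, swap, shift, left_arc, shift, right_arc, right_arc]
print(sorted(parse(3, seq)))  # [(0, 2), (2, 3), (3, 1)]
```

Without SWAP, arc-hybrid can only build projective trees; the single reordering step above is what lets the crossing arc be created.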

Universal Dependencies

no code implementations CL (ACL) 2021 Joakim Nivre, Daniel Zeman, Filip Ginter, Francis Tyers

Universal Dependencies (UD) is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages.

Universal Dependencies for Turkish

no code implementations COLING 2016 Umut Sulubacak, Memduh Gokirmak, Francis Tyers, Çağrı Çöltekin, Joakim Nivre, Gülşen Eryiğit

The Universal Dependencies (UD) project was conceived after the substantial recent interest in unifying annotation schemes across languages.

Universal Dependencies: A Cross-Linguistic Perspective on Grammar and Lexicon

no code implementations WS 2016 Joakim Nivre

Universal Dependencies is an initiative to develop cross-linguistically consistent grammatical annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning and parsing research from a language typology perspective.

Position

The Universal Dependencies Treebank of Spoken Slovenian

no code implementations LREC 2016 Kaja Dobrovoljc, Joakim Nivre

This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian.

Universal Dependencies for Persian

no code implementations LREC 2016 Mojgan Seraji, Filip Ginter, Joakim Nivre

The Persian Universal Dependency Treebank (Persian UD) is a recent effort of treebanking Persian with Universal Dependencies (UD), an ongoing project that designs unified and cross-linguistically valid grammatical representations including part-of-speech tags, morphological features, and dependency relations.

Sentence

Static and Dynamic Feature Selection in Morphosyntactic Analyzers

no code implementations 21 Mar 2016 Bernd Bohnet, Miguel Ballesteros, Ryan McDonald, Joakim Nivre

Experiments on five languages show that feature selection can result in more compact models as well as higher accuracy under all conditions, but also that a dynamic ordering works better than a static ordering and that joint systems benefit more than standalone taggers.

Feature Selection

A Persian Treebank with Stanford Typed Dependencies

no code implementations LREC 2014 Mojgan Seraji, Carina Jahani, Beáta Megyesi, Joakim Nivre

We present the Uppsala Persian Dependency Treebank (UPDT) with a syntactic annotation scheme based on Stanford Typed Dependencies.

Universal Stanford dependencies: A cross-linguistic typology

no code implementations LREC 2014 Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, Christopher D. Manning

Revisiting the now de facto standard Stanford dependency representation, we propose an improved taxonomy to capture grammatical relations across languages, including morphologically rich ones.

Training Deterministic Parsers with Non-Deterministic Oracles

no code implementations TACL 2013 Yoav Goldberg, Joakim Nivre

This problem is aggravated by the fact that they are normally trained using oracles that are deterministic and incomplete in the sense that they assume a unique canonical path through the transition system and are only valid as long as the parser does not stray from this path.

MaltOptimizer: A System for MaltParser Optimization

no code implementations LREC 2012 Miguel Ballesteros, Joakim Nivre

Freely available statistical parsers often require careful optimization to produce state-of-the-art results, which can be a non-trivial task especially for application developers who are not interested in parsing research for its own sake.

Dependency Parsing

A Basic Language Resource Kit for Persian

no code implementations LREC 2012 Mojgan Seraji, Beáta Megyesi, Joakim Nivre

As for resources, we describe the Uppsala Persian Corpus (UPEC), which is a modified version of the Bijankhan corpus with additional sentence segmentation and consistent tokenization modified for more appropriate syntactic annotation.

Part-Of-Speech Tagging, POS +2
