Don't Throw Those Morphological Analyzers Away Just Yet: Neural Morphological Disambiguation for Arabic

EMNLP 2017 · Nasser Zalmout, Nizar Habash

This paper presents a model for Arabic morphological disambiguation based on Recurrent Neural Networks (RNN). We train Long Short-Term Memory (LSTM) cells in several configurations and embedding levels to model the various morphological features. Our experiments show that these models outperform state-of-the-art systems without explicit use of feature engineering. However, adding learning features from a morphological analyzer to model the space of possible analyses provides additional improvement. We make use of the resulting morphological models for scoring and ranking the analyses of the morphological analyzer for morphological disambiguation. The results show significant gains in accuracy across several evaluation metrics. Our system yields a 4.4% absolute increase over the state-of-the-art in full morphological analysis accuracy (30.6% relative error reduction), and a 10.6% absolute increase (31.5% relative error reduction) for out-of-vocabulary words.
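The disambiguation step described in the abstract, i.e. using the neural models to score and rank the candidate analyses returned by a morphological analyzer, can be illustrated with a minimal sketch. The code below is not from the paper: the tagger output format, the feature names, and the helpers `score_analysis` and `disambiguate` are hypothetical, and the actual system models a richer set of morphological features with LSTM taggers. It only shows the general idea of picking the analyzer candidate that best agrees with the tagger's per-feature predictions.

```python
# Minimal sketch (assumed, not the authors' code) of ranking analyzer output
# against per-feature predictions from a neural tagger.
import math
from typing import Dict, List

Analysis = Dict[str, str]                    # feature name -> value, e.g. {"pos": "noun", "gen": "f"}
FeatureDists = Dict[str, Dict[str, float]]   # feature name -> {value: probability} from the tagger


def score_analysis(analysis: Analysis, dists: FeatureDists, floor: float = 1e-6) -> float:
    """Sum of log-probabilities the tagger assigns to the analysis' feature values."""
    score = 0.0
    for feat, value in analysis.items():
        prob = dists.get(feat, {}).get(value, floor)
        score += math.log(max(prob, floor))
    return score


def disambiguate(candidates: List[Analysis], dists: FeatureDists) -> Analysis:
    """Pick the analyzer candidate that best matches the tagger's predictions."""
    return max(candidates, key=lambda a: score_analysis(a, dists))


# Toy example with hypothetical per-feature distributions for one word:
predicted = {
    "pos": {"noun": 0.7, "verb": 0.3},
    "gen": {"f": 0.8, "m": 0.2},
}
candidates = [
    {"pos": "noun", "gen": "f"},
    {"pos": "verb", "gen": "m"},
]
print(disambiguate(candidates, predicted))  # -> {'pos': 'noun', 'gen': 'f'}
```

Restricting the final choice to the analyzer's candidates is what lets the morphological analyzer constrain the space of possible analyses, which is the source of the additional improvement the abstract reports over the purely neural models.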
