
Minimal Supervision for Morphological Inflection

Neural models for the various flavours of morphological inflection tasks have proven extremely accurate given ample labeled data, but such data can be slow and costly to obtain. In this work we aim to overcome this annotation bottleneck by bootstrapping labeled data from a seed of as few as five labeled paradigms, accompanied by a large bulk of unlabeled text. Our approach exploits different kinds of regularities in morphological systems in a two-phase setup, where word tagging based on analogies is followed by word pairing based on distances. We experiment with the Paradigm Cell Filling Problem over eight typologically different languages, and find that in languages with relatively simple morphology, orthographic regularities alone allow inflection models to achieve respectable accuracy. Combining orthographic and semantic regularities alleviates the difficulties posed by particularly complex morpho-phonological systems. Our results suggest that hand-crafting many tagged examples may be an unnecessary effort; however, more work is needed to address rarely used forms.
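To make the two-phase idea in the abstract concrete, the toy sketch below illustrates one way such a pipeline could look: first tag unlabeled words with a paradigm cell when they complete an orthographic analogy with a seed pair, then link each tagged form to its most plausible lemma by string distance. The seed triples, the suffix-based analogy rule, and the similarity measure are all illustrative assumptions for this sketch, not the paper's actual method or data.

```python
# Toy sketch of analogy-based tagging followed by distance-based pairing.
# Everything here (seed data, suffix rule, similarity) is an assumption made
# for illustration; it is not the paper's implementation.

from difflib import SequenceMatcher

# Hypothetical seed: a handful of labeled (lemma, cell, form) triples.
SEED = [
    ("walk", "PST", "walked"),
    ("play", "PST", "played"),
    ("jump", "PRS.3SG", "jumps"),
    ("call", "PRS.3SG", "calls"),
    ("work", "PST", "worked"),
]

UNLABELED = ["talked", "talks", "climbed", "climbs", "talk", "climb"]


def suffix_rule(lemma: str, form: str) -> tuple[str, str]:
    """Extract a (lemma_suffix, form_suffix) rewrite rule from a seed pair
    by stripping their longest common prefix (a crude orthographic analogy)."""
    i = 0
    while i < min(len(lemma), len(form)) and lemma[i] == form[i]:
        i += 1
    return lemma[i:], form[i:]


def tag_by_analogy(words, seed):
    """Phase 1: assign a cell to each unlabeled word that ends with the
    form-suffix of some seed rule, i.e. completes the analogy."""
    tagged = {}
    for lemma, cell, form in seed:
        _, form_suf = suffix_rule(lemma, form)
        for w in words:
            if form_suf and w.endswith(form_suf):
                tagged.setdefault(w, set()).add(cell)
    return tagged


def pair_by_distance(tagged, candidate_lemmas):
    """Phase 2: link each tagged form to the closest candidate lemma
    by simple string similarity (a stand-in for richer distances)."""
    pairs = []
    for form, cells in tagged.items():
        best = max(candidate_lemmas,
                   key=lambda lem: SequenceMatcher(None, lem, form).ratio())
        for cell in cells:
            pairs.append((best, cell, form))
    return pairs


if __name__ == "__main__":
    tagged = tag_by_analogy(UNLABELED, SEED)
    # Treat untagged words as candidate lemmas for the pairing phase.
    lemmas = [w for w in UNLABELED if w not in tagged]
    for lemma, cell, form in sorted(pair_by_distance(tagged, lemmas)):
        print(f"{lemma}\t{cell}\t{form}")
```

Running this prints pairs such as `talk PST talked` and `climb PRS.3SG climbs`, i.e. bootstrapped training triples for an inflection model. A semantic variant would replace the string-similarity step with distances between word embeddings, which is what helps with more complex morpho-phonological systems according to the abstract.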
