We propose a graph-based model for joint morphological parsing and dependency parsing in Sanskrit.
Ours is a search-based structured prediction framework that expects a graph as input, where relevant linguistic information is encoded in the nodes and the edges indicate the associations between these nodes.
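As a rough illustration of the kind of input graph described above, the following is a minimal sketch (not the authors' implementation) using networkx; the node and edge attributes shown here, such as candidate morphological analyses and an illustrative relation label, are assumptions for exposition, not the paper's actual feature set.

```python
# Sketch of an input graph: nodes carry candidate linguistic analyses,
# edges indicate possible associations between them. All attribute names
# and values below are illustrative assumptions.
import networkx as nx

g = nx.DiGraph()

# One node per candidate analysis of a word (hypothetical example values).
g.add_node("w1_a1", form="devam", lemma="deva", case="accusative", number="singular")
g.add_node("w2_a1", form="pashyati", lemma="pashya", person="third", number="singular")

# An edge marks a possible association between analyses; a search-based
# structured predictor would select a consistent subset of such edges.
g.add_edge("w2_a1", "w1_a1", relation="karma")  # object-like dependency (illustrative label)

print(g.nodes(data=True))
print(g.edges(data=True))
```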
Symbolic knowledge can provide crucial inductive bias for training neural models, especially in low data regimes.
In this work, we introduce X-FACT: the largest publicly available multilingual dataset for factual verification of naturally existing real-world claims.
In this work, we focus on dependency parsing for morphologically rich languages (MRLs) in a low-resource setting.
In this paper, we study the response of large models from the BERT family to incoherent inputs that should confuse any model that claims to understand natural language.
We compare the performance of each of the models in a low-resource setting, with 1,500 sentences for training.
In this paper, we propose a method for token-level metaphor detection using a hybrid Bidirectional LSTM-CRF model.
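For concreteness, below is a minimal sketch of a BiLSTM-CRF token tagger of the general kind described above, assuming PyTorch and the third-party pytorch-crf package for the CRF layer; the hyperparameters and the binary metaphor/literal tag set are illustrative assumptions, not the paper's configuration.

```python
# Minimal BiLSTM-CRF token tagger sketch (illustrative, not the authors' model).
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf


class BiLSTMCRFTagger(nn.Module):
    def __init__(self, vocab_size, num_tags=2, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, token_ids, tags=None, mask=None):
        h, _ = self.bilstm(self.embedding(token_ids))
        scores = self.emissions(h)  # per-token emission scores over tags
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(scores, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the best tag sequence per sentence.
        return self.crf.decode(scores, mask=mask)


# Tiny usage example with random data (batch of 2 sentences, length 5).
model = BiLSTMCRFTagger(vocab_size=1000)
tokens = torch.randint(1, 1000, (2, 5))
tags = torch.randint(0, 2, (2, 5))  # 0 = literal, 1 = metaphor (illustrative)
loss = model(tokens, tags=tags)
predictions = model(tokens)
```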