The phenomenon of compounding is ubiquitous in Sanskrit.
On the other hand, purely data-driven approaches do not match the performance of hybrid approaches due to the sparsity of labelled data.
This data can also be used for a code-mixed machine translation task.
To make effective use of such readily available resources, it is essential to perform a systematic study of word embedding approaches for Sanskrit.
In this work, we focus on dependency parsing for morphologically rich languages (MRLs) in a low-resource setting.
We compare the performance of each of the models in a low-resource setting, with 1,500 sentences for training.