no code implementations • 2 Aug 2021 • Vivek Gupta, Riyaz A. Bhat, Atreya Ghosal, Manish Shrivastava, Maneesh Singh, Vivek Srikumar
Our experiments demonstrate that a RoBERTa-based model, representative of the current state-of-the-art, fails at reasoning on the following counts: it (a) ignores relevant parts of the evidence, (b) is over-sensitive to annotation artifacts, and (c) relies on the knowledge encoded in the pre-trained language model rather than the evidence presented in its tabular inputs.
no code implementations • CoNLL 2018 • Riyaz A. Bhat, Irshad Bhat, Srinivas Bangalore
While segmentation is learned separately, we use neural stacking for joint learning of POS tagging and parsing tasks.
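Neural stacking feeds the hidden representations of a lower-level tagger into an upper-level parser, so that gradients from the parsing loss also flow back through the tagging layers. The sketch below is a minimal, hypothetical illustration of this idea in PyTorch; all layer sizes, names, and architectural choices are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class StackedTaggerParser(nn.Module):
    """Minimal neural-stacking sketch: the POS tagger's BiLSTM states are
    concatenated with the word embeddings and fed to the parser, so both
    tasks are learned jointly. All sizes here are illustrative."""

    def __init__(self, vocab_size=100, emb_dim=32, hidden=64,
                 n_tags=17, n_arc_labels=40):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # lower level: BiLSTM POS tagger
        self.tag_lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                                bidirectional=True)
        self.tag_out = nn.Linear(2 * hidden, n_tags)
        # upper level: parser BiLSTM stacked on embeddings + tagger states
        self.parse_lstm = nn.LSTM(emb_dim + 2 * hidden, hidden,
                                  batch_first=True, bidirectional=True)
        self.arc_out = nn.Linear(2 * hidden, n_arc_labels)

    def forward(self, tokens):
        emb = self.embed(tokens)               # (B, T, emb_dim)
        tag_states, _ = self.tag_lstm(emb)     # (B, T, 2*hidden)
        tag_logits = self.tag_out(tag_states)  # per-token POS scores
        # stacking: the parser sees the tagger's hidden states, so the
        # parsing loss also updates the tagger (joint learning)
        parse_in = torch.cat([emb, tag_states], dim=-1)
        parse_states, _ = self.parse_lstm(parse_in)
        arc_logits = self.arc_out(parse_states)  # per-token arc-label scores
        return tag_logits, arc_logits

model = StackedTaggerParser()
tokens = torch.randint(0, 100, (2, 5))  # batch of 2 sentences, length 5
tag_logits, arc_logits = model(tokens)
```

In training, one would sum a tagging loss over `tag_logits` and a parsing loss over `arc_logits` and backpropagate through both levels at once.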
no code implementations • WS 2017 • Riyaz A. Bhat, Irshad Bhat, Dipti Sharma
We investigate the problem of parsing conversational data of morphologically-rich languages such as Hindi where argument scrambling occurs frequently.
no code implementations • COLING 2016 • Riyaz A. Bhat, Irshad A. Bhat, Naman Jain, Dipti Misra Sharma
With respect to text processing, addressing the differences between the Hindi and Urdu texts would be beneficial in the following ways: (a) instead of training separate models, their individual resources can be augmented to train single, unified models for better generalization, and (b) their individual text processing applications can be used interchangeably under varied resource conditions.