This paper presents the IITKGP contribution to the Technical DOmain Identification (TechDOfication) shared task at ICON 2020.
We present an approach for cross-lingual transfer of a dependency parser, so that a parser trained on a single source language can more effectively cater to diverse target languages.
Learning meaningful representations for chirographic drawing data such as sketches, handwriting, and flowcharts is a gateway to understanding and emulating human creative expression.
In this paper, we show that demographic noise may, in fact, promote abrupt transitions in systems that would otherwise show continuous transitions.
Analysis of human sketches in deep learning has advanced immensely through the use of waypoint-sequences rather than raster-graphic representations.
The study of neural generative models of human sketches is a fascinating contemporary modeling problem due to the links between sketch image generation and the human drawing process.
We present a shallow-parser-guided cross-lingual model transfer approach that addresses the syntactic differences between source and target languages more effectively.
To avoid character segmentation in such scripts, HMM-based sequence modeling has previously been used in a holistic way.
This paper describes our dependency parsing system in the CoNLL 2017 shared task on Multilingual Parsing from Raw Text to Universal Dependencies.
Neural machine translation (NMT) models have recently been shown to be very successful in machine translation (MT).
A parser is trained on and applied to the Hindi sentences of the parallel corpus, and the resulting parse trees are projected to construct probable parse trees for the corresponding Bengali sentences.
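The projection step can be sketched as follows. This is a minimal illustration under simplifying assumptions: a one-to-one word alignment and a dict-based head encoding; the function `project_dependencies` and its data layout are hypothetical, not the paper's actual implementation.

```python
def project_dependencies(src_heads, alignment):
    """Project dependency heads from a source (Hindi) sentence to a
    target (Bengali) sentence via a one-to-one word alignment.

    src_heads: dict mapping each source token index to its head index
               (0 denotes the root).
    alignment: dict mapping source token indices to target token indices.
    Returns projected target-side head relations; arcs whose dependent
    or head is unaligned are dropped.
    """
    tgt_heads = {}
    for dep, head in src_heads.items():
        if dep not in alignment:
            continue  # unaligned dependent: nothing to project
        if head == 0:
            tgt_heads[alignment[dep]] = 0  # the root stays the root
        elif head in alignment:
            tgt_heads[alignment[dep]] = alignment[head]
    return tgt_heads

# Toy 3-token sentence: token 2 is the root; tokens 1 and 3 depend on it.
heads = {1: 2, 2: 0, 3: 2}
align = {1: 1, 2: 3, 3: 2}  # word order differs across the languages
projected = project_dependencies(heads, align)
```

In practice, alignments are many-to-many and noisy, so real projection systems add heuristics for unaligned and multiply aligned tokens; the sketch above only shows the core idea of copying head relations through the alignment.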
Experimental results on a variety of toy and real-world datasets show that our approach is significantly more accurate than parameter averaging when the number of partitions is large.
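The parameter-averaging baseline mentioned above can be sketched in a few lines: train one model per data partition, then average the parameter vectors elementwise. The function name `average_parameters` and the flat-vector representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def average_parameters(partition_params):
    """Naive parameter averaging across data partitions (baseline sketch):
    stack the per-partition parameter vectors and take the elementwise mean."""
    stacked = np.stack(partition_params)  # shape: (num_partitions, num_params)
    return stacked.mean(axis=0)

# Toy illustration with three partitions of a 4-parameter model.
params = [np.array([1.0, 2.0, 3.0, 4.0]),
          np.array([2.0, 2.0, 2.0, 2.0]),
          np.array([3.0, 2.0, 1.0, 0.0])]
avg = average_parameters(params)  # elementwise mean over the 3 partitions
```

The baseline's weakness, which the quoted result alludes to, is that averaging treats all partitions as interchangeable; with many partitions the per-partition models diverge and their mean degrades.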