Search Results for author: Rahul Aralikatte

Found 23 papers, 9 papers with code

How far can we get with one GPU in 100 hours? CoAStaL at MultiIndicMT Shared Task

no code implementations • ACL (WAT) 2021 • Rahul Aralikatte, Héctor Ricardo Murrieta Bello, Miryam de Lhoneux, Daniel Hershcovich, Marcel Bollmann, Anders Søgaard

This work shows that competitive translation results can be obtained in a constrained setting by incorporating the latest advances in memory and compute optimization.

Translation

Minimax and Neyman-Pearson Meta-Learning for Outlier Languages

1 code implementation • 2 Jun 2021 • Edoardo Maria Ponti, Rahul Aralikatte, Disha Shrivastava, Siva Reddy, Anders Søgaard

In fact, under a decision-theoretic framework, MAML can be interpreted as minimising the expected risk across training languages (with a uniform prior), which is known as the Bayes criterion.
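The contrast the paper draws can be sketched as follows (the notation here is illustrative, not taken from the paper): writing $R_\ell(\theta)$ for the risk of parameters $\theta$ on training language $\ell$ from a set $\mathcal{L}$,

```latex
\theta_{\text{Bayes}} = \arg\min_{\theta}\; \mathbb{E}_{\ell \sim \mathrm{Unif}(\mathcal{L})}\bigl[R_\ell(\theta)\bigr]
\qquad \text{vs.} \qquad
\theta_{\text{minimax}} = \arg\min_{\theta}\; \max_{\ell \in \mathcal{L}} R_\ell(\theta)
```

Under this reading, MAML's uniform averaging corresponds to the Bayes criterion on the left, while the minimax objective on the right optimises worst-case risk, i.e. performance on outlier languages.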

Meta-Learning Part-Of-Speech Tagging +1

Joint Semantic Analysis with Document-Level Cross-Task Coherence Rewards

1 code implementation • 12 Oct 2020 • Rahul Aralikatte, Mostafa Abdou, Heather Lent, Daniel Hershcovich, Anders Søgaard

Coreference resolution and semantic role labeling are NLP tasks that capture different aspects of semantics, indicating, respectively, which expressions refer to the same entity and what semantic roles expressions serve in the sentence.

Coreference Resolution Natural Language Understanding +1

Compositional Generalization in Image Captioning

1 code implementation • CoNLL 2019 • Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Aralikatte, Desmond Elliott

Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts.

Image Captioning

Rewarding Coreference Resolvers for Being Consistent with World Knowledge

1 code implementation • IJCNLP 2019 • Rahul Aralikatte, Heather Lent, Ana Valeria Gonzalez, Daniel Hershcovich, Chen Qiu, Anders Sandholm, Michael Ringaard, Anders Søgaard

Unresolved coreference is a bottleneck for relation extraction, and high-quality coreference resolvers may produce output that makes it much easier to extract knowledge triples.

reinforcement-learning Relation Extraction

Ellipsis Resolution as Question Answering: An Evaluation

1 code implementation • EACL 2021 • Rahul Aralikatte, Matthew Lamm, Daniel Hardt, Anders Søgaard

Most, if not all, forms of ellipsis (e.g., so does Mary) are similar to reading comprehension questions (what does Mary do), in that resolving them requires identifying an appropriate text span in the preceding discourse.

Coreference Resolution Machine Reading Comprehension +2

X-WikiRE: A Large, Multilingual Resource for Relation Extraction as Machine Comprehension

1 code implementation • WS 2019 • Mostafa Abdou, Cezar Sas, Rahul Aralikatte, Isabelle Augenstein, Anders Søgaard

Although the vast majority of knowledge bases (KBs) are heavily biased towards English, Wikipedias do cover very different topics in different languages.

Reading Comprehension Relation Extraction

Model-based annotation of coreference

1 code implementation • LREC 2020 • Rahul Aralikatte, Anders Søgaard

Humans do not make inferences over texts, but over models of what texts are about.

Coreference Resolution

A Visual Programming Paradigm for Abstract Deep Learning Model Development

no code implementations • 7 May 2019 • Srikanth Tamilselvam, Naveen Panwar, Shreya Khare, Rahul Aralikatte, Anush Sankaran, Senthil Mani

Deep learning is one of the fastest growing technologies in computer science with a plethora of applications.

Adversarial Black-Box Attacks on Automatic Speech Recognition Systems using Multi-Objective Evolutionary Optimization

no code implementations • 4 Nov 2018 • Shreya Khare, Rahul Aralikatte, Senthil Mani

Fooling deep neural networks with adversarial inputs has exposed a significant vulnerability in current state-of-the-art systems across multiple domains.

Automatic Speech Recognition

DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension

1 code implementation • ACL 2018 • Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, Karthik Sankaranarayanan

We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets.

Reading Comprehension

DeepTriage: Exploring the Effectiveness of Deep Learning for Bug Triaging

no code implementations • 4 Jan 2018 • Senthil Mani, Anush Sankaran, Rahul Aralikatte

Using an attention mechanism enables the model to learn the context representation over a long word sequence, as in a bug report.

Sanskrit Sandhi Splitting using seq2(seq)^2

no code implementations • 1 Jan 2018 • Rahul Aralikatte, Neelamadhav Gantayat, Naveen Panwar, Anush Sankaran, Senthil Mani

In Sanskrit, small words (morphemes) are combined to form compound words through a process known as Sandhi.
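As a toy illustration of why sandhi splitting is hard (the function names and ASCII transliteration below are mine, and the paper's actual approach is a neural seq2seq model, not a rule system): even a single vowel-merging rule makes splitting ambiguous, because many candidate splits rejoin to the same surface form.

```python
def apply_sandhi(left: str, right: str) -> str:
    """Join two morphemes, applying one vowel sandhi rule (ASCII transliteration)."""
    if left.endswith("a") and right.startswith("a"):
        # savarna-dirgha sandhi: short "a" + "a" merge into a long vowel, written "aa"
        return left[:-1] + "aa" + right[1:]
    return left + right

def split_candidates(word: str):
    """Enumerate (left, right) splits consistent with the rule above by inverting it.
    Splitting is one-to-many, which is why it is modelled as sequence generation."""
    out = set()
    for i in range(1, len(word)):
        out.add((word[:i], word[i:]))                    # plain concatenation split
        if word[i:i + 2] == "aa":                        # undo the a + a -> aa merge
            out.add((word[:i] + "a", "a" + word[i + 2:]))
    return sorted((l, r) for l, r in out if apply_sandhi(l, r) == word)
```

For example, "rama" + "ayana" joins to "ramaayana" (roughly Rāma + ayana → Rāmāyaṇa), and `split_candidates("ramaayana")` recovers that split alongside many trivial concatenation splits, so a model must learn which segmentation is linguistically plausible.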

Chinese Word Segmentation

mAnI: Movie Amalgamation using Neural Imitation

no code implementations • 16 Aug 2017 • Naveen Panwar, Shreya Khare, Neelamadhav Gantayat, Rahul Aralikatte, Senthil Mani, Anush Sankaran

Cross-modal data retrieval has been the basis of various creative tasks performed by Artificial Intelligence (AI).

Fault in your stars: An Analysis of Android App Reviews

no code implementations • 16 Aug 2017 • Rahul Aralikatte, Giriprasad Sridhara, Neelamadhav Gantayat, Senthil Mani

Further, we developed three systems, two based on traditional machine learning and one on deep learning, to automatically identify reviews whose rating did not match the opinion expressed in the review.

Phoenix: A Self-Optimizing Chess Engine

no code implementations • 30 Mar 2016 • Rahul Aralikatte, G. Srinivasaraghavan

With the advent of deep learning, chess playing agents can surpass human ability with relative ease.

Game of Chess
