Search Results for author: Siddhartha Brahma

Found 10 papers, 1 paper with code

CLAR: A Cross-Lingual Argument Regularizer for Semantic Role Labeling

no code implementations Findings of the Association for Computational Linguistics 2020 Ishan Jindal, Yunyao Li, Siddhartha Brahma, Huaiyu Zhu

Although different languages have different argument annotations, polyglot training, the idea of training one model on multiple languages, has previously been shown to outperform monolingual baselines, especially for low-resource languages.

Semantic Role Labeling

Improved Language Modeling by Decoding the Past

no code implementations ACL 2019 Siddhartha Brahma

With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method achieves a word-level perplexity of 55.6 on the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax.

Language Modelling

REGMAPR - Text Matching Made Easy

no code implementations13 Aug 2018 Siddhartha Brahma

We propose REGMAPR - a simple and general architecture for text matching that does not use inter-sentence attention.

Natural Language Inference · Text Matching

Unsupervised Learning of Sentence Representations Using Sequence Consistency

no code implementations10 Aug 2018 Siddhartha Brahma

Computing universal distributed representations of sentences is a fundamental task in natural language processing.

Transfer Learning

Improved Sentence Modeling using Suffix Bidirectional LSTM

no code implementations18 May 2018 Siddhartha Brahma

We propose a general and effective improvement to the BiLSTM model, which encodes each suffix and prefix of a sequence of tokens in both forward and reverse directions.

Classification · General Classification · +3
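The abstract above describes encoding every prefix and every suffix of a token sequence in both directions. A minimal sketch of that idea, using a toy recurrent "cell" and a toy embedding as stand-ins for an LSTM and learned word vectors (both are illustrative assumptions, not the paper's actual model):

```python
def run(cell, seq, h0=0.0):
    """Fold a recurrent cell over a sequence, returning the final state."""
    h = h0
    for x in seq:
        h = cell(h, x)
    return h

def suffix_prefix_encodings(tokens, cell, embed):
    """For each position t, encode the prefix tokens[:t+1] and the
    suffix tokens[t:] in both forward and reverse directions."""
    embs = [embed(t) for t in tokens]
    out = []
    for t in range(len(embs)):
        prefix, suffix = embs[:t + 1], embs[t:]
        out.append((
            run(cell, prefix),            # prefix, forward
            run(cell, reversed(prefix)),  # prefix, reverse
            run(cell, suffix),            # suffix, forward
            run(cell, reversed(suffix)),  # suffix, reverse
        ))
    return out

# Toy cell (leaky accumulator) and embedding (token length) for demonstration.
cell = lambda h, x: 0.5 * h + x
embed = lambda tok: float(len(tok))
encs = suffix_prefix_encodings(["a", "bb", "ccc"], cell, embed)
print(len(encs))  # one 4-tuple of encodings per position
```

In the actual model the four encodings per position would come from LSTM cells and be combined (e.g. concatenated or max-pooled) into a richer per-token representation; the combination rule is not stated in this snippet.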

SufiSent - Universal Sentence Representations Using Suffix Encodings

no code implementations20 Feb 2018 Siddhartha Brahma

Computing universal distributed representations of sentences is a fundamental task in natural language processing.

Natural Language Inference

On the scaling of polynomial features for representation matching

no code implementations20 Feb 2018 Siddhartha Brahma

In many neural models, new features as polynomial functions of existing ones are used to augment representations.

General Classification · Natural Language Inference
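The abstract above describes augmenting a representation with polynomial functions of its existing features; the paper's subject is how to scale those new features. As a hedged sketch, one plausible scaling (an assumption, since the snippet does not give the paper's scheme) is to rescale the appended element-wise squares so their L2 norm matches that of the original features:

```python
import numpy as np

def augment_with_scaled_squares(x, eps=1e-8):
    """Append element-wise squares of x, rescaled so the new block has
    the same L2 norm as the original features. The equal-norm choice is
    illustrative only; the paper's exact scaling is not in this snippet."""
    sq = x ** 2
    scale = np.linalg.norm(x) / (np.linalg.norm(sq) + eps)
    return np.concatenate([x, scale * sq])

x = np.array([0.5, -2.0, 1.0])
z = augment_with_scaled_squares(x)
print(z.shape)  # (6,)
```

Without some such rescaling, squared (or higher-degree) features can dominate or vanish relative to the originals, which is the mismatch the paper's title refers to.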
