Search Results for author: Anoop Sarkar

Found 38 papers, 8 papers with code

Effectively pretraining a speech translation decoder with Machine Translation data

no code implementations • EMNLP 2020 • Ashkan Alinejad, Anoop Sarkar

Directly translating from speech to text using an end-to-end approach is still challenging for many language pairs due to insufficient data.

Automatic Speech Recognition, Machine Translation, +1

Translation-based Supervision for Policy Generation in Simultaneous Neural Machine Translation

1 code implementation • EMNLP 2021 • Ashkan Alinejad, Hassan S. Shavarani, Anoop Sarkar

In simultaneous machine translation, finding an agent whose sequence of read and write actions maintains high translation quality while minimizing the average lag in producing target tokens remains an extremely challenging problem.

Action Generation, Machine Translation, +1
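The "average lag" mentioned above is commonly measured with the Average Lagging (AL) metric. The sketch below is my own toy illustration, not the paper's implementation: it scores a read/write action sequence, where `'R'` reads one source token and `'W'` writes one target token.

```python
# Toy sketch (not from the paper): Average Lagging (AL) for a
# simultaneous-translation read/write action sequence.
def average_lagging(actions, src_len, tgt_len):
    """AL = (1/tau) * sum_{t=1..tau} [g(t) - (t-1)/r], where g(t) is the
    number of source tokens read before emitting target token t,
    r = tgt_len / src_len, and tau is the index of the first target
    token emitted after the whole source has been read."""
    r = tgt_len / src_len
    g = []          # g[t]: source tokens read when the t-th target token is written
    read = 0
    for a in actions:
        if a == 'R':
            read += 1
        else:       # 'W'
            g.append(read)
    tau = next((t + 1 for t, gt in enumerate(g) if gt == src_len), len(g))
    return sum(g[t] - t / r for t in range(tau)) / tau

# A wait-1-style policy lags one token behind; reading everything first
# lags by the full source length.
print(average_lagging("RWRWRW", 3, 3))  # → 1.0
print(average_lagging("RRRWWW", 3, 3))  # → 3.0
```

A good policy keeps AL low while still reading enough context to translate accurately; that trade-off is what the agent in the paper learns.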

Better Neural Machine Translation by Extracting Linguistic Information from BERT

1 code implementation • EACL 2021 • Hassan S. Shavarani, Anoop Sarkar

Adding linguistic information (syntax or semantics) to neural machine translation (NMT) has mostly focused on using point estimates from pre-trained models.

Machine Translation, Translation

Measuring and Improving Faithfulness of Attention in Neural Machine Translation

no code implementations • EACL 2021 • Pooya Moradi, Nishant Kambhatla, Anoop Sarkar

While the attention heatmaps produced by neural machine translation (NMT) models seem insightful, there is little evidence that they reflect a model's true internal reasoning.

Machine Translation, Translation

Deconstructing Supertagging into Multi-Task Sequence Prediction

no code implementations • CoNLL 2019 • Zhenqi Zhu, Anoop Sarkar

Supertagging is a sequence prediction task where each word is assigned a piece of complex syntactic structure called a supertag.

Multi-Task Learning, TAG
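Since a supertag packs several pieces of syntactic structure into one label, the multi-task idea is to predict those pieces separately. As a purely illustrative sketch (my own toy decomposition, not the paper's scheme), a CCG-style supertag can be split into its atomic categories and slashes:

```python
# Hypothetical illustration: splitting a CCG-style supertag such as
# "(S\NP)/NP" into atomic pieces, each of which could be a separate
# prediction target in a multi-task setup.
import re

def decompose(supertag):
    # Atomic categories are runs of capitals; slashes and parentheses
    # are kept as individual tokens.
    return re.findall(r"[A-Z]+|\\|/|\(|\)", supertag)

print(decompose("(S\\NP)/NP"))  # → ['(', 'S', '\\', 'NP', ')', '/', 'NP']
```

Predicting such components jointly shrinks each output space compared to predicting whole supertags from a label set of thousands.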

Pointer-based Fusion of Bilingual Lexicons into Neural Machine Translation

1 code implementation • 17 Sep 2019 • Jetic Gū, Hassan S. Shavarani, Anoop Sarkar

Neural machine translation (NMT) systems require large amounts of high quality in-domain parallel corpora for training.

Language Modelling, Machine Translation, +1

Sign Clustering and Topic Extraction in Proto-Elamite

1 code implementation • WS 2019 • Logan Born, Kate Kelley, Nishant Kambhatla, Carolyn Chen, Anoop Sarkar

We describe a first attempt at using techniques from computational linguistics to analyze the undeciphered proto-Elamite script.

Decipherment, Topic Models

In-domain Context-aware Token Embeddings Improve Biomedical Named Entity Recognition

no code implementations • WS 2018 • Golnar Sheikhshabbafghi, Inanc Birol, Anoop Sarkar

Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP).

Language Modelling, Named Entity Recognition, +4

Prediction Improves Simultaneous Neural Machine Translation

no code implementations • EMNLP 2018 • Ashkan Alinejad, Maryam Siahbani, Anoop Sarkar

Simultaneous speech translation aims to maintain translation quality while minimizing the delay between reading input and incrementally producing the output.

Machine Translation, reinforcement-learning, +1

Decipherment for Adversarial Offensive Language Detection

no code implementations • WS 2018 • Zhelun Wu, Nishant Kambhatla, Anoop Sarkar

Automated filters are commonly used by online services to stop users from sending age-inappropriate or bullying messages, or from asking others to expose personal information.

Decipherment, Spelling Correction
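Users often evade such filters with character substitutions (e.g. leetspeak), which the paper treats as a decipherment problem. As a hedged toy illustration of the input side of that problem, not the paper's model, a fixed substitution table can undo the simplest obfuscations:

```python
# Toy sketch (hypothetical): reversing common leetspeak character
# substitutions with a hand-written table. The actual paper learns the
# mapping via decipherment rather than hard-coding it.
SUBS = {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}

def normalize(text):
    """Map common character substitutions back to plain letters."""
    return "".join(SUBS.get(ch, ch) for ch in text.lower())

print(normalize("Y0u 4r3 a l0s3r"))  # → "you are a loser"
```

A learned decipherment model generalizes where a fixed table like this fails, e.g. when users invent new substitutions.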

Top-down Tree Structured Decoding with Syntactic Connections for Neural Machine Translation and Parsing

no code implementations • EMNLP 2018 • Jetic Gū, Hassan S. Shavarani, Anoop Sarkar

The addition of syntax-aware decoding in Neural Machine Translation (NMT) systems requires an effective tree-structured neural network, a syntax-aware attention model and a language generation model that is sensitive to sentence structure.

Constituency Parsing, Dependency Parsing, +3

Prefix Lexicalization of Synchronous CFGs using Synchronous TAG

no code implementations • ACL 2018 • Logan Born, Anoop Sarkar

We show that an epsilon-free, chain-free synchronous context-free grammar (SCFG) can be converted into a weakly equivalent synchronous tree-adjoining grammar (STAG) which is prefix lexicalized.

Machine Translation, TAG
