Search Results for author: Sarthak Garg

Found 6 papers, 2 papers with code

Efficient Inference For Neural Machine Translation

no code implementations 6 Oct 2020 Yi-Te Hsu, Sarthak Garg, Yi-Hsiu Liao, Ilya Chatsviorkin

Large Transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field.

Machine Translation

Learning to Relate from Captions and Bounding Boxes

no code implementations ACL 2019 Sarthak Garg, Joel Ruben Antony Moniz, Anshu Aviral, Priyatham Bollimpalli

In this work, we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision.

Image Captioning · Relation Classification

Empirical Evaluation of Active Learning Techniques for Neural MT

no code implementations WS 2019 Xiangkai Zeng, Sarthak Garg, Rajen Chatterjee, Udhyakumar Nallasamy, Matthias Paulik

Finally, we propose a neural extension for an AL sampling method used in the context of phrase-based MT: Round Trip Translation Likelihood (RTTL).

Active Learning · Machine Translation +1
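The RTTL idea named above can be sketched roughly as follows: translate a candidate sentence forward, then score how well a backward model reconstructs the original, and send the worst-explained sentences for annotation. The `translate` and `log_prob` functions here are toy stand-ins, not the paper's actual NMT models, and `select_for_annotation` is a hypothetical helper name.

```python
# Hypothetical stand-ins for trained forward (src->tgt) and backward
# (tgt->src) MT models; a real RTTL setup scores with NMT log-probabilities.
def translate(sentence: str) -> str:
    # toy "translation": reverse the word order
    return " ".join(reversed(sentence.split()))

def log_prob(source: str, target: str) -> float:
    # toy log-likelihood of target given source, penalizing length mismatch;
    # a real model would compute log p(target | source)
    return -abs(len(source.split()) - len(target.split())) - 0.1 * len(target.split())

def rttl_score(src: str) -> float:
    """Round Trip Translation Likelihood: translate src forward, then
    score how likely the backward model is to reconstruct src."""
    hypothesis = translate(src)
    return log_prob(hypothesis, src)

def select_for_annotation(pool: list[str], k: int) -> list[str]:
    # Active learning step: pick the k sentences the round trip explains worst
    return sorted(pool, key=rttl_score)[:k]

pool = ["a b c", "a b c d e f", "x"]
picked = select_for_annotation(pool, 1)
```

With these toy scoring functions, the longest sentence gets the lowest round-trip likelihood and is selected; with real models, selection would instead reflect genuine translation uncertainty.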

Jointly Learning to Align and Translate with Transformer Models

1 code implementation IJCNLP 2019 Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik

The state of the art in machine translation (MT) is governed by neural approaches, which typically provide superior translation accuracy over statistical approaches.

Machine Translation · Word Alignment

Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces

1 code implementation ACL 2019 Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig

We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS), a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, together with a novel hubness filtering technique.

Bilingual Lexicon Induction · Word Embeddings

Compression and Localization in Reinforcement Learning for ATARI Games

no code implementations 20 Apr 2019 Joel Ruben Antony Moniz, Barun Patra, Sarthak Garg

Deep neural networks have become commonplace in the domain of reinforcement learning, but are often expensive in terms of the number of parameters needed.

Atari Games · Model Compression +1
