Search Results for author: Joel Ruben Antony Moniz

Found 15 papers, 5 papers with code

On Efficiently Acquiring Annotations for Multilingual Models

1 code implementation · ACL 2022 · Joel Ruben Antony Moniz, Barun Patra, Matthew R. Gormley

When a system must support multiple languages for a given problem, two approaches have arisen: training a model for each language with the annotation budget divided equally among them, and training on a high-resource language followed by zero-shot transfer to the remaining languages.

Active Learning · Dependency Parsing
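
The two baselines the abstract contrasts are easy to make concrete. Below is a minimal Python sketch of both strategies; `train_model`, `sample_examples`, and the language list are hypothetical stand-ins, and the paper's actual acquisition strategies (e.g. active learning) are more sophisticated than the uniform sampling shown here:

```python
import random

# Hypothetical stand-ins for the paper's training and annotation machinery.
def sample_examples(pool, k):
    """Draw k examples from an unlabeled pool to annotate (uniform here)."""
    return random.sample(pool, min(k, len(pool)))

def train_model(examples):
    return {"n_train": len(examples)}  # placeholder "model"

LANGUAGES = ["en", "de", "hi", "ja"]
TOTAL_BUDGET = 10_000  # total annotation budget, in examples

def per_language_models(pools):
    """Baseline 1: split the budget equally, one model per language."""
    share = TOTAL_BUDGET // len(LANGUAGES)
    return {lang: train_model(sample_examples(pools[lang], share))
            for lang in LANGUAGES}

def zero_shot_transfer(pools, source="en"):
    """Baseline 2: spend the whole budget on a high-resource language,
    then reuse that model zero-shot for every other language."""
    model = train_model(sample_examples(pools[source], TOTAL_BUDGET))
    return {lang: model for lang in LANGUAGES}
```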

MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants

no code implementations · 13 Oct 2021 · Alkesh Patel, Joel Ruben Antony Moniz, Roman Nguyen, Nick Tzou, Hadas Kotek, Vincent Renkens

In a multimodal assistant, where vision is also one of the input modalities, identifying user intent becomes challenging because visual input can influence the outcome.

Intent Classification · Question Answering · +4
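
As a concrete illustration of intent classification with a visual input, here is a minimal late-fusion classifier in PyTorch. The encoders are assumed to exist upstream, and the dimensions and intent count are assumptions, not MMIU's actual model:

```python
import torch
import torch.nn as nn

class MultimodalIntentClassifier(nn.Module):
    """Late fusion: concatenate a text encoding with an image encoding,
    then classify the intent. A generic sketch, not the MMIU architecture."""
    def __init__(self, text_dim=768, image_dim=2048, n_intents=12):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_intents),
        )

    def forward(self, text_feat, image_feat):
        return self.fuse(torch.cat([text_feat, image_feat], dim=-1))

# usage with random features standing in for real text/image encoders
model = MultimodalIntentClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))
```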

CREAD: Combined Resolution of Ellipses and Anaphora in Dialogues

1 code implementation · NAACL 2021 · Bo-Hsiang Tseng, Shruti Bhargava, Jiarui Lu, Joel Ruben Antony Moniz, Dhivya Piraviperumal, Lin Li, Hong Yu

In this work, we propose a novel joint learning framework that models coreference resolution and query rewriting for complex, multi-turn dialogue understanding.

Coreference Resolution · +1
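
A common way to realize such joint learning is a shared encoder with one head per task and a weighted sum of the task losses. The sketch below shows that generic recipe; the module shapes and loss weighting are assumptions, not CREAD's actual architecture or objective:

```python
import torch
import torch.nn as nn

class JointCorefRewrite(nn.Module):
    """Shared encoder with two heads: one scoring coreference links,
    one producing query-rewrite token logits. A generic multi-task sketch."""
    def __init__(self, hidden=256, vocab=30522):
        super().__init__()
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.coref_head = nn.Linear(hidden, 2)        # link / no-link scores
        self.rewrite_head = nn.Linear(hidden, vocab)  # token logits

    def forward(self, x):
        h, _ = self.encoder(x)
        return self.coref_head(h), self.rewrite_head(h)

def joint_loss(coref_loss, rewrite_loss, alpha=0.5):
    # simple weighted sum; the relative weight is a tunable hyperparameter
    return alpha * coref_loss + (1 - alpha) * rewrite_loss

# usage with random encodings standing in for real inputs
model = JointCorefRewrite()
coref_logits, rewrite_logits = model(torch.randn(2, 10, 256))
```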

Learning to Relate from Captions and Bounding Boxes

no code implementations · ACL 2019 · Sarthak Garg, Joel Ruben Antony Moniz, Anshu Aviral, Priyatham Bollimpalli

In this work, we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision.

Image Captioning · Relation Classification
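
One simple way to build weak relation labels from captions and boxes is to match caption-derived (subject, predicate, object) triples to boxes by class name. The sketch below illustrates only that generic idea; the paper's actual alignment is more involved:

```python
def weak_relation_labels(caption_triples, boxes):
    """Turn caption triples like ('man', 'riding', 'horse') into weak labels
    over pairs of boxes whose class names match the caption's entities."""
    labels = []
    for subj, pred, obj in caption_triples:
        subj_boxes = [b for b in boxes if b["label"] == subj]
        obj_boxes = [b for b in boxes if b["label"] == obj]
        for sb in subj_boxes:
            for ob in obj_boxes:
                labels.append((sb["id"], pred, ob["id"]))
    return labels

# usage
boxes = [{"id": 0, "label": "man"}, {"id": 1, "label": "horse"}]
print(weak_relation_labels([("man", "riding", "horse")], boxes))
```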

LucidDream: Controlled Temporally-Consistent DeepDream on Videos

no code implementations · 27 Nov 2019 · Joel Ruben Antony Moniz, Eunsu Kang, Barnabás Póczos

In this work, we propose a set of techniques to improve the controllability and aesthetic appeal when DeepDream, which uses a pre-trained neural network to modify images by hallucinating objects into them, is applied to videos.
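
For reference, the core DeepDream operation is gradient ascent on a layer's activations. The minimal step below shows only that base operation; the paper's temporal-consistency and control techniques are omitted, and `layer_acts` is a hypothetical activation hook:

```python
import torch

def deepdream_step(frame, model, layer_acts, lr=0.01):
    """One DeepDream gradient-ascent step: nudge the frame to amplify
    the activations of a chosen layer."""
    frame = frame.clone().requires_grad_(True)
    acts = layer_acts(model, frame)
    loss = acts.norm()  # maximize activation magnitude
    loss.backward()
    with torch.no_grad():
        # normalized gradient ascent, a common DeepDream trick
        frame += lr * frame.grad / (frame.grad.abs().mean() + 1e-8)
    return frame.detach()

# hypothetical usage with a toy "model" whose output serves as the activations
toy = torch.nn.Conv2d(3, 8, 3, padding=1)
dreamed = deepdream_step(torch.rand(1, 3, 64, 64), toy, lambda m, x: m(x))
```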

Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces

1 code implementation · ACL 2019 · Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig

We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS), a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.

Bilingual Lexicon Induction · Word Embeddings
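
Hubness refers to target words that show up as nearest neighbors of disproportionately many source words, which degrades lexicon induction. A generic way to measure and filter it is sketched below; this is a common recipe, not necessarily BLISS's exact filter:

```python
import numpy as np

def hubness(src_emb, tgt_emb, k=10):
    """Count how often each target word appears among the k nearest
    neighbors of the source words (cosine similarity if rows are unit-norm).
    High counts indicate 'hub' words."""
    sims = src_emb @ tgt_emb.T
    knn = np.argsort(-sims, axis=1)[:, :k]  # top-k target indices per source
    return np.bincount(knn.ravel(), minlength=tgt_emb.shape[0])

def filter_hubs(pairs, counts, max_hubness=50):
    """Drop candidate (src, tgt) pairs whose target is an extreme hub."""
    return [(s, t) for s, t in pairs if counts[t] <= max_hubness]
```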

Compression and Localization in Reinforcement Learning for ATARI Games

no code implementations · 20 Apr 2019 · Joel Ruben Antony Moniz, Barun Patra, Sarthak Garg

Deep neural networks have become commonplace in the domain of reinforcement learning, but are often expensive in terms of the number of parameters needed.

Atari Games · Model Compression · +2
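
One standard recipe for compressing an RL policy network is policy distillation: training a small student to match a large teacher's action distribution. The loss below sketches that recipe as a common baseline, not necessarily the paper's exact method:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, tau=2.0):
    """KL divergence between the teacher's and student's action
    distributions, softened by temperature tau and rescaled by tau^2
    (the standard Hinton-style correction)."""
    t = F.log_softmax(teacher_logits / tau, dim=-1)
    s = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(s, t, log_target=True, reduction="batchmean") * tau * tau
```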

BLISS in Non-Isometric Embedding Spaces

no code implementations · 27 Sep 2018 · Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig

We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS), a novel semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.

Bilingual Lexicon Induction · Word Embeddings

Unsupervised Depth Estimation, 3D Face Rotation and Replacement

1 code implementation · NeurIPS 2018 · Joel Ruben Antony Moniz, Christopher Beckham, Simon Rajotte, Sina Honari, Christopher Pal

We present an unsupervised approach for learning to estimate three dimensional (3D) facial structure from a single image while also predicting 3D viewpoint transformations that match a desired pose and facial geometry.

Depth Estimation · Translation
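
The geometric core of such face rotation is to lift 2D keypoints to 3D using predicted depth, rotate, and project back. The sketch below shows only that bare geometry, with the learned depth estimator and image synthesis left out:

```python
import numpy as np

def rotate_keypoints(xy, depth, yaw_deg):
    """Lift 2D keypoints to 3D with per-point depth, rotate about the
    vertical axis, and project back orthographically."""
    theta = np.deg2rad(yaw_deg)
    rot = np.array([[ np.cos(theta), 0, np.sin(theta)],
                    [ 0,             1, 0            ],
                    [-np.sin(theta), 0, np.cos(theta)]])
    pts3d = np.concatenate([xy, depth[:, None]], axis=1)  # (N, 3)
    return (pts3d @ rot.T)[:, :2]                          # back to 2D

# usage: 3 keypoints, arbitrary depths, 30-degree yaw
print(rotate_keypoints(np.array([[0., 0.], [1., 0.], [0., 1.]]),
                       np.array([0.5, 0.2, 0.9]), 30))
```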

Nested LSTMs

1 code implementation · 31 Jan 2018 · Joel Ruben Antony Moniz, David Krueger

We propose Nested LSTMs (NLSTM), a novel RNN architecture with multiple levels of memory.

Language Modelling
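
In a Nested LSTM, the outer cell's additive state update c_t = f ⊙ c_{t-1} + i ⊙ g is replaced by an inner LSTM cell whose new hidden state becomes the outer cell state. The sketch below follows that reading of the paper, with gate details simplified for brevity:

```python
import torch
import torch.nn as nn

class NestedLSTMCell(nn.Module):
    """Sketch of a Nested LSTM cell: the outer cell-state update is
    computed by an inner LSTM cell instead of the usual additive rule."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        self.inner = nn.LSTMCell(hidden_size, hidden_size)

    def forward(self, x, state):
        h, c, inner_c = state
        z = self.gates(torch.cat([x, h], dim=-1))
        i, f, o, g = z.chunk(4, dim=-1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        # the inner LSTM replaces c_t = f*c + i*g: its hidden input is f*c,
        # its input is i*g, and its new hidden state is the outer cell state
        c_new, inner_c = self.inner(i * g, (f * c, inner_c))
        h_new = o * c_new.tanh()
        return h_new, (h_new, c_new, inner_c)

# usage
cell = NestedLSTMCell(16, 32)
state = (torch.zeros(4, 32), torch.zeros(4, 32), torch.zeros(4, 32))
h, state = cell(torch.randn(4, 16), state)
```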
