Search Results for author: Joel Ruben Antony Moniz

Found 22 papers, 5 papers with code

ReALM: Reference Resolution As Language Modeling

no code implementations · 29 Mar 2024 · Joel Ruben Antony Moniz, Soundarya Krishnan, Melis Ozyildirim, Prathamesh Saraf, Halim Cagri Ates, Yuan Zhang, Hong Yu, Nidhi Rajshree

Reference resolution is an important problem, one that is essential to understanding and successfully handling context of different kinds.

Language Modelling
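
The titular idea, reference resolution recast as language modeling, roughly amounts to serializing the candidate entities as indexed text and letting an LM pick the referent. A minimal sketch of that framing; the prompt format and field names below are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: serialize on-screen entities so that reference
# resolution becomes a text-to-text task for a language model.

def serialize_entities(entities):
    """Turn candidate entities into an indexed textual list."""
    return "\n".join(f"{i}. {e['type']}: {e['text']}"
                     for i, e in enumerate(entities, start=1))

def build_prompt(utterance, entities):
    return (
        "Candidate entities on screen:\n"
        f"{serialize_entities(entities)}\n"
        f"User request: {utterance}\n"
        "Which entity numbers does the request refer to?"
    )

entities = [
    {"type": "phone_number", "text": "555-0100"},
    {"type": "business", "text": "Main Street Pharmacy"},
]
print(build_prompt("call the bottom one", entities))
# An instruction-tuned LM would then generate e.g. "1",
# resolving the reference.
```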

SynthDST: Synthetic Data is All You Need for Few-Shot Dialog State Tracking

no code implementations · 3 Feb 2024 · Atharva Kulkarni, Bo-Hsiang Tseng, Joel Ruben Antony Moniz, Dhivya Piraviperumal, Hong Yu, Shruti Bhargava

Remarkably, our few-shot learning approach recovers nearly 98% of the performance compared to the few-shot setup using human-annotated training data.

Dialog State Tracking · Few-Shot Learning · +2
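
The few-shot setup can be pictured as assembling in-context examples from synthetic dialogs. A rough sketch with invented example dialogs and an invented state format; the paper's actual pipeline and schema differ:

```python
# Illustrative few-shot DST prompt built from synthetic dialogs;
# the example dialogs and slot schema below are made up for the sketch.

SYNTHETIC_EXAMPLES = [
    ("I need a cheap italian restaurant in the north.",
     "restaurant-price=cheap; restaurant-food=italian; restaurant-area=north"),
    ("Book a taxi to the station at 5pm.",
     "taxi-destination=station; taxi-leave-at=17:00"),
]

def build_dst_prompt(user_turn, examples=SYNTHETIC_EXAMPLES):
    shots = "\n\n".join(f"Utterance: {u}\nState: {s}" for u, s in examples)
    return f"{shots}\n\nUtterance: {user_turn}\nState:"

print(build_dst_prompt("Find me a moderately priced hotel in the centre."))
# An LLM completes the "State:" line; parsing that completion yields the
# predicted dialog state for the turn.
```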

Can Large Language Models Understand Context?

no code implementations · 1 Feb 2024 · Yilun Zhu, Joel Ruben Antony Moniz, Shruti Bhargava, Jiarui Lu, Dhivya Piraviperumal, Site Li, Yuan Zhang, Hong Yu, Bo-Hsiang Tseng

Understanding context is key to understanding human language, an ability which Large Language Models (LLMs) have increasingly been seen to demonstrate to an impressive extent.

In-Context Learning · Quantization

STEER: Semantic Turn Extension-Expansion Recognition for Voice Assistants

no code implementations · 25 Oct 2023 · Leon Liyang Zhang, Jiarui Lu, Joel Ruben Antony Moniz, Aditya Kulkarni, Dhivya Piraviperumal, Tien Dung Tran, Nicholas Tzou, Hong Yu

In the context of a voice assistant system, steering refers to the phenomenon in which a user issues a follow-up command attempting to direct or clarify a previous turn.

Sentence
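
Steering recognition can be pictured as binary classification over a pair of consecutive turns. A toy sketch; the architecture and dimensions below are placeholders, not the paper's model:

```python
import torch
import torch.nn as nn

# Toy steering detector: given embeddings of the previous turn and the
# follow-up turn, predict whether the follow-up steers (extends or
# clarifies) the previous command. Sizes are illustrative only.
class SteeringClassifier(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, prev_turn, follow_up):
        return self.net(torch.cat([prev_turn, follow_up], dim=-1))

model = SteeringClassifier()
prev, follow = torch.randn(4, 128), torch.randn(4, 128)
steer_logit = model(prev, follow)  # > 0 -> follow-up steers the prior turn
```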

On Efficiently Acquiring Annotations for Multilingual Models

1 code implementation · ACL 2022 · Joel Ruben Antony Moniz, Barun Patra, Matthew R. Gormley

When tasked with supporting multiple languages for a given problem, two approaches have arisen: training a model for each language with the annotation budget divided equally among them, and training on a high-resource language followed by zero-shot transfer to the remaining languages.

Active Learning · Dependency Parsing
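
The tags point to active learning; a generic uncertainty-sampling step is one standard way to decide where to spend an annotation budget, sketched below (not necessarily the paper's acquisition strategy):

```python
import torch

# Generic uncertainty sampling: spend the annotation budget on the
# unlabeled examples the current model is least sure about. A standard
# active-learning baseline, shown here only for illustration.
def select_for_annotation(logits: torch.Tensor, budget: int) -> torch.Tensor:
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy.topk(budget).indices  # indices of most uncertain examples

unlabeled_logits = torch.randn(1000, 20)  # model outputs on an unlabeled pool
to_annotate = select_for_annotation(unlabeled_logits, budget=50)
```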

MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants

no code implementations · 13 Oct 2021 · Alkesh Patel, Joel Ruben Antony Moniz, Roman Nguyen, Nick Tzou, Hadas Kotek, Vincent Renkens

In a multimodal assistant, where vision is also one of the input modalities, identifying user intent becomes a challenging task, as visual input can influence the outcome.

Intent Classification · +4

CREAD: Combined Resolution of Ellipses and Anaphora in Dialogues

1 code implementation · NAACL 2021 · Bo-Hsiang Tseng, Shruti Bhargava, Jiarui Lu, Joel Ruben Antony Moniz, Dhivya Piraviperumal, Lin Li, Hong Yu

In this work, we propose a novel joint learning framework of modeling coreference resolution and query rewriting for complex, multi-turn dialogue understanding.

Coreference Resolution · Dialogue Understanding
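
The joint framework can be sketched as a shared encoder with two task heads whose losses are summed. Everything below is a placeholder-scale illustration, not CREAD's actual architecture:

```python
import torch
import torch.nn as nn

# Minimal multi-task setup in the spirit of jointly modeling coreference
# resolution and query rewriting: one shared encoder, two heads, summed
# losses. All sizes and targets here are placeholders.
class JointModel(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.encoder = nn.Embedding(vocab, dim)
        self.coref_head = nn.Linear(dim, 2)        # coreferent or not
        self.rewrite_head = nn.Linear(dim, vocab)  # per-token rewrite dist.

    def forward(self, tokens):
        h = self.encoder(tokens)
        return self.coref_head(h), self.rewrite_head(h)

model = JointModel()
tokens = torch.randint(0, 1000, (2, 16))
coref_logits, rewrite_logits = model(tokens)
coref_loss = nn.functional.cross_entropy(
    coref_logits.flatten(0, 1), torch.randint(0, 2, (32,)))
rewrite_loss = nn.functional.cross_entropy(
    rewrite_logits.flatten(0, 1), torch.randint(0, 1000, (32,)))
(coref_loss + rewrite_loss).backward()  # joint objective
```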

Learning to Relate from Captions and Bounding Boxes

no code implementations · ACL 2019 · Sarthak Garg, Joel Ruben Antony Moniz, Anshu Aviral, Priyatham Bollimpalli

In this work, we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision.

Image Captioning · Relation Classification

LucidDream: Controlled Temporally-Consistent DeepDream on Videos

no code implementations · 27 Nov 2019 · Joel Ruben Antony Moniz, Eunsu Kang, Barnabás Póczos

In this work, we propose a set of techniques to improve controllability and aesthetic appeal when DeepDream, which uses a pre-trained neural network to modify images by hallucinating objects into them, is applied to videos.
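
For context, a single DeepDream-style gradient-ascent step looks like the sketch below; LucidDream's contribution is the control and temporal-consistency machinery layered on top, which is not shown:

```python
import torch
import torchvision.models as models

# One DeepDream-style gradient-ascent step: nudge the image so a chosen
# layer's activations grow, "hallucinating" the features that layer detects.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
model.requires_grad_(False)

def dream_step(img: torch.Tensor, lr: float = 0.01) -> torch.Tensor:
    img = img.clone().requires_grad_(True)
    model(img).norm().backward()  # maximize activation magnitude
    with torch.no_grad():
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

frame = torch.rand(1, 3, 224, 224)  # stand-in for a video frame
for _ in range(10):
    frame = dream_step(frame)       # repeated steps amplify the "dream"
```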

Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces

1 code implementation · ACL 2019 · Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig

We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS), a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.

Bilingual Lexicon Induction · Word Embeddings
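
The semi-supervised relaxation can be illustrated in miniature: fit a linear map on a seed lexicon while penalizing, rather than enforcing, deviation from orthogonality. The paper's full objective also includes an unsupervised matching term and the hubness filtering technique, both omitted here:

```python
import torch

# Miniature of the semi-supervised idea: learn a linear map W from source
# to target embeddings using a small seed lexicon, with a soft orthogonality
# penalty in place of a hard isometry constraint. Toy data throughout.
d = 50
src = torch.randn(200, d)  # source-language embeddings (seed pairs)
tgt = torch.randn(200, d)  # aligned target-language embeddings
W = torch.randn(d, d, requires_grad=True)
opt = torch.optim.Adam([W], lr=1e-2)

for step in range(100):
    opt.zero_grad()
    sup = (src @ W - tgt).pow(2).mean()             # seed-lexicon fit
    ortho = (W.T @ W - torch.eye(d)).pow(2).mean()  # soft isometry penalty
    (sup + 0.1 * ortho).backward()
    opt.step()
```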

Compression and Localization in Reinforcement Learning for ATARI Games

no code implementations · 20 Apr 2019 · Joel Ruben Antony Moniz, Barun Patra, Sarthak Garg

Deep neural networks have become commonplace in the domain of reinforcement learning, but are often expensive in terms of the number of parameters needed.

Atari Games · Model Compression · +3
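
One standard compression recipe in this space is policy distillation: train a small student network to match a large teacher's action distribution. A generic sketch, not necessarily the paper's exact method:

```python
import torch
import torch.nn as nn

# Generic policy distillation: a small student mimics a large teacher's
# action distribution via a KL objective. Sizes are illustrative only.
n_actions, obs_dim = 6, 128
teacher = nn.Sequential(nn.Linear(obs_dim, 512), nn.ReLU(),
                        nn.Linear(512, n_actions))
student = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                        nn.Linear(32, n_actions))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

obs = torch.randn(64, obs_dim)  # batch of observations
with torch.no_grad():
    target = teacher(obs).log_softmax(dim=-1)
loss = nn.functional.kl_div(student(obs).log_softmax(dim=-1),
                            target, log_target=True, reduction="batchmean")
opt.zero_grad()
loss.backward()
opt.step()
```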

BLISS in Non-Isometric Embedding Spaces

no code implementations · 27 Sep 2018 · Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig

We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS), a novel semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.

Bilingual Lexicon Induction · Word Embeddings

Unsupervised Depth Estimation, 3D Face Rotation and Replacement

1 code implementation · NeurIPS 2018 · Joel Ruben Antony Moniz, Christopher Beckham, Simon Rajotte, Sina Honari, Christopher Pal

We present an unsupervised approach for learning to estimate three dimensional (3D) facial structure from a single image while also predicting 3D viewpoint transformations that match a desired pose and facial geometry.

Depth Estimation · Translation

Nested LSTMs

1 code implementation · 31 Jan 2018 · Joel Ruben Antony Moniz, David Krueger

We propose Nested LSTMs (NLSTM), a novel RNN architecture with multiple levels of memory.

Language Modelling
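
The core idea is that the outer LSTM's additive memory update is replaced by an inner LSTM, yielding nested levels of memory. A sketch based on the paper's description; details may differ from the authors' released code:

```python
import torch
import torch.nn as nn

# Sketch of a Nested LSTM cell: the standard additive memory update
# c_t = f * c_{t-1} + i * g is replaced by an inner LSTM that receives
# i * g as input and f * c_{t-1} as its hidden state, so the cell state
# itself carries a second, inner level of memory.
class NestedLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        self.inner = nn.LSTMCell(hidden_size, hidden_size)  # inner memory

    def forward(self, x, state):
        h, c, inner_c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c, inner_c = self.inner(i * g, (f * c, inner_c))  # nested update
        h = o * c.tanh()
        return h, (h, c, inner_c)

cell = NestedLSTMCell(16, 32)
state = tuple(torch.zeros(4, 32) for _ in range(3))
h, state = cell(torch.randn(4, 16), state)
```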
