Search Results for author: Adria Recasens

Found 9 papers, 5 papers with code

HiP: Hierarchical Perceiver

2 code implementations • 22 Feb 2022 • Joao Carreira, Skanda Koppula, Daniel Zoran, Adria Recasens, Catalin Ionescu, Olivier Henaff, Evan Shelhamer, Relja Arandjelovic, Matt Botvinick, Oriol Vinyals, Karen Simonyan, Andrew Zisserman, Andrew Jaegle

This, however, hinders them from scaling up to the input sizes required to process raw high-resolution images or video.

Context Based Emotion Recognition using EMOTIC Dataset

3 code implementations • 30 Mar 2020 • Ronak Kosti, Jose M. Alvarez, Adria Recasens, Agata Lapedriza

In this paper, we present EMOTIC, a dataset of images of people in a diverse set of natural situations, annotated with their apparent emotion.

Ranked #3 on Emotion Recognition in Context on EMOTIC (using extra training data)

Emotion Recognition in Context

Following Gaze in Video

no code implementations • ICCV 2017 • Adria Recasens, Carl Vondrick, Aditya Khosla, Antonio Torralba

In this paper, we present an approach for following gaze in video by predicting where a person (in the video) is looking even when the object is in a different frame.

Understanding Infographics through Textual and Visual Tag Prediction

1 code implementation • 26 Sep 2017 • Zoya Bylinskii, Sami Alsheikh, Spandan Madan, Adria Recasens, Kimberli Zhong, Hanspeter Pfister, Fredo Durand, Aude Oliva

And second, we use these predicted text tags as a supervisory signal to localize the most diagnostic visual elements within the infographic, i.e., visual hashtags.


Emotion Recognition in Context

no code implementations • CVPR 2017 • Ronak Kosti, Jose M. Alvarez, Adria Recasens, Agata Lapedriza

In this paper, we present the Emotions in Context Database (EMCO), a dataset of images containing people in context in non-controlled environments.

Emotion Recognition in Context

Where are they looking?

no code implementations • NeurIPS 2015 • Adria Recasens, Aditya Khosla, Carl Vondrick, Antonio Torralba

Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at.
