Search Results for author: Brendan Jou

Found 14 papers, 7 papers with code

LanSER: Language-Model Supported Speech Emotion Recognition

no code implementations 7 Sep 2023 Taesik Gong, Josh Belanich, Krishna Somandepalli, Arsha Nagrani, Brian Eoff, Brendan Jou

Speech emotion recognition (SER) models typically rely on costly human-labeled data for training, which makes it difficult to scale these methods to large speech datasets and nuanced emotion taxonomies.

Automatic Speech Recognition, Language Modelling, +5

Multitask vocal burst modeling with ResNets and pre-trained paralinguistic Conformers

no code implementations 24 Jun 2022 Josh Belanich, Krishna Somandepalli, Brian Eoff, Brendan Jou

This technical report presents the modeling approaches used in our submission to the ICML Expressive Vocalizations Workshop & Competition multitask track (ExVo-MultiTask).

Event Detection, Image Classification, +2

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals

1 code implementation ICLR 2022 Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard

Explaining deep learning model inferences is a promising avenue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and beyond, as argued by many scholars.

Counterfactual, Fairness, +2

Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias

1 code implementation 20 Sep 2019 Asma Ghandeharioun, Brian Eoff, Brendan Jou, Rosalind W. Picard

Supporting model interpretability for complex phenomena where annotators can legitimately disagree, such as emotion recognition, is a challenging machine learning task.

Emotion Recognition

Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks

3 code implementations ICLR 2018 Victor Campos, Brendan Jou, Xavier Giro-i-Nieto, Jordi Torres, Shih-Fu Chang

We introduce the Skip RNN model, which extends existing RNN models by learning to skip state updates, thereby shortening the effective size of the computational graph.
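
The skipping mechanism lends itself to a short illustration. The sketch below is a minimal, hypothetical PyTorch approximation of a Skip RNN-style cell, not the authors' released implementation; the gate parameterization, accumulator update, and straight-through rounding details are assumptions.

```python
import torch
import torch.nn as nn


class SkipGRUCellSketch(nn.Module):
    """Minimal sketch of a Skip RNN-style cell: a learned scalar gate decides,
    per step, whether to run the wrapped GRU update or copy the previous
    hidden state, effectively skipping that step's computation."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.delta_u = nn.Linear(hidden_size, 1)  # increment for the update gate

    def forward(self, x_t, h_prev, u_prev):
        # Binarize the accumulated update probability with a straight-through
        # estimator so gradients still flow through the rounding.
        u_t = u_prev + (torch.round(u_prev) - u_prev).detach()

        h_new = self.cell(x_t, h_prev)
        h_t = u_t * h_new + (1.0 - u_t) * h_prev  # update or copy the state

        delta = torch.sigmoid(self.delta_u(h_t))
        # Reset the accumulator after an update; otherwise keep accumulating.
        u_next = u_t * delta + (1.0 - u_t) * torch.clamp(u_prev + delta, max=1.0)
        return h_t, u_next
```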

More cat than cute? Interpretable Prediction of Adjective-Noun Pairs

1 code implementation 21 Aug 2017 Delia Fernandez, Alejandro Woodward, Victor Campos, Xavier Giro-i-Nieto, Brendan Jou, Shih-Fu Chang

This work aims to disentangle the contributions of the 'adjectives' and 'nouns' in the visual prediction of ANPs.
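
Since the entry above centers on separating adjective and noun contributions, here is a hypothetical sketch of that idea as a shared visual backbone with two classification heads; the backbone choice (torchvision ResNet-50) and head layout are illustrative assumptions, not the paper's architecture.

```python
import torch.nn as nn
from torchvision import models


class ANPDisentangledHeadsSketch(nn.Module):
    """Hypothetical two-head model: shared visual features feed separate
    adjective and noun classifiers, so each factor's contribution to an
    adjective-noun pair (ANP) prediction can be inspected independently."""

    def __init__(self, num_adjectives: int, num_nouns: int):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # keep pooled features only
        self.backbone = backbone
        self.adj_head = nn.Linear(feat_dim, num_adjectives)
        self.noun_head = nn.Linear(feat_dim, num_nouns)

    def forward(self, images):
        feats = self.backbone(images)
        return self.adj_head(feats), self.noun_head(feats)
```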

Multilingual Visual Sentiment Concept Matching

no code implementations 7 Jun 2016 Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang

The impact of culture on visual emotion perception has recently captured the attention of the multimedia research community.

16k, Clustering, +2

Going Deeper for Multilingual Visual Sentiment Detection

no code implementations 30 May 2016 Brendan Jou, Shih-Fu Chang

In the original MVSO release, adjective-noun pair (ANP) detectors were trained for the six languages using an AlexNet-styled architecture by fine-tuning from DeepSentiBank.

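As a rough illustration of the fine-tuning setup described above (an AlexNet-style network adapted from a pretrained sentiment model to a language's ANP vocabulary), the sketch below uses torchvision's AlexNet with ImageNet weights as a stand-in for the DeepSentiBank initialization; the weight source and layer-freezing policy are assumptions, not the paper's recipe.

```python
import torch.nn as nn
from torchvision import models


def make_anp_detector_sketch(num_anps: int, freeze_features: bool = True) -> nn.Module:
    """Illustrative sketch of fine-tuning an AlexNet-style network as a
    per-language ANP detector: load pretrained weights, optionally freeze the
    convolutional features, and swap the final classifier for the language's
    ANP vocabulary."""
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False  # fine-tune only the classifier head
    # Replace the 1000-way ImageNet layer with an ANP classification layer.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_anps)
    return model
```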

From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction

2 code implementations 12 Apr 2016 Victor Campos, Brendan Jou, Xavier Giro-i-Nieto

Visual multimedia have become an inseparable part of our digital social lives, and they often capture moments tied to deep affections.

Sentiment Analysis Visual Sentiment Prediction

Deep Cross Residual Learning for Multitask Visual Recognition

1 code implementation 5 Apr 2016 Brendan Jou, Shih-Fu Chang

We propose a novel extension of residual learning for deep networks that enables intuitive learning across multiple related tasks using cross-connections called cross-residuals.

Object Recognition
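
The cross-connection idea in the entry above can be sketched roughly as per-task residual branches that also exchange down-weighted "cross-residuals" with one another. The block below is a hypothetical PyTorch illustration; the branch layout, convolution sizes, and cross-connection weighting are assumptions, not the paper's exact formulation.

```python
import torch.nn as nn


class CrossResidualBlockSketch(nn.Module):
    """Hypothetical cross-residual block: each task branch computes its own
    residual over a shared input, and every branch additionally receives a
    down-weighted sum of the other branches' residuals (the cross-residuals),
    letting related tasks reuse each other's features."""

    def __init__(self, channels: int, num_tasks: int, cross_weight: float = 0.1):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )
            for _ in range(num_tasks)
        )
        self.cross_weight = cross_weight

    def forward(self, x):
        # One residual per task branch, all computed from the shared input.
        residuals = [branch(x) for branch in self.branches]
        outputs = []
        for i, r_i in enumerate(residuals):
            # Identity skip plus the branch's own residual plus down-weighted
            # cross-residuals from the other task branches.
            cross = sum(r_j for j, r_j in enumerate(residuals) if j != i)
            outputs.append(x + r_i + self.cross_weight * cross)
        return outputs  # one feature map per task
```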

Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology

no code implementations 16 Aug 2015 Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara, Shih-Fu Chang

Our work expressly focuses on the uniqueness of culture and language in relation to human affect, specifically sentiment and emotion semantics, and how they manifest in social multimedia.

Cultural Vocal Bursts Intensity Prediction

Robust Object Co-detection

no code implementations CVPR 2013 Xin Guo, Dong Liu, Brendan Jou, Mojun Zhu, Anni Cai, Shih-Fu Chang

Object co-detection aims to simultaneously detect objects of the same category from a pool of related images by exploiting the consistent visual patterns present in the candidate objects.

Clustering, Object, +2
