Search Results for author: Amir Zadeh

Found 33 papers, 17 papers with code

MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos

5 code implementations • 20 Jun 2016 • Amir Zadeh, Rowan Zellers, Eli Pincus, Louis-Philippe Morency

This paper introduces to the scientific community the first opinion-level annotated corpus for sentiment and subjectivity analysis in online videos, called the Multimodal Opinion-level Sentiment Intensity (MOSI) dataset.

Sentiment Analysis • Subjectivity Analysis

Convolutional Experts Constrained Local Model for Facial Landmark Detection

1 code implementation • 26 Nov 2016 • Amir Zadeh, Tadas Baltrušaitis, Louis-Philippe Morency

In our work, we present a novel local detector -- Convolutional Experts Network (CEN) -- that brings together the advantages of neural architectures and mixtures of experts in an end-to-end framework.

Facial Landmark Detection
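
The CEN abstract above names a mixture-of-experts local detector. As a rough illustration only (the module names, sizes, and gating choice below are my assumptions, not the paper's architecture), the experts can be realized as sigmoid response maps combined through non-negative weights:

```python
# Hedged sketch of a mixture-of-experts detection head: each expert emits a
# sigmoid "alignment probability" map, mixed by non-negative gating weights.
import torch
import torch.nn as nn

class MixtureOfExpertsHead(nn.Module):
    def __init__(self, in_channels: int, num_experts: int = 8):
        super().__init__()
        # Each 1x1 convolution channel acts as one expert over local features.
        self.experts = nn.Conv2d(in_channels, num_experts, kernel_size=1)
        # Unconstrained logits, mapped through softmax to stay non-negative.
        self.gate_logits = nn.Parameter(torch.zeros(num_experts))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        expert_maps = torch.sigmoid(self.experts(feats))   # (B, E, H, W)
        weights = torch.softmax(self.gate_logits, dim=0)   # (E,)
        # Weighted sum of expert response maps -> single detection map.
        return torch.einsum("behw,e->bhw", expert_maps, weights)

head = MixtureOfExpertsHead(in_channels=64)
print(head(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 32, 32])
```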

Combating Human Trafficking with Deep Multimodal Models

no code implementations • 8 May 2017 • Edmund Tong, Amir Zadeh, Cara Jones, Louis-Philippe Morency

Human trafficking is a global epidemic affecting millions of people across the planet.

Tensor Fusion Network for Multimodal Sentiment Analysis

1 code implementation • EMNLP 2017 • Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, Louis-Philippe Morency

Multimodal sentiment analysis is an increasingly popular research area, which extends the conventional language-based definition of sentiment analysis to a multimodal setup where other relevant modalities accompany language.

Multimodal Sentiment Analysis
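
The core idea of the Tensor Fusion Network is well documented: each unimodal embedding is augmented with a constant 1 and fusion is their outer product, so unimodal, bimodal, and trimodal interaction terms all appear explicitly. A minimal sketch (the embedding dimensions are arbitrary):

```python
# Tensor-fusion sketch (not the released TFN code): outer product of the
# 1-augmented language, acoustic, and visual embeddings.
import torch

def tensor_fusion(z_l, z_a, z_v):
    """z_* are 1-D unimodal embeddings; returns the flattened fusion tensor."""
    one = torch.ones(1)
    z_l, z_a, z_v = (torch.cat([z, one]) for z in (z_l, z_a, z_v))
    fused = torch.einsum("i,j,k->ijk", z_l, z_a, z_v)  # (dl+1, da+1, dv+1)
    return fused.flatten()

f = tensor_fusion(torch.randn(32), torch.randn(16), torch.randn(16))
print(f.shape)  # torch.Size([9537]) = 33 * 17 * 17
```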

Memory Fusion Network for Multi-view Sequential Learning

2 code implementations • 3 Feb 2018 • Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, Louis-Philippe Morency

In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time.
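
A highly simplified sketch of the shape this abstract describes: one recurrent network per view, plus a cross-view attention that writes into a gated memory over time. The module names, sizes, and update rules below are assumptions, not the paper's exact equations:

```python
# Toy MFN-style model: per-view LSTM cells, attention over the concatenated
# hidden states, and a gated multi-view memory updated at every timestep.
import torch
import torch.nn as nn

class TinyMFN(nn.Module):
    def __init__(self, view_dims, hidden=32, mem=64):
        super().__init__()
        self.hidden = hidden
        self.lstms = nn.ModuleList([nn.LSTMCell(d, hidden) for d in view_dims])
        total = hidden * len(view_dims)
        self.attend = nn.Sequential(nn.Linear(total, total), nn.Softmax(dim=-1))
        self.update = nn.Linear(total, mem)
        self.gate = nn.Sequential(nn.Linear(total, mem), nn.Sigmoid())

    def forward(self, views):  # views: list of (B, T, d_i) tensors
        B, T, _ = views[0].shape
        hc = [(torch.zeros(B, self.hidden), torch.zeros(B, self.hidden)) for _ in views]
        memory = torch.zeros(B, self.update.out_features)
        for t in range(T):
            hc = [lstm(v[:, t], s) for lstm, v, s in zip(self.lstms, views, hc)]
            h_all = torch.cat([h for h, _ in hc], dim=-1)
            attended = self.attend(h_all) * h_all          # cross-view attention
            g = self.gate(attended)                        # retain vs. overwrite
            memory = g * memory + (1 - g) * torch.tanh(self.update(attended))
        return memory

out = TinyMFN([20, 5, 5])([torch.randn(4, 10, 20), torch.randn(4, 10, 5), torch.randn(4, 10, 5)])
print(out.shape)  # torch.Size([4, 64])
```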

Learning Factorized Multimodal Representations

2 code implementations • ICLR 2019 • Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov

Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction.

Representation Learning
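
A hedged sketch of the factorization the abstract describes (layer sizes and names below are mine): a shared discriminative factor drives the label prediction, while per-modality generative factors, together with the shared one, reconstruct each modality:

```python
# Toy factorized-representation model: shared factor -> classifier;
# [shared, private_i] -> reconstruction of modality i.
import torch
import torch.nn as nn

class TinyMFM(nn.Module):
    def __init__(self, dims, shared=16, private=8):
        super().__init__()
        self.shared_enc = nn.Linear(sum(dims), shared)  # joint multimodal factor
        self.private_encs = nn.ModuleList([nn.Linear(d, private) for d in dims])
        self.decoders = nn.ModuleList([nn.Linear(shared + private, d) for d in dims])
        self.classifier = nn.Linear(shared, 1)          # discriminative head

    def forward(self, xs):
        s = torch.relu(self.shared_enc(torch.cat(xs, dim=-1)))
        recons = [dec(torch.cat([s, torch.relu(enc(x))], dim=-1))
                  for x, enc, dec in zip(xs, self.private_encs, self.decoders)]
        return self.classifier(s), recons  # prediction uses the shared factor only

pred, recons = TinyMFM([20, 5, 5])([torch.randn(4, 20), torch.randn(4, 5), torch.randn(4, 5)])
print(pred.shape, [r.shape for r in recons])
```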

Multimodal Language Analysis with Recurrent Multistage Fusion

1 code implementation • EMNLP 2018 • Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency

In this paper, we propose the Recurrent Multistage Fusion Network (RMFN) which decomposes the fusion problem into multiple stages, each of them focused on a subset of multimodal signals for specialized, effective fusion.

Emotion Recognition • Multimodal Sentiment Analysis
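
A loose sketch of the multistage decomposition (module names and the soft-selection mechanism are my assumptions): each stage highlights a subset of the multimodal signals and folds it into a running fusion representation:

```python
# Toy multistage fusion: a sigmoid "highlight" picks which signals matter at
# this stage; a GRU cell integrates them into the fusion state.
import torch
import torch.nn as nn

class MultistageFusion(nn.Module):
    def __init__(self, in_dim, fused_dim=32, stages=3):
        super().__init__()
        self.stages = stages
        self.highlight = nn.Linear(in_dim + fused_dim, in_dim)  # which signals now
        self.fuse = nn.GRUCell(in_dim, fused_dim)               # integrate them

    def forward(self, signals):  # signals: (B, in_dim) concat of all modalities
        fused = torch.zeros(signals.size(0), self.fuse.hidden_size)
        for _ in range(self.stages):
            attn = torch.sigmoid(self.highlight(torch.cat([signals, fused], -1)))
            fused = self.fuse(attn * signals, fused)
        return fused

print(MultistageFusion(30)(torch.randn(4, 30)).shape)  # torch.Size([4, 32])
```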

Variational Auto-Decoder: A Method for Neural Generative Modeling from Incomplete Data

no code implementations • 3 Mar 2019 • Amir Zadeh, Yao-Chong Lim, Paul Pu Liang, Louis-Philippe Morency

We study a specific implementation of the Auto-Encoding Variational Bayes (AEVB) algorithm, named in this paper the Variational Auto-Decoder (VAD).
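
The defining move is dropping the encoder. A hedged sketch of that idea (toy data and dimensions; details beyond the general recipe are assumed): keep free per-example variational parameters (mu, logvar) and optimize them jointly with the decoder by gradient descent on the usual ELBO:

```python
# Encoderless variational training sketch: latents are free parameters.
import torch
import torch.nn as nn

x = torch.randn(128, 10)                       # toy data
decoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 10))
mu = nn.Parameter(torch.zeros(128, 2))         # one latent per example
logvar = nn.Parameter(torch.zeros(128, 2))
opt = torch.optim.Adam([*decoder.parameters(), mu, logvar], lr=1e-2)

for step in range(200):
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    recon = ((decoder(z) - x) ** 2).sum(dim=1).mean()      # Gaussian likelihood
    kl = 0.5 * (mu**2 + logvar.exp() - 1 - logvar).sum(dim=1).mean()
    loss = recon + kl
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

For the incomplete-data setting the paper targets, the natural adaptation is to sum the reconstruction term only over observed entries of each example.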

Factorized Multimodal Transformer for Multimodal Sequential Learning

no code implementations • 22 Nov 2019 • Amir Zadeh, Chengfeng Mao, Kelly Shi, Yiwei Zhang, Paul Pu Liang, Soujanya Poria, Louis-Philippe Morency

As machine learning leaps towards better generalization to the real world, multimodal sequential learning becomes a fundamental research area.

Pseudo-Encoded Stochastic Variational Inference

no code implementations • 19 Dec 2019 • Amir Zadeh, Simon Hessner, Yao-Chong Lim, Louis-Philippe Morency

Posterior inference in directed graphical models is commonly done using a probabilistic encoder (a.k.a. inference model) conditioned on the input.

Variational Inference
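
For contrast with the encoderless VAD sketch above, here is the conventional setup the abstract refers to: an amortized inference model that maps each input to the parameters of its approximate posterior q(z|x). Sizes are arbitrary:

```python
# Minimal amortized (probabilistic) encoder: x -> (mu, logvar) -> sampled z.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, x_dim=10, z_dim=2):
        super().__init__()
        self.net = nn.Linear(x_dim, 2 * z_dim)   # outputs [mu, logvar]

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

print(Encoder()(torch.randn(4, 10)).shape)  # torch.Size([4, 2])
```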

Improving Aspect-Level Sentiment Analysis with Aspect Extraction

no code implementations • 3 May 2020 • Navonil Majumder, Rishabh Bhardwaj, Soujanya Poria, Amir Zadeh, Alexander Gelbukh, Amir Hussain, Louis-Philippe Morency

Aspect-based sentiment analysis (ABSA), a popular research area in NLP, has two distinct parts -- aspect extraction (AE) and labeling the aspects with sentiment polarity (ALSA).

Aspect-Based Sentiment Analysis • Aspect Extraction +1
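
To make the AE / ALSA split concrete, here is an illustration with hypothetical interfaces only (both functions are rule-based stand-ins for the learned models a real ABSA system would use):

```python
# Two-stage ABSA illustration: extract aspect terms, then label each one.
from dataclasses import dataclass

@dataclass
class AspectSentiment:
    aspect: str
    polarity: str

def extract_aspects(sentence: str) -> list[str]:
    # Stand-in for a learned AE model; a real system tags spans in context.
    lexicon = {"battery", "screen", "price"}
    return [w.strip(".,") for w in sentence.lower().split() if w.strip(".,") in lexicon]

def label_aspects(sentence: str, aspects: list[str]) -> list[AspectSentiment]:
    # Stand-in for a learned ALSA model conditioned on sentence and aspect.
    positive = {"great", "excellent"}
    pol = "positive" if set(sentence.lower().split()) & positive else "negative"
    return [AspectSentiment(a, pol) for a in aspects]

s = "The battery is great."
print(label_aspects(s, extract_aspects(s)))
# [AspectSentiment(aspect='battery', polarity='positive')]
```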

What Gives the Answer Away? Question Answering Bias Analysis on Video QA Datasets

no code implementations • 7 Jul 2020 • Jianing Yang, Yuying Zhu, Yongxin Wang, Ruitao Yi, Amir Zadeh, Louis-Philippe Morency

In this paper, we analyze QA biases in popular video question answering datasets and discover that pretrained language models can answer 37-48% of questions correctly without using any multimodal context information, far exceeding the 20% random-guess baseline for 5-choose-1 multiple-choice questions.

Multiple-choice • Question Answering +1
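
A sketch of the probe described above, with placeholder data and scorer (not the paper's evaluation code): score each answer option from the question alone and compare accuracy with the 1/K random baseline; any gap reflects language bias rather than video understanding:

```python
# Question-only bias probe: the scoring function never sees the video.
import random

def question_only_accuracy(dataset, score_fn, k=5):
    """dataset: (question, options, answer_idx) triples; score_fn ignores video."""
    correct = sum(
        max(range(k), key=lambda i: score_fn(q, opts[i])) == ans
        for q, opts, ans in dataset
    )
    return correct / len(dataset)

# With a random scorer, accuracy hovers near the 1/5 = 20% baseline.
data = [("q", [f"opt{i}" for i in range(5)], random.randrange(5)) for _ in range(1000)]
print(question_only_accuracy(data, lambda q, o: random.random()))
```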

Multimodal Research in Vision and Language: A Review of Current and Emerging Trends

no code implementations • 19 Oct 2020 • Shagun Uppal, Sarthak Bhagat, Devamanyu Hazarika, Navonil Majumdar, Soujanya Poria, Roger Zimmermann, Amir Zadeh

Deep learning and its applications have cascaded impactful research and development across the diverse range of modalities present in real-world data.

MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences

1 code implementation • NAACL 2021 • Jianing Yang, Yongxin Wang, Ruitao Yi, Yuying Zhu, Azaan Rehman, Amir Zadeh, Soujanya Poria, Louis-Philippe Morency

Human communication is multimodal in nature; it is through multiple modalities, such as language, voice, and facial expressions, that opinions and emotions are expressed.

Emotion Recognition • Multimodal Sentiment Analysis

StarNet: Gradient-free Training of Deep Generative Models using Determined System of Linear Equations

no code implementations • 3 Jan 2021 • Amir Zadeh, Santiago Benoit, Louis-Philippe Morency

In this paper we present an approach for training deep generative models solely based on solving determined systems of linear equations.
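
As a toy illustration of the ingredient named in the title, and nothing more (far simpler than the paper's method), a linear layer's weights can be set exactly by solving a determined square system instead of by gradient descent:

```python
# Solve W @ X = Y for the weights W directly: no gradients involved.
import torch

d = 8
X = torch.randn(d, d)                 # d inputs of dimension d: a determined system
Y = torch.randn(d, d)                 # desired layer outputs for those inputs
W = torch.linalg.solve(X.T, Y.T).T    # W @ X = Y  <=>  X^T @ W^T = Y^T
print(torch.allclose(W @ X, Y, atol=1e-4))  # typically True for well-conditioned X
```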

Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis

2 code implementations • 28 Jul 2021 • Wei Han, Hui Chen, Alexander Gelbukh, Amir Zadeh, Louis-Philippe Morency, Soujanya Poria

Multimodal sentiment analysis aims to extract and integrate semantic information collected from multiple modalities to recognize the expressed emotions and sentiment in multimodal data.

Multimodal Deep Learning • Multimodal Sentiment Analysis
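
A hedged sketch of the bi-bimodal layout suggested by the title (a guess at the high-level wiring only, not the paper's modules): two pairwise fusion blocks share the text features, one fusing text with audio and one fusing text with video, and their outputs are merged:

```python
# Two text-anchored bimodal fusion blocks, concatenated at the end.
import torch
import torch.nn as nn

class PairFusion(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(2 * d, d)

    def forward(self, a, b):
        return torch.relu(self.proj(torch.cat([a, b], dim=-1)))

d = 32
fuse_ta, fuse_tv = PairFusion(d), PairFusion(d)
text, audio, video = (torch.randn(4, d) for _ in range(3))
out = torch.cat([fuse_ta(text, audio), fuse_tv(text, video)], dim=-1)
print(out.shape)  # torch.Size([4, 64])
```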

M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations

1 code implementation • 3 Aug 2021 • Dushyant Singh Chauhan, Gopendra Vikram Singh, Navonil Majumder, Amir Zadeh, Asif Ekbal, Pushpak Bhattacharyya, Louis-Philippe Morency, Soujanya Poria

We propose several strong multimodal baselines and show the importance of contextual and multimodal information for humor recognition in conversations.

Dialogue Understanding

Relay Variational Inference: A Method for Accelerated Encoderless VI

no code implementations • 26 Oct 2021 • Amir Zadeh, Santiago Benoit, Louis-Philippe Morency

We find RVI to be a unique tool, often superior in both performance and convergence speed to previously proposed encoderless as well as amortized VI models (e.g., VAE).

Imputation • Variational Inference

Face-to-Face Contrastive Learning for Social Intelligence Question-Answering

no code implementations • 29 Jul 2022 • Alex Wilf, Martin Q. Ma, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency

Creating artificial social intelligence - algorithms that can understand the nuances of multi-person interactions - is an exciting and emerging challenge in processing facial expressions and gestures from multimodal videos.

Contrastive Learning • Question Answering

Foundations and Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions

no code implementations • 7 Sep 2022 • Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency

With the recent interest in video understanding, embodied autonomous agents, text-to-image generation, and multisensor fusion in application domains such as healthcare and robotics, multimodal machine learning has brought unique computational and theoretical challenges to the machine learning community given the heterogeneity of data sources and the interconnections often found between modalities.

Text-to-Image Generation • Video Understanding

Evaluating Parameter-Efficient Transfer Learning Approaches on SURE Benchmark for Speech Understanding

1 code implementation • 2 Mar 2023 • Yingting Li, Ambuj Mehrish, Shuai Zhao, Rishabh Bhardwaj, Amir Zadeh, Navonil Majumder, Rada Mihalcea, Soujanya Poria

To mitigate this issue, parameter-efficient transfer learning algorithms, such as adapters and prefix tuning, have been proposed as a way to introduce a few trainable parameters that can be plugged into large pre-trained language models such as BERT and HuBERT.

Speech Synthesis • Transfer Learning
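
The adapter the abstract mentions is, in its standard bottleneck form, a small residual block: down-project, nonlinearity, up-project. A minimal sketch (exact placement inside BERT or HuBERT layers varies by method; only these few parameters are trained while the backbone stays frozen):

```python
# Standard bottleneck adapter: the residual keeps the frozen backbone's
# behavior as the starting point, so training only nudges it.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

print(Adapter()(torch.randn(2, 10, 768)).shape)  # torch.Size([2, 10, 768])
```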

Tutorial on Multimodal Machine Learning

no code implementations • NAACL (ACL) 2022 • Louis-Philippe Morency, Paul Pu Liang, Amir Zadeh

Multimodal machine learning involves integrating and modeling information from multiple heterogeneous sources of data.

BIG-bench Machine Learning
