Search Results for author: Mohammad Soleymani

Found 12 papers, 5 papers with code

Multimodal Phased Transformer for Sentiment Analysis

1 code implementation • EMNLP 2021 • Junyan Cheng, Iordanis Fostiropoulos, Barry Boehm, Mohammad Soleymani

We evaluate our model on three sentiment analysis datasets and achieve performance comparable or superior to existing methods, with a 90% reduction in the number of parameters.

Sentiment Analysis

Analysis of Behavior Classification in Motivational Interviewing

no code implementations • NAACL (CLPsych) 2021 • Leili Tavabi, Trang Tran, Kalin Stefanov, Brian Borsari, Joshua Woolley, Stefan Scherer, Mohammad Soleymani

Analysis of client and therapist behavior in counseling sessions can provide helpful insights for assessing the quality of the session and, consequently, the client's behavioral outcome.

Classification

Towards Privacy-Preserving Speech Representation for Client-Side Data Sharing

1 code implementation • 26 Mar 2022 • Minh Tran, Mohammad Soleymani

Privacy and security are major concerns when sharing and collecting speech data for cloud services such as automatic speech recognition (ASR) and speech emotion recognition (SER).

Automatic Speech Recognition • Intent Classification • +5

A Pre-trained Audio-Visual Transformer for Emotion Recognition

no code implementations • 23 Jan 2022 • Minh Tran, Mohammad Soleymani

In this paper, we introduce an audio-visual Transformer for human behavior understanding, pretrained on more than 500k utterances from nearly 4000 celebrities in the VoxCeleb2 dataset.

Emotion Classification • Emotion Recognition
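
For illustration only, a minimal sketch of one plausible shape of such a model: audio and visual frame features projected to a shared width, tagged with modality embeddings, and encoded jointly by a standard Transformer. This is an assumption about the general architecture family; the paper's actual feature extractors, pretraining objective, and layer sizes are not reproduced here, and the class name AVTransformerSketch is hypothetical.

```python
import torch
import torch.nn as nn

class AVTransformerSketch(nn.Module):
    """Illustrative audio-visual encoder: both modalities are projected
    to a shared dimension and encoded by one joint Transformer."""

    def __init__(self, a_dim=40, v_dim=512, dim=256, n_classes=8):
        super().__init__()
        self.a_proj = nn.Linear(a_dim, dim)
        self.v_proj = nn.Linear(v_dim, dim)
        self.modality_emb = nn.Embedding(2, dim)  # 0 = audio, 1 = visual
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)     # e.g., emotion classes

    def forward(self, audio, video):
        # audio: (batch, T_a, a_dim); video: (batch, T_v, v_dim)
        a = self.a_proj(audio) + self.modality_emb.weight[0]
        v = self.v_proj(video) + self.modality_emb.weight[1]
        tokens = torch.cat([a, v], dim=1)          # one joint token sequence
        pooled = self.encoder(tokens).mean(dim=1)  # mean-pool over time
        return self.head(pooled)                   # (batch, n_classes) logits

model = AVTransformerSketch()
print(model(torch.randn(2, 100, 40), torch.randn(2, 25, 512)).shape)  # (2, 8)
```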

Speaker Turn Modeling for Dialogue Act Classification

1 code implementation • Findings (EMNLP) 2021 • Zihao He, Leili Tavabi, Kristina Lerman, Mohammad Soleymani

Dialogue Act (DA) classification is the task of classifying utterances with respect to the function they serve in a dialogue.

Classification • Dialogue Act Classification
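
To make the task concrete, here is a toy sketch of injecting speaker-turn information into an utterance classifier: each utterance embedding is shifted by a learned embedding of its speaker before classification. The label set, dimensions, and the TurnAwareDAClassifier name are all hypothetical; the paper's actual model is more involved than this additive speaker embedding.

```python
import torch
import torch.nn as nn

DA_LABELS = ["statement", "question", "backchannel", "agreement"]  # toy label set

class TurnAwareDAClassifier(nn.Module):
    """Toy DA classifier conditioned on who is speaking."""

    def __init__(self, utt_dim: int = 768, n_speakers: int = 2):
        super().__init__()
        # One learned vector per speaker role in the conversation.
        self.speaker_emb = nn.Embedding(n_speakers, utt_dim)
        self.classifier = nn.Linear(utt_dim, len(DA_LABELS))

    def forward(self, utt_vecs, speaker_ids):
        # utt_vecs: (n_utts, utt_dim) sentence embeddings of a dialogue
        # speaker_ids: (n_utts,) integer id of each utterance's speaker
        x = utt_vecs + self.speaker_emb(speaker_ids)
        return self.classifier(x)  # (n_utts, len(DA_LABELS)) logits

model = TurnAwareDAClassifier()
logits = model(torch.randn(5, 768), torch.tensor([0, 1, 0, 0, 1]))
print(logits.argmax(dim=-1))  # predicted label index per utterance
```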

Improper Gaussian Signaling for the $K$-user MIMO Interference Channels with Hardware Impairments

no code implementations • 28 Jan 2020 • Mohammad Soleymani, Ignacio Santamaria, Peter J. Schreier

This paper investigates the performance of improper Gaussian signaling (IGS) for the $K$-user multiple-input, multiple-output (MIMO) interference channel (IC) with hardware impairments (HWI).
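
For context, the standard definitions behind the terminology (not specific to this paper): a zero-mean complex random vector $\mathbf{x}$ is characterized by its covariance and complementary covariance, and it is called proper when the latter vanishes. IGS deliberately uses improper signals, which adds design degrees of freedom for managing interference.

```latex
% Covariance and complementary covariance of a zero-mean complex vector x
\mathbf{C} = \mathbb{E}\{\mathbf{x}\mathbf{x}^{H}\}, \qquad
\widetilde{\mathbf{C}} = \mathbb{E}\{\mathbf{x}\mathbf{x}^{T}\}
% Proper Gaussian signaling (PGS):   \widetilde{\mathbf{C}} = \mathbf{0}
% Improper Gaussian signaling (IGS): \widetilde{\mathbf{C}} \neq \mathbf{0}
```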

Affective Computing for Large-Scale Heterogeneous Multimedia Data: A Survey

no code implementations • 3 Oct 2019 • Sicheng Zhao, Shangfei Wang, Mohammad Soleymani, Dhiraj Joshi, Qiang Ji

Affective computing (AC) of such data can help us understand human behaviors and enable a wide range of applications.

AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition

no code implementations • 10 Jul 2019 • Fabien Ringeval, Björn Schuller, Michel Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Messner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, Maja Pantic

The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) "State-of-Mind, Detecting Depression with AI, and Cross-cultural Affect Recognition" is the ninth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions.

Emotion Recognition

Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval

1 code implementation • CVPR 2019 • Yale Song, Mohammad Soleymani

In this work, we introduce Polysemous Instance Embedding Networks (PIE-Nets) that compute multiple and diverse representations of an instance by combining global context with locally-guided features via multi-head self-attention and residual learning.

Cross-Modal Retrieval • Multiple Instance Learning • +1
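
A minimal sketch of the multi-embedding idea described above: K attention maps each pool the local features differently, and each pooled vector is fused with the global context through a residual connection, yielding K diverse embeddings per instance. This uses simple attention pooling in place of the paper's multi-head self-attention, and omits details such as the published PIE-Net's diversity regularizer and two-branch video/text design; all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PIESketch(nn.Module):
    """Minimal polysemous-embedding sketch: K attention maps over local
    features, each fused with the global feature through a residual."""

    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.attn = nn.Linear(dim, k)   # one attention scorer per "sense"
        self.proj = nn.Linear(dim, dim)

    def forward(self, local_feats, global_feat):
        # local_feats: (batch, n_local, dim); global_feat: (batch, dim)
        weights = F.softmax(self.attn(local_feats), dim=1)  # attend over locals
        # K differently weighted sums of the local features: (batch, k, dim)
        attended = torch.einsum('bnk,bnd->bkd', weights, local_feats)
        # Residual fusion with the global context feature.
        fused = global_feat.unsqueeze(1) + self.proj(attended)
        return F.normalize(fused, dim=-1)                   # K unit embeddings

# Usage: 36 local regions of dim 512 -> 4 diverse embeddings per instance.
model = PIESketch(dim=512, k=4)
print(model(torch.randn(2, 36, 512), torch.randn(2, 512)).shape)  # (2, 4, 512)
```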

Cross-Modal Retrieval with Implicit Concept Association

no code implementations • 12 Apr 2018 • Yale Song, Mohammad Soleymani

Traditional cross-modal retrieval assumes an explicit association of concepts across modalities, where there is no ambiguity in how the concepts are linked to each other; e.g., when we run an image search with the query "dogs", we expect to see dog images.

Cross-Modal Retrieval • Image Retrieval • +1
