Search Results for author: Vincent Karas

Found 6 papers, 1 paper with code

Computational Charisma -- A Brick by Brick Blueprint for Building Charismatic Artificial Intelligence

no code implementations 31 Dec 2022 Björn W. Schuller, Shahin Amiriparian, Anton Batliner, Alexander Gebhard, Maurice Gerczuk, Vincent Karas, Alexander Kathan, Lennart Seizer, Johanna Löchner

We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.

Dynamic Restrained Uncertainty Weighting Loss for Multitask Learning of Vocal Expression

no code implementations22 Jun 2022 Meishu Song, Zijiang Yang, Andreas Triantafyllopoulos, Xin Jing, Vincent Karas, Xie Jiangjian, Zixing Zhang, Yamamoto Yoshiharu, Bjoern W. Schuller

We propose a novel Dynamic Restrained Uncertainty Weighting Loss to experimentally handle the problem of balancing the contributions of multiple tasks on the ICML ExVo 2022 Challenge.
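The abstract does not spell out the loss formulation, so the sketch below only illustrates the generic idea of uncertainty-based task weighting for multitask learning (in the style of learnable homoscedastic uncertainty); the paper's specific "Dynamic Restrained" constraints, and all parameter names here, are assumptions for illustration.

```python
# Illustrative sketch of uncertainty-weighted multitask loss balancing.
# NOTE: this is NOT the paper's Dynamic Restrained variant; it only shows the
# common learnable log-variance weighting that such losses build on.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task, initialised to zero.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        # task_losses: iterable of scalar losses, one per task.
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            # Weight each task by its precision and regularise the variance
            # so the model cannot trivially down-weight every task.
            total = total + precision * loss + self.log_vars[i]
        return total
```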

Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition

no code implementations24 Mar 2022 Vincent Karas, Mani Kumar Tellamekala, Adria Mallol-Ragolta, Michel Valstar, Björn W. Schuller

To clearly understand the performance differences between recurrent and attention models in audiovisual affect recognition, we present a comprehensive evaluation of fusion models based on LSTM-RNNs, self-attention and cross-modal attention, trained for valence and arousal estimation.

Arousal Estimation · Multimodal Emotion Recognition
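As a rough illustration of the cross-modal attention fusion compared in this paper, here is a minimal sketch in which audio features attend over visual features and vice versa before a frame-level valence/arousal regressor. Feature dimensions, the two-head output, and the overall structure are assumptions, not the evaluated architecture.

```python
# Minimal sketch of cross-modal attention fusion for continuous valence/arousal
# estimation. Dimensions and the simple linear regressor are illustrative only.
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Audio queries attend over visual keys/values, and vice versa.
        self.audio_to_video = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.video_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.regressor = nn.Linear(2 * dim, 2)  # valence and arousal per frame

    def forward(self, audio, video):
        # audio, video: (batch, time, dim) frame-level features.
        a_att, _ = self.audio_to_video(audio, video, video)
        v_att, _ = self.video_to_audio(video, audio, audio)
        fused = torch.cat([a_att, v_att], dim=-1)
        return self.regressor(fused)  # (batch, time, 2)

# Usage with random tensors standing in for real audiovisual embeddings:
model = CrossModalAttentionFusion()
audio, video = torch.randn(8, 100, 256), torch.randn(8, 100, 256)
print(model(audio, video).shape)  # torch.Size([8, 100, 2])
```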

Domain Adaptation with Joint Learning for Generic, Optical Car Part Recognition and Detection Systems (Go-CaRD)

no code implementations15 Jun 2020 Lukas Stappen, Xinchen Du, Vincent Karas, Stefan Müller, Björn W. Schuller

Systems for the automatic recognition and detection of automotive parts are crucial in several emerging research areas in the development of intelligent vehicles.

Benchmarking · Domain Adaptation · +1

A Novel Fusion of Attention and Sequence to Sequence Autoencoders to Predict Sleepiness From Speech

1 code implementation15 May 2020 Shahin Amiriparian, Pawel Winokurow, Vincent Karas, Sandra Ottl, Maurice Gerczuk, Björn W. Schuller

On the development partition of the data, we achieve Spearman's correlation coefficients of .324, .283, and .320 with the targets on the Karolinska Sleepiness Scale by utilising attention and non-attention autoencoders, and the fusion of both autoencoders' representations, respectively.

Machine Translation · Representation Learning
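To make the evaluation concrete, the sketch below shows how fused autoencoder representations could be scored against Karolinska Sleepiness Scale targets with Spearman's correlation. Only the metric and the idea of fusing the two representations come from the abstract; the random features, the SVR regressor, and the train/test split are placeholder assumptions.

```python
# Sketch: fuse two representations by concatenation, regress sleepiness,
# and report Spearman's rho. Data and regressor are illustrative placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR

rng = np.random.default_rng(0)
attn_repr = rng.normal(size=(100, 64))      # attention autoencoder features
noattn_repr = rng.normal(size=(100, 64))    # non-attention autoencoder features
kss_labels = rng.integers(1, 10, size=100)  # Karolinska Sleepiness Scale targets

# Fuse the two representations by simple concatenation.
fused = np.concatenate([attn_repr, noattn_repr], axis=1)

reg = SVR().fit(fused[:80], kss_labels[:80])
pred = reg.predict(fused[80:])
rho, _ = spearmanr(pred, kss_labels[80:])
print(f"Spearman's rho on held-out split: {rho:.3f}")
```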
