Search Results for author: Takahisa Yamamoto

Found 4 papers, 1 paper with code

Action Units Recognition Using Improved Pairwise Deep Architecture

no code implementations7 Jul 2021 Junya Saito, Xiaoyu Mi, Akiyoshi Uchida, Sachihiro Youoku, Takahisa Yamamoto, Kentaro Murase, Osafumi Nakayama

Facial Action Units (AUs) represent a set of facial muscular activities and various combinations of AUs can represent a wide range of emotions.

Marketing

Multi-modal Affect Analysis using standardized data within subjects in the Wild

no code implementations7 Jul 2021 Sachihiro Youoku, Takahisa Yamamoto, Junya Saito, Akiyoshi Uchida, Xiaoyu Mi, Ziqiang Shi, Liu Liu, Zhongling Liu, Osafumi Nakayama, Kentaro Murase

Therefore, after learning the common features for each frame, we combined them with the standardized features for each video and constructed a facial expression estimation model and a valence-arousal model on the resulting time-series data.
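The within-subject standardization this abstract describes can be sketched as follows (a minimal illustration, not the authors' code; the helper name and toy data are assumptions):

```python
import numpy as np

def standardize_within_subject(features: np.ndarray) -> np.ndarray:
    """Z-score each feature dimension using statistics from a single
    subject's video, removing per-subject offsets and scale."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True) + 1e-8  # guard against zero variance
    return (features - mean) / std

# Toy per-frame common features for one subject's video
# (rows = frames, columns = feature dimensions).
common = np.array([[0.2, 1.0],
                   [0.4, 1.2],
                   [0.6, 1.4]])

standardized = standardize_within_subject(common)

# Concatenate common and standardized features per frame, giving a
# downstream time-series model both absolute and subject-relative views.
combined = np.concatenate([common, standardized], axis=1)
print(combined.shape)  # (3, 4)
```

The combined per-frame vectors would then feed whatever sequence model is used for the facial expression and valence-arousal estimates.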

Time Series, Time Series Analysis

Action Units Recognition by Pairwise Deep Architecture

no code implementations1 Oct 2020 Junya Saito, Ryosuke Kawamura, Akiyoshi Uchida, Sachihiro Youoku, Yuushi Toyoda, Takahisa Yamamoto, Xiaoyu Mi, Kentaro Murase

In this paper, we propose a new automatic Action Units (AUs) recognition method used in a competition, Affective Behavior Analysis in-the-wild (ABAW).
