1 code implementation • ECCV 2020 • Deng-Ping Fan, Yingjie Zhai, Ali Borji, Jufeng Yang, Ling Shao
In particular, we 1) propose a bifurcated backbone strategy (BBS) to split the multi-level features into teacher and student features, and 2) utilize a depth-enhanced module (DEM) to excavate informative parts of depth cues from the channel and spatial views.
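The DEM described above attends to depth features from both channel and spatial views. Below is a minimal PyTorch sketch of such a module; the layer choices and reduction ratio are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a depth-enhanced module that re-weights depth
# features along channel and spatial dimensions (layer sizes are assumed).
import torch
import torch.nn as nn

class DepthEnhancedModule(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per channel.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: squeeze channels, produce a spatial mask.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, depth_feat: torch.Tensor) -> torch.Tensor:
        x = depth_feat * self.channel_att(depth_feat)   # channel view
        x = x * self.spatial_att(x)                     # spatial view
        return x

# Example: enhance a 64-channel depth feature map before fusing with RGB.
dem = DepthEnhancedModule(64)
enhanced = dem(torch.randn(2, 64, 56, 56))
print(enhanced.shape)  # torch.Size([2, 64, 56, 56])
```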
no code implementations • ICCV 2023 • Zhicheng Zhang, Shengzhe Liu, Jufeng Yang
Specifically, we present a dual-branch network to track the visible part of planar objects, including vertexes and mask.
1 code implementation • CVPR 2023 • Zhicheng Zhang, Lijuan Wang, Jufeng Yang
Automatically predicting the emotions of user-generated videos (UGVs) has received increasing interest in recent years.
Ranked #3 on Video Emotion Recognition on Ekman6
1 code implementation • CVPR 2023 • Tinglei Feng, Jiaxuan Liu, Jufeng Yang
Pre-training of deep convolutional neural networks (DCNNs) plays a crucial role in the field of visual sentiment analysis (VSA).
1 code implementation • CVPR 2023 • Changsong Wen, Guoli Jia, Jufeng Yang
The distribution is generated from the latest data stored in the memory bank, which can adaptively model the difference in semantic similarity between sarcastic and non-sarcastic data.
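As a rough illustration of the memory-bank idea, the sketch below keeps a FIFO buffer of recent embeddings and turns cosine similarities against it into a distribution over sarcastic and non-sarcastic neighbours; the bank size, temperature, and feature dimension are assumptions, not the paper's settings.

```python
# Hypothetical sketch of a FIFO memory bank that yields a similarity
# distribution over recently seen sarcastic / non-sarcastic embeddings.
from collections import deque
import torch
import torch.nn.functional as F

class MemoryBank:
    def __init__(self, size: int = 1024):
        self.feats = deque(maxlen=size)   # recent embeddings
        self.labels = deque(maxlen=size)  # 1 = sarcastic, 0 = non-sarcastic

    def enqueue(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        for f, y in zip(feats.detach(), labels):
            self.feats.append(f)
            self.labels.append(int(y))

    def similarity_distribution(self, query: torch.Tensor, tau: float = 0.1):
        bank = torch.stack(list(self.feats))                 # (N, D)
        sims = F.cosine_similarity(query.unsqueeze(0), bank, dim=1)
        dist = F.softmax(sims / tau, dim=0)                  # distribution over the bank
        labels = torch.tensor(list(self.labels), dtype=torch.float)
        # Probability mass assigned to sarcastic vs. non-sarcastic neighbours.
        return (dist * labels).sum(), (dist * (1 - labels)).sum()

bank = MemoryBank()
bank.enqueue(torch.randn(8, 128), torch.randint(0, 2, (8,)))
p_sarcastic, p_neutral = bank.similarity_distribution(torch.randn(128))
```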
no code implementations • ICCV 2023 • Changsong Wen, Xin Zhang, Xingxu Yao, Jufeng Yang
Therefore, we propose a new paradigm, termed ordinal label distribution learning (OLDL).
1 code implementation • CVPR 2023 • Xin Liu, Jufeng Yang
In the end, we develop a Neighbor Consistency Mining Network (NCMNet) for accurately recovering camera poses and identifying inliers.
no code implementations • 18 Aug 2021 • Sicheng Zhao, Guoli Jia, Jufeng Yang, Guiguang Ding, Kurt Keutzer
In this tutorial, we discuss several key aspects of multi-modal emotion recognition (MER).
no code implementations • 30 Jun 2021 • Sicheng Zhao, Xingxu Yao, Jufeng Yang, Guoli Jia, Guiguang Ding, Tat-Seng Chua, Björn W. Schuller, Kurt Keutzer
Images can convey rich semantics and induce various emotions in viewers.
no code implementations • ICCV 2021 • Xingxu Yao, Sicheng Zhao, Pengfei Xu, Jufeng Yang
To reduce annotation labor associated with object detection, an increasing number of studies focus on transferring the learned knowledge from a labeled source domain to another unlabeled target domain.
no code implementations • 19 Mar 2021 • Sicheng Zhao, Quanwei Huang, YouBao Tang, Xingxu Yao, Jufeng Yang, Guiguang Ding, Björn W. Schuller
Recently, extensive research efforts have been dedicated to understanding the emotions of images.
no code implementations • 25 Nov 2020 • Sicheng Zhao, Xuanbai Chen, Xiangyu Yue, Chuang Lin, Pengfei Xu, Ravi Krishna, Jufeng Yang, Guiguang Ding, Alberto L. Sangiovanni-Vincentelli, Kurt Keutzer
First, we generate an adapted domain to align the source and target domains at the pixel level by improving CycleGAN with a multi-scale structured cycle-consistency loss.
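A minimal sketch of a cycle-consistency loss evaluated at several image scales is shown below; the scale factors and the plain L1 penalty are assumptions, and the structured term of the paper is not reproduced here.

```python
# Hypothetical sketch of a cycle-consistency loss computed at multiple
# image scales (scale factors and the L1 penalty are assumed).
import torch
import torch.nn.functional as F

def multi_scale_cycle_loss(x: torch.Tensor, x_rec: torch.Tensor,
                           scales=(1.0, 0.5, 0.25)) -> torch.Tensor:
    loss = x.new_zeros(())
    for s in scales:
        if s == 1.0:
            a, b = x, x_rec
        else:
            a = F.interpolate(x, scale_factor=s, mode='bilinear', align_corners=False)
            b = F.interpolate(x_rec, scale_factor=s, mode='bilinear', align_corners=False)
        loss = loss + F.l1_loss(a, b)
    return loss / len(scales)

# Usage inside a CycleGAN-style training step (generators are placeholders):
# rec_src = G_tgt2src(G_src2tgt(src_images))
# loss_cyc = multi_scale_cycle_loss(src_images, rec_src)
```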
1 code implementation • 17 Nov 2020 • Sicheng Zhao, Yang Xiao, Jiang Guo, Xiangyu Yue, Jufeng Yang, Ravi Krishna, Pengfei Xu, Kurt Keutzer
C-CycleGAN transfers source samples at the instance level to an intermediate domain that is closer to the target domain, preserving sentiment semantics without losing discriminative features.
1 code implementation • 22 Aug 2020 • Sicheng Zhao, Yaxian Li, Xingxu Yao, Wei-Zhi Nie, Pengfei Xu, Jufeng Yang, Kurt Keutzer
In this paper, we study end-to-end matching between image and music based on emotions in the continuous valence-arousal (VA) space.
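One way to realize such matching is to project both modalities into the shared valence-arousal plane and rank candidates by VA distance; the sketch below illustrates this under assumed feature dimensions and a hypothetical VAHead module, and is not the paper's architecture.

```python
# Hypothetical sketch: project image and music features into the shared
# valence-arousal (VA) plane and rank candidate music by VA distance.
import torch
import torch.nn as nn

class VAHead(nn.Module):
    """Maps a backbone feature vector to a (valence, arousal) pair."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.fc(feat)

image_head, music_head = VAHead(2048), VAHead(1024)
img_va = image_head(torch.randn(1, 2048))        # (1, 2): VA of the query image
music_va = music_head(torch.randn(10, 1024))     # (10, 2): VA of candidate tracks
ranking = torch.cdist(img_va, music_va).argsort(dim=1)  # closest track first
```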
2 code implementations • 6 Jul 2020 • Yingjie Zhai, Deng-Ping Fan, Jufeng Yang, Ali Borji, Ling Shao, Junwei Han, Liang Wang
In particular, we first propose to regroup the multi-level features into teacher and student features using a bifurcated backbone strategy (BBS); see the sketch below.
Ranked #2 on RGB-D Salient Object Detection on RGBD135
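A rough sketch of the bifurcation step referenced above, assuming a ResNet-50 backbone and a split after the second stage (both assumptions, not the paper's configuration):

```python
# Hypothetical sketch of the bifurcation idea: split a backbone's multi-level
# features into low-level ("student") and high-level ("teacher") groups.
import torch
import torchvision

backbone = torchvision.models.resnet50(weights=None)  # torchvision >= 0.13
stages = [backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4]

def extract_groups(x: torch.Tensor):
    x = backbone.maxpool(backbone.relu(backbone.bn1(backbone.conv1(x))))
    feats = []
    for stage in stages:
        x = stage(x)
        feats.append(x)
    student = feats[:2]   # shallow, detail-rich features
    teacher = feats[2:]   # deep, semantics-rich features
    return student, teacher

student, teacher = extract_groups(torch.randn(1, 3, 224, 224))
print([f.shape for f in student], [f.shape for f in teacher])
```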
no code implementations • 12 Feb 2020 • Sicheng Zhao, Yunsheng Ma, Yang Gu, Jufeng Yang, Tengfei Xing, Pengfei Xu, Runbo Hu, Hua Chai, Kurt Keutzer
Emotion recognition in user-generated videos plays an important role in human-centered computing.
Ranked #4 on Video Emotion Recognition on Ekman6
3 code implementations • 22 Aug 2019 • Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, Ming-Ming Cheng
In the second step, we integrate the local edge information and global location information to obtain the salient edge features.
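A minimal sketch of fusing a shallow (local edge) feature map with an upsampled deep (global location) feature map is given below; the channel sizes and the simple add-then-convolve fusion are assumptions for illustration.

```python
# Hypothetical sketch: combine a low-level edge feature with an upsampled
# high-level location feature to form salient edge features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeFusion(nn.Module):
    def __init__(self, low_ch: int, high_ch: int, out_ch: int = 64):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        self.reduce_high = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        self.fuse = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        # Bring the global (deep) feature to the resolution of the local one.
        high = F.interpolate(self.reduce_high(high_feat),
                             size=low_feat.shape[-2:], mode='bilinear',
                             align_corners=False)
        return self.fuse(torch.relu(self.reduce_low(low_feat) + high))

fusion = EdgeFusion(low_ch=64, high_ch=2048)
edge_feat = fusion(torch.randn(1, 64, 112, 112), torch.randn(1, 2048, 7, 7))
```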
no code implementations • 23 Jan 2019 • Jie Liang, Jufeng Yang, Ming-Ming Cheng, Paul L. Rosin, Liang Wang
In this paper we propose a unified framework to simultaneously discover the number of clusters and group the data points into them using subspace clustering.
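As a generic illustration (not the paper's algorithm), the sketch below builds a least-squares self-representation affinity, estimates the number of clusters from the eigengap of the graph Laplacian, and then applies spectral clustering.

```python
# Hypothetical sketch: least-squares self-representation + eigengap heuristic
# to pick the cluster count, followed by spectral clustering on the affinity.
import numpy as np
from sklearn.cluster import SpectralClustering

def subspace_cluster_unknown_k(X: np.ndarray, lam: float = 0.1, max_k: int = 10):
    # Self-representation: each point expressed by the others (closed form).
    G = X @ X.T
    C = np.linalg.solve(G + lam * np.eye(len(X)), G)
    np.fill_diagonal(C, 0.0)                     # forbid trivial self-representation
    W = 0.5 * (np.abs(C) + np.abs(C).T)          # symmetric, non-negative affinity
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
    eigvals = np.sort(np.linalg.eigvalsh(L))[: max_k + 1]
    k = int(np.argmax(np.diff(eigvals))) + 1     # largest eigengap -> cluster count
    labels = SpectralClustering(n_clusters=k, affinity="precomputed").fit_predict(W)
    return k, labels

# Toy data: three 1-D subspaces embedded in 5-D space.
X = np.vstack([np.random.randn(30, 1) @ np.random.randn(1, 5) for _ in range(3)])
k, labels = subspace_cluster_unknown_k(X)
```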
no code implementations • 21 Dec 2018 • Xiaoxiao Sun, Liang Zheng, Yu-Kun Lai, Jufeng Yang
In this work, we first systematically study the built-in gap between the web and standard datasets, i.e., different data distributions between the two kinds of data.
no code implementations • ECCV 2018 • Jie Liang, Jufeng Yang, Hsin-Ying Lee, Kai Wang, Ming-Hsuan Yang
Recent years have witnessed significant growth in constructing robust generative models to capture informative distributions of natural data.
1 code implementation • CVPR 2018 • Jufeng Yang, Dongyu She, Yu-Kun Lai, Paul L. Rosin, Ming-Hsuan Yang
The second branch utilizes both the holistic and localized information by coupling the sentiment map with deep features for robust classification.
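A minimal sketch of the coupling idea, assuming a single-channel sentiment map that gates the deep features before pooling (the dimensions and the concatenation scheme are assumptions):

```python
# Hypothetical sketch: couple a single-channel sentiment map with deep
# features by gating, then classify from holistic + localized descriptors.
import torch
import torch.nn as nn

class SentimentCoupling(nn.Module):
    def __init__(self, channels: int = 2048, num_classes: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2 * channels, num_classes)

    def forward(self, deep_feat: torch.Tensor, sentiment_map: torch.Tensor) -> torch.Tensor:
        holistic = self.pool(deep_feat).flatten(1)                   # global view
        localized = self.pool(deep_feat * sentiment_map).flatten(1)  # map-gated view
        return self.classifier(torch.cat([holistic, localized], dim=1))

model = SentimentCoupling()
logits = model(torch.randn(2, 2048, 7, 7), torch.rand(2, 1, 7, 7))
```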
no code implementations • CVPR 2018 • Jufeng Yang, Xiaoxiao Sun, Jie Liang, Paul L. Rosin
Accordingly, we design six medical representations considering different criteria for the recognition of skin lesions, and construct a diagnosis system for clinical skin disease images.