1 code implementation • ECCV 2020 • Deng-Ping Fan, Yingjie Zhai, Ali Borji, Jufeng Yang, Ling Shao
In particular, we 1) propose a bifurcated backbone strategy (BBS) to split the multi-level features into teacher and student features, and 2) utilize a depth-enhanced module (DEM) to excavate informative parts of depth cues from the channel and spatial views.
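A minimal sketch of how a depth-enhanced module of this kind could apply channel and spatial attention to depth features before fusing them with RGB features. The module name, layer choices, and channel sizes here are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class DepthEnhancedModule(nn.Module):
    """Illustrative depth-enhanced module: channel attention followed by
    spatial attention over depth features, then fusion with RGB features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel view: squeeze-and-excitation style gating of depth channels.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial view: single-channel attention map over spatial locations.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        d = depth_feat * self.channel_att(depth_feat)  # reweight depth channels
        d = d * self.spatial_att(d)                    # reweight spatial locations
        return rgb_feat + d                            # depth-enhanced RGB features

rgb, depth = torch.randn(2, 64, 56, 56), torch.randn(2, 64, 56, 56)
print(DepthEnhancedModule(64)(rgb, depth).shape)  # torch.Size([2, 64, 56, 56])
```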
no code implementations • 25 Aug 2024 • Xin Zhang, Teodor Boyadzhiev, Jinglei Shi, Jufeng Yang
In this paper, we leverage image complexity as a prior for refining segmentation features to achieve accurate real-time semantic segmentation.
1 code implementation • CVPR 2024 • Pancheng Zhao, Peng Xu, Pengda Qin, Deng-Ping Fan, Zhicheng Zhang, Guoli Jia, BoWen Zhou, Jufeng Yang
Camouflaged vision perception is an important vision task with numerous practical applications.
no code implementations • 30 Mar 2024 • Duosheng Chen, Shihao Zhou, Jinshan Pan, Jinglei Shi, Lishen Qu, Jufeng Yang
This attention module contains radial strip windows that reweight image features in polar coordinates, preserving more useful information under combined rotational and translational motion and thus better recovering sharp images.
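A hypothetical sketch of the general idea only (polar resampling of a feature map followed by per-strip reweighting); the sampling resolution, the energy-based weights, and the function names are assumptions and not the paper's attention module:

```python
import math
import torch
import torch.nn.functional as F

def to_polar(feat: torch.Tensor, num_radii: int = 32, num_angles: int = 64) -> torch.Tensor:
    """Resample a (B, C, H, W) feature map onto a polar grid of shape
    (B, C, num_radii, num_angles) centred at the image centre."""
    b = feat.size(0)
    radii = torch.linspace(0.0, 1.0, num_radii)             # normalised radius
    angles = torch.linspace(0.0, 2 * math.pi, num_angles)   # angle in radians
    r, a = torch.meshgrid(radii, angles, indexing="ij")
    # grid_sample expects (x, y) coordinates in [-1, 1]
    grid = torch.stack((r * torch.cos(a), r * torch.sin(a)), dim=-1)
    grid = grid.unsqueeze(0).expand(b, -1, -1, -1)
    return F.grid_sample(feat, grid, align_corners=True)

def radial_strip_reweight(polar_feat: torch.Tensor) -> torch.Tensor:
    """Reweight each radial strip (one column per angle) by a softmax over
    its energy, so more informative strips are emphasised."""
    energy = polar_feat.pow(2).mean(dim=(1, 2), keepdim=True)    # (B, 1, 1, A)
    weights = torch.softmax(energy, dim=-1)
    return polar_feat * weights * polar_feat.size(-1)            # keep overall scale

feat = torch.randn(2, 16, 64, 64)
print(radial_strip_reweight(to_polar(feat)).shape)  # torch.Size([2, 16, 32, 64])
```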
no code implementations • CVPR 2024 • Zhicheng Zhang, Pancheng Zhao, Eunil Park, Jufeng Yang
Inspired by psychology research and empirical theory, we verify that the degree of emotion may vary across different segments of a video, thus introducing sentiment complementarity and emotion-intrinsic relations among temporal segments.
1 code implementation • CVPR 2024 • Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, Jufeng Yang
Meanwhile, FRFN employs an enhance-and-ease scheme to eliminate feature redundancy across channels, enhancing the restoration of clear latent images.
no code implementations • CVPR 2024 • Zhicheng Zhang, Junyao Hu, Wentao Cheng, Danda Paudel, Jufeng Yang
Video prediction is a challenging task due to its inherent uncertainty, especially when forecasting over a long horizon.
1 code implementation • 14 Dec 2023 • Hao Shao, Quansheng Zeng, Qibin Hou, Jufeng Yang
To handle the significant variations of lesion regions or organs in size and shape, we also use multiple strip-shaped convolution kernels of different sizes in each axial attention path to improve the efficiency of the proposed MCA in encoding spatial information.
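A minimal sketch of multi-kernel strip convolutions used as an attention-style reweighting, assuming depthwise 1×k and k×1 convolution pairs and sigmoid gating; the kernel sizes and gating choice are illustrative, not the published MCA design:

```python
import torch
import torch.nn as nn

class StripConvAttention(nn.Module):
    """Illustrative multi-kernel strip convolution: each path pairs a 1xk and
    a kx1 depthwise convolution, and the paths are summed into an attention map."""
    def __init__(self, channels: int, kernel_sizes=(7, 11, 21)):
        super().__init__()
        self.paths = nn.ModuleList()
        for k in kernel_sizes:
            self.paths.append(nn.Sequential(
                nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
            ))
        self.mix = nn.Conv2d(channels, channels, 1)  # fuse paths across channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        att = sum(path(x) for path in self.paths)
        return x * torch.sigmoid(self.mix(att))  # attention-style reweighting

x = torch.randn(1, 32, 64, 64)
print(StripConvAttention(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```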
1 code implementation • CVPR 2023 • Zhicheng Zhang, Lijuan Wang, Jufeng Yang
Automatically predicting the emotions of user-generated videos (UGVs) has recently received increasing interest.
Ranked #3 on Video Emotion Recognition on Ekman6
1 code implementation • CVPR 2023 • Changsong Wen, Guoli Jia, Jufeng Yang
The distribution is generated from the latest data stored in the memory bank, which can adaptively model the differences in semantic similarity between sarcastic and non-sarcastic data.
1 code implementation • CVPR 2023 • Xin Liu, Jufeng Yang
In the end, we develop a Neighbor Consistency Mining Network (NCMNet) for accurately recovering camera poses and identifying inliers.
1 code implementation • CVPR 2023 • Tinglei Feng, Jiaxuan Liu, Jufeng Yang
Pre-training of deep convolutional neural networks (DCNNs) plays a crucial role in the field of visual sentiment analysis (VSA).
no code implementations • ICCV 2023 • Zhicheng Zhang, Shengzhe Liu, Jufeng Yang
Specifically, we present a dual-branch network to track the visible part of planar objects, including their vertices and mask.
no code implementations • ICCV 2023 • Changsong Wen, Xin Zhang, Xingxu Yao, Jufeng Yang
Therefore, we propose a new paradigm, termed ordinal label distribution learning (OLDL).
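As a rough illustration of turning ordered labels into label distributions, the sketch below builds a discretised Gaussian over the ordinal classes and measures a KL divergence against a prediction. This shows a standard label-distribution-learning construction under assumed hyperparameters, not the specific OLDL formulation of the paper:

```python
import numpy as np

def ordinal_label_distribution(label: int, num_classes: int, sigma: float = 1.0) -> np.ndarray:
    """Turn an ordinal ground-truth label into a discretised Gaussian
    distribution over the ordered classes (a common LDL construction)."""
    classes = np.arange(num_classes)
    dist = np.exp(-0.5 * ((classes - label) / sigma) ** 2)
    return dist / dist.sum()

def kl_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """KL divergence between the target distribution and the prediction."""
    return float(np.sum(target * (np.log(target + eps) - np.log(pred + eps))))

target = ordinal_label_distribution(label=3, num_classes=8, sigma=1.0)
pred = np.full(8, 1 / 8)  # uniform prediction as a baseline
print(target.round(3), kl_loss(pred, target))
```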
no code implementations • 18 Aug 2021 • Sicheng Zhao, Guoli Jia, Jufeng Yang, Guiguang Ding, Kurt Keutzer
In this tutorial, we discuss several key aspects of multi-modal emotion recognition (MER).
no code implementations • ICCV 2021 • Xingxu Yao, Sicheng Zhao, Pengfei Xu, Jufeng Yang
To reduce annotation labor associated with object detection, an increasing number of studies focus on transferring the learned knowledge from a labeled source domain to another unlabeled target domain.
no code implementations • 30 Jun 2021 • Sicheng Zhao, Xingxu Yao, Jufeng Yang, Guoli Jia, Guiguang Ding, Tat-Seng Chua, Björn W. Schuller, Kurt Keutzer
Images can convey rich semantics and induce various emotions in viewers.
no code implementations • 19 Mar 2021 • Sicheng Zhao, Quanwei Huang, YouBao Tang, Xingxu Yao, Jufeng Yang, Guiguang Ding, Björn W. Schuller
Recently, extensive research efforts have been dedicated to understanding the emotions of images.
no code implementations • 25 Nov 2020 • Sicheng Zhao, Xuanbai Chen, Xiangyu Yue, Chuang Lin, Pengfei Xu, Ravi Krishna, Jufeng Yang, Guiguang Ding, Alberto L. Sangiovanni-Vincentelli, Kurt Keutzer
First, we generate an adapted domain to align the source and target domains at the pixel level by improving CycleGAN with a multi-scale structured cycle-consistency loss.
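A minimal sketch of the multi-scale part of such a cycle-consistency loss, comparing an image with its round-trip reconstruction at several resolutions; the scales, the L1 distance, and the averaging are assumptions, and the "structured" component of the paper's loss is not modelled here:

```python
import torch
import torch.nn.functional as F

def multiscale_cycle_loss(x: torch.Tensor, x_reconstructed: torch.Tensor,
                          scales=(1, 2, 4)) -> torch.Tensor:
    """Illustrative multi-scale cycle-consistency loss: compare an image with
    its round-trip reconstruction (source -> target -> source) at several
    resolutions so both fine texture and coarse structure are preserved."""
    loss = x.new_zeros(())
    for s in scales:
        a = F.avg_pool2d(x, s) if s > 1 else x
        b = F.avg_pool2d(x_reconstructed, s) if s > 1 else x_reconstructed
        loss = loss + F.l1_loss(a, b)
    return loss / len(scales)

x = torch.rand(2, 3, 128, 128)
x_rec = torch.rand(2, 3, 128, 128)  # stand-in for G_target_to_source(G_source_to_target(x))
print(multiscale_cycle_loss(x, x_rec).item())
```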
1 code implementation • 17 Nov 2020 • Sicheng Zhao, Yang Xiao, Jiang Guo, Xiangyu Yue, Jufeng Yang, Ravi Krishna, Pengfei Xu, Kurt Keutzer
C-CycleGAN transfers source samples at the instance level to an intermediate domain that is closer to the target domain, preserving sentiment semantics without losing discriminative features.
1 code implementation • 22 Aug 2020 • Sicheng Zhao, Yaxian Li, Xingxu Yao, Wei-Zhi Nie, Pengfei Xu, Jufeng Yang, Kurt Keutzer
In this paper, we study end-to-end matching between image and music based on emotions in the continuous valence-arousal (VA) space.
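A minimal sketch of cross-modal matching in a shared valence-arousal space: two encoders project image and music features to 2-D VA embeddings and pairs are scored by distance. The feature dimensions, network sizes, and scoring function are hypothetical, not the paper's architecture:

```python
import torch
import torch.nn as nn

class VAEncoder(nn.Module):
    """Illustrative encoder projecting modality features to 2-D valence-arousal space."""
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, 2),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.net(feat))  # valence, arousal in [-1, 1]

image_enc, music_enc = VAEncoder(in_dim=2048), VAEncoder(in_dim=128)
img_feat, mus_feat = torch.randn(4, 2048), torch.randn(4, 128)
va_img, va_mus = image_enc(img_feat), music_enc(mus_feat)
# Matching score: negative Euclidean distance in the shared VA space.
score = -torch.cdist(va_img, va_mus)  # (4, 4) pairwise image-music scores
print(score.shape)
```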
2 code implementations • 6 Jul 2020 • Yingjie Zhai, Deng-Ping Fan, Jufeng Yang, Ali Borji, Ling Shao, Junwei Han, Liang Wang
In particular, we first propose to regroup the multi-level features into teacher and student features using a bifurcated backbone strategy (BBS).
Ranked #2 on RGB-D Salient Object Detection on RGBD135
no code implementations • 12 Feb 2020 • Sicheng Zhao, Yunsheng Ma, Yang Gu, Jufeng Yang, Tengfei Xing, Pengfei Xu, Runbo Hu, Hua Chai, Kurt Keutzer
Emotion recognition in user-generated videos plays an important role in human-centered computing.
Ranked #4 on Video Emotion Recognition on Ekman6
3 code implementations • 22 Aug 2019 • Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, Ming-Ming Cheng
In the second step, we integrate the local edge information and global location information to obtain the salient edge features.
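A minimal sketch of fusing local edge features (from a shallow, high-resolution layer) with global location features (from a deep, low-resolution layer) into a salient edge map; the channel sizes, upsampling, and additive fusion are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SalientEdgeFusion(nn.Module):
    """Illustrative fusion of local edge features with global location features."""
    def __init__(self, edge_ch: int, loc_ch: int, out_ch: int = 64):
        super().__init__()
        self.edge_proj = nn.Conv2d(edge_ch, out_ch, 3, padding=1)
        self.loc_proj = nn.Conv2d(loc_ch, out_ch, 3, padding=1)
        self.fuse = nn.Conv2d(out_ch, 1, 3, padding=1)  # salient edge map

    def forward(self, edge_feat: torch.Tensor, loc_feat: torch.Tensor) -> torch.Tensor:
        loc = F.interpolate(self.loc_proj(loc_feat), size=edge_feat.shape[2:],
                            mode="bilinear", align_corners=False)
        fused = F.relu(self.edge_proj(edge_feat) + loc)
        return torch.sigmoid(self.fuse(fused))

edge = torch.randn(1, 64, 88, 88)   # shallow-layer features (local edges)
loc = torch.randn(1, 512, 11, 11)   # deep-layer features (global location)
print(SalientEdgeFusion(64, 512)(edge, loc).shape)  # torch.Size([1, 1, 88, 88])
```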
no code implementations • 23 Jan 2019 • Jie Liang, Jufeng Yang, Ming-Ming Cheng, Paul L. Rosin, Liang Wang
In this paper, we propose a unified framework to simultaneously discover the number of clusters and group the data points into them using subspace clustering.
no code implementations • 21 Dec 2018 • Xiaoxiao Sun, Liang Zheng, Yu-Kun Lai, Jufeng Yang
In this work, we first systematically study the built-in gap between web and standard datasets, i.e., the different data distributions of the two kinds of data.
no code implementations • ECCV 2018 • Jie Liang, Jufeng Yang, Hsin-Ying Lee, Kai Wang, Ming-Hsuan Yang
The recent years have witnessed significant growth in constructing robust generative models to capture informative distributions of natural data.
no code implementations • CVPR 2018 • Jufeng Yang, Xiaoxiao Sun, Jie Liang, Paul L. Rosin
Accordingly, we design six medical representations considering different criteria for the recognition of skin lesions, and construct a diagnosis system for clinical skin disease images.
1 code implementation • CVPR 2018 • Jufeng Yang, Dongyu She, Yu-Kun Lai, Paul L. Rosin, Ming-Hsuan Yang
The second branch utilizes both the holistic and localized information by coupling the sentiment map with deep features for robust classification.
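A minimal sketch of coupling a spatial sentiment map with deep features for classification: the map gates localized features, which are pooled together with holistic features. The pooling, gating, and feature sizes are illustrative assumptions rather than the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentCoupling(nn.Module):
    """Illustrative coupling of a spatial sentiment map with deep features:
    the map gates localized features, which are pooled together with the
    holistic features for classification."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(2 * channels, num_classes)

    def forward(self, feat: torch.Tensor, sentiment_map: torch.Tensor) -> torch.Tensor:
        holistic = F.adaptive_avg_pool2d(feat, 1).flatten(1)                   # global cue
        localized = F.adaptive_avg_pool2d(feat * sentiment_map, 1).flatten(1)  # gated cue
        return self.classifier(torch.cat([holistic, localized], dim=1))

feat = torch.randn(2, 512, 14, 14)  # deep features
smap = torch.rand(2, 1, 14, 14)     # sentiment map in [0, 1]
print(SentimentCoupling(512, num_classes=8)(feat, smap).shape)  # torch.Size([2, 8])
```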