Search Results for author: Ke Gao

Found 8 papers, 2 papers with code

GANHead: Towards Generative Animatable Neural Head Avatars

no code implementations CVPR 2023 Sijing Wu, Yichao Yan, Yunhao Li, Yuhao Cheng, Wenhan Zhu, Ke Gao, Xiaobo Li, Guangtao Zhai

To bring digital avatars into people's lives, there is strong demand for efficiently generating complete, realistic, and animatable head avatars.

Scenario-Adaptive and Self-Supervised Model for Multi-Scenario Personalized Recommendation

no code implementations 24 Aug 2022 Yuanliang Zhang, XiaoFeng Wang, Jinxin Hu, Ke Gao, Chenyi Lei, Fei Fang

We summarize three practical challenges that remain unsolved in multi-scenario modeling: (1) a lack of fine-grained and decoupled information-transfer controls among multiple scenarios.

Contrastive Learning · Disentanglement +1

Multi-Granularity Network with Modal Attention for Dense Affective Understanding

no code implementations 18 Jun 2021 Baoming Yan, Lin Wang, Ke Gao, Bo Gao, Xiao Liu, Chao Ban, Jiang Yang, Xiaobo Li

Video affective understanding, which aims to predict the expressions evoked by video content, is desirable for video creation and recommendation.

Not All Words are Equal: Video-specific Information Loss for Video Captioning

no code implementations 1 Jan 2019 Jiarong Dong, Ke Gao, Xiaokai Chen, Junbo Guo, Juan Cao, Yongdong Zhang

To address this issue, we propose a novel learning strategy called Information Loss, which focuses on the relationship between the video-specific visual content and corresponding representative words.

Video Captioning

DenseImage Network: Video Spatial-Temporal Evolution Encoding and Understanding

no code implementations 19 May 2018 Xiaokai Chen, Ke Gao

1) A novel compact representation of video which distills its significant spatial-temporal evolution into a matrix called DenseImage, primed for efficient video encoding.

Action Recognition in Videos · Gesture Recognition +1

APE-GAN: Adversarial Perturbation Elimination with GAN

3 code implementations 18 Jul 2017 Shiwei Shen, Guoqing Jin, Ke Gao, Yongdong Zhang

Although neural networks can achieve state-of-the-art performance at recognizing images, they often fail dramatically on adversarial examples -- inputs generated by adding imperceptible but intentional perturbations to clean samples from the datasets.
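The adversarial examples described here can be illustrated with a minimal sketch. The snippet below is not APE-GAN itself; it shows a simple FGSM-style perturbation on a toy linear classifier (all names and values are illustrative assumptions), where a small sign-of-gradient step flips the predicted class:

```python
import numpy as np

# Toy linear "classifier": score = w . x, predicted label = sign(score).
w = np.array([1.0, -2.0, 0.5])   # fixed model weights (illustrative)
x = np.array([0.3, -0.1, 0.2])   # clean input; w @ x = 0.6 > 0 (class +)
eps = 0.2                        # small perturbation budget

grad = w                                  # gradient of the score w.r.t. x
x_adv = x - eps * np.sign(grad)           # step against the score (FGSM-style)

score = float(w @ x)                      # 0.6  -> classified positive
adv_score = float(w @ x_adv)              # -0.1 -> flipped to negative
print(score, adv_score)
```

Even though each input coordinate moved by at most 0.2, the prediction flips; this is the kind of small-perturbation vulnerability that defenses such as APE-GAN aim to eliminate before classification.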

Task-Driven Dynamic Fusion: Reducing Ambiguity in Video Description

no code implementations CVPR 2017 Xishan Zhang, Ke Gao, Yongdong Zhang, Dongming Zhang, Jintao Li, Qi Tian

This paper contributes: 1) The first in-depth study of the weakness inherent in data-driven static fusion methods for video captioning.

Video Captioning · Video Description
