Search Results for author: Fa-Ting Hong

Found 11 papers, 10 papers with code

Depth-Aware Generative Adversarial Network for Talking Head Video Generation

1 code implementation CVPR 2022 Fa-Ting Hong, Longhao Zhang, Li Shen, Dan Xu

The depth is further utilized in a denser way to learn 3D-aware cross-modal (i.e., appearance and depth) attention, which guides the generation of motion fields for warping source-image representations.

Generative Adversarial Network, Talking Head Generation +1
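As a rough illustration of the cross-modal attention idea above, here is a minimal NumPy sketch in which depth features attend over appearance features and the attended result would guide motion-field generation. The shapes, dimensions, and the choice of depth-as-query are illustrative assumptions, not DaGAN's actual implementation.

```python
import numpy as np

def cross_modal_attention(appearance, depth, d_k=None):
    """Toy cross-modal attention: depth features act as queries over
    appearance features (keys/values). Illustrative sketch only, not
    the paper's module. Both inputs have shape (N, d)."""
    d_k = d_k or appearance.shape[-1]
    # attention logits between every depth token and appearance token
    logits = depth @ appearance.T / np.sqrt(d_k)
    # numerically stable softmax over appearance positions
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # attended appearance features (would guide the motion field)
    return weights @ appearance

rng = np.random.default_rng(0)
app = rng.standard_normal((16, 32))   # 16 spatial tokens, 32-dim appearance
dep = rng.standard_normal((16, 32))   # matching depth features
out = cross_modal_attention(app, dep)
print(out.shape)  # (16, 32)
```

The output keeps the appearance feature dimensionality, with each row a depth-guided mixture of appearance tokens.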

DaGAN++: Depth-Aware Generative Adversarial Network for Talking Head Video Generation

1 code implementation 10 May 2023 Fa-Ting Hong, Li Shen, Dan Xu

In this work, we first present a novel self-supervised method for learning dense 3D facial geometry (i.e., depth) from face videos, without requiring camera parameters or 3D geometry annotations in training.

Generative Adversarial Network, Keypoint Estimation +2

Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation

1 code implementation ICCV 2023 Fa-Ting Hong, Dan Xu

Talking head video generation aims to animate a human face in a still image with dynamic poses and expressions using motion information derived from a target-driving video, while maintaining the person's identity in the source image.

Talking Head Generation, Video Generation

Cross-modal Consensus Network for Weakly Supervised Temporal Action Localization

2 code implementations 27 Jul 2021 Fa-Ting Hong, Jia-Chang Feng, Dan Xu, Ying Shan, Wei-Shi Zheng

In this work, we argue that the features extracted from a pretrained extractor, e.g., I3D, are not WS-TAL task-specific features; thus, feature re-calibration is needed to reduce task-irrelevant information redundancy.

Weakly Supervised Action Localization, Weakly-supervised Temporal Action Localization +1
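The re-calibration idea above can be sketched as a simple channel-gating step: pool the pretrained features over time, compute a per-channel gate, and scale channels to suppress task-irrelevant ones. This is an illustrative squeeze-and-excitation-style gate under assumed shapes, not the paper's actual module.

```python
import numpy as np

def recalibrate(features, w1, w2):
    """Channel-wise feature re-calibration sketch.
    features: (T, C) temporal features from a pretrained extractor.
    w1: (R, C) and w2: (C, R) are small bottleneck weights (assumed)."""
    pooled = features.mean(axis=0)                # (C,) temporal squeeze
    hidden = np.maximum(0.0, w1 @ pooled)         # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # per-channel gate in (0, 1)
    return features * gate                        # down-weight redundant channels

T, C, R = 8, 16, 4
rng = np.random.default_rng(1)
feats = rng.standard_normal((T, C))
w1 = rng.standard_normal((R, C))
w2 = rng.standard_normal((C, R))
recal = recalibrate(feats, w1, w2)
print(recal.shape)  # (8, 16)
```

Since each gate lies in (0, 1), re-calibration can only attenuate a channel, never amplify it, which matches the goal of reducing redundancy.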

Hybrid Dynamic-static Context-aware Attention Network for Action Assessment in Long Videos

2 code implementations 13 Aug 2020 Ling-An Zeng, Fa-Ting Hong, Wei-Shi Zheng, Qi-Zhi Yu, Wei Zeng, Yao-Wei Wang, Jian-Huang Lai

However, most existing works focus only on video dynamic information (i.e., motion information) but ignore the specific postures an athlete performs in a video, which are important for action assessment in long videos.

Action Assessment, Action Quality Assessment

Learning to Detect Important People in Unlabelled Images for Semi-supervised Important People Detection

1 code implementation CVPR 2020 Fa-Ting Hong, Wei-Hong Li, Wei-Shi Zheng

Important people detection aims to automatically detect the individuals who play the most important roles in a social event image, which requires the designed model to understand high-level patterns.

Object Recognition, Pseudo Label

Learning to Learn Relation for Important People Detection in Still Images

1 code implementation CVPR 2019 Wei-Hong Li, Fa-Ting Hong, Wei-Shi Zheng

In this work, we propose a deep imPOrtance relatIon NeTwork (POINT) that combines both relation modeling and feature learning.

Relation, Relation Network

MINI-Net: Multiple Instance Ranking Network for Video Highlight Detection

no code implementations ECCV 2020 Fa-Ting Hong, Xuanteng Huang, Wei-Hong Li, Wei-Shi Zheng

We address weakly supervised video highlight detection: learning to detect the most attractive segments in training videos given only their video-level event labels, without expensive manual annotation of highlight segments.

Highlight Detection
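The weakly supervised setting above is naturally framed as multiple-instance ranking: the best segment of a video containing the event should outrank any segment of a video without it. Below is a simplified hinge-style ranking loss over bags of segment scores; it is a hedged sketch of the general MIL-ranking idea, not MINI-Net's exact objective or architecture.

```python
import numpy as np

def mil_ranking_loss(pos_bag_scores, neg_bag_scores, margin=1.0):
    """Simplified multiple-instance ranking loss: the top-scoring
    segment of the positive bag (event video) should exceed the
    top-scoring segment of the negative bag by a margin."""
    pos_max = np.max(pos_bag_scores)   # most highlight-like segment, event video
    neg_max = np.max(neg_bag_scores)   # hardest segment, non-event video
    return max(0.0, margin - pos_max + neg_max)

pos = np.array([0.2, 0.9, 0.4])   # segment scores in an event video
neg = np.array([0.1, 0.3, 0.2])   # segment scores in a non-event video
print(round(mil_ranking_loss(pos, neg), 2))  # 0.4
```

The loss is zero once the margin is satisfied, so training pressure concentrates on videos whose best segment is not yet separated from negatives.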
