Search Results for author: Jiahong Li

Found 7 papers, 3 papers with code

Real-time End-to-End Video Text Spotter with Contrastive Representation Learning

no code implementations, 18 Jul 2022: Weijia Wu, Zhuang Li, Jiahong Li, Chunhua Shen, Hong Zhou, Size Li, Zhongyuan Wang, Ping Luo

Our contributions are three-fold: 1) CoText simultaneously addresses the three tasks (i.e., text detection, tracking, and recognition) in a real-time, end-to-end trainable framework.

Contrastive Learning, Representation Learning, +1

Domain Generalization via Shuffled Style Assembly for Face Anti-Spoofing

1 code implementation CVPR 2022 Zhuo Wang, Zezheng Wang, Zitong Yu, Weihong Deng, Jiahong Li, Tingting Gao, Zhongyuan Wang

A novel Shuffled Style Assembly Network (SSAN) is proposed to extract and reassemble different content and style features for a stylized feature space.

Contrastive Learning, Domain Generalization, +1

Contrastive Learning of Semantic and Visual Representations for Text Tracking

no code implementations, 30 Dec 2021: Zhuang Li, Weijia Wu, Mike Zheng Shou, Jiahong Li, Size Li, Zhongyuan Wang, Hong Zhou

Semantic representation is of great benefit to the video text tracking (VTT) task that requires simultaneously classifying, detecting, and tracking texts in the video.

Contrastive Learning

A Bilingual, OpenWorld Video Text Dataset and End-to-end Video Text Spotter with Transformer

3 code implementations, 9 Dec 2021: Weijia Wu, Yuanqiang Cai, Debing Zhang, Sibo Wang, Zhuang Li, Jiahong Li, Yejun Tang, Hong Zhou

Most existing video text spotting benchmarks focus on evaluating a single language and scenario with limited data.

Text Spotting

Frequency-aware Discriminative Feature Learning Supervised by Single-Center Loss for Face Forgery Detection

no code implementations CVPR 2021 Jiaming Li, Hongtao Xie, Jiahong Li, Zhongyuan Wang, Yongdong Zhang

Face forgery detection is attracting ever-increasing interest in computer vision, since facial manipulation technologies raise serious concerns.
