Search Results for author: Yabin Wang

Found 8 papers, 6 papers with code

Linguistic Profiling of Deepfakes: An Open Database for Next-Generation Deepfake Detection

1 code implementation • 4 Jan 2024 • Yabin Wang, Zhiwu Huang, Zhiheng Ma, Xiaopeng Hong

These two distinctive features enable DFLIP-3K to support a benchmark that promotes progress in linguistic profiling of deepfakes, comprising three sub-tasks: deepfake detection, model identification, and prompt prediction.

DeepFake Detection • Face Swapping
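As a rough illustration, the three sub-tasks map onto a per-sample record like the hypothetical sketch below; the field names are assumptions, not DFLIP-3K's actual schema.

```python
from dataclasses import dataclass

# Hypothetical record layout for a DFLIP-3K-style sample; field names are
# assumptions, but each label maps to one of the three benchmark sub-tasks.
@dataclass
class DeepfakeRecord:
    image_path: str
    is_fake: bool    # deepfake detection
    generator: str   # model identification, e.g. "stable-diffusion"
    prompt: str      # prompt prediction (the text used to generate the image)

sample = DeepfakeRecord("imgs/000001.png", True, "stable-diffusion", "a portrait ...")
```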

Remind of the Past: Incremental Learning with Analogical Prompts

1 code implementation • 24 Mar 2023 • Zhiheng Ma, Xiaopeng Hong, Beinan Liu, Yabin Wang, Pinyue Guo, Huiyun Li

It mimics the feature distribution of the target old class on the old model using only samples of new classes.

Incremental Learning
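A minimal sketch of this idea follows, assuming a frozen old model and a stored mean-feature prototype for the old class; the learnable prompt, model, and dimensions are hypothetical stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Sketch: learn a prompt that makes new-class samples reproduce a stored
# old-class feature prototype when passed through the frozen old model.
old_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
for p in old_model.parameters():
    p.requires_grad_(False)  # the old model stays frozen

old_prototype = torch.randn(16)               # stored mean feature of an old class
prompt = torch.zeros(32, requires_grad=True)  # learnable input-space prompt
optimizer = torch.optim.Adam([prompt], lr=1e-2)

new_samples = torch.randn(128, 32)            # only new-class data is available
for _ in range(200):
    feats = old_model(new_samples + prompt)   # the prompt shifts new samples
    loss = (feats.mean(dim=0) - old_prototype).pow(2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```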

Towards Practical Multi-Robot Hybrid Tasks Allocation for Autonomous Cleaning

1 code implementation • 12 Mar 2023 • Yabin Wang, Xiaopeng Hong, Zhiheng Ma, Tiedong Ma, Baoxing Qin, Zhou Su

Task allocation plays a vital role in multi-robot autonomous cleaning systems, where multiple robots work together to clean a large area.

Benchmarking Deepart Detection

no code implementations • 28 Feb 2023 • Yabin Wang, Zhiwu Huang, Xiaopeng Hong

To address ethics questions that may arise, this paper establishes a deepart detection database (DDDB) consisting of a set of high-quality conventional art images (conarts) and five sets of deepart images generated by five state-of-the-art deepfake models.

Benchmarking • DeepFake Detection • +2

Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference

1 code implementation • 29 Nov 2022 • Yabin Wang, Zhiheng Ma, Zhiwu Huang, YaoWei Wang, Zhou Su, Xiaopeng Hong

To avoid obvious stage learning bottlenecks, we propose a novel stage-isolation-based incremental learning framework, which leverages a series of stage-isolated classifiers to perform each stage's learning task without interference from the others.

Continual Learning • Incremental Learning
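The stage-isolation structure can be sketched as follows: each stage owns its own classifier head, and the per-stage logits are normalized before being merged so that no single stage dominates. The z-score normalization here is only a stand-in for the paper's actual aggregation rule; all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

# Sketch of stage-isolated classifiers with an impartial aggregation step.
# Each stage trains its own head (e.g. on a shared frozen feature extractor);
# at test time the per-stage logits are normalized and concatenated.
feature_dim, stages = 64, [10, 10, 10]  # 3 stages, 10 classes each
heads = nn.ModuleList(nn.Linear(feature_dim, c) for c in stages)

def predict(features: torch.Tensor) -> torch.Tensor:
    per_stage = []
    for head in heads:
        logits = head(features)
        # impartial aggregation: z-score each stage's logits before merging
        z = (logits - logits.mean(dim=1, keepdim=True)) / logits.std(dim=1, keepdim=True)
        per_stage.append(z)
    return torch.cat(per_stage, dim=1).argmax(dim=1)

print(predict(torch.randn(4, feature_dim)))  # class ids over all 30 classes
```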

S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning

2 code implementations • 26 Jul 2022 • Yabin Wang, Zhiwu Huang, Xiaopeng Hong

In this paper, we propose a simple paradigm (named S-Prompting) and two concrete approaches that greatly reduce forgetting in one of the most typical continual learning scenarios, i.e., domain incremental learning (DIL).

Continual Learning • Incremental Learning
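A minimal sketch of the S-Prompting inference path, assuming one prompt has already been learned independently per domain and domain centroids have been obtained (e.g., via K-means over image features); the centroids, prompts, and dimensions below are illustrative stand-ins.

```python
import torch

# Sketch of the S-Prompts idea: one independently learned prompt per domain,
# with inference-time domain selection by nearest feature centroid.
feature_dim, prompt_len, embed_dim = 64, 5, 32
domain_centroids = torch.randn(3, feature_dim)          # e.g. K-means centers
domain_prompts = torch.randn(3, prompt_len, embed_dim)  # learned per domain

def select_prompt(feature: torch.Tensor) -> torch.Tensor:
    # Pick the prompt of the closest domain centroid; prompts are never
    # shared across domains, so old prompts are never overwritten.
    domain_id = torch.cdist(feature[None], domain_centroids).argmin()
    return domain_prompts[domain_id]

prompt = select_prompt(torch.randn(feature_dim))
print(prompt.shape)  # (prompt_len, embed_dim), prepended to token embeddings
```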
