Search Results for author: Jiachen Zhou

Found 4 papers, 3 papers with code

MEA-Defender: A Robust Watermark against Model Extraction Attack

1 code implementation • 26 Jan 2024 • Peizhuo Lv, Hualong Ma, Kai Chen, Jiachen Zhou, Shengzhi Zhang, Ruigang Liang, Shenchen Zhu, Pan Li, Yingjun Zhang

To protect the Intellectual Property (IP) of the original owners of DNN models, backdoor-based watermarks have been extensively studied.

Model Extraction • Self-Supervised Learning

DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models

1 code implementation • 18 Dec 2023 • Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen, Hualong Ma

Dataset sanitization is a widely adopted proactive defense against poisoning-based backdoor attacks, aimed at filtering out and removing poisoned samples from training datasets.
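The general sanitization idea described above, filtering suspicious samples out of a training set before use, can be illustrated with a toy sketch. This is not DataElixir's diffusion-model purification; it is only a minimal stand-in that flags samples whose feature vectors sit far from their class centroid, and the function name and threshold are assumptions for illustration.

```python
import numpy as np

def filter_outliers(features, labels, z_thresh=3.0):
    """Toy dataset-sanitization sketch: keep a sample only if its
    distance to its class centroid has a z-score below z_thresh.
    (A stand-in for the filter-out-poisoned-samples step; the actual
    DataElixir method purifies samples with diffusion models.)"""
    keep = np.ones(len(features), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        mu, sigma = dists.mean(), dists.std() + 1e-12
        keep[idx] = (dists - mu) / sigma < z_thresh
    return keep

# Usage: 20 clean samples clustered at the origin plus one far-away
# (poison-like) sample; the outlier is dropped, the rest are kept.
feats = np.vstack([np.zeros((20, 2)), [[100.0, 100.0]]])
labs = np.zeros(21, dtype=int)
mask = filter_outliers(feats, labs)
```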

aUToLights: A Robust Multi-Camera Traffic Light Detection and Tracking System

no code implementations • 15 May 2023 • Sean Wu, Nicole Amenta, Jiachen Zhou, Sandro Papais, Jonathan Kelly

Following four successful years in the SAE AutoDrive Challenge Series I, the University of Toronto is participating in the Series II competition to develop a Level 4 autonomous passenger vehicle capable of handling various urban driving scenarios by 2025.

Autonomous Vehicles • Object Detection +1

DBIA: Data-free Backdoor Injection Attack against Transformer Networks

1 code implementation • 22 Nov 2021 • Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen, Shengzhi Zhang, Yunfei Yang

In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks, which leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset.
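The "injecting the backdoor using the poisoned surrogate dataset" step follows the standard backdoor-poisoning recipe: stamp a trigger pattern onto inputs and relabel them to the attacker's target class. A minimal generic sketch is below; it does not reproduce DBIA's attention-guided trigger generation, and the patch placement, function name, and parameters are assumptions for illustration.

```python
import numpy as np

def poison_surrogate(images, labels, target_label, patch_value=1.0, patch_size=3):
    """Toy backdoor-poisoning sketch: stamp a solid patch trigger in the
    bottom-right corner of every image and relabel all of them to the
    target class. (Generic recipe only; DBIA generates its trigger from
    the transformer's attention rather than using a fixed patch.)"""
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    poisoned_labels = np.full(len(labels), target_label, dtype=labels.dtype)
    return poisoned, poisoned_labels

# Usage: four blank 8x8 "images" become trigger-stamped samples that
# all carry the attacker-chosen label 7; the originals are untouched.
imgs = np.zeros((4, 8, 8))
px, py = poison_surrogate(imgs, np.arange(4), target_label=7)
```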

Backdoor Attack • Image Classification +1
