Search Results for author: Mingda Zhang

Found 16 papers, 4 papers with code

BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning

no code implementations • 26 Jan 2024 • Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ruotong Wang, Li Liu, Chao Shen

We hope that our efforts can build a solid foundation for backdoor learning, helping researchers investigate existing algorithms, develop more innovative ones, and explore the intrinsic mechanisms of backdoor learning.

Backdoor Attack

Defenses in Adversarial Machine Learning: A Survey

no code implementations • 13 Dec 2023 • Baoyuan Wu, Shaokui Wei, Mingli Zhu, Meixi Zheng, Zihao Zhu, Mingda Zhang, Hongrui Chen, Danni Yuan, Li Liu, Qingshan Liu

The adversarial phenomenon has been widely observed in machine learning (ML) systems, especially those using deep neural networks: in certain cases, ML systems may produce predictions that are inconsistent with, and incomprehensible to, humans.

Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy

no code implementations • 14 Jul 2023 • Zihao Zhu, Mingda Zhang, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu

To further integrate it with the normal training process, we then propose a learnable poisoning sample selection strategy that learns the mask together with the model parameters through a min-max optimization. Specifically, the outer loop aims to achieve the backdoor attack goal by minimizing the loss on the selected samples, while the inner loop selects hard poisoning samples that impede this goal by maximizing the loss.

Backdoor Attack · Data Poisoning
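The alternating min-max scheme described in the abstract can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption, not the paper's code: a toy logistic loss stands in for the attack loss, the "mask" is a hard top-k selection, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 100 candidate poisoning samples with 5 features each;
# labels are linearly separable so the toy model can actually learn.
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)

w = np.zeros(5)   # model parameters (outer, minimization variable)
k = 20            # number of poisoning samples the mask may select

def losses(w):
    """Per-sample logistic loss, a stand-in for the attack loss."""
    s = 2 * y - 1                          # labels in {-1, +1}
    return np.log1p(np.exp(-s * (X @ w)))

for _ in range(100):
    # Inner step: the "mask" picks the k hardest samples (maximize loss).
    mask = np.zeros(len(X), dtype=bool)
    mask[np.argsort(losses(w))[-k:]] = True

    # Outer step: one gradient-descent update on the selected samples
    # (minimize loss, i.e. pursue the attack goal on the hard subset).
    s = 2 * y[mask] - 1
    sigma = 1.0 / (1.0 + np.exp(s * (X[mask] @ w)))   # sigmoid(-s * z)
    grad = -(s * sigma) @ X[mask] / k
    w -= 0.5 * grad
```

In the paper the inner selection is learned jointly with the model rather than recomputed greedily as here; the sketch only conveys the alternating minimize/maximize structure.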

Train-Once-for-All Personalization

no code implementations • CVPR 2023 • Hong-You Chen, Yandong Li, Yin Cui, Mingda Zhang, Wei-Lun Chao, Li Zhang

We study the problem of how to train a "personalization-friendly" model such that, given only the task descriptions, the model can be adapted to different end-users' needs, e.g., for accurately classifying different subsets of objects.

BackdoorBench: A Comprehensive Benchmark of Backdoor Learning

1 code implementation • 25 Jun 2022 • Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chao Shen

However, we find that evaluations of new methods are often not thorough enough to verify their claims and actual performance, mainly due to rapid development, diverse settings, and the difficulties of implementation and reproducibility.

Backdoor Attack

Domain-robust VQA with diverse datasets and methods but no target labels

no code implementations • CVPR 2021 • Mingda Zhang, Tristan Maidment, Ahmad Diab, Adriana Kovashka, Rebecca Hwa

The observation that computer vision methods overfit to dataset specifics has inspired diverse attempts to make object recognition models robust to domain shifts.

Object Recognition · Question Answering +2

Cap2Det: Learning to Amplify Weak Caption Supervision for Object Detection

1 code implementation • ICCV 2019 • Keren Ye, Mingda Zhang, Adriana Kovashka, Wei Li, Danfeng Qin, Jesse Berent

Learning to localize and name object instances is a fundamental problem in vision, but state-of-the-art approaches rely on expensive bounding box supervision.

Object · object-detection +1

Learning to discover and localize visual objects with open vocabulary

no code implementations • 25 Nov 2018 • Keren Ye, Mingda Zhang, Wei Li, Danfeng Qin, Adriana Kovashka, Jesse Berent

To alleviate the cost of obtaining accurate bounding boxes for training today's state-of-the-art object detection models, recent weakly supervised detection work has proposed techniques to learn from image-level labels.

Object · object-detection +1

Automatic Understanding of Image and Video Advertisements

no code implementations • CVPR 2017 • Zaeem Hussain, Mingda Zhang, Xiaozhong Zhang, Keren Ye, Christopher Thomas, Zuha Agha, Nathan Ong, Adriana Kovashka

There is more to images than their objective physical content: for example, advertisements are created to persuade a viewer to take a certain action.
