Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation

This paper proposes a new algorithm that controls classification results by generating a small additive perturbation, without modifying the classifier network. Our work is inspired by existing work on adversarial perturbations, which are crafted to degrade classification performance. In contrast to these methods, we aim to generate perturbations that enhance overall classification performance. To solve this performance-enhancement problem, we propose a perturbation generation network (PGN) built on an adversarial learning strategy. In our setting, the information in a large external dataset is summarized into a small additive perturbation, which improves the performance of a classifier trained on the target dataset. Beyond performance enhancement, we show that the proposed PGN can also be applied to the classical adversarial problem without using any information about the target classifier. These properties of our method are verified through extensive experiments on publicly available visual datasets.
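
The abstract describes the core mechanism only at a high level: a separate generator network produces a small additive perturbation that is added to the classifier's inputs, while the classifier itself is left unchanged. The sketch below illustrates that idea in PyTorch; the PGN architecture, the epsilon bound, the tanh parameterization, and the training step are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' implementation): a perturbation generation
    # network (PGN) that outputs a small, bounded additive perturbation for each
    # input image. The classifier stays fixed; only the PGN is trained so that
    # the perturbed inputs are classified more accurately.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PGN(nn.Module):
        """Generates a bounded additive perturbation from the input image."""

        def __init__(self, channels: int = 3, epsilon: float = 8 / 255):
            super().__init__()
            self.epsilon = epsilon  # assumed L-infinity bound on the perturbation
            self.body = nn.Sequential(
                nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, channels, 3, padding=1),
            )

        def forward(self, x):
            # tanh keeps the perturbation inside [-epsilon, epsilon]
            return self.epsilon * torch.tanh(self.body(x))

    def train_step(pgn, classifier, x, y, optimizer):
        """One PGN update; `optimizer` holds only PGN parameters, so the
        classifier's weights are never changed."""
        classifier.eval()
        delta = pgn(x)
        x_pert = torch.clamp(x + delta, 0.0, 1.0)   # keep images in valid range
        loss = F.cross_entropy(classifier(x_pert), y)  # minimize -> enhance accuracy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

For the classical adversarial setting mentioned above, one plausible variant under the same sketch is to train the generator to maximize the loss of a surrogate classifier (for example, by negating the loss) and transfer the perturbation to the unseen target classifier; the abstract itself does not specify this objective.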
