Search Results for author: Tae-Yeon Kim

Found 1 paper, 0 papers with code

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey

no code implementations • 21 Feb 2022 • Miguel A. Ramirez, Song-Kyoo Kim, Hussam Al Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, Chung-Suk Cho, Chan Yeob Yeun

This survey is conducted with the main intention of highlighting the most relevant information related to security vulnerabilities in the context of machine learning (ML) classifiers, focusing specifically on training procedures under data poisoning attacks. A data poisoning attack tampers with the data samples fed to the model during the training phase, leading to a degradation in the model's accuracy during the inference phase, as sketched below.

Data Poisoning
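
The following is a minimal sketch of the attack described in the abstract, not the survey's own code: it assumes a scikit-learn logistic regression classifier on synthetic data, and the label-flipping strategy and poisoned fractions are illustrative choices made here to show how training-time tampering degrades inference-time accuracy.

```python
# Minimal label-flipping data poisoning sketch (illustrative assumptions:
# scikit-learn classifier, synthetic data, fractions chosen for demonstration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary-classification data split into train and test sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y_clean, fraction, rng):
    """Flip the labels of a random fraction of the training samples."""
    y_poisoned = y_clean.copy()
    n_flip = int(fraction * len(y_clean))
    idx = rng.choice(len(y_clean), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary label flip
    return y_poisoned

# Train on increasingly poisoned labels and measure accuracy on the
# untouched test set: more poisoning, lower inference-time accuracy.
for frac in (0.0, 0.2, 0.4):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, frac, rng))
    print(f"poisoned fraction={frac:.1f}  test accuracy={clf.score(X_test, y_test):.3f}")
```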
