no code implementations • 31 Jan 2024 • Esla Timothy Anzaku, Hyesoo Hong, Jin-Woo Park, Wonjun Yang, KangMin Kim, JongBum Won, Deshika Vinoshani Kumari Herath, Arnout Van Messem, Wesley De Neve
In this paper, we introduce a lightweight, user-friendly, and scalable framework that synergizes human and machine intelligence for efficient dataset validation and quality enhancement.
1 code implementation • 23 May 2023 • Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver
In this survey, we review a plethora of research efforts conducted on image-oriented SSL, providing a historical view and paying attention to best practices as well as useful software packages.
no code implementations • 5 Sep 2022 • Esla Timothy Anzaku, Haohan Wang, Arnout Van Messem, Wesley De Neve
Deep Neural Network (DNN) models are increasingly evaluated using new replication test datasets, which have been carefully created to be similar to older and popular benchmark datasets.
no code implementations • 31 May 2022 • Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem
Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; maliciously crafted data points containing such small perturbations are referred to as adversarial examples.
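The idea can be illustrated with the classic one-step fast gradient sign method (FGSM) on a toy linear softmax classifier. This is a generic sketch of adversarial perturbation, not the method of the paper above; the model, weights, and budget are all made up for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, eps):
    """One-step fast gradient sign perturbation on a linear softmax model.

    x: input vector, y: label index, W: weights (classes x features),
    eps: L-infinity perturbation budget.
    """
    p = softmax(W @ x)
    dldz = p.copy()
    dldz[y] -= 1.0          # d(cross-entropy)/d(logits) = p - onehot(y)
    grad_x = W.T @ dldz     # chain rule through z = W @ x
    # Move every input dimension eps in the direction that increases the loss.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
y = int(np.argmax(W @ x))          # the model's clean prediction
x_adv = fgsm(x, y, W, eps=0.5)     # perturbed input, still within the budget
```

The perturbation is small per dimension (at most `eps`), yet it is aimed precisely at the direction that raises the loss, which is why such inputs can flip predictions.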
1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve
We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
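The $71\%$ figure counts adversarial examples whose (wrong) predicted class falls among the top-5 classes that the model predicts for the clean source image. A minimal sketch of that membership check, with illustrative logits:

```python
import numpy as np

def in_source_top5(source_logits, adv_pred):
    """Return True if the adversarial prediction lies in the top-5
    classes predicted for the underlying (clean) source image."""
    top5 = np.argsort(source_logits)[::-1][:5]
    return adv_pred in top5

# Hypothetical clean-image logits over 7 classes.
source_logits = np.array([0.1, 2.3, 0.7, 1.9, 0.2, 3.1, 0.05])
print(in_source_top5(source_logits, adv_pred=3))  # → True: class 3 ranks 3rd
```

Averaging this check over all transferable adversarial examples yields the reported percentage.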
1 code implementation • 14 Jun 2021 • Utku Ozbulak, Esla Timothy Anzaku, Wesley De Neve, Arnout Van Messem
Although the adoption rate of deep neural networks (DNNs) has tremendously increased in recent years, a solution for their vulnerability against adversarial examples has not yet been found.
no code implementations • 26 Jan 2021 • Utku Ozbulak, Baptist Vandersmissen, Azarakhsh Jalalvand, Ivo Couckuyt, Arnout Van Messem, Wesley De Neve
Another concern that is often cited when designing smart home applications is the resilience of these applications against cyberattacks.
1 code implementation • 7 Jul 2020 • Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem
Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks.
no code implementations • 2 Jun 2020 • Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem
Our experiments reveal that the Iterative Fast Gradient Sign attack, which is generally considered fast, is in fact the worst attack in terms of the number of iterations required to generate adversarial examples under an equal-perturbation budget.
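Counting iterations under a fixed perturbation budget can be sketched as follows, again on a toy linear model rather than the paper's actual setup: take small signed-gradient steps, clip back into the budget, and stop once the prediction flips.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ifgsm_iterations(x, y, W, eps, alpha, max_iter=100):
    """Iterative FGSM on a linear softmax model: signed-gradient steps of
    size alpha, clipped to the eps ball around x. Returns the number of
    iterations used and the perturbed input."""
    x_adv = x.copy()
    for it in range(1, max_iter + 1):
        p = softmax(W @ x_adv)
        dldz = p.copy()
        dldz[y] -= 1.0
        grad_x = W.T @ dldz                       # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad_x)   # one small signed step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the budget
        if np.argmax(W @ x_adv) != y:
            return it, x_adv                      # prediction flipped
    return max_iter, x_adv

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
y = int(np.argmax(W @ x))
iters, x_adv = ifgsm_iterations(x, y, W, eps=1.0, alpha=0.1)
```

Because each step moves only `alpha` per dimension, many iterations may be needed before the budget is exhausted, which is the iteration cost the abstract refers to.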
no code implementations • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve
Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning.
1 code implementation • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve
Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models.
no code implementations • 21 Nov 2018 • Utku Ozbulak, Wesley De Neve, Arnout Van Messem
Nowadays, the output of the softmax function is also commonly used to assess the strength of adversarial examples: malicious data points designed to make machine learning models fail at test time.
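The "strength" measure in question is simply the softmax probability of the predicted class, commonly read as model confidence. A minimal sketch of how it is computed from raw logits (not the paper's analysis); the example logits are made up:

```python
import numpy as np

def softmax_confidence(logits):
    """Return the predicted class and its softmax probability,
    the value commonly interpreted as the model's confidence."""
    z = logits - logits.max()            # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    k = int(np.argmax(p))
    return k, float(p[k])

# A strong adversarial example can elicit very high confidence
# even though the predicted class is wrong.
cls, conf = softmax_confidence(np.array([1.0, 8.0, 0.5]))
print(cls, round(conf, 3))  # → 1 0.999
```

Because softmax probabilities can be pushed arbitrarily close to 1 by scaling the logits, high confidence alone says little about whether a prediction is trustworthy.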