Search Results for author: Arnout Van Messem

Found 12 papers, 5 papers with code

Leveraging Human-Machine Interactions for Computer Vision Dataset Quality Enhancement

no code implementations31 Jan 2024 Esla Timothy Anzaku, Hyesoo Hong, Jin-Woo Park, Wonjun Yang, KangMin Kim, JongBum Won, Deshika Vinoshani Kumari Herath, Arnout Van Messem, Wesley De Neve

In this paper, we introduce a lightweight, user-friendly, and scalable framework that synergizes human and machine intelligence for efficient dataset validation and quality enhancement.

Multi-class Classification

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

1 code implementation23 May 2023 Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver

In this survey, we review a plethora of research efforts conducted on image-oriented SSL, providing a historic view and paying attention to best practices as well as useful software packages.

Contrastive Learning · Self-Supervised Learning

A Principled Evaluation Protocol for Comparative Investigation of the Effectiveness of DNN Classification Models on Similar-but-non-identical Datasets

no code implementations5 Sep 2022 Esla Timothy Anzaku, Haohan Wang, Arnout Van Messem, Wesley De Neve

Deep Neural Network (DNN) models are increasingly evaluated using new replication test datasets, which have been carefully created to be similar to older and popular benchmark datasets.

Exact Feature Collisions in Neural Networks

no code implementations31 May 2022 Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem

Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; such maliciously crafted data points containing small perturbations are referred to as adversarial examples.
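The feature collisions studied in this paper can be illustrated with a minimal sketch: because ReLU maps every negative pre-activation to zero, two distinct inputs can yield exactly the same feature vector. The weight matrix and inputs below are hypothetical toy values, not taken from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical single-layer weights: the second input coordinate
# only feeds a unit whose pre-activation stays negative, so ReLU
# discards any difference in that coordinate.
W = np.array([[1.0, -1.0],
              [1.0,  0.0]])

x1 = np.array([1.0, 2.0])
x2 = np.array([1.0, 5.0])   # a different data point

f1, f2 = relu(W @ x1), relu(W @ x2)
print(np.array_equal(f1, f2))  # True: distinct inputs, identical features
```

Since the downstream layers see only the (identical) features, the network necessarily assigns both inputs the same prediction, which is the phenomenon the paper terms an exact feature collision.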

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

1 code implementation NeurIPS Workshop ImageNet_PPF 2021 Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve

We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
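The top-5 criterion used in this finding can be sketched in a few lines; the logits and class indices below are hypothetical, chosen only to illustrate the check.

```python
import numpy as np

def top_k_classes(logits, k=5):
    """Return the indices of the k highest-scoring classes."""
    return np.argsort(logits)[::-1][:k]

# Hypothetical logits for a 10-class source image.
source_logits = np.array([2.0, 0.5, 3.1, 1.2, 0.1, 2.7, 0.3, 1.9, 0.8, 2.2])
top5 = top_k_classes(source_logits, k=5)

# An adversarial example whose predicted class on the target model
# still lies within the source image's top-5 would count toward the
# 71% reported above.
adv_pred = 5
print(adv_pred in top5)  # True: class 5 is among the source top-5
```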


Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

1 code implementation14 Jun 2021 Utku Ozbulak, Esla Timothy Anzaku, Wesley De Neve, Arnout Van Messem

Although the adoption rate of deep neural networks (DNNs) has increased tremendously in recent years, a solution to their vulnerability to adversarial examples has not yet been found.


Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

1 code implementation7 Jul 2020 Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem

Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks.
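The $L_p$ norms referenced in the title are straightforward to compute; a minimal sketch, with a hypothetical perturbation confined to a small region of the image (the hallmark of a regional attack):

```python
import numpy as np

# Hypothetical 8x8 grayscale image perturbation confined to a
# 2x2 region, as a regional adversarial attack would produce.
perturbation = np.zeros((8, 8))
perturbation[3:5, 3:5] = 0.5

flat = perturbation.ravel()
l0 = np.count_nonzero(flat)        # number of changed pixels
l2 = np.linalg.norm(flat, ord=2)   # Euclidean magnitude
linf = np.max(np.abs(flat))        # largest single-pixel change

print(l0, l2, linf)  # 4 1.0 0.5
```

Restricting the perturbed region directly shrinks the $L_0$ and $L_2$ norms relative to a full-image perturbation of the same per-pixel magnitude, which is the trade-off the paper examines against transferability.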

Perturbation Analysis of Gradient-based Adversarial Attacks

no code implementations2 Jun 2020 Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem

Our experiments reveal that the Iterative Fast Gradient Sign attack, which is thought to be fast for generating adversarial examples, is the worst attack in terms of the number of iterations required to create adversarial examples in the setting of equal perturbation.
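For reference, the iterative Fast Gradient Sign update discussed above takes the following general form. This is a generic sketch with a toy gradient function standing in for the model's loss gradient; it is not the paper's experimental setup.

```python
import numpy as np

def iterative_fgsm(x, grad_fn, epsilon=0.03, alpha=0.01, steps=10):
    """Iterative FGSM: repeatedly step in the sign of the loss
    gradient, clipping back into an epsilon-ball around the input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project back
    return x_adv

# Toy gradient that pushes every component upward.
x0 = np.zeros(4)
adv = iterative_fgsm(x0, grad_fn=lambda x: np.ones_like(x))
print(adv)  # every entry saturates at epsilon = 0.03
```

The per-iteration step size `alpha` and the budget `epsilon` jointly determine how many iterations are needed to reach a given perturbation magnitude, which is the kind of equal-perturbation comparison the paper performs.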

Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

1 code implementation30 Jul 2019 Utku Ozbulak, Arnout Van Messem, Wesley De Neve

Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models.

Image Segmentation · Lesion Segmentation (+4)

How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples

no code implementations21 Nov 2018 Utku Ozbulak, Wesley De Neve, Arnout Van Messem

Nowadays, the output of the softmax function is also commonly used to assess the strength of adversarial examples: malicious data points designed to fail machine learning models during the testing phase.
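A minimal sketch of the quantity in question: the softmax converts a model's logits into probabilities, and the probability assigned to the adversarial target class is what is commonly (and, per this paper, misleadingly) read as attack strength. The logits below are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for an adversarial example; class 2 is the
# attack's target class.
logits = np.array([1.0, 0.5, 4.0])
probs = softmax(logits)
print(probs[2])  # high softmax "confidence" in the target class
```

Because softmax normalizes only the relative differences between logits, a large output for the target class need not reflect a robust misclassification, which is the core of the paper's argument.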
