Search Results for author: Utku Ozbulak

Found 12 papers, 4 papers with code

BRCA Gene Mutations in dbSNP: A Visual Exploration of Genetic Variants

no code implementations • 1 Sep 2023 • Woowon Jang, Shiwoo Koak, Jiwon Im, Utku Ozbulak, Joris Vankerschaver

BRCA genes, comprising BRCA1 and BRCA2, play indispensable roles in preserving genomic stability and facilitating DNA repair mechanisms.

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

no code implementations • 23 May 2023 • Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver

In this survey, we review a plethora of research efforts conducted on image-oriented SSL, providing a historic view and paying attention to best practices as well as useful software packages.

Contrastive Learning • Self-supervised Learning

Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data

no code implementations • 12 Dec 2022 • Utku Ozbulak, Solha Kang, Jasper Zuallaert, Stephen Depuydt, Joris Vankerschaver

Even though deep neural networks (DNNs) achieve state-of-the-art results for a number of problems involving genomic data, getting DNNs to explain their decision-making process has been a major challenge due to their black-box nature.

Decision Making • Translation

Exact Feature Collisions in Neural Networks

no code implementations • 31 May 2022 • Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem

Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; such maliciously crafted data points containing small perturbations are referred to as adversarial examples.

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve

We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
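
As a rough sketch of how such a measurement can be reproduced (variable names are illustrative and not taken from the paper's code), one only needs the target model's logits for the clean source image and the class predicted for the adversarial example:

    import numpy as np

    def hits_source_top5(source_logits: np.ndarray, adversarial_prediction: int) -> bool:
        # Classes ranked by the target model's confidence on the clean source image.
        top5 = np.argsort(source_logits)[::-1][:5]
        # The reported observation: transferable adversarial examples tend to land
        # in one of these five classes rather than in an arbitrary class.
        return int(adversarial_prediction) in top5

    # Example with random logits for a 1000-class (ImageNet-sized) classifier.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=1000)
    print(hits_source_top5(logits, adversarial_prediction=int(np.argmax(logits))))  # True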

Benchmarking

Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

1 code implementation • 14 Jun 2021 • Utku Ozbulak, Esla Timothy Anzaku, Wesley De Neve, Arnout Van Messem

Although the adoption rate of deep neural networks (DNNs) has increased tremendously in recent years, a solution to their vulnerability to adversarial examples has not yet been found.

Benchmarking

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

1 code implementation • 7 Jul 2020 • Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem

Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks.
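
As a hypothetical illustration of what "regional" means here (a simple masking step under assumed tensor shapes, not the paper's actual attack), the perturbation can be restricted to a rectangular patch before it is added to the image:

    import numpy as np

    def apply_regional_perturbation(image, perturbation, top, left, height, width, epsilon=8 / 255):
        # Zero out the perturbation outside the chosen region so that the L_p norm
        # of the final perturbation only accumulates inside it.
        mask = np.zeros_like(image)
        mask[..., top:top + height, left:left + width] = 1.0
        delta = np.clip(perturbation, -epsilon, epsilon) * mask
        return np.clip(image + delta, 0.0, 1.0)

    image = np.random.rand(3, 224, 224)              # CHW image with values in [0, 1]
    noise = np.random.uniform(-1, 1, image.shape)    # unconstrained perturbation
    adv = apply_regional_perturbation(image, noise, top=50, left=50, height=64, width=64)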

Perturbation Analysis of Gradient-based Adversarial Attacks

no code implementations • 2 Jun 2020 • Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem

Our experiments reveal that the Iterative Fast Gradient Sign attack, commonly thought to be fast at generating adversarial examples, is in fact the worst attack in terms of the number of iterations required when perturbation budgets are held equal.
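
For context, the attack in question can be sketched in a few lines of PyTorch (a minimal I-FGSM loop assuming a standard classifier and a fixed step size, not the paper's exact experimental setup):

    import torch
    import torch.nn.functional as F

    def ifgsm(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
        # Iterative Fast Gradient Sign Method: repeatedly step in the direction of the
        # sign of the loss gradient while keeping the total perturbation within epsilon.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)
                x_adv = torch.clamp(x_adv, 0.0, 1.0)
        return x_adv.detach()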

Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

1 code implementation • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve

Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models.

Image Segmentation • Lesion Segmentation • +4

How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples

no code implementations • 21 Nov 2018 • Utku Ozbulak, Wesley De Neve, Arnout Van Messem

Nowadays, the output of the softmax function is also commonly used to assess the strength of adversarial examples: malicious data points designed to make machine learning models fail during the testing phase.
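
A small numerical example (illustrative values only) shows one reason why this probability can hide information: the softmax output is invariant to adding a constant to every logit, so very different logit magnitudes can map to identical probabilities.

    import numpy as np

    def softmax(logits):
        z = logits - np.max(logits)   # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    # Two logit vectors of very different magnitude produce the same softmax output,
    # because softmax only depends on differences between logits.
    a = np.array([10.0, 8.0, 1.0])
    b = a + 100.0
    print(softmax(a))
    print(softmax(b))   # identical probabilities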
