Search Results for author: Utku Ozbulak

Found 17 papers, 5 papers with code

Evaluating Visual Explanations of Attention Maps for Transformer-based Medical Imaging

no code implementations • 12 Mar 2025 • Minjae Chung, Jong Bum Won, Ganghyun Kim, Yujin Kim, Utku Ozbulak

Although Vision Transformers (ViTs) have recently demonstrated superior performance in medical imaging problems, they face explainability issues similar to those of previous architectures such as convolutional neural networks.

Decision Making • Self-Supervised Learning
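The paper evaluates attention maps as visual explanations for ViTs. As a rough illustration of the kind of explanation under evaluation, the sketch below implements the common attention-rollout heuristic; it assumes a ViT implementation that exposes one (batch, heads, tokens, tokens) attention tensor per block, and is not the paper's own evaluation code.

```python
import torch

def attention_rollout(attentions):
    """Aggregate per-block ViT attention tensors into one relevance map by
    multiplying head-averaged attention matrices across blocks (attention
    rollout), adding an identity term to model the residual connections."""
    num_tokens = attentions[0].size(-1)
    result = torch.eye(num_tokens)
    for attn in attentions:
        attn = attn.mean(dim=1)[0]                    # average heads, first image
        attn = attn + torch.eye(num_tokens)           # account for skip connection
        attn = attn / attn.sum(dim=-1, keepdim=True)  # re-normalize rows
        result = attn @ result
    return result[0, 1:]  # relevance of each patch token to the [CLS] token

# Toy usage: random "attention" from a 12-block, 12-head ViT over 197 tokens.
fake_attn = [torch.rand(1, 12, 197, 197).softmax(dim=-1) for _ in range(12)]
relevance = attention_rollout(fake_attn)  # shape (196,), one score per patch
```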

Exploring Patient Data Requirements in Training Effective AI Models for MRI-based Breast Cancer Classification

no code implementations • 22 Feb 2025 • Solha Kang, Wesley De Neve, Francois Rameau, Utku Ozbulak

Through large-scale experiments with training sets containing varying numbers of patients, we show that medical institutions do not need a decade's worth of MRI images to train an AI model that performs competitively with the state of the art, provided the model leverages foundation models.

Breast Cancer Detection • Cancer Classification

Color Flow Imaging Microscopy Improves Identification of Stress Sources of Protein Aggregates in Biopharmaceuticals

no code implementations • 26 Jan 2025 • Michaela Cohrs, Shiwoo Koak, Yejin Lee, Yu Jin Sung, Wesley De Neve, Hristo L. Svilenov, Utku Ozbulak

Using both supervised and self-supervised convolutional neural networks, as well as vision transformers, in large-scale experiments, we demonstrate that deep learning with color FIM images consistently outperforms deep learning with monochrome images, highlighting the potential of color FIM for stress source classification over its monochrome counterpart.

Self-supervised Benchmark Lottery on ImageNet: Do Marginal Improvements Translate to Improvements on Similar Datasets?

no code implementations • 26 Jan 2025 • Utku Ozbulak, Esla Timothy Anzaku, Solha Kang, Wesley De Neve, Joris Vankerschaver

To avoid the "benchmark lottery" on ImageNet and to ensure a fair benchmarking process, we investigate the use of a unified metric that takes into account the performance of models on other ImageNet variant datasets.

Benchmarking • Self-Supervised Learning
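The idea of a unified metric can be made concrete with a small sketch. The following is illustrative only (the paper's exact aggregation may differ): rank models by their mean accuracy over ImageNet and its variant test sets rather than by ImageNet accuracy alone, so a marginal win on ImageNet no longer decides the ranking.

```python
# Hypothetical accuracies (%) for two self-supervised models; both the
# dataset names and the numbers are made up for illustration.
scores = {
    "model_a": {"imagenet": 76.1, "imagenet_v2": 63.9, "imagenet_r": 41.2},
    "model_b": {"imagenet": 76.3, "imagenet_v2": 63.1, "imagenet_r": 39.8},
}

def unified_score(per_dataset):
    # Mean accuracy over all datasets, weighting each variant equally.
    return sum(per_dataset.values()) / len(per_dataset)

# model_a wins under the unified metric despite losing on ImageNet itself.
for name, per_dataset in sorted(scores.items(), key=lambda kv: -unified_score(kv[1])):
    print(f"{name}: {unified_score(per_dataset):.2f}")
```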

Identifying Critical Tokens for Accurate Predictions in Transformer-based Medical Imaging Models

no code implementations • 26 Jan 2025 • Solha Kang, Joris Vankerschaver, Utku Ozbulak

With the advancements in self-supervised learning (SSL), transformer-based computer vision models have recently demonstrated superior results compared to convolutional neural networks (CNNs) and are poised to dominate the field of artificial intelligence (AI)-based medical imaging in the upcoming years.

Decision Making • Self-Supervised Learning

BRCA Gene Mutations in dbSNP: A Visual Exploration of Genetic Variants

no code implementations • 1 Sep 2023 • Woowon Jang, Shiwoo Koak, Jiwon Im, Utku Ozbulak, Joris Vankerschaver

BRCA genes, comprising BRCA1 and BRCA2, play indispensable roles in preserving genomic stability and facilitating DNA repair mechanisms.

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

1 code implementation • 23 May 2023 • Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver

In this survey, we review a plethora of research efforts conducted on image-oriented SSL, providing a historic view and paying attention to best practices as well as useful software packages.

Contrastive Learning • Self-Supervised Learning

Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data

no code implementations • 12 Dec 2022 • Utku Ozbulak, Solha Kang, Jasper Zuallaert, Stephen Depuydt, Joris Vankerschaver

Even though deep neural networks (DNNs) achieve state-of-the-art results for a number of problems involving genomic data, getting DNNs to explain their decision-making process has been a major challenge due to their black-box nature.

Decision Making • Translation
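The title's idea of probing a model with mutations can be sketched as in-silico mutagenesis: mutate one nucleotide at a time and record how the model's output moves. The code below is a generic baseline, written under the assumption of a model that maps a one-hot (4, L) DNA sequence to a single score; it is not the paper's evaluation protocol.

```python
import torch
import torch.nn.functional as F

BASES = "ACGT"

def one_hot(seq):
    idx = torch.tensor([BASES.index(b) for b in seq])
    return F.one_hot(idx, num_classes=4).float().T  # shape (4, len(seq))

def mutation_effects(model, seq):
    """Score every possible point mutation by the change it causes in the
    model's output (in-silico mutagenesis)."""
    with torch.no_grad():
        ref = model(one_hot(seq).unsqueeze(0)).item()
        effects = torch.zeros(4, len(seq))
        for pos in range(len(seq)):
            for b, base in enumerate(BASES):
                if base == seq[pos]:
                    continue  # skip the reference base itself
                mutant = seq[:pos] + base + seq[pos + 1:]
                effects[b, pos] = model(one_hot(mutant).unsqueeze(0)).item() - ref
    return effects  # large |value| marks a position the model is sensitive to
```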

Exact Feature Collisions in Neural Networks

no code implementations • 31 May 2022 • Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem

Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; maliciously crafted data points containing such small perturbations are referred to as adversarial examples.

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve

We find that 71% of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.

Benchmarking
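A minimal sketch of the check behind that 71% statistic: an adversarial example "hits the source top-5" when the class the model assigns to it is among the five classes it ranked highest for the clean source image. Function and variable names below are mine, not the paper's.

```python
import torch

def misclassified_into_source_top5(model, x_source, x_adv):
    """For each example, test whether the adversarial prediction falls into
    one of the top-5 classes predicted for the underlying source image."""
    with torch.no_grad():
        source_top5 = model(x_source).topk(5, dim=1).indices    # (batch, 5)
        adv_pred = model(x_adv).argmax(dim=1, keepdim=True)     # (batch, 1)
    return (adv_pred == source_top5).any(dim=1)  # boolean tensor per example
```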

Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

1 code implementation • 14 Jun 2021 • Utku Ozbulak, Esla Timothy Anzaku, Wesley De Neve, Arnout Van Messem

Although the adoption rate of deep neural networks (DNNs) has tremendously increased in recent years, a solution for their vulnerability against adversarial examples has not yet been found.

Benchmarking

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

1 code implementation • 7 Jul 2020 • Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem

Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks.

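In contrast to those complicated methods, a regional perturbation can be as simple as masking an ordinary perturbation so it only touches a chosen patch. The sketch below shows that idea in generic form; the mask shape and perturbation budget are illustrative, not the paper's settings.

```python
import torch

def restrict_to_region(perturbation, mask):
    # Zero the perturbation outside a binary region mask so the attack only
    # modifies the selected patch of the image.
    return perturbation * mask

# Illustration: perturb only a 56x56 patch of a 224x224 image.
mask = torch.zeros(1, 1, 224, 224)
mask[..., :56, :56] = 1.0
delta = 0.03 * torch.randn(1, 3, 224, 224)   # some full-image perturbation
regional_delta = restrict_to_region(delta, mask)
```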

Perturbation Analysis of Gradient-based Adversarial Attacks

no code implementations • 2 Jun 2020 • Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem

Our experiments reveal that the Iterative Fast Gradient Sign attack, often assumed to generate adversarial examples quickly, is the worst attack in terms of the number of iterations required in the setting of equal perturbation.
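For reference, a minimal I-FGSM in its standard untargeted formulation; the step size, budget, and step count here are common defaults, not the paper's experimental settings.

```python
import torch

def ifgsm(model, loss_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative Fast Gradient Sign Method: repeatedly step along the sign of
    the loss gradient and project back into an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # untargeted ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                      # stay a valid image
    return x_adv.detach()
```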

Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

1 code implementation • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve

Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models.

Image Segmentation • Lesion Segmentation • +5

How the Softmax Output is Misleading for Evaluating the Strength of Adversarial Examples

no code implementations • 21 Nov 2018 • Utku Ozbulak, Wesley De Neve, Arnout Van Messem

Nowadays, the output of the softmax function is also commonly used to assess the strength of adversarial examples: malicious data points designed to fail machine learning models during the testing phase.
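The "strength" reading the paper questions is easy to reproduce: take the softmax probability assigned to the adversarial class. The sketch below (with made-up logits) shows one way this can mislead: logit vectors with very different absolute magnitudes can produce nearly the same softmax confidence.

```python
import torch

def softmax_confidence(logits, target_class):
    # The softmax probability of the adversarial class, commonly (and, per
    # the paper, misleadingly) read as the strength of the attack.
    return torch.softmax(logits, dim=1)[:, target_class]

# Similar confidences (~0.84 vs ~0.88) despite a tenfold difference in logit
# scale: softmax hides the absolute margins in the logit space.
print(softmax_confidence(torch.tensor([[2.0, 4.0, 1.0]]), 1))
print(softmax_confidence(torch.tensor([[20.0, 22.0, 10.0]]), 1))
```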
