no code implementations • 1 Sep 2023 • Woowon Jang, Shiwoo Koak, Jiwon Im, Utku Ozbulak, Joris Vankerschaver
BRCA genes, comprising BRCA1 and BRCA2, play indispensable roles in preserving genomic stability and facilitating DNA repair mechanisms.
1 code implementation • 23 May 2023 • Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver
In this survey, we review a plethora of research efforts conducted on image-oriented SSL, providing a historic view and paying attention to best practices as well as useful software packages.
no code implementations • 12 Dec 2022 • Utku Ozbulak, Solha Kang, Jasper Zuallaert, Stephen Depuydt, Joris Vankerschaver
Even though deep neural networks (DNNs) achieve state-of-the-art results for a number of problems involving genomic data, getting DNNs to explain their decision-making process has been a major challenge due to their black-box nature.
no code implementations • 31 May 2022 • Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem
Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; maliciously crafted data points containing such small perturbations are referred to as adversarial examples.
1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve
We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
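The reported statistic hinges on a simple check: does the class an adversarial example is misclassified into fall within the top-5 classes predicted for the underlying source image? A minimal sketch of that check (the function name and toy logits are illustrative, not from the paper):

```python
import numpy as np

def in_top5(source_logits, adversarial_class):
    # Take the five classes with the largest logits for the source image
    # and test membership of the adversarial prediction among them.
    top5 = np.argsort(source_logits)[::-1][:5]
    return adversarial_class in top5

# Toy 10-class logit vector for a source image
logits = np.array([0.1, 2.3, 0.5, 1.9, 3.1, 0.2, 1.5, 0.7, 2.8, 0.4])
print(in_top5(logits, adversarial_class=8))  # class 8 is among the five largest logits -> True
print(in_top5(logits, adversarial_class=0))  # class 0 is not -> False
```

Aggregating this boolean over all transferable adversarial examples yields the fraction the abstract refers to.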
1 code implementation • 14 Jun 2021 • Utku Ozbulak, Esla Timothy Anzaku, Wesley De Neve, Arnout Van Messem
Although the adoption rate of deep neural networks (DNNs) has tremendously increased in recent years, a solution for their vulnerability against adversarial examples has not yet been found.
no code implementations • 26 Jan 2021 • Utku Ozbulak, Baptist Vandersmissen, Azarakhsh Jalalvand, Ivo Couckuyt, Arnout Van Messem, Wesley De Neve
Another concern that is often cited when designing smart home applications is the resilience of these applications against cyberattacks.
1 code implementation • 7 Jul 2020 • Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem
Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks.
no code implementations • 2 Jun 2020 • Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem
Our experiments reveal that the Iterative Fast Gradient Sign attack, which is commonly thought to generate adversarial examples quickly, is in fact the worst attack in terms of the number of iterations required to create adversarial examples under equal-perturbation settings.
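For context, the Iterative Fast Gradient Sign Method repeatedly steps in the direction of the sign of the input gradient and clips the result to an epsilon-ball around the original input. A minimal sketch on a logistic-regression model (a toy stand-in with an analytic gradient; the paper's experiments use deep networks, and all names and numbers below are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ifgsm(x, y, w, b, eps, alpha, steps):
    # Iterative FGSM: each step moves x by alpha in the direction of the
    # sign of the loss gradient w.r.t. the input, then clips the cumulative
    # perturbation to the eps-ball around the original x.
    x_orig = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w  # cross-entropy gradient w.r.t. the input
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return x_adv

# Toy example: push a correctly classified point across the decision boundary
w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, -0.5]), 1.0   # w @ x = 1.5 > 0, so class 1 initially
x_adv = ifgsm(x, y, w, b, eps=0.6, alpha=0.15, steps=10)
print(w @ x_adv)  # now negative: the prediction flips to class 0
```

Counting how many such steps are needed before the prediction flips, at a fixed perturbation budget, is the kind of iteration-count comparison the abstract describes.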
no code implementations • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve
Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning.
1 code implementation • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve
Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models.
no code implementations • 21 Nov 2018 • Utku Ozbulak, Wesley De Neve, Arnout Van Messem
Nowadays, the output of the softmax function is also commonly used to assess the strength of adversarial examples: malicious data points designed to fail machine learning models during the testing phase.
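To illustrate this usage: the softmax output is often read as model confidence, so an adversarial example that is misclassified with a high softmax score is treated as "strong". A minimal sketch with toy 3-class logits (the benign and adversarial vectors below are hypothetical):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

benign = softmax(np.array([4.0, 1.0, 0.5]))        # predicts class 0 with high score
adversarial = softmax(np.array([1.0, 4.2, 0.5]))   # predicts class 1 with a similarly high score
print(benign.argmax(), round(benign.max(), 3))
print(adversarial.argmax(), round(adversarial.max(), 3))
```

Both outputs sum to one and peak above 0.9, which is why softmax scores are read as confidence; the adversarial vector shows the same apparent certainty attached to the wrong class.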