no code implementations • 31 Jan 2024 • Esla Timothy Anzaku, Hyesoo Hong, Jin-Woo Park, Wonjun Yang, KangMin Kim, JongBum Won, Deshika Vinoshani Kumari Herath, Arnout Van Messem, Wesley De Neve
In this paper, we introduce a lightweight, user-friendly, and scalable framework that synergizes human and machine intelligence for efficient dataset validation and quality enhancement.
no code implementations • 18 Oct 2023 • Khoa Tuan Nguyen, Francesca Tozzi, Nikdokht Rashidian, Wouter Willaert, Joris Vankerschaver, Wesley De Neve
Given that a conventional laparoscope only provides a two-dimensional (2-D) view, the detection and diagnosis of medical ailments can be challenging.
1 code implementation • 23 May 2023 • Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver
In this survey, we review a plethora of research efforts conducted on image-oriented SSL, providing a historic view and paying attention to best practices as well as useful software packages.
no code implementations • 5 Sep 2022 • Esla Timothy Anzaku, Haohan Wang, Arnout Van Messem, Wesley De Neve
Deep Neural Network (DNN) models are increasingly evaluated using new replication test datasets, which have been carefully created to be similar to older and popular benchmark datasets.
no code implementations • 31 May 2022 • Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem
Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; maliciously crafted data points containing such small perturbations are referred to as adversarial examples.
1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve
We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
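A hypothetical sketch of how such a top-5 overlap statistic could be computed; the function name and inputs are illustrative assumptions, not the paper's code:

```python
import numpy as np

def top5_overlap_rate(source_logits, adv_preds):
    """Fraction of transferable adversarial examples whose (wrong) predicted
    class lies among the top-5 classes assigned to the clean source image.
    Inputs are hypothetical: per-image logit vectors and adversarial labels."""
    hits = 0
    for logits, pred in zip(source_logits, adv_preds):
        top5 = np.argsort(logits)[-5:]  # indices of the 5 largest logits
        if pred in top5:
            hits += 1
    return hits / len(adv_preds)
```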
1 code implementation • 14 Jun 2021 • Utku Ozbulak, Esla Timothy Anzaku, Wesley De Neve, Arnout Van Messem
Although the adoption rate of deep neural networks (DNNs) has tremendously increased in recent years, a solution for their vulnerability against adversarial examples has not yet been found.
no code implementations • 26 Jan 2021 • Utku Ozbulak, Baptist Vandersmissen, Azarakhsh Jalalvand, Ivo Couckuyt, Arnout Van Messem, Wesley De Neve
Another concern that is often cited when designing smart home applications is the resilience of these applications against cyberattacks.
1 code implementation • 7 Jul 2020 • Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem
Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks.
no code implementations • 2 Jun 2020 • Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem
Our experiments reveal that the Iterative Fast Gradient Sign attack, commonly considered fast at generating adversarial examples, is in fact the worst attack in terms of the number of iterations required to create adversarial examples under an equal perturbation budget.
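As a reference point for the attack discussed above, here is a minimal sketch of the iterative fast-gradient-sign idea, applied to a toy logistic-regression model rather than a deep network; `w`, `b`, the step size `alpha`, and the budget `eps` are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def iterative_fgsm(w, b, x, y, eps=0.1, alpha=0.02, steps=10):
    """I-FGSM sketch on a logistic-regression model p = sigmoid(w.x + b):
    repeatedly step in the sign of the loss gradient, then project the
    accumulated perturbation back into an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # predicted probability
        grad = (p - y) * w                          # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)       # ascend the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # stay within the eps-ball
    return x_adv
```

The key point of comparison in the abstract is the `steps` count needed before the attack succeeds, with `eps` held equal across attacks.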
1 code implementation • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve
Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models.
no code implementations • 30 Jul 2019 • Utku Ozbulak, Arnout Van Messem, Wesley De Neve
Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning.
no code implementations • 6 Dec 2018 • Mijung Kim, Olivier Janssens, Ho-min Park, Jasper Zuallaert, Sofie Van Hoecke, Wesley De Neve
Glaucoma is a major eye disease, leading to vision loss in the absence of proper medical treatment.
no code implementations • 21 Nov 2018 • Utku Ozbulak, Wesley De Neve, Arnout Van Messem
Nowadays, the output of the softmax function is also commonly used to assess the strength of adversarial examples: malicious data points designed to fail machine learning models during the testing phase.
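For context, the practice described above, reading the maximum softmax probability as model "confidence" and hence as a measure of adversarial strength, can be illustrated with a small sketch (the logit values are arbitrary):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# The maximum softmax probability is often read as the model's "confidence";
# an adversarial example that drives this value high for a wrong class is
# then deemed "strong" -- the usage the paper above calls into question.
probs = softmax(np.array([2.0, 1.0, 0.1]))
confidence = probs.max()
```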
1 code implementation • EMNLP 2018 • Fréderic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, Thomas Demeester
In this paper, we investigate which character-level patterns neural networks learn and if those patterns coincide with manually-defined word segmentations and annotations.
no code implementations • 27 Nov 2017 • Jasper Zuallaert, Mijung Kim, Yvan Saeys, Wesley De Neve
Thanks to rapidly evolving sequencing techniques, the amount of genomic data at our disposal is growing increasingly large.
2 code implementations • 25 Jul 2017 • Fréderic Godin, Jonas Degrave, Joni Dambre, Wesley De Neve
A DReLU, which comes with an unbounded positive and negative image, can be used as a drop-in replacement for a tanh activation function in the recurrent step of Quasi-Recurrent Neural Networks (QRNNs) (Bradbury et al., 2017).
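Assuming the standard dual-ReLU formulation, DReLU(a, b) = max(0, a) - max(0, b), a minimal sketch of the activation and its two-sided unbounded image:

```python
import numpy as np

def drelu(a, b):
    """Dual Rectified Linear Unit: the difference of two ReLUs.
    Unlike tanh, its image is unbounded on both the positive and the
    negative side, the property highlighted in the abstract above."""
    return np.maximum(0.0, a) - np.maximum(0.0, b)
```

Like tanh, the activation can output negative, zero, and positive values, which is what makes it a plausible drop-in replacement in the recurrent step.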
no code implementations • WS 2017 • Fréderic Godin, Joni Dambre, Wesley De Neve
In this paper, we introduce the novel concept of densely connected layers into recurrent neural networks.