Search Results for author: Jonathan Peck

Found 5 papers, 1 paper with code

Distilling Deep RL Models Into Interpretable Neuro-Fuzzy Systems

no code implementations • 7 Sep 2022 • Arne Gevaert, Jonathan Peck, Yvan Saeys

In this work, we present an algorithm to distill the policy from a deep Q-network into a compact neuro-fuzzy controller.

OpenAI Gym • reinforcement-learning +1
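The excerpt above only names the distillation step. As a rough, hypothetical illustration of that kind of policy distillation, the Python sketch below labels sampled states with a teacher Q-function's greedy actions and fits a small interpretable student to mimic them. The random linear teacher, the state/action dimensions, and the shallow decision tree standing in for the paper's neuro-fuzzy controller are all assumptions made to keep the example self-contained; it is not the algorithm from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 4, 2

# Stand-in for the trained deep Q-network: a random linear Q-function,
# used only so the sketch runs end to end.
W = rng.normal(size=(STATE_DIM, N_ACTIONS))
def teacher_q_values(states):
    return states @ W

# 1. Sample states (in practice: states visited while rolling out the teacher policy).
states = rng.normal(size=(10_000, STATE_DIM))

# 2. Label each state with the teacher's greedy action.
actions = teacher_q_values(states).argmax(axis=1)

# 3. Fit a compact, interpretable student on the (state, action) pairs.
#    The paper distills into a neuro-fuzzy controller; a shallow tree is used
#    here purely as a readily available interpretable stand-in.
student = DecisionTreeClassifier(max_depth=4).fit(states, actions)

# 4. Policy fidelity: how often the student picks the teacher's action.
fidelity = (student.predict(states) == actions).mean()
print(f"student/teacher agreement: {fidelity:.3f}")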

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

1 code implementation • 7 Jul 2020 • Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem

Regional adversarial attacks often rely on complicated methods for generating adversarial perturbations, making it hard to compare their efficacy against well-known attacks.
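For context on what a regional perturbation looks like in its simplest form, here is a minimal PyTorch sketch of a single FGSM-style step confined to a binary image mask. The toy linear model, the 8x8 patch location, and the epsilon budget are arbitrary assumptions; this is a generic masked attack, not the method proposed in the paper.

import torch
import torch.nn as nn

# Toy stand-in for an image classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32)   # placeholder input in [0, 1]
label = torch.tensor([3])          # assumed true class

# Binary mask selecting the region that may be perturbed (here: an 8x8 patch).
mask = torch.zeros_like(image)
mask[..., 12:20, 12:20] = 1.0

# One FGSM-style step, with the perturbation confined to the masked region.
image.requires_grad_(True)
loss = loss_fn(model(image), label)
loss.backward()
epsilon = 8 / 255
adversarial = (image + epsilon * image.grad.sign() * mask).clamp(0, 1).detach()

# Only pixels inside the mask differ from the original image.
changed = ((adversarial - image.detach()).abs() > 0).float().mean()
print(f"fraction of pixels modified: {changed.item():.4f}")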

Inline Detection of DGA Domains Using Side Information

no code implementations • 12 Mar 2020 • Raaghavi Sivaguru, Jonathan Peck, Femi Olumofin, Anderson Nascimento, Martine De Cock

We found that DGA classifiers that rely on both the domain name and side information achieve high performance and are more robust against adversaries.

Adversarial Attack
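As a rough sketch of what "domain name plus side information" can mean in practice, the Python example below concatenates character-bigram features of the domain string with a few numeric side features and fits a logistic regression. The particular side-information columns, the toy data, and the choice of model are assumptions for illustration, not the features or classifiers evaluated in the paper.

import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative dataset: (domain, side information, label).
domains = ["google.com", "wikipedia.org", "xj3kqpzv7f.net", "q8r2mzvv1d.biz"]
side_info = np.array([[120, 5400],   # hypothetical columns, e.g. number of
                      [80, 6100],    # resolving IPs, days since first seen
                      [1, 0],
                      [1, 1]], dtype=float)
labels = np.array([0, 0, 1, 1])      # 0 = benign, 1 = DGA

# Character bigrams capture the "randomness" of algorithmically generated names.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 2))
name_features = vectorizer.fit_transform(domains)

# Concatenate the string-derived features with the side information.
features = hstack([name_features, csr_matrix(side_info)])

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))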

CharBot: A Simple and Effective Method for Evading DGA Classifiers

no code implementations • 3 May 2019 • Jonathan Peck, Claire Nie, Raaghavi Sivaguru, Charles Grumer, Femi Olumofin, Bin Yu, Anderson Nascimento, Martine De Cock

In this work, we present a novel DGA called CharBot, which is capable of producing large numbers of unregistered domain names that are not detected by state-of-the-art classifiers for real-time detection of DGAs, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach).

Adversarial Attack
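The excerpt does not spell out how CharBot builds its domains; one simple scheme consistent with the title is to make small character-level edits to benign domain names, as in the hypothetical Python sketch below. The seed list, alphabet, number of character swaps, and fixed .com TLD are arbitrary choices for illustration, and generated names would still need to be checked against registration data; this is not presented as CharBot's exact procedure.

import random
import string

random.seed(0)

# Seed list of benign second-level domains (a real generator would draw from a
# large benign top-sites list).
BENIGN = ["google", "wikipedia", "amazon", "reddit"]
ALPHABET = string.ascii_lowercase + string.digits

def perturb(domain: str, n_swaps: int = 2) -> str:
    """Replace n_swaps randomly chosen characters with different random characters."""
    chars = list(domain)
    for pos in random.sample(range(len(chars)), k=n_swaps):
        chars[pos] = random.choice([c for c in ALPHABET if c != chars[pos]])
    return "".join(chars)

for seed_domain in BENIGN:
    print(perturb(seed_domain) + ".com")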
