Search Results for author: Danilo Vasconcellos Vargas

Found 21 papers, 7 papers with code

Deep neural network loses attention to adversarial images

no code implementations • 10 Jun 2021 • Shashank Kotyan, Danilo Vasconcellos Vargas

We also analyse how different adversarial samples distort the attention of the neural network compared to original samples.

Image Classification

Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation

no code implementations • 2 Sep 2020 • Danilo Vasconcellos Vargas, Bingli Liao, Takahiro Kanzaki

Thus, $\varphi$DNNs reveal that input recreation has strong benefits for artificial neural networks similar to biological ones, shedding light on the importance of purposely corrupting the input as well as pioneering an area of perception models based on GANs and autoencoders for robust recognition in artificial intelligence.

Adversarial Robustness • Super-Resolution
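
The recreation idea lends itself to a compact illustration: place an autoencoder in front of a frozen classifier so that every input is deliberately corrupted and recreated before classification. This is a minimal sketch of the general scheme, not the paper's $\varphi$DNN; the autoencoder layout, the noise level, and the `classifier` module are all assumptions.

```python
import torch
import torch.nn as nn

class RecreatingClassifier(nn.Module):
    """Recreate the input with a small autoencoder before classifying it.
    Illustrative only: the actual phi-DNNs use perception models based on
    GANs and autoencoders, with purposely corrupted inputs."""
    def __init__(self, classifier: nn.Module):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.classifier = classifier          # assumed pretrained
        for p in self.classifier.parameters():
            p.requires_grad_(False)           # kept frozen

    def forward(self, x):
        # Purposely corrupt the input before recreation (additive noise
        # stands in for whatever corruption the perception model uses).
        x_noisy = (x + 0.1 * torch.randn_like(x)).clamp(0.0, 1.0)
        recreated = self.decoder(self.encoder(x_noisy))
        return self.classifier(recreated)
```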

Continual General Chunking Problem and SyncMap

1 code implementation • 14 Jun 2020 • Danilo Vasconcellos Vargas, Toshitake Asabuki

Here, we propose a continual generalization of the chunking problem (an unsupervised problem), encompassing fixed and probabilistic chunks, discovery of temporal and causal structures and their continual variations.

Chunking
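
The problem statement can be made concrete with a toy generator: a symbol stream built from fixed chunks (deterministic subsequences) and a probabilistic chunk (symbols sampled from a set), with the underlying structure switching partway through to model continual variation. The chunk contents and switch point below are purely illustrative, not the paper's benchmarks.

```python
import random

# Hypothetical chunk definitions for an unsupervised chunking stream.
FIXED_CHUNKS = [["a", "b", "c"], ["d", "e"]]   # deterministic sequences
PROB_CHUNK = ["x", "y", "z"]                   # sampled with replacement

def emit(steps, use_probabilistic):
    stream = []
    while len(stream) < steps:
        if use_probabilistic and random.random() < 0.5:
            stream += random.choices(PROB_CHUNK, k=3)
        else:
            stream += random.choice(FIXED_CHUNKS)
    return stream[:steps]

# Continual variation: the structure changes mid-stream, and the learner
# must re-discover the chunks without supervision.
stream = emit(500, use_probabilistic=False) + emit(500, use_probabilistic=True)
```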

Representation Quality Explain Adversarial Attacks

no code implementations • 25 Sep 2019 • Danilo Vasconcellos Vargas, Shashank Kotyan, Moe Matsuki

The main idea lies in the fact that some features are present on unknown classes and that unknown classes can be defined as a combination of previously learned features without representation bias (a bias towards a representation that maps only the current set of input-outputs and their boundary).

Evolving Robust Neural Architectures to Defend from Adversarial Attacks

1 code implementation • 27 Jun 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas

By creating a novel neural architecture search with options for dense layers to connect with convolution layers and vice versa, as well as the addition of concatenation layers in the search, we were able to evolve an architecture that is inherently accurate on adversarial samples.

Neural Architecture Search
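
A minimal sketch of the kind of building block such a search space permits, assuming PyTorch: a dense layer whose output is reshaped into a feature map and concatenated with a convolutional path. This illustrates the dense-to-convolution connections and concatenation layers the search allows, not the evolved architecture itself.

```python
import torch
import torch.nn as nn

class DenseToConvBlock(nn.Module):
    """Mix the layer types the search can wire together: a dense path
    reshaped into a feature map, concatenated onto a convolutional path,
    then merged by a 1x1 convolution."""
    def __init__(self, channels=16, size=32):
        super().__init__()
        self.size = size
        self.conv = nn.Conv2d(3, channels, 3, padding=1)
        self.dense = nn.Linear(3 * size * size, size * size)
        self.mix = nn.Conv2d(channels + 1, channels, 1)  # after concat

    def forward(self, x):
        conv_out = torch.relu(self.conv(x))
        dense_out = torch.relu(self.dense(x.flatten(1)))
        dense_map = dense_out.view(x.size(0), 1, self.size, self.size)
        return self.mix(torch.cat([conv_out, dense_map], dim=1))

block = DenseToConvBlock()
out = block(torch.randn(4, 3, 32, 32))   # -> shape (4, 16, 32, 32)
```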

Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences

1 code implementation • 15 Jun 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki

A crucial step to understanding the rationale for this lack of robustness is to assess the potential of the neural networks' representation to encode the existing features.

Association • Zero-Shot Learning

Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary

1 code implementation • 14 Jun 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas

There exists a vast number of adversarial attacks and defences for machine learning algorithms of various types, which makes assessing the robustness of algorithms a daunting task.

Adversarial Robustness • Image Classification
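
The two threat models measure different things, which is the crux of the argument: $L_0$ counts how many pixels change at all, while $L_\infty$ bounds the largest change to any single value, so an attack can be tiny under one norm and large under the other. A small sketch of both distortion measures (the H x W x C array layout is an assumption):

```python
import numpy as np

def l0_linf(original, adversarial):
    """Distortion under both threat models for H x W x C images:
    L0 = number of pixels changed at all, Linf = largest single change."""
    delta = adversarial.astype(np.float64) - original.astype(np.float64)
    l0 = np.count_nonzero(np.any(delta != 0, axis=-1))  # changed pixels
    linf = np.abs(delta).max()                          # largest change
    return l0, linf

img = np.zeros((32, 32, 3))
adv = img.copy()
adv[5, 7] = [0.2, 0.1, 0.05]   # a one-pixel attack: minimal in L0 ...
print(l0_linf(img, adv))       # (1, 0.2) ... but not necessarily in Linf
```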

Self Training Autonomous Driving Agent

no code implementations • 26 Apr 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas, Venkanna U

Intrinsically, driving is a Markov Decision Process, which suits the reinforcement learning paradigm well.

Autonomous Driving • reinforcement-learning +1
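
The MDP framing means the driving problem decomposes into states, actions, rewards, and transitions, which is exactly what reinforcement learning operates on. Below is a deliberately tiny Q-learning loop on a hypothetical three-lane toy problem; it illustrates the framing only, not the paper's agent.

```python
import random

# Toy MDP: lane index is the state, steering is the action, and staying
# in the centre lane is rewarded. Entirely hypothetical.
STATES, ACTIONS = range(3), (-1, 0, 1)    # lanes; left / keep / right
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1         # learning rate, discount, exploration

def step(s, a):
    s2 = min(max(s + a, 0), 2)            # transition: clamp to the road
    return s2, (1.0 if s2 == 1 else -1.0) # reward: centre lane is good

s = random.choice(list(STATES))
for _ in range(5000):
    if random.random() < eps:             # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    s2, r = step(s, a)
    best_next = max(Q[(s2, act)] for act in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # Q-learning update
    s = s2
```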

Batch Tournament Selection for Genetic Programming

no code implementations • 18 Apr 2019 • Vinicius V. Melo, Danilo Vasconcellos Vargas, Wolfgang Banzhaf

Lexicase selection achieves very good solution quality by introducing ordered test cases.
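
For reference, lexicase selection picks each parent by randomly ordering the test cases and repeatedly discarding every candidate that is not best on the next case. A minimal sketch, assuming an error matrix where lower is better:

```python
import random

def lexicase_select(population, errors):
    """Select one parent by lexicase selection. `errors[i][c]` is the
    error of individual i on test case c (lower is better)."""
    cases = list(range(len(errors[0])))
    random.shuffle(cases)                     # case order drives selection
    candidates = list(range(len(population)))
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:              # one survivor: done early
            break
    return population[random.choice(candidates)]
```

Batch tournament selection, as the title suggests, instead evaluates candidates on batches of test cases and selects via tournaments, aiming to retain much of this case-level selection pressure at lower cost.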

Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis

no code implementations • 8 Feb 2019 • Danilo Vasconcellos Vargas, Jiawei Su

Deep neural networks were shown to be vulnerable to single pixel modifications.

Universal Rules for Fooling Deep Neural Networks based Text Classification

no code implementations • 22 Jan 2019 • Di Li, Danilo Vasconcellos Vargas, Sakurai Kouichi

Here, we go beyond attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and can therefore turn any text sample into an adversarial one.

Classification • General Classification +2

Spectrum-Diverse Neuroevolution with Unified Neural Models

1 code implementation • 6 Jan 2019 • Danilo Vasconcellos Vargas, Junichi Murata

The combination of Spectrum Diversity with a unified neuron representation enables the algorithm to either surpass or equal NeuroEvolution of Augmenting Topologies (NEAT) on all of the five classes of problems tested.

Self Organizing Classifiers and Niched Fitness

no code implementations • 20 Nov 2018 • Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata

In fact, the proposed algorithm possesses a dynamical population structure that self-organizes to better project the input space into a map.

Contingency Training

no code implementations • 20 Nov 2018 • Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata

Experiments are conducted with the contingency training applied to neural networks over traditional datasets as well as datasets with additional irrelevant variables.

Novelty-organizing team of classifiers in noisy and dynamic environments

no code implementations • 19 Sep 2018 • Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata

Moreover, NOTC is compared with NeuroEvolution of Augmenting Topologies (NEAT) in these problems, revealing a trade-off between the approaches.

Attacking Convolutional Neural Network using Differential Evolution

no code implementations • 19 Apr 2018 • Jiawei Su, Danilo Vasconcellos Vargas, Kouichi Sakurai

The attack only requires modifying 5 pixels, with pixel-value distortions of 20.44, 14.76 and 22.98.

Lightweight Classification of IoT Malware based on Image Recognition

no code implementations • 11 Feb 2018 • Jiawei Su, Danilo Vasconcellos Vargas, Sanjiva Prasad, Daniele Sgandurra, Yaokai Feng, Kouichi Sakurai

The Internet of Things (IoT) is an extension of the traditional Internet, which allows a very large number of smart devices, such as home appliances, network cameras, sensors and controllers to connect to one another to share information and improve user experiences.

Classification • General Classification

One pixel attack for fooling deep neural networks

5 code implementations • 24 Oct 2017 • Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi

The results show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel, with 74.03% and 22.91% confidence on average.

BIG-bench Machine Learning
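
The search behind the attack is plain differential evolution over a five-dimensional vector $(x, y, r, g, b)$. Below is a minimal untargeted sketch using SciPy's `differential_evolution`; `model_predict`, assumed to return class probabilities for an H x W x 3 image in [0, 1], is a hypothetical interface, and the optimiser settings are illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(img, true_label, model_predict, h, w):
    """Search for a single-pixel change that minimises the model's
    confidence in the true class (untargeted variant)."""
    def apply(z, image):
        x, y, r, g, b = z
        out = image.copy()
        out[int(x), int(y)] = (r, g, b)       # overwrite one pixel
        return out

    def objective(z):
        return model_predict(apply(z, img))[true_label]

    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds,
                                    maxiter=75, popsize=20,
                                    recombination=1.0, tol=0)
    return apply(result.x, img)
```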
