Search Results for author: Danilo Vasconcellos Vargas

Found 34 papers, 10 papers with code

Adapting to Covariate Shift in Real-time by Encoding Trees with Motion Equations

no code implementations • 8 Apr 2024 • Tham Yik Foong, Heng Zhang, Mao Po Yuan, Danilo Vasconcellos Vargas

In this paper, we demonstrated how a neural network integrated with Xenovert achieved better results on 4 out of 5 shifted datasets, removing the hurdle of retraining the machine learning model.

Attention-Driven Reasoning: Unlocking the Potential of Large Language Models

1 code implementation • 22 Mar 2024 • Bingli Liao, Danilo Vasconcellos Vargas

Large Language Models (LLMs) have shown remarkable capabilities, but their reasoning abilities and underlying mechanisms remain poorly understood.

k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis

1 code implementation • 7 Dec 2023 • Shashank Kotyan, Ueda Tatsuya, Danilo Vasconcellos Vargas

While these methods effectively capture the overall sample distribution in the entire learned latent space, they tend to distort the structure of sample distributions within specific classes in the subset of the latent space.

Dimensionality Reduction
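
As a rough illustration of local neighborhood analysis in a latent space (a hypothetical statistic for illustration only, not necessarily the exact k* definition used in the paper), one can count, for each sample, how many of its nearest latent-space neighbors share its class before a sample from another class appears:

```python
import numpy as np

def same_class_neighbor_counts(latents, labels):
    """For each sample, count how many nearest neighbors (by Euclidean
    distance in latent space) share its label before the first sample
    of a different class appears. Larger counts suggest the sample sits
    inside a well-separated cluster of its own class."""
    latents = np.asarray(latents, dtype=float)
    labels = np.asarray(labels)
    counts = np.zeros(len(latents), dtype=int)
    for i in range(len(latents)):
        dists = np.linalg.norm(latents - latents[i], axis=1)
        order = np.argsort(dists)[1:]          # skip the sample itself
        same = labels[order] == labels[i]
        # index of the first neighbor from a different class
        counts[i] = np.argmax(~same) if (~same).any() else len(same)
    return counts

# Example: two well-separated Gaussian blobs in an 8-dimensional latent space
rng = np.random.default_rng(0)
z = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(6, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
print(same_class_neighbor_counts(z, y).mean())   # close to 49 when classes separate cleanly
```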

Synthetic Shifts to Initial Seed Vector Exposes the Brittle Nature of Latent-Based Diffusion Models

no code implementations • 24 Nov 2023 • Mao Po-Yuan, Shashank Kotyan, Tham Yik Foong, Danilo Vasconcellos Vargas

To understand the impact of the initial seed vector on generated samples, we propose a reliability evaluation framework that evaluates the generated samples of a diffusion model when the initial seed vector is subjected to various synthetic shifts.

Image Generation
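
A minimal sketch of the evaluation idea described above, assuming a hypothetical `generate(seed_vector)` function that wraps a pretrained latent diffusion pipeline and a hypothetical `score(image)` quality metric; the specific shifts and metrics in the paper may differ:

```python
import numpy as np

def shifted_seeds(base_seed, rng):
    """Yield synthetically shifted versions of an initial seed vector."""
    yield "original", base_seed
    yield "gaussian_noise", base_seed + 0.3 * rng.standard_normal(base_seed.shape)
    yield "constant_offset", base_seed + 0.5
    yield "scaled", 1.5 * base_seed

def evaluate_reliability(generate, score, base_seed, rng):
    """Generate a sample from each shifted seed and score it; the drop
    relative to the unshifted seed indicates how brittle generation is."""
    results = {}
    for name, seed in shifted_seeds(base_seed, rng):
        image = generate(seed)          # hypothetical: runs the diffusion model
        results[name] = score(image)    # hypothetical: any quality/consistency score
    return results

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 64, 64))   # shape depends on the model's latent space
# evaluate_reliability(generate, score, base, rng)  # plug in a real pipeline and metric
```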

Towards Improving Robustness Against Common Corruptions using Mixture of Class Specific Experts

no code implementations • 16 Nov 2023 • Shashank Kotyan, Danilo Vasconcellos Vargas

Through this contribution, the paper aims to foster a deeper understanding of neural network limitations and proposes a practical approach to enhance their resilience in the face of evolving and unpredictable conditions.

Data Augmentation

Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning

no code implementations • 14 Nov 2023 • Shashank Kotyan, Danilo Vasconcellos Vargas

Neural networks have revolutionized various domains, exhibiting remarkable accuracy in tasks like natural language processing and computer vision.

Autonomous Driving Contrastive Learning

Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation

no code implementations • 1 Nov 2023 • Shashank Kotyan, Danilo Vasconcellos Vargas

In conclusion, this work contributes to the ongoing research on Vision Transformers by introducing Dynamic Scanning Augmentation as a technique for improving the accuracy and robustness of ViT.

Symmetrical SyncMap for Imbalanced General Chunking Problems

no code implementations • 16 Oct 2023 • Heng Zhang, Danilo Vasconcellos Vargas

The main idea is to apply equal updates from negative and positive feedback loops by symmetrical activation.

Chunking
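
The excerpt above can be pictured as an attract/repel dynamic over map coordinates. The snippet below is only a rough, hypothetical illustration of applying equal-magnitude positive and negative updates; it is not the authors' exact SyncMap equations:

```python
import numpy as np

def symmetric_update(positions, active, lr=0.01):
    """positions: (n_nodes, dim) map coordinates, one per input variable.
    active: boolean mask of nodes activated by the current input.
    Activated nodes are pulled toward their centroid (positive feedback),
    inactive nodes are pushed away from it (negative feedback), with the
    same learning rate so both loops contribute equally."""
    if active.sum() == 0 or active.sum() == len(active):
        return positions
    centroid = positions[active].mean(axis=0)
    positions[active] += lr * (centroid - positions[active])
    positions[~active] -= lr * (centroid - positions[~active])
    return positions

rng = np.random.default_rng(0)
pos = rng.standard_normal((6, 2))
mask = np.array([True, True, True, False, False, False])
for _ in range(1000):
    pos = symmetric_update(pos, mask)
# Nodes that co-activate end up clustered together; clustering the final
# positions (e.g. with k-means) would recover the chunks.
```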

A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

no code implementations • 27 Jul 2023 • Heng Zhang, Danilo Vasconcellos Vargas

Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected.
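
For readers unfamiliar with the idea, a minimal echo state network (one common reservoir-computing flavor) can be sketched in a few lines: the randomly connected reservoir stays fixed and only the linear readout is trained. Hyperparameters below are illustrative, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed, randomly connected reservoir; rescale so the spectral radius is
# below 1 and past inputs fade out (echo state property).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = 0.5 * rng.standard_normal((n_res, n_in))

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
# Ridge-regression readout (the only trained part).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(np.mean((X @ W_out - y) ** 2))   # small MSE on the training sequence
```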

Generating Oscillation Activity with Echo State Network to Mimic the Behavior of a Simple Central Pattern Generator

no code implementations • 19 Jun 2023 • Tham Yik Foong, Danilo Vasconcellos Vargas

However, we find that a reservoir that develops oscillatory activity without any external excitation can mimic the behaviour of a simple CPG in biological systems.

Time Series

Dynamical Equations With Bottom-up Self-Organizing Properties Learn Accurate Dynamical Hierarchies Without Any Loss Function

no code implementations • 4 Feb 2023 • Danilo Vasconcellos Vargas, Tham Yik Foong, Heng Zhang

The hurdle is that general patterns are difficult to define in terms of dynamical equations, and a system that learns by reordering itself has yet to be demonstrated.

Deep neural network loses attention to adversarial images

no code implementations • 10 Jun 2021 • Shashank Kotyan, Danilo Vasconcellos Vargas

We also analyse how different adversarial samples distort the attention of the neural network compared to original samples.

Image Classification

Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation

no code implementations • 2 Sep 2020 • Danilo Vasconcellos Vargas, Bingli Liao, Takahiro Kanzaki

Thus, $\varphi$DNNs reveal that input recreation has strong benefits for artificial neural networks similar to biological ones, shedding light on the importance of purposely corrupting the input, as well as pioneering an area of perception models based on GANs and autoencoders for robust recognition in artificial intelligence.

Adversarial Robustness Super-Resolution

Continual General Chunking Problem and SyncMap

1 code implementation • 14 Jun 2020 • Danilo Vasconcellos Vargas, Toshitake Asabuki

Here, we propose a continual generalization of the chunking problem (an unsupervised problem), encompassing fixed and probabilistic chunks, discovery of temporal and causal structures and their continual variations.

Chunking

Representation Quality Explain Adversarial Attacks

no code implementations • 25 Sep 2019 • Danilo Vasconcellos Vargas, Shashank Kotyan, Moe Matsuki

The main idea lies in the fact that some features are present in unknown classes and that unknown classes can be defined as a combination of previously learned features without representation bias (a bias towards representations that map only the current set of input-outputs and their boundary).

Evolving Robust Neural Architectures to Defend from Adversarial Attacks

1 code implementation • 27 Jun 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas

By creating a novel neural architecture search with options for dense layers to connect with convolution layers and vice-versa as well as the addition of concatenation layers in the search, we were able to evolve an architecture that is inherently accurate on adversarial samples.

Neural Architecture Search

Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences

1 code implementation • 15 Jun 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki

A crucial step to understanding the rationale for this lack of robustness is to assess the potential of the neural networks' representation to encode the existing features.

Clustering Zero-Shot Learning

Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary

1 code implementation • 14 Jun 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas

There exists a vast number of adversarial attacks and defences for machine learning algorithms of various types which makes assessing the robustness of algorithms a daunting task.

Adversarial Robustness Image Classification
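
The two threat models named in the title measure perturbations differently; the small helper below clarifies the distinction (plain NumPy, independent of the paper's attack implementations):

```python
import numpy as np

def perturbation_budgets(x, x_adv):
    """L0 counts how many pixels were changed at all (few-pixel attacks),
    while L_inf measures the largest single change (dense, low-amplitude
    attacks). An attack can be tiny under one norm and huge under the other."""
    delta = np.asarray(x_adv, dtype=float) - np.asarray(x, dtype=float)
    l0 = int(np.count_nonzero(np.any(delta != 0, axis=-1)))  # changed pixels (any channel)
    linf = float(np.max(np.abs(delta)))                      # largest per-value change
    return l0, linf

x = np.zeros((32, 32, 3))
one_pixel = x.copy(); one_pixel[5, 7] = 1.0   # one pixel, large change
dense = x + 0.01                              # every pixel, tiny change
print(perturbation_budgets(x, one_pixel))     # (1, 1.0)
print(perturbation_budgets(x, dense))         # (1024, 0.01)
```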

Self Training Autonomous Driving Agent

no code implementations • 26 Apr 2019 • Shashank Kotyan, Danilo Vasconcellos Vargas, Venkanna U

Intrinsically, driving is a Markov Decision Process, which suits the reinforcement learning paradigm well.

Autonomous Driving reinforcement-learning +1
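
Casting driving as a Markov Decision Process means the agent repeatedly observes a state, picks an action, and receives a reward; the loop below is the standard RL interaction pattern, with `DrivingEnv` as a hypothetical stand-in for the simulator used in the paper:

```python
import random

def run_episode(env, policy, max_steps=1000):
    """Generic agent-environment loop for an MDP: the next state and reward
    depend only on the current state and the chosen action."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(state)                  # e.g. steering / throttle choice
        state, reward, done = env.step(action)  # simulator transition
        total_reward += reward                  # e.g. progress minus collision penalties
        if done:
            break
    return total_reward

# Usage (hypothetical): run_episode(DrivingEnv(), lambda s: random.choice([0, 1, 2]))
```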

Batch Tournament Selection for Genetic Programming

no code implementations • 18 Apr 2019 • Vinicius V. Melo, Danilo Vasconcellos Vargas, Wolfgang Banzhaf

Lexicase selection achieves very good solution quality by introducing ordered test cases.
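
For context, standard lexicase selection filters the population through test cases considered one at a time in random order; a minimal sketch (independent of the batch variant proposed in the paper) looks like this:

```python
import random

def lexicase_select(population, errors):
    """population: list of candidates; errors[i][j]: error of candidate i on test case j.
    Keep only the candidates with the best (lowest) error on each case in turn,
    in a random case order, until one candidate survives."""
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return population[random.choice(candidates)]

# Toy usage: three candidates evaluated on four test cases
pop = ["a", "b", "c"]
errs = [[0, 1, 0, 2],
        [1, 0, 0, 0],
        [0, 0, 1, 1]]
print(lexicase_select(pop, errs))
```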

Understanding the One-Pixel Attack: Propagation Maps and Locality Analysis

no code implementations • 8 Feb 2019 • Danilo Vasconcellos Vargas, Jiawei Su

Deep neural networks were shown to be vulnerable to single pixel modifications.

Universal Rules for Fooling Deep Neural Networks based Text Classification

no code implementations • 22 Jan 2019 • Di Li, Danilo Vasconcellos Vargas, Sakurai Kouichi

Here, we go beyond attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and could therefore turn any text sample into an adversarial one.

General Classification text-classification +1

Spectrum-Diverse Neuroevolution with Unified Neural Models

1 code implementation • 6 Jan 2019 • Danilo Vasconcellos Vargas, Junichi Murata

The combination of Spectrum Diversity with a unified neuron representation enables the algorithm to either surpass or equal NeuroEvolution of Augmenting Topologies (NEAT) on all of the five classes of problems tested.

Self Organizing Classifiers and Niched Fitness

no code implementations • 20 Nov 2018 • Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata

In fact, the proposed algorithm possesses a dynamical population structure that self-organizes itself to better project the input space into a map.

Contingency Training

no code implementations • 20 Nov 2018 • Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata

Experiments are conducted with the contingency training applied to neural networks over traditional datasets as well as datasets with additional irrelevant variables.

feature selection

Novelty-organizing team of classifiers in noisy and dynamic environments

no code implementations • 19 Sep 2018 • Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata

Moreover, NOTC is compared with NeuroEvolution of Augmenting Topologies (NEAT) in these problems, revealing a trade-off between the approaches.

Attacking Convolutional Neural Network using Differential Evolution

no code implementations • 19 Apr 2018 • Jiawei Su, Danilo Vasconcellos Vargas, Kouichi Sakurai

The attack only requires modifying 5 pixels, with pixel-value distortions of 20.44, 14.76 and 22.98.

Lightweight Classification of IoT Malware based on Image Recognition

no code implementations • 11 Feb 2018 • Jiawei Su, Danilo Vasconcellos Vargas, Sanjiva Prasad, Daniele Sgandurra, Yaokai Feng, Kouichi Sakurai

The Internet of Things (IoT) is an extension of the traditional Internet, which allows a very large number of smart devices, such as home appliances, network cameras, sensors and controllers to connect to one another to share information and improve user experiences.

Classification General Classification

One pixel attack for fooling deep neural networks

6 code implementations • 24 Oct 2017 • Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi

The results show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel, with 74.03% and 22.91% confidence on average.

BIG-bench Machine Learning
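
The one-pixel attack is typically carried out with differential evolution over candidate (x, y, r, g, b) tuples. The sketch below illustrates the idea with SciPy's general-purpose optimizer and a hypothetical `predict` function returning class probabilities; the paper's own implementation and hyperparameters differ:

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image, true_label, predict, iters=30):
    """Search for a single pixel (x, y, r, g, b) whose modification minimizes
    the model's confidence in the true class. `predict` is assumed to map an
    image in [0, 1] with shape (H, W, 3) to a probability vector."""
    h, w, _ = image.shape

    def apply(params):
        x, y, r, g, b = params
        adv = image.copy()
        adv[int(y), int(x)] = [r, g, b]
        return adv

    def objective(params):
        return predict(apply(params))[true_label]   # confidence in the true class

    bounds = [(0, w - 1), (0, h - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds, maxiter=iters,
                                    popsize=10, tol=1e-5, seed=0)
    return apply(result.x), result.fun

# adv_image, confidence = one_pixel_attack(img, label, predict)  # plug in a real model
```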
