Search Results for author: Shixian Wen

Found 8 papers, 1 paper with code

Overcoming catastrophic forgetting through weight consolidation and long-term memory

no code implementations • ICLR 2019 • Shixian Wen, Laurent Itti

We obtain similar results with a much more difficult disjoint CIFAR10 task (70.10% initial task 1 performance, 67.73% after learning tasks 2 and 3 for AD+EWC, while PGD and EWC both fall to chance level).

Task 2
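The AD+EWC result above builds on Elastic Weight Consolidation (EWC). As a point of reference only, here is a minimal sketch of the standard EWC quadratic penalty (Kirkpatrick et al., 2017); the AD (long-term memory) component is not shown, and the function name, λ value, and training-loop wiring are assumptions, not the paper's implementation.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Standard EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2,
    where F is a diagonal Fisher estimate and theta* are the parameters
    saved after training on the old task. lam=1000.0 is an assumed value."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# When training task 2 or 3, the new-task loss would be augmented as:
#   total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```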

Lightweight Learner for Shared Knowledge Lifelong Learning

1 code implementation • 24 May 2023 • Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti

We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.

Image Classification

What can we learn from misclassified ImageNet images?

no code implementations • 20 Jan 2022 • Shixian Wen, Amanda Sofie Rios, Kiran Lekkala, Laurent Itti

Hence, we propose a two-stage Super-Sub framework, and demonstrate that: (i) the framework improves overall classification performance by 3.3%, by first inferring a superclass using a generalist superclass-level network, and then using a specialized network for final subclass-level classification.

Quantization
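The two-stage pipeline described above routes each input through a generalist superclass network and then a per-superclass specialist. Below is a minimal sketch of that routing, assuming one specialist per superclass; `super_net`, `sub_nets`, and the per-example batch handling are hypothetical names and choices, not the paper's code.

```python
import torch

def super_sub_predict(x, super_net, sub_nets):
    """Stage 1: the generalist predicts a superclass for each input.
    Stage 2: the matching specialist predicts the final subclass."""
    super_ids = super_net(x).argmax(dim=-1)
    preds = []
    for xi, sid in zip(x, super_ids):
        sub_logits = sub_nets[int(sid)](xi.unsqueeze(0))
        preds.append(sub_logits.argmax(dim=-1))
    return torch.cat(preds)
```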

Beneficial Perturbations Network for Defending Adversarial Examples

no code implementations • 27 Sep 2020 • Shixian Wen, Amanda Rios, Laurent Itti

The reason is that neural networks fail to accommodate the distribution drift of the input data caused by adversarial perturbations.

Beneficial Perturbation Network for designing general adaptive artificial intelligence systems

no code implementations • 27 Sep 2020 • Shixian Wen, Amanda Rios, Yunhao Ge, Laurent Itti

Continual learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby a previously learned mapping of an old task is erased when learning new mappings for new tasks.

Continual Learning

Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system

no code implementations • 9 Oct 2019 • Shixian Wen, Laurent Itti

Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks.
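The paper's contribution is to move the perturbations into parameter space; the sentence above describes the conventional input-space baseline. Here is a minimal sketch of that baseline (training on both clean and adversarial examples), with a one-step FGSM attack, an ε of 8/255, and a 50/50 loss weighting all assumed for illustration:

```python
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Craft adversarial inputs with a single FGSM step (assumed attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """One step of standard adversarial training: the loss mixes clean
    and adversarial examples (the 50/50 weighting is an assumption)."""
    x_adv = fgsm_example(model, x, y)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + \
           0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```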

Beneficial perturbation network for continual learning

no code implementations • 22 Jun 2019 • Shixian Wen, Laurent Itti

Sequential learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby previously learned knowledge is erased during learning of new, disjoint knowledge.

Continual Learning

Overcoming catastrophic forgetting problem by weight consolidation and long-term memory

no code implementations • 18 May 2018 • Shixian Wen, Laurent Itti

We apply our method to sequentially learning to classify digits 0, 1, 2 (task 1), 4, 5, 6 (task 2), and 7, 8, 9 (task 3) in MNIST (disjoint MNIST task).

Task 2
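The disjoint MNIST protocol above amounts to three label-filtered subsets. A minimal sketch using torchvision (the library choice and variable names are assumptions; the digit splits follow the abstract, including the omission of digit 3):

```python
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Digit splits for the three disjoint MNIST tasks, as in the abstract.
TASK_DIGITS = {1: (0, 1, 2), 2: (4, 5, 6), 3: (7, 8, 9)}

mnist = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())

# One Subset per task, containing only that task's digits.
tasks = {
    t: Subset(mnist, [i for i, y in enumerate(mnist.targets) if int(y) in digits])
    for t, digits in TASK_DIGITS.items()
}
```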
