no code implementations • ICLR 2019 • Shixian Wen, Laurent Itti
We obtain similar results with a much more difficult disjoint CIFAR10 task (70.10% initial task 1 performance, 67.73% after learning tasks 2 and 3 for AD+EWC, while PGD and EWC both fall to chance level).
no code implementations • 15 Dec 2022 • Yunhao Ge, Jiashu Xu, Brian Nlong Zhao, Laurent Itti, Vibhav Vineet
Finally, the third component creates a large-scale pseudo-labeled instance segmentation training dataset by compositing the foreground object masks onto the original and generated background images.
no code implementations • 4 Dec 2022 • Yunhao Ge, Jie Ren, Yuxiao Wang, Andrew Gallagher, Ming-Hsuan Yang, Laurent Itti, Hartwig Adam, Balaji Lakshminarayanan, Jiaping Zhao
We also show that our method improves across ImageNet shifted datasets and other model architectures such as LiT.
no code implementations • 26 Nov 2022 • Iordanis Fostiropoulos, Laurent Itti
Inspired by the recent success of prototypical and contrastive learning frameworks for both improving robustness and learning nuance invariant representations, we propose a training framework, Supervised Contrastive Prototype Learning (SCPL).
1 code implementation • 22 Jul 2022 • Yunhao Ge, Harkirat Behl, Jiashu Xu, Suriya Gunasekar, Neel Joshi, Yale Song, Xin Wang, Laurent Itti, Vibhav Vineet
However, existing approaches either require human experts to manually tune each scene property or use automatic methods that provide little to no control; this requires rendering large amounts of random data variations, which is slow and is often suboptimal for the target domain.
1 code implementation • 19 Jul 2022 • Yunhao Ge, Yao Xiao, Zhi Xu, Xingrui Wang, Laurent Itti
We use human experiments to confirm that both HVE and humans predominantly use some specific features to support the classification of specific classes (e.g., texture is the dominant feature to distinguish a zebra from other quadrupeds, both for humans and HVE).
no code implementations • 20 Jun 2022 • Yunhao Ge, Jiashu Xu, Brian Nlong Zhao, Neel Joshi, Laurent Itti, Vibhav Vineet
For foreground object mask generation, we use a simple textual template with the object class name as input to DALL-E to generate a diverse set of foreground images.
no code implementations • 13 Jun 2022 • Yunhao Ge, Sercan Ö. Arik, Jinsung Yoon, Ao Xu, Laurent Itti, Tomas Pfister
ISL splits the data into different environments, and learns a structure that is invariant to the target across different environments by imposing a consistency constraint.
no code implementations • 22 Feb 2022 • Sumedh A Sontakke, Buvaneswari Ramanan, Laurent Itti, Thomas Woo
Our work can be employed as a post-processing method whereby an inference-time ML system can convert a trained model into an OOD detector.
Out-of-Distribution Detection
no code implementations • 20 Jan 2022 • Shixian Wen, Amanda Sofie Rios, Kiran Lekkala, Laurent Itti
Hence, we propose a two-stage Super-Sub framework, and demonstrate that: (i) The framework improves overall classification performance by 3.3%, by first inferring a superclass using a generalist superclass-level network, and then using a specialized network for final subclass-level classification.
no code implementations • 6 Dec 2021 • Yunhao Ge, Zhi Xu, Yao Xiao, Gan Xin, Yunkui Pang, Laurent Itti
(2) They lack convexity constraints, which are important for meaningfully manipulating specific attributes for downstream tasks.
no code implementations • 29 Oct 2021 • Sumedh A Sontakke, Stephen Iota, Zizhao Hu, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf
Extending the successes in supervised learning methods to the reinforcement learning (RL) setting, however, is difficult due to the data generating process - RL agents actively query their environment for data, and the data are a function of the policy followed by the agent.
Out of Distribution (OOD) Detection
Reinforcement Learning (RL)
no code implementations • 29 Sep 2021 • Yunhao Ge, Yao Xiao, Zhi Xu, Linwei Li, Ziyan Wu, Laurent Itti
Taking image classification as an example, HNI visualizes the reasoning logic of an NN with class-specific Structural Concept Graphs (c-SCG), which are human-interpretable.
no code implementations • 8 Sep 2021 • Sumedh A Sontakke, Sumegh Roychowdhury, Mausoom Sarkar, Nikaash Puri, Balaji Krishnamurthy, Laurent Itti
Humans excel at learning long-horizon tasks from demonstrations augmented with textual commentary, as evidenced by the burgeoning popularity of tutorial videos online.
no code implementations • 30 May 2021 • Kiran Lekkala, Laurent Itti
In this paper, we aim to improve exploration in black-box methods, particularly Evolution Strategies (ES), when applied to Reinforcement Learning (RL) problems where intermediate waypoints/subgoals are available.
no code implementations • CVPR 2021 • Yunhao Ge, Yao Xiao, Zhi Xu, Meng Zheng, Srikrishna Karanam, Terrence Chen, Laurent Itti, Ziyan Wu
Despite substantial progress in applying neural networks (NN) to a wide variety of areas, they still largely suffer from a lack of transparency and interpretability.
1 code implementation • ICLR Workshop Neural_Compression 2021 • Yunhao Ge, Yunkui Pang, Linwei Li, Laurent Itti
We consider the problem of graph data compression and representation.
no code implementations • 1 Jan 2021 • Yunhao Ge, Gan Xin, Zhi Xu, Yao Xiao, Yunkui Pang, Yining HE, Laurent Itti
DEAE can act as a generative model and synthesize semantically controllable samples by interpolating latent codes; it can even synthesize novel attribute values never shown in the original dataset.
no code implementations • 9 Nov 2020 • Amanda Rios, Laurent Itti
Supervised deep neural networks are known to undergo a sharp decline in the accuracy of older tasks when new tasks are learned, termed "catastrophic forgetting".
1 code implementation • 7 Oct 2020 • Sumedh A. Sontakke, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf
Inspired by this, we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner.
1 code implementation • 6 Oct 2020 • Sumegh Roychowdhury, Sumedh A. Sontakke, Nikaash Puri, Mausoom Sarkar, Milan Aggarwal, Pinkesh Badjatiya, Balaji Krishnamurthy, Laurent Itti
Also, they are believed to be arranged hierarchically, allowing for an efficient representation of complex long-horizon experiences.
no code implementations • 27 Sep 2020 • Shixian Wen, Amanda Rios, Yunhao Ge, Laurent Itti
Continual learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby a previously learned mapping of an old task is erased when learning new mappings for new tasks.
no code implementations • 27 Sep 2020 • Shixian Wen, Amanda Rios, Laurent Itti
The reason is that neural networks fail to accommodate the distribution drift of the input data caused by adversarial perturbations.
1 code implementation • ICLR 2021 • Yunhao Ge, Sami Abu-El-Haija, Gan Xin, Laurent Itti
Visual cognition of primates is superior to that of artificial neural networks in its ability to 'envision' a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc.
no code implementations • 12 Jun 2020 • Kiran Lekkala, Laurent Itti
Our method improves performance on new, previously unseen environments, and is 1.5x faster than standard existing meta learning methods using similar architectures.
1 code implementation • ECCV 2020 • Yunhao Ge, Jiaping Zhao, Laurent Itti
After training on unbalanced discrete poses (5 classes with 6 poses per object instance, plus 5 classes with only 2 poses), we show that OPT-Net can synthesize balanced continuous new poses along yaw and pitch axes with high quality.
no code implementations • 23 Nov 2019 • Kiran Lekkala, Sami Abu-El-Haija, Laurent Itti
Imitation learning has gained immense popularity because of its high sample-efficiency.
no code implementations • 9 Oct 2019 • Shixian Wen, Laurent Itti
Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks.
no code implementations • 25 Jun 2019 • Amy Zhang, Zachary C. Lipton, Luis Pineda, Kamyar Azizzadenesheli, Anima Anandkumar, Laurent Itti, Joelle Pineau, Tommaso Furlanello
In this paper, we propose an algorithm to approximate causal states, which are the coarsest partition of the joint history of actions and observations in partially-observable Markov decision processes (POMDP).
no code implementations • 22 Jun 2019 • Shixian Wen, Laurent Itti
Sequential learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby previously learned knowledge is erased during learning of new, disjoint knowledge.
no code implementations • 3 Nov 2018 • Amanda Rios, Laurent Itti
Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting.
no code implementations • 18 May 2018 • Shixian Wen, Laurent Itti
We apply our method to sequentially learning to classify digits 0, 1, 2 (task 1), 4, 5, 6, (task 2), and 7, 8, 9 (task 3) in MNIST (disjoint MNIST task).
1 code implementation • ICML 2018 • Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar
Knowledge distillation (KD) consists of transferring knowledge from one machine learning model (the teacher) to another (the student).
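The teacher-to-student transfer described above is typically implemented by blending a hard-label cross-entropy term with a KL divergence toward the teacher's temperature-softened outputs. The sketch below is a minimal, illustrative version of that standard distillation objective (function names and the choice of `T` and `alpha` are ours, not from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend cross-entropy on hard labels with KL divergence to the
    teacher's softened distribution (standard Hinton-style distillation)."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 as in the usual formulation
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * ce.mean() + (1 - alpha) * (T ** 2) * kl.mean()
```

When teacher and student agree exactly, the KL term vanishes and the loss reduces to the weighted cross-entropy alone.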
no code implementations • 20 Jul 2016 • Jiaping Zhao, Chin-kai Chang, Laurent Itti
We show that disCNN achieves significantly better object recognition accuracies than AlexNet trained solely to predict object categories on the iLab-20M dataset, which is a large-scale turntable dataset with detailed object pose and lighting information.
no code implementations • 20 Jul 2016 • Jiaping Zhao, Laurent Itti
Furthermore, we fine-tune object recognition on ImageNet using 2W-CNN and AlexNet features pretrained on iLab-20M; results show significant improvements compared with training AlexNet from scratch.
no code implementations • 11 Jun 2016 • Jiaping Zhao, Zerong Xi, Laurent Itti
We propose to learn multiple local Mahalanobis distance metrics to perform k-nearest neighbor (kNN) classification of temporal sequences.
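A minimal illustration of the idea above: kNN classification where each stored example is compared under its own learned Mahalanobis metric. The sketch assumes the metrics are already given (the paper's contribution is learning them; the helper names here are ours):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) for a PSD metric M."""
    d = x - y
    return float(d @ M @ d)

def knn_predict(query, X, y, metrics, k=3):
    """kNN where each training point i is compared under its own local
    metric metrics[i]; majority vote among the k nearest neighbors."""
    dists = [mahalanobis_sq(query, X[i], metrics[i]) for i in range(len(X))]
    nearest = np.argsort(dists)[:k]
    votes = y[nearest]
    return int(np.bincount(votes).argmax())
```

With every local metric set to the identity matrix this reduces to plain Euclidean kNN; learned metrics reshape each neighborhood instead.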
no code implementations • 7 Jun 2016 • Tommaso Furlanello, Jiaping Zhao, Andrew M. Saxe, Laurent Itti, Bosco S. Tjan
Continual Learning in artificial neural networks suffers from interference and forgetting when different tasks are learned sequentially.
2 code implementations • 6 Jun 2016 • Jiaping Zhao, Laurent Itti
Dynamic Time Warping (DTW) is an algorithm to align temporal sequences with possible local non-linear distortions, and has been widely applied to audio, video and graphics data alignments.
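The classic DTW alignment mentioned above is a dynamic program over all monotone pairings of the two sequences. A minimal sketch for 1-D sequences with absolute-difference local cost (not the paper's variant, just the textbook baseline):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three allowed alignment moves:
            # insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because one element may align to several in the other sequence, locally stretched copies of the same signal can still achieve zero distance.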
no code implementations • CVPR 2016 • Ali Borji, Saeed Izadi, Laurent Itti
Tolerance to image variations (e.g. translation, scale, pose, illumination, background) is an important desired property of any object recognition system, be it human or machine.
no code implementations • HLT 2016 • Linqing Liu, Yao Lu, Ye Luo, Renxian Zhang, Laurent Itti, Jianwei Lu
Spammer detection on social networks is a challenging problem.
no code implementations • 4 Dec 2015 • Ali Borji, Saeed Izadi, Laurent Itti
Tolerance to image variations (e.g. translation, scale, pose, illumination) is an important desired property of any object recognition system, be it human or machine.
no code implementations • 27 Oct 2015 • Laurent Itti, Ali Borji
We focus on computational models of attention as defined by Tsotsos & Rothenstein (2011): models which can process any visual stimulus (typically, an image or video clip), which can possibly also be given some task definition, and which make predictions that can be compared to human or animal behavioral or physiological responses elicited by the same stimulus and task.
no code implementations • 24 Oct 2015 • Laurent Itti, Ali Borji
This chapter reviews recent computational models of visual attention.
no code implementations • CVPR 2015 • Jiaping Zhao, Christian Siagian, Laurent Itti
Predicting where humans will fixate in a scene has many practical applications.
2 code implementations • 14 May 2015 • Ali Borji, Laurent Itti
Saliency modeling has been an active research area in computer vision for about two decades.
no code implementations • CVPR 2014 • Ali Borji, Laurent Itti
Several decades of research in computer and primate vision have resulted in many models (some specialized for one problem, others more general) and invaluable experimental data.
no code implementations • NeurIPS 2013 • Ali Borji, Laurent Itti
Many real-world problems have complicated objective functions.
no code implementations • 22 Jul 2013 • Lucas Paletta, Laurent Itti, Björn Schuller, Fang Fang
This volume contains the papers accepted at the 6th International Symposium on Attention in Cognitive Systems (ISACS 2013), held in Beijing, August 5, 2013.