Search Results for author: Laurent Itti

Found 60 papers, 17 papers with code

Overcoming catastrophic forgetting through weight consolidation and long-term memory

no code implementations ICLR 2019 Shixian Wen, Laurent Itti

We obtain similar results with a much more difficult disjoint CIFAR10 task (70.10% initial task 1 performance, 67.73% after learning tasks 2 and 3 for AD+EWC, while PGD and EWC both fall to chance level).

Task 2

DreamDistribution: Prompt Distribution Learning for Text-to-Image Diffusion Models

no code implementations 21 Dec 2023 Brian Nlong Zhao, Yuhang Xiao, Jiashu Xu, Xinyang Jiang, Yifan Yang, Dongsheng Li, Laurent Itti, Vibhav Vineet, Yunhao Ge

We introduce a solution that allows a pretrained T2I diffusion model to learn a set of soft prompts, enabling the generation of novel images by sampling prompts from the learned distribution.
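
A minimal sketch of the sampling idea described above, under assumptions (the dimensions, the diagonal-Gaussian choice, and the training setup are ours, not the paper's):

```python
# Sketch: maintain K learnable soft-prompt embeddings and sample new prompts
# from a Gaussian fit over them (assumed formulation, not the released code).
import torch

K, L, D = 8, 16, 768            # number of prompts, tokens per prompt, embedding dim (assumed)
prompts = torch.nn.Parameter(torch.randn(K, L, D) * 0.02)   # learned with the T2I model frozen

def sample_prompt() -> torch.Tensor:
    """Sample one soft prompt from a diagonal Gaussian over the learned set."""
    mu = prompts.mean(dim=0)                  # (L, D)
    sigma = prompts.std(dim=0) + 1e-5         # (L, D)
    return mu + sigma * torch.randn_like(mu)  # reparameterized sample for the text encoder

new_prompt = sample_prompt()    # would condition the frozen diffusion model
```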

Text to 3D

Value Explicit Pretraining for Learning Transferable Representations

no code implementations 19 Dec 2023 Kiran Lekkala, Henghui Bao, Sumedh Sontakke, Laurent Itti

We propose Value Explicit Pretraining (VEP), a method that learns generalizable representations for transfer reinforcement learning.

Transfer Reinforcement Learning Visual Navigation

Evaluating Pretrained models for Deployable Lifelong Learning

no code implementations 22 Nov 2023 Kiran Lekkala, Eshan Bhargava, Yunhao Ge, Laurent Itti

We create a novel benchmark for evaluating a Deployable Lifelong Learning system for Visual Reinforcement Learning (RL) that is pretrained on a curated dataset, and propose a novel Scalable Lifelong Learning system capable of retaining knowledge from the previously learnt RL tasks.

Atari Games Few-Shot Class-Incremental Learning +2

Bird's Eye View Based Pretrained World model for Visual Navigation

no code implementations 28 Oct 2023 Kiran Lekkala, Chen Liu, Laurent Itti

We trained the model using data from a differential-drive robot in the CARLA simulator.

Navigate Visual Navigation

CLR: Channel-wise Lightweight Reprogramming for Continual Learning

1 code implementation ICCV 2023 Yunhao Ge, Yuecheng Li, Shuo Ni, Jiaping Zhao, Ming-Hsuan Yang, Laurent Itti

Reprogramming parameters are task-specific and exclusive to each task, which makes our method immune to catastrophic forgetting.
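
A minimal sketch of the task-exclusive reprogramming idea (the layer shape and placement are assumptions for illustration, not the authors' exact architecture):

```python
# Sketch: a frozen shared backbone plus a tiny per-task channel-wise layer that is
# the only thing trained for each new task, so earlier tasks are never overwritten.
import torch
import torch.nn as nn

class ChannelReprogram(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # depthwise 1x1 conv = one scale + bias per channel (very few parameters)
        self.adapt = nn.Conv2d(channels, channels, kernel_size=1, groups=channels)

    def forward(self, x):
        return self.adapt(x)

backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False                                  # shared, immutable across tasks

reprogram_per_task = {t: ChannelReprogram(64) for t in range(3)}   # exclusive per-task parameters

x = torch.randn(1, 3, 32, 32)
task_id = 1
features = reprogram_per_task[task_id](backbone(x))          # task-adapted features
```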

Continual Learning Image Classification

Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models

no code implementations 26 May 2023 Yunhao Ge, Jie Ren, Jiaping Zhao, KaiFeng Chen, Andrew Gallagher, Laurent Itti, Balaji Lakshminarayanan

Despite considerable effort, the problem remains significantly challenging in deep learning models due to their propensity to output over-confident predictions for OOD inputs.

Out of Distribution (OOD) Detection

Batch Model Consolidation: A Multi-Task Model Consolidation Framework

1 code implementation CVPR 2023 Iordanis Fostiropoulos, Jiaye Zhu, Laurent Itti

During the consolidation phase, we combine the learned knowledge on 'batches' of expert models using a batched consolidation loss on memory data that aggregates all buffers.
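
A minimal sketch of a batched consolidation step under assumptions (a simple logit-matching loss is used here as a stand-in; the paper's exact loss may differ):

```python
# Sketch: consolidate a batch of task-expert models into one base model by
# regressing the base model's outputs toward each expert's outputs on memory data.
import torch
import torch.nn.functional as F

def batched_consolidation_loss(base, experts, memory_x):
    """Average distillation loss from every expert in the batch on shared memory data."""
    student_logits = base(memory_x)
    loss = 0.0
    for expert in experts:
        with torch.no_grad():
            teacher_logits = expert(memory_x)
        loss = loss + F.mse_loss(student_logits, teacher_logits)
    return loss / len(experts)

base = torch.nn.Linear(8, 4)
experts = [torch.nn.Linear(8, 4) for _ in range(3)]
loss = batched_consolidation_loss(base, experts, torch.randn(16, 8))
```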

Continual Learning Image Classification

Lightweight Learner for Shared Knowledge Lifelong Learning

1 code implementation 24 May 2023 Yunhao Ge, Yuecheng Li, Di wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti

We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.

Image Classification

Reproducibility Requires Consolidated Artifacts

no code implementations 21 May 2023 Iordanis Fostiropoulos, Bowman Brown, Laurent Itti

Machine learning is facing a 'reproducibility crisis' where a significant number of works report failures when attempting to reproduce previously published results.

EM-Paste: EM-guided Cut-Paste with DALL-E Augmentation for Image-level Weakly Supervised Instance Segmentation

1 code implementation 15 Dec 2022 Yunhao Ge, Jiashu Xu, Brian Nlong Zhao, Laurent Itti, Vibhav Vineet

Finally, the third component creates a large-scale pseudo-labeled instance segmentation training dataset by compositing the foreground object masks onto the original and generated background images.
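
A minimal sketch of the mask-based compositing step (array shapes and the random placeholder images are assumptions; the real pipeline operates on generated foregrounds and backgrounds):

```python
# Sketch: paste a foreground object onto a background image using its binary mask,
# producing a composite whose mask doubles as the pseudo instance label.
import numpy as np

def paste(foreground: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """foreground/background: HxWx3 uint8 images; mask: HxW array of {0, 1}."""
    m = mask[..., None].astype(background.dtype)
    return foreground * m + background * (1 - m)

bg = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
fg = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8); mask[16:48, 16:48] = 1
composite = paste(fg, mask, bg)
```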

Instance Segmentation Object +4

Supervised Contrastive Prototype Learning: Augmentation Free Robust Neural Network

no code implementations 26 Nov 2022 Iordanis Fostiropoulos, Laurent Itti

Inspired by the recent success of prototypical and contrastive learning frameworks for both improving robustness and learning nuance invariant representations, we propose a training framework, Supervised Contrastive Prototype Learning (SCPL).
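
A minimal sketch of a prototype-based loss in this spirit (this is a standard prototypical formulation used for illustration, not necessarily the exact SCPL objective):

```python
# Sketch: classify by distance to per-class prototypes; embeddings are pulled
# toward their own class prototype and pushed away from the others.
import torch
import torch.nn.functional as F

def prototype_loss(embeddings, labels, prototypes):
    """embeddings: (N, D), labels: (N,), prototypes: (C, D)."""
    dists = torch.cdist(embeddings, prototypes)     # (N, C) Euclidean distances
    return F.cross_entropy(-dists, labels)          # nearest own-class prototype wins

z = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
protos = torch.randn(10, 128, requires_grad=True)
loss = prototype_loss(z, y, protos)
```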

Classification Contrastive Learning

Neural-Sim: Learning to Generate Training Data with NeRF

1 code implementation 22 Jul 2022 Yunhao Ge, Harkirat Behl, Jiashu Xu, Suriya Gunasekar, Neel Joshi, Yale Song, Xin Wang, Laurent Itti, Vibhav Vineet

However, existing approaches either require human experts to manually tune each scene property or use automatic methods that provide little to no control; this requires rendering large amounts of random data variations, which is slow and is often suboptimal for the target domain.

Object Detection

Contributions of Shape, Texture, and Color in Visual Recognition

1 code implementation 19 Jul 2022 Yunhao Ge, Yao Xiao, Zhi Xu, Xingrui Wang, Laurent Itti

We use human experiments to confirm that both HVE and humans predominantly use some specific features to support the classification of specific classes (e.g., texture is the dominant feature to distinguish a zebra from other quadrupeds, both for humans and HVE).

Attribute General Classification +2

DALL-E for Detection: Language-driven Compositional Image Synthesis for Object Detection

no code implementations 20 Jun 2022 Yunhao Ge, Jiashu Xu, Brian Nlong Zhao, Neel Joshi, Laurent Itti, Vibhav Vineet

For foreground object mask generation, we use a simple textual template with the object class name as input to DALL-E to generate a diverse set of foreground images.
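
A minimal sketch of that kind of class-name template; the exact wording and the `generate_image` call are hypothetical placeholders, not the paper's prompt or API:

```python
# Sketch: build language prompts from a template and class names, then hand each
# prompt to a text-to-image generator (placeholder call, commented out).
CLASSES = ["dog", "bicycle", "chair"]
TEMPLATE = "a photo of a {name} on a plain background"   # assumed wording

prompts = [TEMPLATE.format(name=c) for c in CLASSES]

# for prompt in prompts:
#     image = generate_image(prompt)   # hypothetical text-to-image call
```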

Image Captioning Image Generation +4

Invariant Structure Learning for Better Generalization and Causal Explainability

no code implementations 13 Jun 2022 Yunhao Ge, Sercan Ö. Arik, Jinsung Yoon, Ao Xu, Laurent Itti, Tomas Pfister

ISL splits the data into different environments, and learns a structure that is invariant to the target across different environments by imposing a consistency constraint.
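
One simple way to impose a cross-environment consistency constraint is to penalize the variance of per-environment losses (a REx-style surrogate used here purely for illustration; ISL's actual constraint may differ):

```python
# Sketch: a learned structure whose per-environment losses have low variance
# behaves consistently across environments.
import torch

def consistency_objective(per_env_losses):
    losses = torch.stack(per_env_losses)
    return losses.mean() + losses.var()

env_losses = [torch.tensor(0.9), torch.tensor(1.1), torch.tensor(1.0)]
total = consistency_objective(env_losses)
```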

Self-Supervised Learning

What can we learn from misclassified ImageNet images?

no code implementations 20 Jan 2022 Shixian Wen, Amanda Sofie Rios, Kiran Lekkala, Laurent Itti

Hence, we propose a two-stage Super-Sub framework, and demonstrate that: (i) The framework improves overall classification performance by 3.3%, by first inferring a superclass using a generalist superclass-level network, and then using a specialized network for final subclass-level classification.
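
A minimal sketch of the two-stage routing described above (the network sizes and interfaces are assumptions, not the paper's code):

```python
# Sketch: stage 1 predicts the superclass with a generalist head; stage 2 routes
# the same features to the specialist head for that superclass.
import torch
import torch.nn as nn

super_net = nn.Linear(512, 5)                          # generalist: 5 superclasses (assumed)
sub_nets = {s: nn.Linear(512, 20) for s in range(5)}   # one specialist per superclass (assumed)

def predict(features: torch.Tensor) -> tuple:
    s = int(super_net(features).argmax())              # stage 1: superclass
    c = int(sub_nets[s](features).argmax())            # stage 2: subclass within s
    return s, c

superclass, subclass = predict(torch.randn(512))
```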

Quantization

Encouraging Disentangled and Convex Representation with Controllable Interpolation Regularization

no code implementations 6 Dec 2021 Yunhao Ge, Zhi Xu, Yao Xiao, Gan Xin, Yunkui Pang, Laurent Itti

(2) They lack convexity constraints, which is important for meaningfully manipulating specific attributes for downstream tasks.

Data Augmentation Disentanglement +2

GalilAI: Out-of-Task Distribution Detection using Causal Active Experimentation for Safe Transfer RL

no code implementations 29 Oct 2021 Sumedh A Sontakke, Stephen Iota, Zizhao Hu, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf

Extending the successes in supervised learning methods to the reinforcement learning (RL) setting, however, is difficult due to the data generating process - RL agents actively query their environment for data, and the data are a function of the policy followed by the agent.

Out of Distribution (OOD) Detection Reinforcement Learning (RL)

Towards Generic Interface for Human-Neural Network Knowledge Exchange

no code implementations 29 Sep 2021 Yunhao Ge, Yao Xiao, Zhi Xu, Linwei Li, Ziyan Wu, Laurent Itti

Taking image classification as an example, HNI visualizes the reasoning logic of an NN with class-specific Structural Concept Graphs (c-SCG), which are human-interpretable.

Image Classification Zero-Shot Learning

Video2Skill: Adapting Events in Demonstration Videos to Skills in an Environment using Cyclic MDP Homomorphisms

no code implementations 8 Sep 2021 Sumedh A Sontakke, Sumegh Roychowdhury, Mausoom Sarkar, Nikaash Puri, Balaji Krishnamurthy, Laurent Itti

Humans excel at learning long-horizon tasks from demonstrations augmented with textual commentary, as evidenced by the burgeoning popularity of tutorial videos online.

Decision Making

Shaped Policy Search for Evolutionary Strategies using Waypoints

no code implementations 30 May 2021 Kiran Lekkala, Laurent Itti

In this paper, we try to improve exploration in Blackbox methods, particularly Evolution strategies (ES), when applied to Reinforcement Learning (RL) problems where intermediate waypoints/subgoals are available.

reinforcement-learning Reinforcement Learning (RL)

Generative Auto-Encoder: Non-adversarial Controllable Synthesis with Disentangled Exploration

no code implementations 1 Jan 2021 Yunhao Ge, Gan Xin, Zhi Xu, Yao Xiao, Yunkui Pang, Yining HE, Laurent Itti

DEAE can act as a generative model and synthesize semantically controllable samples by interpolating latent codes; it can even synthesize novel attribute values never shown in the original dataset.

Attribute Data Augmentation +2

Lifelong Learning Without a Task Oracle

no code implementations 9 Nov 2020 Amanda Rios, Laurent Itti

Supervised deep neural networks are known to undergo a sharp decline in the accuracy of older tasks when new tasks are learned, termed "catastrophic forgetting".

Continual Learning

Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning

1 code implementation 7 Oct 2020 Sumedh A. Sontakke, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf

Inspired by this, we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner.

Representation Learning Zero-Shot Learning

Beneficial Perturbation Network for designing general adaptive artificial intelligence systems

no code implementations 27 Sep 2020 Shixian Wen, Amanda Rios, Yunhao Ge, Laurent Itti

Continual learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby a previously learned mapping of an old task is erased when learning new mappings for new tasks.

Continual Learning

Beneficial Perturbations Network for Defending Adversarial Examples

no code implementations 27 Sep 2020 Shixian Wen, Amanda Rios, Laurent Itti

The reason is that neural networks fail to accommodate the distribution drift of the input data caused by adversarial perturbations.

Zero-shot Synthesis with Group-Supervised Learning

1 code implementation ICLR 2021 Yunhao Ge, Sami Abu-El-Haija, Gan Xin, Laurent Itti

Visual cognition of primates is superior to that of artificial neural networks in its ability to 'envision' a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc.

Attentive Feature Reuse for Multi Task Meta learning

no code implementations 12 Jun 2020 Kiran Lekkala, Laurent Itti

Our method improves performance on new, previously unseen environments, and is 1.5x faster than standard existing meta learning methods using similar architectures.

Depth Estimation General Classification +3

Pose Augmentation: Class-agnostic Object Pose Transformation for Object Recognition

1 code implementation ECCV 2020 Yunhao Ge, Jiaping Zhao, Laurent Itti

After training on unbalanced discrete poses (5 classes with 6 poses per object instance, plus 5 classes with only 2 poses), we show that OPT-Net can synthesize balanced continuous new poses along yaw and pitch axes with high quality.

Object Object Recognition

Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system

no code implementations 9 Oct 2019 Shixian Wen, Laurent Itti

Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks.
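
A minimal sketch of the standard input-space adversarial training referred to in this sentence (FGSM-style, a common formulation chosen here for illustration); note that the paper's own contribution embeds perturbations in parameter space instead:

```python
# Sketch: one training step that mixes clean examples with FGSM adversarial examples.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=0.03):
    # craft FGSM adversarial examples from the current model
    x_adv = x.clone().detach().requires_grad_(True)
    attack_loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(attack_loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).detach()

    # train on both clean and adversarial examples
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

model = torch.nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
adversarial_training_step(model, torch.randn(8, 10), torch.randint(0, 3, (8,)), opt)
```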

Learning Causal State Representations of Partially Observable Environments

no code implementations 25 Jun 2019 Amy Zhang, Zachary C. Lipton, Luis Pineda, Kamyar Azizzadenesheli, Anima Anandkumar, Laurent Itti, Joelle Pineau, Tommaso Furlanello

In this paper, we propose an algorithm to approximate causal states, which are the coarsest partition of the joint history of actions and observations in partially-observable Markov decision processes (POMDP).
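
For clarity, the predictive-equivalence relation that underlies causal states can be stated as follows (this is the standard definition; the notation is ours, not taken from the paper):

```latex
% Two action-observation histories are equivalent iff they induce the same
% conditional distribution over future observations under any future actions.
h \sim h' \iff
  P\!\left(o_{t+1:T} \mid h,\; a_{t+1:T}\right)
  = P\!\left(o_{t+1:T} \mid h',\; a_{t+1:T}\right)
  \quad \text{for all } a_{t+1:T},
\qquad \text{causal states} \;=\; \text{equivalence classes of } \sim .
```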

Causal Inference

Beneficial perturbation network for continual learning

no code implementations 22 Jun 2019 Shixian Wen, Laurent Itti

Sequential learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby previously learned knowledge is erased during learning of new, disjoint knowledge.

Continual Learning

Closed-Loop Memory GAN for Continual Learning

no code implementations 3 Nov 2018 Amanda Rios, Laurent Itti

Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting.

Continual Learning

Overcoming catastrophic forgetting problem by weight consolidation and long-term memory

no code implementations 18 May 2018 Shixian Wen, Laurent Itti

We apply our method to sequentially learning to classify digits 0, 1, 2 (task 1), 4, 5, 6 (task 2), and 7, 8, 9 (task 3) in MNIST (disjoint MNIST task).
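
A minimal sketch of that disjoint label split (the digit groups are copied from the sentence above; MNIST loading itself is omitted and the labels below are stand-ins):

```python
# Sketch: partition example indices into sequential tasks by digit group.
import numpy as np

TASKS = {1: [0, 1, 2], 2: [4, 5, 6], 3: [7, 8, 9]}    # groups as listed in the excerpt

def split_by_task(labels: np.ndarray) -> dict:
    """Return, per task, the indices of examples whose label belongs to that task."""
    return {t: np.flatnonzero(np.isin(labels, digits)) for t, digits in TASKS.items()}

labels = np.random.randint(0, 10, size=1000)           # stand-in for real MNIST labels
task_indices = split_by_task(labels)                    # learned sequentially: task 1, then 2, then 3
```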

Task 2

Born Again Neural Networks

2 code implementations ICML 2018 Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar

Knowledge distillation (KD) consists of transferring knowledge from one machine learning model (the teacher) to another (the student).
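
A minimal sketch of the teacher-to-student transfer being described, using the common temperature-softened KL formulation (an assumption here, not necessarily the paper's exact loss):

```python
# Sketch: standard knowledge-distillation loss between teacher and student logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10))
```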

Image Classification Knowledge Distillation

Improved Deep Learning of Object Category using Pose Information

no code implementations 20 Jul 2016 Jiaping Zhao, Laurent Itti

Furthermore, we fine-tune object recognition on ImageNet using 2W-CNN and AlexNet features pretrained on iLab-20M; results show that significant improvements are achieved compared with training AlexNet from scratch.

Object Object Recognition

Learning to Recognize Objects by Retaining other Factors of Variation

no code implementations 20 Jul 2016 Jiaping Zhao, Chin-kai Chang, Laurent Itti

We show that disCNN achieves significantly better object recognition accuracies than AlexNet trained solely to predict object categories on the iLab-20M dataset, which is a large scale turntable dataset with detailed object pose and lighting information.

Object Object Recognition

metricDTW: local distance metric learning in Dynamic Time Warping

no code implementations 11 Jun 2016 Jiaping Zhao, Zerong Xi, Laurent Itti

We propose to learn multiple local Mahalanobis distance metrics to perform k-nearest neighbor (kNN) classification of temporal sequences.
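
A minimal sketch of the underlying ingredients, a Mahalanobis distance and a kNN vote (a single global metric is used here for simplicity; the paper learns multiple local metrics, which is not reproduced):

```python
# Sketch: kNN classification under a Mahalanobis distance defined by matrix M.
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance with positive semi-definite matrix M."""
    d = x - y
    return float(d @ M @ d)

def knn_predict(query, X, labels, M, k=3):
    dists = np.array([mahalanobis(query, x, M) for x in X])
    nearest = np.argsort(dists)[:k]
    return np.bincount(labels[nearest]).argmax()     # majority vote

X = np.random.randn(50, 4); labels = np.random.randint(0, 3, 50)
pred = knn_predict(np.random.randn(4), X, labels, M=np.eye(4))
```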

Dynamic Time Warping General Classification +4

Active Long Term Memory Networks

no code implementations 7 Jun 2016 Tommaso Furlanello, Jiaping Zhao, Andrew M. Saxe, Laurent Itti, Bosco S. Tjan

Continual Learning in artificial neural networks suffers from interference and forgetting when different tasks are learned sequentially.

Continual Learning Domain Adaptation

shapeDTW: shape Dynamic Time Warping

2 code implementations 6 Jun 2016 Jiaping Zhao, Laurent Itti

Dynamic Time Warping (DTW) is an algorithm to align temporal sequences with possible local non-linear distortions, and has been widely applied to audio, video and graphics data alignments.
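
A minimal sketch of classic DTW as defined in this sentence (the dynamic program over two 1-D sequences), not shapeDTW itself:

```python
# Sketch: DTW alignment cost with an absolute-difference local cost.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

print(dtw(np.array([0.0, 1, 2, 3]), np.array([0.0, 0, 1, 2, 3])))
```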

Dynamic Time Warping Temporal Sequences +2

iLab-20M: A Large-Scale Controlled Object Dataset to Investigate Deep Learning

no code implementations CVPR 2016 Ali Borji, Saeed Izadi, Laurent Itti

Tolerance to image variations (e.g. translation, scale, pose, illumination, background) is an important desired property of any object recognition system, be it human or machine.

Domain Adaptation Object Recognition +1

What can we learn about CNNs from a large scale controlled object dataset?

no code implementations 4 Dec 2015 Ali Borji, Saeed Izadi, Laurent Itti

Tolerance to image variations (e.g. translation, scale, pose, illumination) is an important desired property of any object recognition system, be it human or machine.

Domain Adaptation Object Recognition +1

Computational models: Bottom-up and top-down aspects

no code implementations 27 Oct 2015 Laurent Itti, Ali Borji

We focus on computational models of attention as defined by Tsotsos & Rothenstein (2011): models which can process any visual stimulus (typically, an image or video clip), which can possibly also be given some task definition, and which make predictions that can be compared to human or animal behavioral or physiological responses elicited by the same stimulus and task.

Computational models of attention

no code implementations 24 Oct 2015 Laurent Itti, Ali Borji

This chapter reviews recent computational models of visual attention.

CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research

2 code implementations 14 May 2015 Ali Borji, Laurent Itti

Saliency modeling has been an active research area in computer vision for about two decades.

Human vs. Computer in Scene and Object Recognition

no code implementations CVPR 2014 Ali Borji, Laurent Itti

Several decades of research in computer and primate vision have resulted in many models (some specialized for one problem, others more general) and invaluable experimental data.

Object Object Recognition

6th International Symposium on Attention in Cognitive Systems 2013

no code implementations 22 Jul 2013 Lucas Paletta, Laurent Itti, Björn Schuller, Fang Fang

This volume contains the papers accepted at the 6th International Symposium on Attention in Cognitive Systems (ISACS 2013), held in Beijing, August 5, 2013.
