no code implementations • 8 Aug 2024 • Gabriela Csurka, Tyler L. Hayes, Diane Larlus, Riccardo Volpi
In this work, our goal is to propose a simple yet effective solution for predicting, and describing in natural language, potential failure modes of computer vision models.
1 code implementation • CVPR 2024 • Mingxuan Liu, Tyler L. Hayes, Elisa Ricci, Gabriela Csurka, Riccardo Volpi
Open-vocabulary object detection (OvOD) has transformed detection into a language-guided task, empowering users to freely define their class vocabularies of interest during inference.
1 code implementation • 27 Feb 2024 • Tyler L. Hayes, César R. de Souza, Namil Kim, Jiwon Kim, Riccardo Volpi, Diane Larlus
In this work, we look at ways to extend a detector trained for a set of base classes so it can i) spot the presence of novel classes, and ii) automatically enrich its repertoire to be able to detect those newly discovered classes together with the base ones.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning that aims to allow models to learn continuously from new data, accumulating knowledge without forgetting what was learned in the past.
no code implementations • 29 Mar 2023 • Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Christopher Kanan
There is more to continual learning than mitigating catastrophic forgetting.
1 code implementation • 19 Mar 2023 • Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Ronald Kemker, Christopher Kanan
Compared to REMIND and prior methods, SIESTA is far more computationally efficient, enabling continual learning on ImageNet-1K in under 2 hours on a single GPU; moreover, in the augmentation-free setting it matches the performance of the offline learner, a milestone critical to driving adoption of continual learning in real-world applications.
no code implementations • 8 Dec 2022 • Indranil Sur, Zachary Daniels, Abrar Rahman, Kamil Faber, Gianmarco J. Gallardo, Tyler L. Hayes, Cameron E. Taylor, Mustafa Burak Gurbuz, James Smith, Sahana Joshi, Nathalie Japkowicz, Michael Baron, Zsolt Kira, Christopher Kanan, Roberto Corizzo, Ajay Divakaran, Michael Piacentino, Jesse Hostetler, Aswin Raghavan
In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system.
1 code implementation • 21 Mar 2022 • Tyler L. Hayes, Christopher Kanan
Real-time on-device continual learning is needed for new applications such as home robots, user personalization on smartphones, and augmented/virtual reality headsets.
no code implementations • 11 Mar 2022 • Tyler L. Hayes, Maximilian Nickel, Christopher Kanan, Ludovic Denoyer, Arthur Szlam
Using this framing, we introduce an active sampling method that asks for examples from the tail of the data distribution and show that it outperforms classical active learning methods on Visual Genome.
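One way to sketch the idea of sampling from the tail of the data distribution — this is an illustrative density-based heuristic, not the paper's method; the function name and scoring rule are assumptions — is to score each unlabeled example by its average distance to its nearest neighbours and query the rarest points first:

```python
import numpy as np

def tail_sampling(pool, k, budget):
    """Pick `budget` examples from the tail of the data distribution,
    scored (illustratively) by mean distance to the k nearest neighbours
    within the pool: rarer, lower-density points score higher."""
    dists = np.linalg.norm(pool[:, None, :] - pool[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # ignore self-distances
    knn = np.sort(dists, axis=1)[:, :k]      # k nearest-neighbour distances
    score = knn.mean(axis=1)                 # high score = low local density
    return np.argsort(-score)[:budget]       # indices of tail examples
```

A point far from every cluster in feature space would be queried before well-covered ones, which is the intuition behind asking for examples from the distribution's tail.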
no code implementations • 2 Jul 2021 • YiPeng Zhang, Tyler L. Hayes, Christopher Kanan
Humans are incredibly good at transferring knowledge from one domain to another, enabling rapid learning of new tasks.
no code implementations • 1 Apr 2021 • Tyler L. Hayes, Giri P. Krishnan, Maxim Bazhenov, Hava T. Siegelmann, Terrence J. Sejnowski, Christopher Kanan
Replay is the reactivation of one or more neural patterns similar to the activation patterns elicited during past waking experience.
4 code implementations • 1 Apr 2021 • Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin, Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian Popescu, Christopher Kanan, Joost Van de Weijer, Tinne Tuytelaars, Davide Bacciu, Davide Maltoni
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning.
no code implementations • 25 Mar 2021 • Jhair Gallardo, Tyler L. Hayes, Christopher Kanan
In continual learning, a system must incrementally learn from a non-stationary data stream without catastrophic forgetting.
1 code implementation • 6 Mar 2021 • Tyler L. Hayes, Christopher Kanan
Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are commonly used to measure non-verbal abstract reasoning in humans, and offline neural networks for solving RPMs have recently been proposed.
no code implementations • 10 Sep 2020 • Ryne Roady, Tyler L. Hayes, Christopher Kanan
Supervised classification methods often assume that evaluation data is drawn from the same distribution as training data and that all classes are present for training.
1 code implementation • 14 Aug 2020 • Manoj Acharya, Tyler L. Hayes, Christopher Kanan
Humans can incrementally learn to perform new visual detection tasks, which remains a major challenge for today's computer vision systems.
1 code implementation • 14 Jun 2020 • Ryne Roady, Tyler L. Hayes, Hitesh Vaidya, Christopher Kanan
In this work, we introduce Stream-51, a new dataset for streaming classification consisting of temporally correlated images from 51 distinct object categories and additional evaluation classes outside of the training distribution to test novelty recognition.
no code implementations • 28 Apr 2020 • Zhongchao Qian, Tyler L. Hayes, Kushal Kafle, Christopher Kanan
Traditionally, deep convolutional neural networks consist of a series of convolutional and pooling layers followed by one or more fully connected (FC) layers to perform the final classification.
no code implementations • 30 Oct 2019 • Ryne Roady, Tyler L. Hayes, Ronald Kemker, Ayesha Gonzales, Christopher Kanan
We found that input perturbation and temperature scaling yield the best performance on large scale datasets regardless of the feature space regularization strategy.
1 code implementation • ECCV 2020 • Tyler L. Hayes, Kushal Kafle, Robik Shrestha, Manoj Acharya, Christopher Kanan
While there is neuroscientific evidence that the brain replays compressed memories, existing methods for convolutional networks replay raw images.
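The memory saving from replaying compressed features rather than raw images can be illustrated with a simple uniform quantizer — a stand-in for the learned quantization used by compressed-replay methods; the functions and parameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def quantize(feat, num_levels=256):
    """Uniformly quantize a float feature tensor to 8-bit codes plus a
    scale/offset, so stored 'memories' take 1 byte per value instead of 4."""
    lo, hi = float(feat.min()), float(feat.max())
    scale = (hi - lo) / (num_levels - 1) if hi > lo else 1.0
    codes = np.round((feat - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate features when replaying stored memories."""
    return codes.astype(np.float32) * scale + lo
```

Storing 8-bit codes of mid-level features in place of raw images is what lets such methods keep far more exemplars in the same memory budget.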
2 code implementations • 4 Sep 2019 • Tyler L. Hayes, Christopher Kanan
By combining streaming linear discriminant analysis with deep learning, we are able to outperform both incremental batch learning and streaming learning algorithms on ImageNet ILSVRC-2012 and on CORe50, a dataset that involves learning to classify from temporally ordered samples.
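The core of streaming LDA can be sketched as follows — a minimal NumPy version assuming fixed deep features, with an illustrative shrinkage term; the class and parameter names are hypothetical and this is not the paper's implementation. Each sample updates per-class running means and a shared running covariance, and prediction uses the resulting linear decision rule:

```python
import numpy as np

class StreamingLDA:
    """Minimal streaming LDA over fixed feature vectors (illustrative sketch)."""

    def __init__(self, dim, num_classes, shrinkage=1e-4):
        self.mu = np.zeros((num_classes, dim))  # per-class running means
        self.counts = np.zeros(num_classes)     # samples seen per class
        self.sigma = np.zeros((dim, dim))       # shared running covariance
        self.n = 0
        self.shrinkage = shrinkage

    def fit_one(self, x, y):
        # update shared covariance with the deviation from the current class mean
        dev = x - self.mu[y]
        self.sigma = (self.n * self.sigma + np.outer(dev, dev)) / (self.n + 1)
        # update the running mean for class y
        self.mu[y] = (self.counts[y] * self.mu[y] + x) / (self.counts[y] + 1)
        self.counts[y] += 1
        self.n += 1

    def predict(self, x):
        # linear decision rule from the class-conditional Gaussian model
        dim = self.sigma.shape[0]
        prec = np.linalg.inv((1 - self.shrinkage) * self.sigma
                             + self.shrinkage * np.eye(dim))
        W = self.mu @ prec                       # (num_classes, dim) weights
        b = -0.5 * np.sum(W * self.mu, axis=1)   # per-class bias
        return int(np.argmax(W @ x + b))
```

Because each update touches only a mean vector and a rank-one covariance correction, learning is constant-time per sample, which is what makes the method suitable for the streaming setting.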
1 code implementation • 16 Sep 2018 • Tyler L. Hayes, Nathan D. Cahill, Christopher Kanan
We find that full rehearsal can eliminate catastrophic forgetting in a variety of streaming learning settings, with ExStream performing well using far less memory and computation.
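A capped rehearsal buffer in the spirit of ExStream can be sketched as follows — an illustrative version assuming 1-D feature prototypes; the function name and merge rule shown here are simplifications, not the paper's exact algorithm. When the buffer overflows, the two closest stored vectors are merged into their mean to make room:

```python
import numpy as np

def exstream_update(buffer, x, capacity):
    """One buffer update in the spirit of ExStream (illustrative sketch):
    keep at most `capacity` prototypes; when full, merge the two closest
    stored vectors into their mean so a new example can be admitted."""
    buffer.append(np.asarray(x, dtype=float))
    if len(buffer) > capacity:
        best, best_d = (0, 1), np.inf
        # find the closest pair of stored prototypes
        for i in range(len(buffer)):
            for j in range(i + 1, len(buffer)):
                d = np.linalg.norm(buffer[i] - buffer[j])
                if d < best_d:
                    best_d, best = d, (i, j)
        i, j = best
        merged = (buffer[i] + buffer[j]) / 2.0
        del buffer[j], buffer[i]  # drop the pair, higher index first
        buffer.append(merged)     # keep their mean as a single prototype
    return buffer
```

Merging near-duplicates rather than evicting at random preserves the spread of the buffer, which is why such partial rehearsal can approach full rehearsal at a fraction of the memory cost.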
no code implementations • CVPR 2018 • Nathan D. Cahill, Tyler L. Hayes, Renee T. Meinhold, John F. Hamilton
The Normalized Cut (NCut) objective function, widely used in data clustering and image segmentation, quantifies the cost of a graph partition in a way that favors balanced clusters or segments, assigning them lower values than unbalanced partitionings.
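For reference, the standard two-way form of the objective: for a weighted graph with vertex set $V$, edge weights $w_{ij}$, and a partition of $V$ into disjoint sets $A$ and $B$,

```latex
\mathrm{NCut}(A,B) = \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)}
                   + \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},
\qquad
\mathrm{cut}(A,B) = \sum_{i \in A,\, j \in B} w_{ij},
\quad
\mathrm{assoc}(A,V) = \sum_{i \in A,\, j \in V} w_{ij}.
```

Normalizing the cut by each side's total association is what penalizes partitions that split off small, isolated groups.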
no code implementations • 20 Dec 2016 • Renee T. Meinhold, Tyler L. Hayes, Nathan D. Cahill
Image segmentation is a popular area of research in computer vision that has many applications in automated image processing.