Search Results for author: Tyler L. Hayes

Found 18 papers, 8 papers with code

System Design for an Integrated Lifelong Reinforcement Learning Agent for Real-Time Strategy Games

no code implementations · 8 Dec 2022 · Indranil Sur, Zachary Daniels, Abrar Rahman, Kamil Faber, Gianmarco J. Gallardo, Tyler L. Hayes, Cameron E. Taylor, Mustafa Burak Gurbuz, James Smith, Sahana Joshi, Nathalie Japkowicz, Michael Baron, Zsolt Kira, Christopher Kanan, Roberto Corizzo, Ajay Divakaran, Michael Piacentino, Jesse Hostetler, Aswin Raghavan

In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system.

Continual Learning · reinforcement-learning · +2

Online Continual Learning for Embedded Devices

1 code implementation · 21 Mar 2022 · Tyler L. Hayes, Christopher Kanan

Real-time on-device continual learning is needed for new applications such as home robots, user personalization on smartphones, and augmented/virtual reality headsets.

Continual Learning

Can I see an Example? Active Learning the Long Tail of Attributes and Relations

no code implementations · 11 Mar 2022 · Tyler L. Hayes, Maximilian Nickel, Christopher Kanan, Ludovic Denoyer, Arthur Szlam

Using this framing, we introduce an active sampling method that asks for examples from the tail of the data distribution and show that it outperforms classical active learning methods on Visual Genome.

Active Learning

Disentangling Transfer and Interference in Multi-Domain Learning

no code implementations · 2 Jul 2021 · YiPeng Zhang, Tyler L. Hayes, Christopher Kanan

Humans are incredibly good at transferring knowledge from one domain to another, enabling rapid learning of new tasks.

Transfer Learning

Replay in Deep Learning: Current Approaches and Missing Biological Elements

no code implementations · 1 Apr 2021 · Tyler L. Hayes, Giri P. Krishnan, Maxim Bazhenov, Hava T. Siegelmann, Terrence J. Sejnowski, Christopher Kanan

Replay is the reactivation of one or more neural patterns, which are similar to the activation patterns experienced during past waking experiences.

Retrieval

Self-Supervised Training Enhances Online Continual Learning

no code implementations · 25 Mar 2021 · Jhair Gallardo, Tyler L. Hayes, Christopher Kanan

In continual learning, a system must incrementally learn from a non-stationary data stream without catastrophic forgetting.

Continual Learning · Image Classification

Selective Replay Enhances Learning in Online Continual Analogical Reasoning

1 code implementation · 6 Mar 2021 · Tyler L. Hayes, Christopher Kanan

Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are commonly used to measure non-verbal abstract reasoning in humans, and offline neural networks for solving RPM problems have recently been proposed.

Continual Learning · Image Classification

Improved Robustness to Open Set Inputs via Tempered Mixup

no code implementations · 10 Sep 2020 · Ryne Roady, Tyler L. Hayes, Christopher Kanan

Supervised classification methods often assume that evaluation data is drawn from the same distribution as training data and that all classes are present for training.

Classification · General Classification

RODEO: Replay for Online Object Detection

1 code implementation · 14 Aug 2020 · Manoj Acharya, Tyler L. Hayes, Christopher Kanan

Humans can incrementally learn to do new visual detection tasks, which is a huge challenge for today's computer vision systems.

class-incremental learning · Incremental Learning · +2

Stream-51: Streaming Classification and Novelty Detection from Videos

1 code implementation · 14 Jun 2020 · Ryne Roady, Tyler L. Hayes, Hitesh Vaidya, Christopher Kanan

In this work, we introduce Stream-51, a new dataset for streaming classification consisting of temporally correlated images from 51 distinct object categories and additional evaluation classes outside of the training distribution to test novelty recognition.

Classification · General Classification · +3

Do We Need Fully Connected Output Layers in Convolutional Networks?

no code implementations · 28 Apr 2020 · Zhongchao Qian, Tyler L. Hayes, Kushal Kafle, Christopher Kanan

Traditionally, deep convolutional neural networks consist of a series of convolutional and pooling layers followed by one or more fully connected (FC) layers to perform the final classification.

General Classification

Are Out-of-Distribution Detection Methods Effective on Large-Scale Datasets?

no code implementations · 30 Oct 2019 · Ryne Roady, Tyler L. Hayes, Ronald Kemker, Ayesha Gonzales, Christopher Kanan

We found that input perturbation and temperature scaling yield the best performance on large scale datasets regardless of the feature space regularization strategy.

General Classification · Image Classification · +3

REMIND Your Neural Network to Prevent Catastrophic Forgetting

1 code implementation · ECCV 2020 · Tyler L. Hayes, Kushal Kafle, Robik Shrestha, Manoj Acharya, Christopher Kanan

While there is neuroscientific evidence that the brain replays compressed memories, existing methods for convolutional networks replay raw images.

Quantization · Question Answering · +1

Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis

2 code implementations · 4 Sep 2019 · Tyler L. Hayes, Christopher Kanan

By combining streaming linear discriminant analysis with deep learning, we are able to outperform both incremental batch learning and streaming learning algorithms on both ImageNet ILSVRC-2012 and CORe50, a dataset that involves learning to classify from temporally ordered samples.

BIG-bench Machine Learning
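The streaming linear discriminant analysis idea behind this paper can be sketched in a few lines: maintain a running mean per class plus a shared covariance, update them one sample at a time, and classify with the resulting linear discriminants. This is an illustrative sketch only, not the authors' Deep SLDA code (the class and method names here are invented); in the paper the inputs would be features from a frozen deep network rather than raw data.

```python
import numpy as np

class StreamingLDA:
    """Minimal streaming LDA sketch: running per-class means and a shared
    covariance, updated one sample at a time."""

    def __init__(self, dim, n_classes, shrinkage=1e-2):
        self.means = np.zeros((n_classes, dim))
        self.counts = np.zeros(n_classes)
        self.cov = np.zeros((dim, dim))
        self.n = 0
        self.shrinkage = shrinkage  # regularizes the covariance inverse

    def fit_one(self, x, y):
        # Update the shared covariance with the deviation of x from the
        # current mean of its class (skipped for a class's first sample).
        if self.counts[y] > 0:
            d = x - self.means[y]
            self.cov = (self.n * self.cov + np.outer(d, d)) / (self.n + 1)
        self.n += 1
        # Incrementally update the running mean for class y.
        self.counts[y] += 1
        self.means[y] += (x - self.means[y]) / self.counts[y]

    def predict(self, x):
        # Linear discriminant: w_k = S^-1 mu_k, b_k = -1/2 mu_k^T S^-1 mu_k.
        prec = np.linalg.inv(self.cov + self.shrinkage * np.eye(len(x)))
        w = self.means @ prec
        b = -0.5 * np.sum(w * self.means, axis=1)
        return int(np.argmax(w @ x + b))
```

Because each update touches only a mean vector and a rank-one covariance correction, learning cost per sample is constant, which is what makes the approach suitable for the streaming setting the abstract describes.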

Memory Efficient Experience Replay for Streaming Learning

1 code implementation · 16 Sep 2018 · Tyler L. Hayes, Nathan D. Cahill, Christopher Kanan

We find that full rehearsal can eliminate catastrophic forgetting in a variety of streaming learning settings, with ExStream performing well using far less memory and computation.
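The generic rehearsal mechanism studied here can be illustrated with a minimal replay buffer based on reservoir sampling. This is a hypothetical sketch of the experience-replay idea in general, not ExStream itself, which instead merges stored exemplars to bound memory.

```python
import random

class ReplayBuffer:
    """Tiny rehearsal buffer using reservoir sampling: every element of an
    unbounded stream is retained with equal probability capacity/seen."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            # Replace a stored item with probability capacity/seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = item

    def sample(self, k):
        # Draw a rehearsal mini-batch of stored samples without replacement.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During training, each new stream sample would be mixed with a `sample(k)` mini-batch from the buffer, so old data is rehearsed alongside new data to mitigate forgetting.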

Compassionately Conservative Balanced Cuts for Image Segmentation

no code implementations · CVPR 2018 · Nathan D. Cahill, Tyler L. Hayes, Renee T. Meinhold, John F. Hamilton

The Normalized Cut (NCut) objective function, widely used in data clustering and image segmentation, quantifies the cost of a graph partitioning in a way that favors balanced clusters or segments, assigning them lower values than unbalanced partitionings.

BSDS500 · graph partitioning · +2
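For reference, the NCut objective the abstract refers to is conventionally written (in Shi and Malik's formulation, for a weighted graph with vertex set $V$, edge weights $w_{ij}$, and a two-way partition $A, B$) as:

```latex
\mathrm{NCut}(A,B)
  = \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)}
  + \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},
\qquad
\mathrm{cut}(A,B) = \sum_{i \in A,\; j \in B} w_{ij},
\qquad
\mathrm{assoc}(A,V) = \sum_{i \in A,\; j \in V} w_{ij}
```

Because each cut term is normalized by the total association of its side, partitions that split off only a few nodes incur a high cost, which is the balance bias the abstract describes.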

Efficiently Computing Piecewise Flat Embeddings for Data Clustering and Image Segmentation

no code implementations · 20 Dec 2016 · Renee T. Meinhold, Tyler L. Hayes, Nathan D. Cahill

Image segmentation is a popular area of research in computer vision that has many applications in automated image processing.

Image Segmentation · Semantic Segmentation
