Search Results for author: James Seale Smith

Found 11 papers, 5 papers with code

Adaptive Memory Replay for Continual Learning

no code implementations • 18 Apr 2024 • James Seale Smith, Lazar Valkov, Shaunak Halbe, Vyshnavi Gutta, Rogerio Feris, Zsolt Kira, Leonid Karlinsky

This continual learning (CL) phenomenon, catastrophic forgetting of past knowledge when models are updated on new data, has been extensively studied, but primarily in a setting where only a small amount of past data can be stored.

Continual Learning

Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters

no code implementations • 30 Nov 2023 • James Seale Smith, Yen-Chang Hsu, Zsolt Kira, Yilin Shen, Hongxia Jin

We show that STAMINA outperforms the prior SOTA for the setting of text-to-image continual customization on a 50-concept benchmark composed of landmarks and human faces, with no stored replay data.

Continual Learning, Hard Attention, +1

HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning

no code implementations • 16 Jun 2023 • Shaunak Halbe, James Seale Smith, Junjiao Tian, Zsolt Kira

In this paper, we attempt to tackle forgetting and heterogeneity while minimizing overhead costs and without requiring access to any stored data.

Federated Learning, Image Classification

Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA

no code implementations • 12 Apr 2023 • James Seale Smith, Yen-Chang Hsu, Lingyu Zhang, Ting Hua, Zsolt Kira, Yilin Shen, Hongxia Jin

We show that C-LoRA not only outperforms several baselines in our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but also achieves a new state of the art in the well-established rehearsal-free continual learning setting for image classification.

Continual Learning, Image Classification
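C-LoRA builds on low-rank adaptation (LoRA), which keeps the pre-trained weights frozen and learns a small low-rank update per customization. The sketch below illustrates only that generic low-rank update, under assumed shapes and names; the paper's continual-customization machinery is not reproduced here.

```python
# Hypothetical sketch of a LoRA-style low-rank update (the generic technique
# underlying C-LoRA); the paper's continual-learning-specific details are omitted.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                     # frozen pre-trained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        # y = base(x) + scale * x A^T B^T; only A and B are trained
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(768, 768), rank=4)
out = layer(torch.randn(2, 768))                        # (2, 768)
```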

Going Beyond Nouns With Vision & Language Models Using Synthetic Data

1 code implementation • ICCV 2023 • Paola Cascante-Bonilla, Khaled Shehada, James Seale Smith, Sivan Doveh, Donghyun Kim, Rameswar Panda, Gül Varol, Aude Oliva, Vicente Ordonez, Rogerio Feris, Leonid Karlinsky

We contribute Synthetic Visual Concepts (SyViC), a million-scale synthetic dataset and data generation codebase that allows generating additional suitable data to improve VLC understanding and compositional reasoning of VL models.

Sentence, Visual Reasoning

On the Transferability of Visual Features in Generalized Zero-Shot Learning

1 code implementation • 22 Nov 2022 • Paola Cascante-Bonilla, Leonid Karlinsky, James Seale Smith, Yanjun Qi, Vicente Ordonez

Generalized Zero-Shot Learning (GZSL) aims to train a classifier that can generalize to unseen classes, using a set of attributes as auxiliary information, and the visual features extracted from a pre-trained convolutional neural network.

Generalized Zero-Shot Learning, Knowledge Distillation, +2
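For context, many GZSL methods of this kind score an image by mapping its CNN features into the attribute space and comparing them with each class's attribute vector. The sketch below is a minimal, hypothetical illustration of that scoring, not this paper's specific method.

```python
# Minimal, hypothetical sketch of attribute-compatibility scoring for GZSL;
# shapes and names are assumptions, not taken from the paper.
import torch
import torch.nn as nn

feat_dim, num_attrs, num_classes = 2048, 85, 50
class_attrs = torch.rand(num_classes, num_attrs)   # auxiliary attributes for seen + unseen classes
proj = nn.Linear(feat_dim, num_attrs)              # learned map from visual features to attributes

def gzsl_scores(visual_feats):
    """Score CNN features (batch, feat_dim) against every class's attribute vector."""
    pred_attrs = proj(visual_feats)                # (batch, num_attrs)
    return pred_attrs @ class_attrs.t()            # (batch, num_classes) compatibility scores

logits = gzsl_scores(torch.rand(4, feat_dim))
pred = logits.argmax(dim=1)                        # may select seen or unseen classes
```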

ConStruct-VL: Data-Free Continual Structured VL Concepts Learning

1 code implementation • CVPR 2023 • James Seale Smith, Paola Cascante-Bonilla, Assaf Arbelle, Donghyun Kim, Rameswar Panda, David Cox, Diyi Yang, Zsolt Kira, Rogerio Feris, Leonid Karlinsky

This leads to reasoning mistakes, which need to be corrected as they occur by teaching VL models the missing SVLC skills; often this must be done using private data where the issue was found, which naturally leads to a data-free continual (no task-id) VL learning setting.

FedFOR: Stateless Heterogeneous Federated Learning with First-Order Regularization

1 code implementation • 21 Sep 2022 • Junjiao Tian, James Seale Smith, Zsolt Kira

For the more typical applications of FL where the number of clients is large (e.g., edge-device and mobile applications), these methods cannot be applied, motivating the need for a stateless approach to heterogeneous FL which can be used for any number of clients.

Federated Learning

Incremental Learning with Differentiable Architecture and Forgetting Search

no code implementations • 19 May 2022 • James Seale Smith, Zachary Seymour, Han-Pang Chiu

As progress is made on training machine learning models on incrementally expanding classification tasks (i.e., incremental learning), a next step is to translate this progress to industry expectations.

Classification, Image Classification, +2

A Closer Look at Rehearsal-Free Continual Learning

no code implementations • 31 Mar 2022 • James Seale Smith, Junjiao Tian, Shaunak Halbe, Yen-Chang Hsu, Zsolt Kira

Next, we explore how to leverage knowledge from a pre-trained model in rehearsal-free continual learning and find that vanilla L2 parameter regularization outperforms EWC parameter regularization and feature distillation.

Continual Learning, Knowledge Distillation, +2
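The vanilla L2 parameter regularization mentioned above amounts to penalizing how far the current weights drift from a stored copy of the pre-trained (or previous-task) weights. A minimal sketch follows, with illustrative names (`ref_params`, `lam`) that are not taken from the paper.

```python
# Minimal sketch of vanilla L2 parameter regularization toward reference
# (e.g., pre-trained) weights, as contrasted with EWC in the excerpt above.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                            # stand-in for the continual learner
ref_params = {n: p.detach().clone() for n, p in model.named_parameters()}  # frozen reference copy

def l2_param_penalty(model, ref_params, lam=1e-2):
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (p - ref_params[name]).pow(2).sum()   # ||theta - theta_ref||^2
    return lam * penalty

# added to the task loss when training on a new task:
task_loss = model(torch.randn(8, 16)).pow(2).mean()               # dummy task loss
loss = task_loss + l2_param_penalty(model, ref_params)
loss.backward()
```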
