Search Results for author: Zheda Mai

Found 11 papers, 8 papers with code

Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation

1 code implementation • 9 May 2023 • Tianle Chen, Zheda Mai, Ruiwen Li, Wei-Lun Chao

Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation.

Object • Pseudo Label +2
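For intuition only, a minimal sketch of how SAM masks could clean up CAM-style pseudo labels via a simple per-mask voting scheme; this is an assumed illustration of the general idea, not the authors' method (the linked code is authoritative):

```python
# Hypothetical sketch: snap noisy CAM pseudo labels to SAM mask boundaries
# by letting each SAM segment vote for its highest-activation class.
import torch

def sam_enhance(cam: torch.Tensor, masks: torch.Tensor,
                thresh: float = 0.3) -> torch.Tensor:
    """cam: (C, H, W) class activation maps; masks: (M, H, W) boolean SAM masks.
    Returns an (H, W) pseudo-label map with 255 where no class is confident."""
    label = torch.full(cam.shape[1:], 255, dtype=torch.long)
    for m in masks:
        scores = cam[:, m].mean(dim=1)   # mean activation of each class inside the mask
        if scores.max() > thresh:
            label[m] = scores.argmax()   # whole segment takes the winning class
    return label

cam = torch.rand(20, 64, 64)             # toy 20-class CAM
masks = torch.rand(5, 64, 64) > 0.7      # stand-in for SAM's mask proposals
pseudo = sam_enhance(cam, masks)
```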

Visual Query Tuning: Towards Effective Usage of Intermediate Representations for Parameter and Memory Efficient Transfer Learning

1 code implementation • CVPR 2023 • Cheng-Hao Tu, Zheda Mai, Wei-Lun Chao

By introducing a handful of learnable "query" tokens to each layer, VQT leverages the inner workings of Transformers to "summarize" the rich intermediate features of each layer, which can then be used to train prediction heads for downstream tasks.

Transfer Learning
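As a rough illustration of the query-token mechanism (an assumption for intuition; actual VQT injects the queries into each layer's self-attention, whereas this sketch simply cross-attends to frozen layer outputs):

```python
# Per-layer "query" tokens summarize frozen intermediate features; only
# the summary modules and the new head are trained, the backbone is frozen.
import torch
import torch.nn as nn

class LayerSummary(nn.Module):
    def __init__(self, dim: int, num_queries: int = 4, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) intermediate features from one frozen layer
        q = self.queries.expand(tokens.size(0), -1, -1)
        summary, _ = self.attn(q, tokens, tokens)  # (B, num_queries, dim)
        return summary.flatten(1)

dim, layers, classes = 768, 12, 100
summaries = nn.ModuleList(LayerSummary(dim) for _ in range(layers))
head = nn.Linear(layers * 4 * dim, classes)
feats = [torch.randn(2, 197, dim) for _ in range(layers)]  # frozen ViT features
logits = head(torch.cat([s(f) for s, f in zip(summaries, feats)], dim=1))
```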

TransCAM: Transformer Attention-based CAM Refinement for Weakly Supervised Semantic Segmentation

1 code implementation • 14 Mar 2022 • Ruiwen Li, Zheda Mai, Chiheb Trabelsi, Zhibo Zhang, Jongseong Jang, Scott Sanner

In this paper, we propose TransCAM, a Conformer-based solution to WSSS that explicitly leverages the attention weights from the transformer branch of the Conformer to refine the CAM generated from the CNN branch.

Weakly-Supervised Semantic Segmentation
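A minimal sketch of attention-based CAM refinement in this spirit, assuming the transformer's patch-to-patch attention has been averaged over heads and layers into a single row-stochastic matrix; not the authors' exact implementation:

```python
# Hypothetical refinement step: propagate CNN-branch CAM activations along
# the transformer branch's patch-to-patch attention weights.
import torch

def refine_cam(cam: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
    """cam: (C, H, W) class activation maps; attn: (H*W, H*W) attention."""
    C, H, W = cam.shape
    flat = cam.flatten(1)        # (C, H*W)
    refined = flat @ attn.T      # each location pools the locations it attends to
    return refined.view(C, H, W)

cam = torch.rand(20, 14, 14)                        # toy 20-class CAM
attn = torch.softmax(torch.rand(196, 196), dim=-1)  # toy attention weights
refined = refine_cam(cam, attn)
```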

Unintended Bias in Language Model-driven Conversational Recommendation

no code implementations • 17 Jan 2022 • Tianshu Shen, Jiaru Li, Mohamed Reda Bouadjenek, Zheda Mai, Scott Sanner

Conversational Recommendation Systems (CRSs) have recently started to leverage pretrained language models (LMs) such as BERT for their ability to semantically interpret a wide range of preference statement variations.

Language Modelling • Recommendation Systems

Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning

3 code implementations • 22 Mar 2021 • Zheda Mai, Ruiwen Li, Hyunwoo Kim, Scott Sanner

Online class-incremental continual learning (CL) studies the problem of learning new classes continually from an online non-stationary data stream, with the goal of adapting to new data while mitigating catastrophic forgetting.

Class Incremental Learning
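The nearest class mean classifier revisited here is standard enough to sketch; a minimal cosine-similarity version (the embedding network and buffer management are omitted, and the details below are placeholders rather than the paper's configuration):

```python
# Nearest-class-mean (NCM) classification over an embedding space: each
# class is represented by the mean of its buffered features, and a query
# is assigned to the closest prototype (cosine similarity here).
import torch
import torch.nn.functional as F

def ncm_predict(feats: torch.Tensor, labels: torch.Tensor,
                query: torch.Tensor) -> torch.Tensor:
    """feats: (N, D) buffered embeddings; labels: (N,); query: (B, D)."""
    classes = labels.unique()
    protos = torch.stack([feats[labels == c].mean(0) for c in classes])
    protos = F.normalize(protos, dim=1)
    query = F.normalize(query, dim=1)
    return classes[(query @ protos.T).argmax(dim=1)]

feats, labels = torch.randn(100, 128), torch.randint(0, 5, (100,))
preds = ncm_predict(feats, labels, torch.randn(8, 128))  # (8,) class ids
```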

Online Continual Learning in Image Classification: An Empirical Survey

1 code implementation • 25 Jan 2021 • Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, Scott Sanner

To better understand the relative advantages of various approaches and the settings where they work best, this survey aims to (1) compare state-of-the-art methods such as MIR, iCARL, and GDumb and determine which works best across different experimental settings; (2) determine whether the best class-incremental methods are also competitive in the domain-incremental setting; (3) evaluate the performance of 7 simple but effective tricks, such as the "review" trick and the nearest class mean (NCM) classifier, to assess their relative impact.

Classification • Continual Learning +2

Attentive Autoencoders for Multifaceted Preference Learning in One-class Collaborative Filtering

no code implementations • 24 Oct 2020 • Zheda Mai, Ga Wu, Kai Luo, Scott Sanner

In order to capture multifaceted user preferences, existing recommender systems either increase the encoding complexity or extend the latent representation dimension.

Collaborative Filtering • Recommendation Systems

CVPR 2020 Continual Learning in Computer Vision Competition: Approaches, Results, Current Challenges and Future Directions

1 code implementation • 14 Sep 2020 • Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodriguez, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vazquez, German I. Parisi, Nikhil Churamani, Marc Pickett, Issam Laradji, Davide Maltoni

In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks with the shared objective of making current AI systems more adaptive, efficient and autonomous.

Benchmarking • Continual Learning

Online Class-Incremental Continual Learning with Adversarial Shapley Value

3 code implementations • 31 Aug 2020 • Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang

As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption.

Continual Learning • Open-Ended Question Answering

Noise Contrastive Estimation for Autoencoding-based One-Class Collaborative Filtering

no code implementations • 3 Aug 2020 • Jin Peng Zhou, Ga Wu, Zheda Mai, Scott Sanner

One-class collaborative filtering (OC-CF) is a common class of recommendation problem where only the positive class is explicitly observed (e.g., purchases, clicks).

Collaborative Filtering
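For intuition, a hypothetical NCE-style objective for one-class feedback, treating randomly sampled unobserved items as noise; the paper's exact formulation may differ:

```python
# Hypothetical NCE-style loss: pull the score of an observed (positive)
# item up while pushing randomly sampled unobserved "noise" items down.
import torch
import torch.nn.functional as F

def nce_loss(scores: torch.Tensor, pos_items: torch.Tensor,
             num_neg: int = 5) -> torch.Tensor:
    """scores: (B, num_items) logits (e.g. autoencoder reconstructions);
    pos_items: (B,) one observed item index per user."""
    B, num_items = scores.shape
    pos = scores.gather(1, pos_items.unsqueeze(1))                 # (B, 1)
    neg = scores.gather(1, torch.randint(num_items, (B, num_neg)))  # noise
    # -log sigmoid(pos) - log(1 - sigmoid(neg)), via softplus for stability
    return F.softplus(-pos).mean() + F.softplus(neg).mean()

scores = torch.randn(4, 1000)
loss = nce_loss(scores, torch.randint(1000, (4,)))
```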

Batch-level Experience Replay with Review for Continual Learning

1 code implementation • 11 Jul 2020 • Zheda Mai, Hyunwoo Kim, Jihwan Jeong, Scott Sanner

Continual learning is a branch of deep learning that seeks to strike a balance between learning stability and plasticity.

Continual Learning
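A minimal sketch of batch-level experience replay with a closing "review" pass, assuming a reservoir-sampled buffer; buffer size, sampling, and the review schedule are placeholders, not the authors' settings:

```python
# Each incoming batch is trained jointly with a batch drawn from a replay
# buffer; at the end of the stream, the buffer is revisited ("reviewed").
import random
import torch

def train_stream(model, opt, loss_fn, stream, mem_size=200, review_epochs=1):
    buffer, seen = [], 0                 # reservoir-sampled (x, y) pairs
    for x, y in stream:                  # single pass over the online stream
        xb, yb = x, y
        if buffer:                       # pair incoming batch with replay batch
            mem = random.sample(buffer, min(len(buffer), x.size(0)))
            xb = torch.cat([x, torch.stack([m[0] for m in mem])])
            yb = torch.cat([y, torch.stack([m[1] for m in mem])])
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
        for xi, yi in zip(x, y):         # reservoir update with new samples only
            seen += 1
            if len(buffer) < mem_size:
                buffer.append((xi, yi))
            elif random.random() < mem_size / seen:
                buffer[random.randrange(mem_size)] = (xi, yi)
    for _ in range(review_epochs):       # "review": extra passes over memory
        random.shuffle(buffer)
        for xi, yi in buffer:
            opt.zero_grad()
            loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
            opt.step()
```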
