Search Results for author: Ruiwen Li

Found 6 papers, 5 papers with code

Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation

1 code implementation • 9 May 2023 • Tianle Chen, Zheda Mai, Ruiwen Li, Wei-Lun Chao

Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation.

Object • Pseudo Label • +2
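
The snippet above only defines the WSSS setting; as a rough sketch of the idea named in the title (not necessarily the authors' exact procedure), SAM's class-agnostic masks can be used to clean up a CAM-derived pseudo label by majority-voting the class inside each mask. All names below are illustrative.

    import numpy as np

    def refine_with_sam_masks(pseudo_label, sam_masks):
        # pseudo_label: (H, W) int array of class ids from a CAM threshold.
        # sam_masks: list of (H, W) boolean arrays proposed by SAM.
        refined = pseudo_label.copy()
        for mask in sam_masks:
            votes = pseudo_label[mask]
            if votes.size == 0:
                continue
            # Give the whole segment the most frequent class inside it.
            refined[mask] = np.bincount(votes).argmax()
        return refined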

TransCAM: Transformer Attention-based CAM Refinement for Weakly Supervised Semantic Segmentation

1 code implementation • 14 Mar 2022 • Ruiwen Li, Zheda Mai, Chiheb Trabelsi, Zhibo Zhang, Jongseong Jang, Scott Sanner

In this paper, we propose TransCAM, a Conformer-based solution to WSSS that explicitly leverages the attention weights from the transformer branch of the Conformer to refine the CAM generated from the CNN branch.

Weakly-Supervised Semantic Segmentation
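
A minimal sketch of the refinement step described in the snippet, assuming the transformer branch's per-head attention maps have already been averaged into a single patch-to-patch matrix; function and variable names are illustrative, not the paper's code.

    import torch

    def attention_refine_cam(cam, attn):
        # cam:  (C, H, W) class activation maps from the CNN branch.
        # attn: (N, N) patch-to-patch attention with N = H*W, assumed to
        #       be averaged over the Conformer's heads and blocks.
        c, h, w = cam.shape
        cam_flat = cam.reshape(c, h * w)       # (C, N)
        # Each refined activation is an attention-weighted sum over patches.
        refined = cam_flat @ attn.T            # (C, N)
        return refined.reshape(c, h, w)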

ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification

1 code implementation • 28 Nov 2021 • Zhibo Zhang, Jongseong Jang, Chiheb Trabelsi, Ruiwen Li, Scott Sanner, Yeonjeong Jeong, Dongsub Shim

Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification.

Adversarial Robustness • Classification • +2
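
For context, a minimal sketch of the supervised contrastive (SupCon) objective that ExCon builds on; the explanation-driven part the paper adds is not shown here, and the names are illustrative.

    import torch
    import torch.nn.functional as F

    def supcon_loss(features, labels, temperature=0.1):
        # features: (B, D) embeddings; labels: (B,) int class ids.
        features = F.normalize(features, dim=1)
        sim = features @ features.T / temperature              # (B, B)
        # Exclude each sample's similarity with itself.
        not_self = ~torch.eye(len(labels), dtype=torch.bool,
                              device=features.device)
        pos = (labels[:, None] == labels[None, :]) & not_self  # same-class pairs
        denom = torch.logsumexp(sim.masked_fill(~not_self, float("-inf")),
                                dim=1, keepdim=True)
        log_prob = sim - denom
        # Average log-probability over each anchor's positives.
        mean_log_prob_pos = (pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
        return -mean_log_prob_pos.mean()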

Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning

3 code implementations • 22 Mar 2021 • Zheda Mai, Ruiwen Li, Hyunwoo Kim, Scott Sanner

Online class-incremental continual learning (CL) studies the problem of learning new classes continually from an online non-stationary data stream, with the goal of adapting to new data while mitigating catastrophic forgetting.

Class Incremental Learning
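
A minimal sketch of the Nearest Class Mean classifier the title revisits: classify each embedding by its closest per-class mean, e.g. computed from replay-buffer samples; SCR's contrastively trained encoder and buffer management are omitted, and the names are illustrative.

    import torch

    def ncm_predict(features, class_means):
        # features:    (B, D) test embeddings.
        # class_means: (K, D) per-class mean embeddings.
        dists = torch.cdist(features, class_means)   # (B, K) Euclidean
        return dists.argmin(dim=1)                   # predicted class ids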

Online Continual Learning in Image Classification: An Empirical Survey

1 code implementation • 25 Jan 2021 • Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, Scott Sanner

To better understand the relative advantages of various approaches and the settings where they work best, this survey aims to (1) compare state-of-the-art methods such as MIR, iCaRL, and GDumb and determine which works best in different experimental settings; (2) determine whether the best class-incremental methods are also competitive in the domain-incremental setting; and (3) evaluate the performance of seven simple but effective tricks, such as the "review" trick and the nearest class mean (NCM) classifier, to assess their relative impact.

Classification • Continual Learning • +2
