3 code implementations • 7 Jan 2025 • Xinbin Yuan, Zhaohui Zheng, YuXuan Li, Xialei Liu, Li Liu, Xiang Li, Qibin Hou, Ming-Ming Cheng
Despite rapid development, remote sensing object detection remains challenging, particularly for objects with high aspect ratios.
Ranked #1 on Object Detection In Aerial Images on DOTA (using extra training data)
1 code implementation • 19 Jul 2024 • Zhengyuan Xie, Haiquan Lu, Jia-Wen Xiao, Enguang Wang, Le Zhang, Xialei Liu
In this paper, we propose a new classifier pre-tuning (NeST) method that is applied before the formal training process: it learns a transformation from old classifiers to generate new classifiers for initialization, rather than directly tuning the parameters of the new classifiers.
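A minimal sketch of the pre-tuning idea, with hypothetical shapes and a deliberately simplified weight transformation (not the paper's actual code):

```python
import torch
import torch.nn as nn

feat_dim, n_old, n_new = 512, 50, 10

old_head = nn.Linear(feat_dim, n_old)      # frozen classifier from previous tasks
transform = nn.Linear(feat_dim, feat_dim)  # small mapping trained during pre-tuning

def make_new_head() -> nn.Linear:
    """Initialize the new task's classifier from transformed old weights."""
    new_head = nn.Linear(feat_dim, n_new)
    with torch.no_grad():
        # Use transformed old class weights as prototypes for the new classes
        # (illustrative choice; the paper's transformation is more involved).
        proto = transform(old_head.weight).mean(dim=0)
        new_head.weight.copy_(proto.expand(n_new, feat_dim)
                              + 0.01 * torch.randn(n_new, feat_dim))
    return new_head

new_head = make_new_head()  # then run the formal training from this initialization
```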
1 code implementation • 19 Jul 2024 • Linlan Huang, Xusheng Cao, Haori Lu, Xialei Liu
Most existing works with pre-trained models assume that the forgetting of old classes is uniform when the model acquires new knowledge.
1 code implementation • 27 Mar 2024 • Xusheng Cao, Haori Lu, Linlan Huang, Xialei Liu, Ming-Ming Cheng
In class-incremental learning (CIL) scenarios, the phenomenon of catastrophic forgetting caused by the classifier's bias towards the current task has long posed a significant challenge.
1 code implementation • 15 Mar 2024 • Enguang Wang, Zhimao Peng, Zhengyuan Xie, Fei Yang, Xialei Liu, Ming-Ming Cheng
Specifically, our TES leverages the property that CLIP can generate aligned vision-language features, converting visual embeddings into tokens for CLIP's text encoder in order to generate pseudo text embeddings.
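A rough, illustrative sketch of the pseudo-text-embedding idea; the projector, the stand-in text encoder, and the pooling below are assumptions, not CLIP's real API:

```python
import torch
import torch.nn as nn

vis_dim, tok_dim, n_tokens = 512, 512, 8

visual_to_tokens = nn.Linear(vis_dim, n_tokens * tok_dim)  # learned projector
text_encoder = nn.TransformerEncoder(                      # stand-in for CLIP's text encoder
    nn.TransformerEncoderLayer(tok_dim, nhead=8, batch_first=True), num_layers=2)

def pseudo_text_embedding(vis_emb: torch.Tensor) -> torch.Tensor:
    # vis_emb: (batch, vis_dim) from the frozen image encoder
    tokens = visual_to_tokens(vis_emb).view(-1, n_tokens, tok_dim)
    encoded = text_encoder(tokens)           # (batch, n_tokens, tok_dim)
    return encoded.mean(dim=1)               # pool to one pseudo text embedding

emb = pseudo_text_embedding(torch.randn(4, vis_dim))  # (4, 512)
```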
1 code implementation • 20 Dec 2023 • Jiang-Tian Zhai, Xialei Liu, Lu Yu, Ming-Ming Cheng
Considering this challenge, we propose a novel framework of fine-grained knowledge selection and restoration.
no code implementations • 31 Oct 2023 • Xialei Liu, Xusheng Cao, Haori Lu, Jia-Wen Xiao, Andrew D. Bagdanov, Ming-Ming Cheng
We also propose a method for parameter retention in the adapter layers that uses a measure of parameter importance to better maintain stability and plasticity during incremental learning.
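A minimal sketch of importance-weighted parameter retention, in the spirit of EWC-style penalties; the importance scores and naming are assumptions:

```python
import torch

def retention_loss(adapter, old_params, importance, lam=1.0):
    # Penalize drift of adapter parameters in proportion to their importance,
    # keeping important weights stable while leaving the rest plastic.
    loss = 0.0
    for name, p in adapter.named_parameters():
        loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss

# importance[name] could be, e.g., a running average of grad**2 collected while
# training the previous task; old_params stores detached copies of the weights.
```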
1 code implementation • ICCV 2023 • Jiang-Tian Zhai, Xialei Liu, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
Moreover, MAEs can reliably reconstruct original input images from randomly selected patches, which we use to store exemplars from past tasks more efficiently for CIL.
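A sketch, under assumptions, of patch-based exemplar compression: store a random subset of patches and let a pretrained MAE fill in the rest at replay time (`mae.reconstruct` is a placeholder, not a real API):

```python
import torch

patch, n_keep = 16, 49                      # 16x16 patches, keep 49 of 196

def compress(img: torch.Tensor):
    # img: (3, 224, 224) -> (196, 3*16*16) patch matrix
    patches = img.unfold(1, patch, patch).unfold(2, patch, patch)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(196, -1)
    idx = torch.randperm(196)[:n_keep]      # store kept patches and their indices only
    return patches[idx], idx

def decompress(kept, idx, mae):
    return mae.reconstruct(kept, idx)       # the MAE fills in the missing patches
```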
1 code implementation • ICCV 2023 • Xin Jin, Jia-Wen Xiao, Ling-Hao Han, Chunle Guo, Xialei Liu, Chongyi Li, Ming-Ming Cheng
However, these methods are impeded by several critical limitations: a) the explicit calibration process is both labor- and time-intensive, b) transferring denoisers across different camera models is difficult, and c) the disparity between synthetic and real noise is exacerbated by digital gain.
Ranked #1 on Image Denoising on SID SonyA7S2 x300
no code implementations • CVPR 2023 • Jia-Wen Xiao, Chang-Bin Zhang, Jiekang Feng, Xialei Liu, Joost Van de Weijer, Ming-Ming Cheng
In our method, the model containing old knowledge is fused with the model retaining new knowledge in a dynamic fusion manner, strengthening the memory of old classes in ever-changing distributions.
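An illustrative weight-space fusion sketch (not the paper's exact fusion rule):

```python
import torch

@torch.no_grad()
def fuse(old_model, new_model, alpha=0.9):
    # Interpolate between the frozen old model and the model being trained,
    # reinforcing memory of old classes as the data distribution shifts.
    for p_old, p_new in zip(old_model.parameters(), new_model.parameters()):
        p_new.mul_(alpha).add_(p_old, alpha=1 - alpha)
```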
1 code implementation • CVPR 2024 • Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
EFCIL is of interest because it mitigates concerns about privacy and long-term storage of data, while at the same time alleviating the problem of catastrophic forgetting in incremental learning.
1 code implementation • 4 Oct 2022 • Kai Wang, Chenshen Wu, Andy Bagdanov, Xialei Liu, Shiqi Yang, Shangling Jui, Joost Van de Weijer
Lifelong object re-identification incrementally learns from a stream of re-identification tasks.
1 code implementation • 1 Oct 2022 • Xialei Liu, Yu-Song Hu, Xu-Sheng Cao, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
However, conventional CIL methods consider a balanced distribution for each new task, which ignores the prevalence of long-tailed distributions in the real world.
2 code implementations • 6 Apr 2022 • Wei-Hong Li, Xialei Liu, Hakan Bilen
We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, a single deep neural network.
1 code implementation • CVPR 2022 • Chang-Bin Zhang, Jia-Wen Xiao, Xialei Liu, Ying-Cong Chen, Ming-Ming Cheng
In this work, we study the continual semantic segmentation problem, where a deep neural network is required to incorporate new classes continually without catastrophic forgetting.
Ranked #1 on Domain 1-1 on Cityscapes
1 code implementation • CVPR 2022 • Wei-Hong Li, Xialei Liu, Hakan Bilen
Despite the recent advances in multi-task learning of dense prediction problems, most methods rely on expensive labelled datasets.
1 code implementation • 9 Nov 2021 • Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui, Joost Van de Weijer
We propose an approach to IML, which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning.
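A hypothetical sketch of the episode-mixing step:

```python
import random

def sample_episode(current_classes, exemplar_classes, n_way=5, n_replay=2):
    # Draw some classes from the current task and the rest from the exemplar
    # memory of previous tasks, then build support/query sets from the result.
    replay = random.sample(exemplar_classes, n_replay)
    current = random.sample(current_classes, n_way - n_replay)
    return current + replay
```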
1 code implementation • 21 Oct 2021 • Kai Wang, Xialei Liu, Luis Herranz, Joost Van de Weijer
To overcome forgetting in this benchmark, we propose Hierarchy-Consistency Verification (HCV) as an enhancement to existing continual learning methods.
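A minimal sketch of the verification step, assuming a fine-to-coarse label mapping:

```python
def verify(fine_probs, coarse_pred, fine_to_coarse):
    # Accept the top fine-grained prediction only if it is consistent with the
    # predicted coarse class; otherwise fall back to the most likely fine class
    # inside that coarse class.
    best = max(range(len(fine_probs)), key=fine_probs.__getitem__)
    if fine_to_coarse[best] == coarse_pred:
        return best
    candidates = [i for i in range(len(fine_probs))
                  if fine_to_coarse[i] == coarse_pred]
    return max(candidates, key=fine_probs.__getitem__)
```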
4 code implementations • CVPR 2022 • Wei-Hong Li, Xialei Liu, Hakan Bilen
In this paper, we look at the problem of cross-domain few-shot classification that aims to learn a classifier from previously unseen classes and domains with few labeled samples.
Ranked #4 on Few-Shot Image Classification on Meta-Dataset
5 code implementations • ICCV 2021 • Wei-Hong Li, Xialei Liu, Hakan Bilen
In this paper, we look at the problem of few-shot classification that aims to learn a classifier for previously unseen classes and domains from few labeled samples.
Ranked #6 on Few-Shot Image Classification on Meta-Dataset
no code implementations • 6 Dec 2020 • Lu Yu, Xialei Liu, Joost Van de Weijer
In class-incremental semantic segmentation, we have no access to the labeled data of previous tasks.
1 code implementation • 28 Oct 2020 • Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost Van de Weijer
For future learning systems, incremental learning is desirable because it allows for: efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data; reduced memory usage by preventing or limiting the amount of data required to be stored -- also important when privacy limitations are imposed; and learning that more closely resembles human learning.
no code implementations • 31 Jul 2020 • Minghan Li, Xialei Liu, Joost Van de Weijer, Bogdan Raducanu
Active learning emerged as an alternative to alleviate the effort of labeling huge amounts of data for data-hungry applications (such as image/video indexing and retrieval, autonomous driving, etc.).
1 code implementation • 20 Apr 2020 • Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, Joost Van de Weijer
To prevent forgetting, we combine generative feature replay in the classifier with feature distillation in the feature extractor.
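A sketch of one training step under assumptions; the conditional feature generator and the loss weighting are illustrative:

```python
import torch
import torch.nn.functional as F

def step(x, y, y_old, extractor, old_extractor, head, generator, lam=1.0):
    feats = extractor(x)
    loss = F.cross_entropy(head(feats), y)
    # Feature distillation: new features should match the frozen old extractor's.
    with torch.no_grad():
        old_feats = old_extractor(x)
    loss = loss + lam * F.mse_loss(feats, old_feats)
    # Generative feature replay: classify generated features of old classes.
    replay_feats = generator(y_old)          # conditional feature generator
    loss = loss + F.cross_entropy(head(replay_feats), y_old)
    return loss
```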
2 code implementations • CVPR 2020 • Lu Yu, Bartłomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, Joost Van de Weijer
The vast majority of methods have studied this scenario for classification networks, where for each new task the classification layer of the network must be augmented with additional weights to make room for the newly added classes.
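A standard head-expansion sketch of the augmentation described above:

```python
import torch
import torch.nn as nn

def expand_head(head: nn.Linear, n_new: int) -> nn.Linear:
    # Grow the classification layer for the new classes and copy over the
    # weights and biases of the old classes.
    new_head = nn.Linear(head.in_features, head.out_features + n_new)
    with torch.no_grad():
        new_head.weight[:head.out_features].copy_(head.weight)
        new_head.bias[:head.out_features].copy_(head.bias)
    return new_head
```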
no code implementations • 13 Feb 2020 • Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
For the difficult cases, where the domain gaps and especially the category differences are large, we explore three different exemplar sampling methods and show that the proposed adaptive sampling method effectively selects diverse and informative samples from entire datasets to further prevent forgetting.
1 code implementation • CVPR 2019 • Lu Yu, Vacit Oguz Yazici, Xialei Liu, Joost Van de Weijer, Yongmei Cheng, Arnau Ramisa
In this paper, we propose to use network distillation to efficiently compute image embeddings with small networks.
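An illustrative embedding-distillation objective (cosine matching is one common choice; the paper's loss may differ):

```python
import torch
import torch.nn.functional as F

def distill_loss(student, teacher, images):
    # Train a small student network to reproduce the embeddings of a large,
    # frozen teacher on (possibly unlabeled) images.
    with torch.no_grad():
        t = F.normalize(teacher(images), dim=1)
    s = F.normalize(student(images), dim=1)
    return (1 - (s * t).sum(dim=1)).mean()   # cosine-distance matching
```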
2 code implementations • 17 Feb 2019 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.
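A sketch of the assumed joint objective: a regression loss on labeled data combined with a margin ranking loss on unlabeled pairs whose ordering is known (e.g., a more distorted image should score below its cleaner version):

```python
import torch
import torch.nn.functional as F

def joint_loss(model, x_lab, y_lab, x_hi, x_lo, margin=1.0, lam=1.0):
    # Regression on labeled samples (model outputs a single score per image).
    reg = F.mse_loss(model(x_lab).squeeze(1), y_lab)
    # Ranking on unlabeled pairs: s_hi should exceed s_lo by at least `margin`.
    s_hi, s_lo = model(x_hi).squeeze(1), model(x_lo).squeeze(1)
    target = torch.ones_like(s_hi)
    rank = F.margin_ranking_loss(s_hi, s_lo, target, margin=margin)
    return reg + lam * rank
```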
1 code implementation • NeurIPS 2018 • Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu
In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.
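A hypothetical sketch of the generative replay step for sequential category learning:

```python
import torch

@torch.no_grad()
def replay_batch(old_generator, old_labels, n, z_dim=128):
    # Before training on a new category, sample images of old categories from
    # a frozen copy of the conditional generator and mix them into the batch.
    z = torch.randn(n, z_dim)
    y = old_labels[torch.randint(len(old_labels), (n,))]
    return old_generator(z, y), y            # replayed images and their labels
```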
1 code implementation • CVPR 2018 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework.
Ranked #19 on Crowd Counting on UCF CC 50
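A minimal sketch of the self-supervised ranking constraint this framework builds on: a crop contained inside a larger crop of the same image cannot contain more people (illustrative, not the released code):

```python
import torch
import torch.nn.functional as F

def rank_loss(model, outer_crops, inner_crops, margin=0.0):
    # Enforce c_out >= c_in for each (outer crop, contained inner crop) pair.
    c_out = model(outer_crops).squeeze(1)    # predicted counts
    c_in = model(inner_crops).squeeze(1)
    target = torch.ones_like(c_out)
    return F.margin_ranking_loss(c_out, c_in, target, margin=margin)
```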
2 code implementations • 8 Feb 2018 • Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, Andrew D. Bagdanov
In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios.
2 code implementations • ICCV 2017 • Xialei Liu, Joost Van de Weijer, Andrew D. Bagdanov
Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and even outperforms state-of-the-art full-reference IQA (FR-IQA) methods, without having to resort to high-quality reference images to infer IQA.