Search Results for author: Liyuan Wang

Found 18 papers, 13 papers with code

CoFInAl: Enhancing Action Quality Assessment with Coarse-to-Fine Instruction Alignment

1 code implementation · 22 Apr 2024 · Kanglei Zhou, Junlin Li, Ruizhi Cai, Liyuan Wang, Xingxing Zhang, Xiaohui Liang

However, this common strategy yields suboptimal results due to the inherent struggle of these backbones to capture the subtle cues essential for AQA.

Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation

1 code implementation · 30 Mar 2024 · HongWei Yan, Liyuan Wang, Kaisheng Ma, Yi Zhong

However, a notable gap from CL to OCL stems from the additional overfitting-underfitting dilemma associated with the use of rehearsal buffers: the inadequate learning of new training samples (underfitting) and the repeated learning of a few old training samples (overfitting).

Continual Learning · Knowledge Distillation
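
The overfitting-underfitting dilemma described above stems from how a rehearsal buffer works: a fixed memory replays the same few old samples over and over, while each new sample passes by only once in the stream. A minimal reservoir-sampling buffer sketch (the RehearsalBuffer class is a hypothetical illustration, not the paper's method):

import random

class RehearsalBuffer:
    """Fixed-capacity reservoir-sampling memory of past training samples."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, sample):
        # Classic reservoir sampling keeps a uniform subset of the stream.
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = sample

    def replay(self, k):
        # With a small capacity, the same few old samples recur in every
        # replay batch (overfitting), while each streamed new sample is
        # trained on only once (underfitting).
        return random.sample(self.data, min(k, len(self.data)))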

DualTeacher: Bridging Coexistence of Unlabelled Classes for Semi-supervised Incremental Object Detection

1 code implementation · 13 Dec 2023 · Ziqi Yuan, Liyuan Wang, Wenbo Ding, Xingxing Zhang, Jiachen Zhong, Jianyong Ai, Jianmin Li, Jun Zhu

A commonly used strategy for supervised IOD is to encourage the current model (as a student) to mimic the behavior of the old model (as a teacher), but it generally fails in SSIOD because most object instances from old and new classes coexist unlabelled, and the teacher recognizes only a fraction of them.

Object · Object Detection +1
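
The teacher-student strategy the snippet refers to is, at its core, knowledge distillation: the frozen old detector's predictions supervise the updated model. A generic temperature-scaled distillation loss on classification logits is sketched below; this is the standard formulation, not DualTeacher's actual objective:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # The student mimics the frozen old model's softened class distribution.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T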

Towards a General Framework for Continual Learning with Pre-training

1 code implementation · 21 Oct 2023 · Liyuan Wang, Jingyi Xie, Xingxing Zhang, Hang Su, Jun Zhu

In this work, we present a general framework for continual learning of sequentially arrived tasks with the use of pre-training, which has emerged as a promising direction for artificial intelligence systems to accommodate real-world dynamics.

Continual Learning
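
A common baseline that such a framework subsumes is to freeze the pre-trained backbone and fit a lightweight head for each task as it arrives. The sketch below illustrates only this generic recipe; the train_one_task helper and the linear heads are assumptions, not the paper's framework:

import torch.nn as nn

def continual_train(backbone, task_loaders, feat_dim, num_classes, train_one_task):
    # Freeze the pre-trained weights so earlier knowledge is not overwritten.
    for p in backbone.parameters():
        p.requires_grad = False
    heads = []
    for loader in task_loaders:                  # tasks arrive sequentially
        head = nn.Linear(feat_dim, num_classes)  # lightweight per-task module
        train_one_task(backbone, head, loader)   # hypothetical inner loop
        heads.append(head)
    return heads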

Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality

1 code implementation · NeurIPS 2023 · Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, Jun Zhu

Following these empirical and theoretical insights, we propose Hierarchical Decomposition (HiDe-)Prompt, an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics of both uninstructed and instructed representations, coordinated by a contrastive regularization strategy.

Continual Learning
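
One plausible reading of using statistics of uninstructed representations is nearest-mean task-identity inference: store the mean prompt-free feature of each task during training, then route a test input to the prompt of the closest task. The sketch below is a heavily simplified guess at that routing step, not HiDe-Prompt itself:

import torch

def select_prompt(x, backbone, task_means, task_prompts):
    feat = backbone(x)                    # uninstructed (prompt-free) feature
    dists = torch.stack([torch.norm(feat - m) for m in task_means])
    task_id = int(dists.argmin())         # nearest stored per-task mean
    return task_prompts[task_id]          # prompt used to instruct the model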

Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence

1 code implementation · 29 Aug 2023 · Liyuan Wang, Xingxing Zhang, Qian Li, Mingtian Zhang, Hang Su, Jun Zhu, Yi Zhong

Continual learning aims to empower artificial intelligence (AI) with strong adaptability to the real world.

Continual Learning

A Comprehensive Survey of Continual Learning: Theory, Method and Application

1 code implementation · 31 Jan 2023 · Liyuan Wang, Xingxing Zhang, Hang Su, Jun Zhu

To cope with real-world dynamics, an intelligent system needs to incrementally acquire, update, accumulate, and exploit knowledge throughout its lifetime.

Continual Learning · Learning Theory

PhyGNNet: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network

no code implementations · 7 Aug 2022 · Longxiang Jiang, Liyuan Wang, Xinkun Chu, Yonghao Xiao, Hao Zhang

Solving partial differential equations (PDEs) is a fundamental problem in physics, biology, and chemistry.

Memory Replay with Data Compression for Continual Learning

1 code implementation · ICLR 2022 · Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu

In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase their amount that can be stored in the memory buffer.

Autonomous Driving · Class Incremental Learning +5
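
The storage trade-off behind MRDC can be illustrated with off-the-shelf JPEG coding: keeping encoded bytes instead of raw tensors lets the same memory budget hold many more exemplars, with the quality factor controlling the fidelity/quantity trade-off. A minimal sketch (the quality value is an arbitrary choice, not the paper's setting):

import io
from PIL import Image

def compress(image, quality=75):
    # Store compact JPEG bytes in the buffer instead of raw pixels.
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def decompress(jpeg_bytes):
    # Decode back to an RGB image when the sample is replayed.
    return Image.open(io.BytesIO(jpeg_bytes)).convert("RGB")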

AFEC: Active Forgetting of Negative Transfer in Continual Learning

1 code implementation · NeurIPS 2021 · Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong

Without access to the old training samples, knowledge transfer from the old tasks to each new task is difficult to determine, as it might be either positive or negative.

Continual Learning · Transfer Learning

Few-shot Continual Learning: a Brain-inspired Approach

no code implementations · 19 Apr 2021 · Liyuan Wang, Qian Li, Yi Zhong, Jun Zhu

Our solution is based on the observation that continual learning of a task sequence inevitably interferes with few-shot generalization, which makes it highly nontrivial to extend few-shot learning strategies to continual learning scenarios.

Continual Learning · Few-Shot Learning

Relaxed Conditional Image Transfer for Semi-supervised Domain Adaptation

no code implementations · 5 Jan 2021 · Qijun Luo, Zhili Liu, Lanqing Hong, Chongxuan Li, Kuo Yang, Liyuan Wang, Fengwei Zhou, Guilin Li, Zhenguo Li, Jun Zhu

Semi-supervised domain adaptation (SSDA), which aims to learn models in a partially labeled target domain with the assistance of a fully labeled source domain, has attracted increasing attention in recent years.

Domain Adaptation · Semi-supervised Domain Adaptation

Triple Memory Networks: a Brain-Inspired Method for Continual Learning

1 code implementation · 6 Mar 2020 · Liyuan Wang, Bo Lei, Qian Li, Hang Su, Jun Zhu, Yi Zhong

Continual acquisition of novel experience without interfering with previously learned knowledge, i.e., continual learning, is critical for artificial neural networks but is limited by catastrophic forgetting.

Attribute · Class Incremental Learning +2
