Search Results for author: Li-Ming Zhan

Found 9 papers, 7 papers with code

A Closer Look at Few-Shot Out-of-Distribution Intent Detection

1 code implementation COLING 2022 Li-Ming Zhan, Haowen Liang, Lu Fan, Xiao-Ming Wu, Albert Y.S. Lam

Comprehensive experiments on three real-world intent detection benchmark datasets demonstrate the effectiveness of our proposed approach and its potential to improve state-of-the-art methods for few-shot OOD intent detection.

Tasks: Intent Detection, Task-Oriented Dialogue Systems

New Intent Discovery with Pre-training and Contrastive Learning

1 code implementation ACL 2022 Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, Albert Y. S. Lam

Existing approaches typically rely on a large number of labeled utterances and employ pseudo-labeling methods for representation learning and clustering, which are label-intensive, inefficient, and inaccurate.

Tasks: Clustering, Contrastive Learning, +3

Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima

1 code implementation NeurIPS 2021 Guangyuan Shi, Jiaxin Chen, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu

Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated due to data scarcity and imbalance in the few-shot setting.

Tasks: Few-Shot Class-Incremental Learning, Few-Shot Learning, +1

Adaptation-Agnostic Meta-Training

1 code implementation ICML Workshop AutoML 2021 Jiaxin Chen, Li-Ming Zhan, Xiao-Ming Wu, Fu-Lai Chung

Many meta-learning algorithms can be formulated into an interleaved process, in the sense that task-specific predictors are learned during inner-task adaptation and meta-parameters are updated during meta-update.
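The interleaved process described above can be sketched as a generic meta-training loop (a minimal illustration under simplifying assumptions, not the paper's algorithm; the toy 1-D regression setup and the Reptile-style meta-update are choices made here for brevity):

```python
import random

# Minimal sketch of interleaved meta-training (generic illustration, NOT
# the paper's method). We meta-learn a scalar initialization w0 for a
# 1-D least-squares predictor y = w * x across a family of related tasks.

def inner_adapt(w_init, task, lr=0.1, steps=5):
    """Inner-task adaptation: learn a task-specific predictor from w_init."""
    w = w_init
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in task) / len(task)
        w -= lr * grad
    return w

def meta_update(w_init, tasks, meta_lr=0.05):
    """Meta-update: nudge the shared initialization toward each task's
    adapted solution (a Reptile-style update, used here only to illustrate
    the inner-adaptation / meta-update interleaving)."""
    for task in tasks:
        w_task = inner_adapt(w_init, task)
        w_init += meta_lr * (w_task - w_init)
    return w_init

random.seed(0)
# Toy task family: fit y = a * x for a task-specific slope a near 3.0.
tasks = [[(x, (3.0 + random.uniform(-0.5, 0.5)) * x) for x in (1.0, 2.0)]
         for _ in range(20)]

w0 = 0.0
for _ in range(50):               # interleave adaptation and meta-update
    w0 = meta_update(w0, tasks)
print(round(w0, 1))               # converges near the mean task slope
```

The point of the interleaving is visible in the loop: task-specific predictors exist only transiently inside `meta_update`, while the meta-parameter `w0` accumulates what transfers across tasks.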

Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training

no code implementations ACL 2021 Li-Ming Zhan, Haowen Liang, Bo Liu, Lu Fan, Xiao-Ming Wu, Albert Y. S. Lam

Since the distribution of outlier utterances is arbitrary and unknown at training time, existing methods commonly rely on strong assumptions about the data distribution, such as a mixture of Gaussians, to perform inference, resulting in either complex multi-step training procedures or hand-crafted rules, such as confidence-threshold selection, for outlier detection.

Tasks: Intent Detection, Outlier Detection, +1
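For context, the confidence-threshold rule that the abstract above contrasts with can be sketched in a few lines (a generic illustration of that hand-crafted baseline, not the paper's method; the threshold value is an arbitrary choice here):

```python
import math

# Sketch of confidence-threshold out-of-scope (OOS) detection, the kind of
# hand-crafted rule criticized in the abstract above (generic illustration,
# not the paper's approach). An utterance whose maximum softmax probability
# falls below a threshold is flagged out-of-scope.

def softmax(logits):
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_with_ood(logits, threshold=0.7):
    """Return the predicted intent index, or -1 for out-of-scope."""
    probs = softmax(logits)
    conf = max(probs)
    return probs.index(conf) if conf >= threshold else -1

print(predict_with_ood([4.0, 0.5, 0.2]))   # peaked logits -> in-scope class 0
print(predict_with_ood([1.0, 0.9, 0.8]))   # flat logits -> -1 (out-of-scope)
```

The weakness the abstract points to is visible here: the rule's behavior hinges entirely on a threshold chosen by hand, with no principled way to set it for unseen outlier distributions.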

A Closer Look at the Training Strategy for Modern Meta-Learning

1 code implementation NeurIPS 2020 Jiaxin Chen, Xiao-Ming Wu, Yanke Li, Qimai Li, Li-Ming Zhan, Fu-Lai Chung

The support/query (S/Q) episodic training strategy has been widely used in modern meta-learning algorithms and is believed to improve their generalization ability to test environments.

Tasks: Few-Shot Learning
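The support/query (S/Q) episodic sampling that the abstract above refers to can be sketched as follows (a generic illustration of the training strategy itself, not of the paper's analysis; the toy dataset and episode sizes are assumptions for the example):

```python
import random

# Minimal sketch of support/query (S/Q) episodic sampling (generic
# illustration of the strategy discussed above, not the paper's analysis).
# Each episode draws N classes, then splits each class's sampled examples
# into a support set (for adaptation) and a query set (for the meta-loss).

def sample_episode(dataset, n_way=2, k_shot=2, q_queries=2, rng=random):
    """Return (support, query) lists of (example, label) pairs."""
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in classes:
        examples = rng.sample(dataset[c], k_shot + q_queries)
        support += [(x, c) for x in examples[:k_shot]]
        query += [(x, c) for x in examples[k_shot:]]
    return support, query

random.seed(0)
# Toy dataset: class label -> list of example identifiers.
dataset = {c: [f"{c}_{i}" for i in range(10)] for c in "abcd"}
support, query = sample_episode(dataset)
print(len(support), len(query))   # 2-way, 2-shot, 2 queries: 4 4
```

Because support and query examples are drawn from the same classes but kept disjoint within an episode, the meta-loss on the query set measures generalization rather than memorization of the support examples.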

Variational Metric Scaling for Metric-Based Meta-Learning

1 code implementation 26 Dec 2019 Jiaxin Chen, Li-Ming Zhan, Xiao-Ming Wu, Fu-Lai Chung

In this paper, we recast metric-based meta-learning from a Bayesian perspective and develop a variational metric scaling framework for learning a proper metric scaling parameter.

Tasks: Few-Shot Learning, Variational Inference
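The effect of the metric scaling parameter mentioned in the abstract above can be illustrated with a scaled-distance softmax over class prototypes (a generic sketch of metric scaling in prototypical-network-style classification; the paper learns this parameter variationally, which is not reproduced here, and the prototypes and query point are made up for the example):

```python
import math

# Sketch of how a metric scaling parameter alpha affects metric-based
# few-shot classification (generic illustration; the variational learning
# of alpha from the paper is not reproduced). Class scores are a softmax
# over negative scaled squared Euclidean distances to class prototypes.

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def class_probs(query, prototypes, alpha=1.0):
    """Softmax over -alpha * distance; larger alpha sharpens the output."""
    logits = [-alpha * sq_dist(query, p) for p in prototypes]
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

prototypes = [(0.0, 0.0), (1.0, 1.0)]    # one prototype per class
query = (0.2, 0.1)                       # clearly closer to class 0
for alpha in (0.1, 1.0, 10.0):
    print([round(p, 3) for p in class_probs(query, prototypes, alpha)])
```

Running this shows why the scale matters: a tiny alpha makes the predictive distribution nearly uniform regardless of distance, while a large alpha makes it nearly one-hot, so an improperly scaled metric yields poorly calibrated gradients during training.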
