no code implementations • 27 Jun 2024 • Xiaoling Zhou, Wei Ye, Yidong Wang, Chaoya Jiang, Zhemg Lee, Rui Xie, Shikun Zhang
The emergence of in-context learning (ICL) enables large pre-trained language models (PLMs) to make predictions for unseen inputs without updating parameters.
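To make the mechanism concrete, here is a minimal ICL sketch with a frozen Hugging Face model: labeled demonstrations are packed into the prompt, and the model predicts for a new input without any gradient update. The model choice (gpt2) and the prompt template are illustrative assumptions, not the paper's setup.

```python
from transformers import pipeline

# A frozen pre-trained LM; no parameters are updated at any point.
generator = pipeline("text-generation", model="gpt2")

# In-context demonstrations: (input, label) pairs prepended to the query.
demos = [("The movie was wonderful.", "positive"),
         ("A dull, lifeless plot.", "negative")]
query = "An instant classic."

prompt = "".join(f"Review: {x}\nSentiment: {y}\n" for x, y in demos)
prompt += f"Review: {query}\nSentiment:"

# The model infers the task purely from the prompt context.
out = generator(prompt, max_new_tokens=2, do_sample=False)
print(out[0]["generated_text"])
```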
no code implementations • 23 May 2024 • Xiaoling Zhou, Ou Wu, Michael K. Ng, Hao Jiang
In this paper, we demonstrate that both global and local statistical information of value distributions hold significant potential for data valuation within the context of machine learning.
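As one way to read this claim, the toy sketch below estimates a value distribution for each training sample from Monte Carlo marginal contributions, then summarizes it with local statistics (the sample's own mean and spread) and a global one (its rank within the population). The valuation protocol, estimator, and choice of statistics are placeholders, not the paper's method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy setup; the model, data, and sampling budget are illustrative.
X, y = make_classification(n_samples=60, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
n = len(X_tr)

def utility(idx):
    """Validation accuracy of a model trained on subset `idx`."""
    if len(idx) == 0 or len(set(y_tr[idx])) < 2:
        return 0.0
    m = LogisticRegression(max_iter=300).fit(X_tr[idx], y_tr[idx])
    return m.score(X_val, y_val)

rng = np.random.default_rng(0)
contribs = [[] for _ in range(n)]        # per-sample value distribution

for _ in range(10):                      # Monte Carlo over permutations
    perm = rng.permutation(n)
    prev = 0.0
    for k in range(n):
        cur = utility(perm[:k + 1])
        contribs[perm[k]].append(cur - prev)   # marginal contribution
        prev = cur

# Local statistics of each sample's own value distribution ...
local_mean = np.array([np.mean(c) for c in contribs])
local_std = np.array([np.std(c) for c in contribs])
# ... and a global statistic: the sample's rank in the population.
global_rank = local_mean.argsort().argsort() / (n - 1)
```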
no code implementations • 25 Apr 2024 • Xiaoling Zhou, Wei Ye, Zhemg Lee, Rui Xie, Shikun Zhang
This insight leads us to develop a meta-learning-based framework for optimizing classifiers with this novel loss, introducing the effects of augmentation while bypassing the explicit augmentation process.
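A sketch of the bypass idea, in the spirit of ISDA-style implicit semantic augmentation: the expected loss over Gaussian feature perturbations is absorbed into the logits in closed form, so no augmented samples are ever materialized. The fixed strength `lam` stands where a meta-learned (e.g., per-sample) coefficient could sit; the paper's actual loss and meta-learning loop may differ.

```python
import torch
import torch.nn.functional as F

def implicit_aug_ce(feats, W, b, y, cov, lam=0.5):
    """Cross-entropy upper bound over Gaussian feature augmentations:
    perturbing features f ~ N(f, lam * cov[class]) is folded into the
    logits analytically, bypassing explicit augmentation."""
    logits = feats @ W.T + b                    # (N, C)
    wy = W[y]                                   # (N, D) true-class weights
    diff = W.unsqueeze(0) - wy.unsqueeze(1)     # (N, C, D): w_j - w_y
    sigma = cov[y]                              # (N, D, D) class covariances
    quad = torch.einsum('ncd,nde,nce->nc', diff, sigma, diff)
    return F.cross_entropy(logits + 0.5 * lam * quad, y)
```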
no code implementations • 26 Apr 2023 • Xiaoling Zhou, Ou Wu, Michael K. Ng
Machine learning models are prone to capturing the spurious correlations between non-causal attributes and classes, with counterfactual data augmentation being a promising direction for breaking these spurious associations.
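A minimal illustration of this augmentation direction (the attributes and data are hypothetical): intervene on a non-causal attribute while keeping the label fixed, so the spurious attribute-class association is broken in the augmented set.

```python
import pandas as pd

# Toy data: `color` is a non-causal attribute spuriously correlated
# with the label; `shape` is the causal feature. (Illustrative only.)
df = pd.DataFrame({
    "shape": ["round", "round", "square", "square"],
    "color": ["red", "red", "blue", "blue"],   # spurious correlate
    "label": [1, 1, 0, 0],
})

def counterfactual_augment(df, attr, values):
    """For each row, intervene on the non-causal attribute `attr`,
    producing label-preserving counterfactual copies."""
    rows = []
    for _, r in df.iterrows():
        for v in values:
            if v != r[attr]:
                cf = r.copy()
                cf[attr] = v            # break the spurious association
                rows.append(cf)
    return pd.concat([df, pd.DataFrame(rows)], ignore_index=True)

augmented = counterfactual_augment(df, "color", ["red", "blue"])
```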
no code implementations • 25 Apr 2023 • Xiaoling Zhou, Nan Yang, Ou Wu
On the basis of our theoretical findings, we present a more general learning objective that combines adversaries and anti-adversaries, with a varied perturbation bound on each training sample.
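A rough FGSM-style sketch of such a combined objective, assuming per-sample bounds are supplied externally (how they are set is precisely what the paper's theory governs; here they are a placeholder input): positive bounds yield adversaries that increase the loss, negative bounds yield anti-adversaries that decrease it.

```python
import torch
import torch.nn.functional as F

def adv_anti_adv_loss(model, x, y, eps):
    """Combined adversary/anti-adversary objective (FGSM-style sketch).
    `eps` holds a *per-sample* signed perturbation bound: positive
    entries create adversaries (loss-increasing), negative entries
    create anti-adversaries (loss-decreasing)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Signed, per-sample-bounded step along the gradient sign.
    delta = eps.view(-1, *[1] * (x.dim() - 1)) * grad.sign()
    x_pert = (x + delta).detach()
    return F.cross_entropy(model(x_pert), y)
```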
no code implementations • 12 Jan 2023 • Xiaoling Zhou, Ou Wu, Weiyao Zhu, Ziyang Liang
In this study, we theoretically prove that the generalization error of a sample can be used as a universal difficulty measure.
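One straightforward way to operationalize this measure (the estimator and budget below are illustrative choices): approximate each sample's generalization error by its average held-out loss under repeated cross-validation, and treat that as its difficulty score.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Estimate each sample's generalization error as its average loss
# when held out, and use it as a difficulty score.
X, y = make_classification(n_samples=300, random_state=0)
difficulty = np.zeros(len(X))

for seed in range(5):                                 # repeated CV
    for tr, te in KFold(5, shuffle=True, random_state=seed).split(X):
        m = LogisticRegression(max_iter=500).fit(X[tr], y[tr])
        p = m.predict_proba(X[te])[np.arange(len(te)), y[te]]
        difficulty[te] += -np.log(np.clip(p, 1e-12, 1)) / 5

# Higher score = harder sample (larger held-out error).
```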
no code implementations • 11 Oct 2021 • Xiaoling Zhou, Ou Wu
Factors including the distribution of samples' learning difficulties and the validation data determine which samples should be learned first in a learning task.
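A toy sketch of this takeaway (the selection rule is a stand-in, not the paper's method): the training order is sorted by difficulty, and validation performance decides empirically between easy-first and hard-first curricula.

```python
import numpy as np

def choose_curriculum(difficulty, val_score_easy, val_score_hard):
    """Pick a sample ordering for training. Which mode wins is decided
    here by validation scores, echoing the point that the difficulty
    distribution and the validation data jointly determine what should
    be learned first."""
    idx = np.argsort(difficulty)          # easy -> hard
    return idx if val_score_easy >= val_score_hard else idx[::-1]

order = choose_curriculum(np.random.rand(100), 0.83, 0.79)
```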
no code implementations • 29 Sep 2021 • Xiaoling Zhou, Ou Wu
Second, a flexible weighting scheme is proposed to overcome the shortcomings of existing schemes.
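To illustrate what "flexible" can mean here (this scheme is a stand-in, not the paper's): a single temperature-like parameter moves the weighting continuously between hard-first, uniform, and easy-first regimes.

```python
import numpy as np

def flexible_weights(losses, gamma):
    """gamma > 0 emphasizes hard samples (large loss), gamma < 0
    emphasizes easy ones, gamma = 0 recovers uniform weighting."""
    z = gamma * (losses - losses.mean())
    w = np.exp(z - z.max())               # numerically stable softmax
    return w / w.sum()

losses = np.array([0.1, 0.5, 2.0])
print(flexible_weights(losses, +1.0))     # hard-first weighting
print(flexible_weights(losses, -1.0))     # easy-first weighting
```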
1 code implementation • AAAI 2020 • Xiaoling Zhou, Yukai Miao, Wei Wang, Jianbin Qin
Traditional machine-learning-based methods for named entity disambiguation (NED) have been outperformed and rendered obsolete by state-of-the-art deep-learning-based models.
no code implementations • 13 Jun 2019 • Muhammad Asif Ali, Yifang Sun, Xiaoling Zhou, Wei Wang, Xiang Zhao
We hypothesize that pre-trained embeddings encode a blend of lexical-semantic information, and that the task-specific portion can be distilled using Distiller, a model proposed in this paper.
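Since Distiller's architecture is not described here, the sketch below captures only the stated idea: the pre-trained embeddings stay frozen, and a small trainable head extracts the task-specific slice of the lexical-semantic blend. The head design and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DistillerSketch(nn.Module):
    """A minimal stand-in for the idea behind Distiller: frozen
    pre-trained embeddings, plus a small trainable projection that
    distills the information relevant to the target task."""
    def __init__(self, pretrained, dim_out):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=True)
        self.proj = nn.Sequential(
            nn.Linear(pretrained.size(1), dim_out), nn.Tanh())

    def forward(self, token_ids):
        return self.proj(self.emb(token_ids))   # task-specific view

# Random matrix as a stand-in for real pre-trained vectors; train
# `proj` on, e.g., pairs labeled with a lexical-semantic relation.
model = DistillerSketch(torch.randn(10_000, 300), dim_out=64)
```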