1 code implementation • 31 Oct 2023 • Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong
Previous studies have revealed that pretrained language models can memorize and regurgitate training data, which poses a risk of data leakage.
no code implementations • 12 Oct 2023 • Tianhao Lu, Chao Bian, Chao Qian
Meanwhile, we present a variant of OneMinMax, and prove that R-NSGA-II can be exponentially slower than NSGA-II.
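For context, OneMinMax is the standard bi-objective benchmark in which both the number of zeros and the number of ones in a bit string are maximized, so every solution is Pareto optimal and the goal is to cover the whole front. A minimal Python sketch of the base problem (the paper's modified variant is not reproduced here):

```python
def one_min_max(x):
    """OneMinMax: maximize (number of zeros, number of ones).

    Every bit string is Pareto optimal; the challenge for NSGA-II-style
    algorithms is to cover the whole front {(n - k, k) : k = 0, ..., n}.
    """
    ones = sum(x)
    return (len(x) - ones, ones)
```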
no code implementations • 18 Jul 2023 • Yu-Ran Gu, Chao Bian, Chao Qian
Submodular maximization arises in many applications and has attracted substantial research attention from areas such as artificial intelligence, finance, and operations research.
no code implementations • 5 Jun 2023 • Chao Bian, Yawen Zhou, Miqing Li, Chao Qian
This work is an attempt to challenge a common practice in the design of existing MOEAs.
1 code implementation • 23 Mar 2023 • Xinnian Liang, Shuangzhi Wu, Hui Huang, Jiaqi Bai, Chao Bian, Zhoujun Li
Retrieval augmented methods have shown promising results in various classification tasks.
1 code implementation • 20 Mar 2023 • Xinnian Liang, Zefan Zhou, Hui Huang, Shuangzhi Wu, Tong Xiao, Muyun Yang, Zhoujun Li, Chao Bian
We conduct extensive experiments on various Chinese NLP tasks to evaluate existing PLMs as well as the proposed MigBERT.
1 code implementation • 29 Jan 2023 • Xinnian Liang, Shuangzhi Wu, Chenhao Cui, Jiaqi Bai, Chao Bian, Zhoujun Li
The global one identifies vital sub-topics in the dialogue, while the local one selects the most important context within each sub-topic.
no code implementations • 16 Dec 2022 • Weilong Dong, Xinwei Wu, Junzhuo Li, Shuangzhi Wu, Chao Bian, Deyi Xiong
It broadcasts the server's global model to each client and produces pseudo data for the clients, so that knowledge from the global model can be exploited to enhance the few-shot learning of each client model.
no code implementations • 16 Dec 2022 • Junzhuo Li, Xinwei Wu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong
Knowledge distillation (KD) has been widely used for model compression and knowledge transfer.
no code implementations • 3 May 2022 • Chao Bian, Yawen Zhou, Chao Qian
We first show that the greedy algorithm obtains an approximation ratio of $1-e^{-\beta\gamma}$, where $\beta$ and $\gamma$ are the correlation and submodularity ratios of the objective functions, respectively. We then propose EPORSS, an evolutionary Pareto optimization algorithm that can utilize additional time to find better subsets.
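As a rough illustration, here is a minimal sketch of the greedy baseline analyzed in this line of work, assuming a generic set function `f` and a hypothetical cardinality budget `k`; EPORSS itself is not reproduced:

```python
def greedy_subset(f, ground_set, k):
    """Greedy subset selection: repeatedly add the element with the
    largest marginal gain f(S + {v}) - f(S) until the budget k is met.

    Per the excerpt above, for objectives with correlation ratio beta
    and submodularity ratio gamma this achieves a (1 - e^{-beta*gamma})
    approximation.
    """
    selected = set()
    while len(selected) < k:
        best = max(ground_set - selected,
                   key=lambda v: f(selected | {v}) - f(selected))
        selected.add(best)
    return selected
```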
no code implementations • 22 Mar 2022 • Chao Bian, Chao Qian
Evolutionary algorithms (EAs) have been widely used to solve multi-objective optimization problems, and have become one of the most popular tools for this purpose.
no code implementations • 25 Feb 2022 • Haitao Liu, Kai Wu, Yew-Soon Ong, Chao Bian, Xiaomo Jiang, Xiaofang Wang
The multi-task Gaussian process (MTGP) is a well-known non-parametric Bayesian model that learns correlated tasks effectively by transferring knowledge across tasks.
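For illustration, a minimal NumPy sketch of the intrinsic coregionalization kernel, one common way to build an MTGP covariance; the specific MTGP variant studied in this paper may differ:

```python
import numpy as np

def icm_kernel(X1, tasks1, X2, tasks2, B, lengthscale=1.0):
    """Intrinsic coregionalization kernel for multi-task GPs:
    K((x, i), (x', j)) = B[i, j] * k(x, x'), where B is a PSD
    task-covariance matrix and k is an RBF kernel on the inputs.
    """
    # Squared Euclidean distances between all input pairs.
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    k_inputs = np.exp(-0.5 * sq / lengthscale**2)
    # Scale each entry by the covariance between the two tasks involved.
    return B[np.ix_(tasks1, tasks2)] * k_inputs
```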
no code implementations • 28 Jul 2019 • Chao Bian, Chao Qian, Yang Yu, Ke Tang
Sampling is a popular strategy: it evaluates the objective function multiple times and uses the mean of these evaluations as an estimate of the true objective value.
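A minimal sketch of this sampling strategy, with `noisy_eval` standing in for any noisy objective and `k` a hypothetical sample-size parameter:

```python
import statistics

def sampled_fitness(noisy_eval, x, k=10):
    """Evaluate the noisy objective k times and return the mean as an
    estimate of the true objective value, reducing the variance of the
    noise at the cost of k-fold evaluation effort."""
    return statistics.mean(noisy_eval(x) for _ in range(k))
```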
no code implementations • 17 Jun 2019 • Chao Bian, Chao Qian, Ke Tang, Yang Yu
Evolutionary algorithms (EAs) have found many successful real-world applications, where the optimization problems are often subject to a wide range of uncertainties.
no code implementations • 11 Oct 2018 • Chao Qian, Chao Bian, Yang Yu, Ke Tang, Xin Yao
In noisy evolutionary optimization, sampling is a common strategy to deal with noise.
no code implementations • WS 2018 • Mingxuan Wang, Li Gong, Wenhuan Zhu, Jun Xie, Chao Bian
We participated in the WMT 2018 shared news translation task on the English↔Chinese language pair.
no code implementations • COLING 2018 • Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, Chao Bian
Neural machine translation with source-side attention has achieved remarkable performance.
no code implementations • 2 Nov 2017 • Chao Qian, Chao Bian, Wu Jiang, Ke Tang
We analyze the running time of the (1+1)-EA solving OneMax and LeadingOnes under bit-wise noise for the first time, and derive the ranges of the noise level for polynomial and super-polynomial running time bounds.
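As a rough illustration, a minimal sketch of the (1+1)-EA on OneMax under one form of bit-wise noise, assuming each bit is flipped independently with probability `q` at evaluation time; the noise model in the paper is parameterized more generally:

```python
import random

def one_max(x):
    """OneMax: the number of one-bits in x (to be maximized)."""
    return sum(x)

def noisy_one_max(x, q):
    """Bit-wise noise: each bit of x is flipped independently with
    probability q before the fitness is computed."""
    return one_max([b ^ (random.random() < q) for b in x])

def one_plus_one_ea(n, q, max_iters=10_000):
    """(1+1)-EA: flip each bit of the parent with probability 1/n and
    keep the offspring if its (noisy) fitness is at least as good."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = [b ^ (random.random() < 1 / n) for b in x]
        if noisy_one_max(y, q) >= noisy_one_max(x, q):
            x = y
    return x
```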