no code implementations • CCL 2022 • Yujiao Han, Zhiyong Luo, Mingming Zhang, Zhilin Zhao, Qing Zhang
Machine Reading Comprehension (MRC) tasks test a machine's ability to understand natural language by having it answer questions about a given context. Neural MRC models built on large-scale pretrained language models have made substantial progress, but answer-extraction accuracy still lags when answer, clue, and question elements span punctuation-delimited sentences or are related over long distances. This paper analyzes discourse-level naming-telling (话头话体) structure to establish long-distance associations between punctuation-delimited sentences and to recover shared elided constituents, thereby assisting answer extraction; it designs and implements an MRC model that fuses naming-telling structure information. Experiments on the public CMRC2018 dataset show the model improves F1 by 2.4% and EM by 6% over the baseline.
no code implementations • CCL 2022 • Zhiyong Luo, Mingming Zhang, Yujiao Han, Zhilin Zhao
Word segmentation is a fundamental task in Chinese information processing. Fully supervised Chinese word segmentation is now relatively mature and performs well in general domains, but it depends on large-scale annotated corpora and transfers poorly across domains, with especially weak recognition of out-of-vocabulary (OOV) words in cross-domain settings. To alleviate these problems, this paper proposes a semi-supervised Chinese word segmentation framework that achieves cross-domain transfer by fully exploiting relatively easy-to-obtain unlabeled text from the target domain, and designs and implements a semi-supervised CRF segmentation model based on a word memory network and sequence conditional entropy. Experiments show the model improves F1 and OOV recall by up to 2.35% and 12.12%, respectively, and achieves the best current results on several domain datasets.
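The sequence conditional entropy objective admits a small illustration. Below is a minimal sketch assuming a token-level simplification (the paper computes entropy over a CRF's sequence distribution, which this does not reproduce): minimizing the entropy of predicted tag distributions on unlabeled target-domain text pushes the segmenter toward confident outputs.

```python
import torch
import torch.nn.functional as F

def token_conditional_entropy(logits: torch.Tensor) -> torch.Tensor:
    # Mean per-token entropy H(y_t | x) of the predicted tag distribution.
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()

# Toy usage: logits over B/M/E/S segmentation tags for unlabeled text.
logits = torch.randn(4, 10, 4, requires_grad=True)
loss = token_conditional_entropy(logits)
loss.backward()
```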
no code implementations • 8 Oct 2024 • Hui Chen, Xuhui Fan, Hengyu Liu, Yaqiong Li, Zhilin Zhao, Feng Zhou, Christopher John Quinn, Longbing Cao
Temporal point processes (TPPs) are effective for modeling event occurrences over time, but they struggle with sparse and uncertain events in federated systems, where privacy is a major concern.
1 code implementation • 24 May 2024 • Zhangkai Wu, Xuhui Fan, Jin Li, Zhilin Zhao, Hui Chen, Longbing Cao
Specifically, ParamReL proposes a self-encoder to learn latent semantics directly from parameters, rather than from observations.
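A minimal sketch of that idea, assuming a VAE-style encoder over a flattened parameter vector; the class name and architecture below are hypothetical, not ParamReL's actual implementation:

```python
import torch
import torch.nn as nn

class SelfEncoder(nn.Module):
    """Hypothetical: maps a flattened parameter vector to a latent code."""
    def __init__(self, param_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim, 256), nn.SiLU(),
            nn.Linear(256, 2 * latent_dim),  # predicts mean and log-variance
        )

    def forward(self, flat_params: torch.Tensor):
        mu, log_var = self.net(flat_params).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterize
        return z, mu, log_var

# Usage: encode the parameters of some task model into a latent code.
task_model = nn.Linear(8, 3)
flat = torch.cat([p.detach().flatten() for p in task_model.parameters()])
z, mu, log_var = SelfEncoder(flat.numel(), 16)(flat.unsqueeze(0))
```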
1 code implementation • 3 Mar 2024 • Kun-Yu Lin, Henghui Ding, Jiaming Zhou, Yu-Ming Tang, Yi-Xing Peng, Zhilin Zhao, Chen Change Loy, Wei-Shi Zheng
To answer this, we establish a CROSS-domain Open-Vocabulary Action recognition benchmark named XOV-Action, and conduct a comprehensive evaluation of five state-of-the-art CLIP-based video learners under various types of domain gaps.
1 code implementation • 14 Nov 2023 • Zhilin Zhao, Longbing Cao, Yixuan Zhang, Kun-Yu Lin, Wei-Shi Zheng
This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available, given a standard network.
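As a rough sketch, the core distillation step could look like the following, where the transfer data need not come from the teacher's training (ID) distribution; the paper's full framework is certainly more involved than this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_step(teacher: nn.Module, student: nn.Module,
                 x: torch.Tensor, T: float = 4.0) -> torch.Tensor:
    # Student matches the teacher's softened predictions on inputs x,
    # which may be OOD with respect to the teacher's training data.
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    return F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T

# Toy usage with linear "networks" and unlabeled (possibly OOD) inputs.
teacher, student = nn.Linear(16, 10), nn.Linear(16, 10)
loss = distill_step(teacher, student, torch.randn(32, 16))
loss.backward()
```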
1 code implementation • 19 Jun 2022 • Zhilin Zhao, Longbing Cao
To distinguish in-distribution from out-of-distribution samples, Dual Representation Learning (DRL) makes it harder for out-of-distribution samples to receive high-confidence predictions by exploring both strongly and weakly label-related information from in-distribution samples.
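A hypothetical sketch of what such a dual-representation architecture might look like: a shared backbone with one head for strongly label-related (discriminative) features and another for weakly label-related features; DRL's actual design may differ.

```python
import torch
import torch.nn as nn

class DualRepNet(nn.Module):
    """Hypothetical two-branch sketch: one head for strongly label-related
    features (classification), one for weakly label-related features."""
    def __init__(self, in_dim: int, num_classes: int, rep_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.strong_head = nn.Linear(128, num_classes)  # discriminative
        self.weak_head = nn.Linear(128, rep_dim)        # auxiliary features

    def forward(self, x):
        h = self.backbone(x)
        return self.strong_head(h), self.weak_head(h)

# Both outputs could then be combined into an OOD score at test time.
logits, weak_rep = DualRepNet(32, 10)(torch.randn(4, 32))
```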
1 code implementation • 19 Jun 2022 • Zhilin Zhao, Longbing Cao, Kun-Yu Lin
We thus improve the discriminability of a pretrained network by finetuning it with out-of-distribution samples drawn from the cross-class vicinity distribution, where each out-of-distribution input corresponds to a complementary label.
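A rough sketch of one plausible realization, assuming mixup-style interpolation between inputs of different classes and a complementary-label penalty on both source classes; the function below is illustrative, not the paper's loss:

```python
import torch
import torch.nn.functional as F

def cross_class_vicinity_loss(model, x, y, lam: float = 0.5):
    # Mix inputs from different classes to synthesize OOD-like samples,
    # then treat both source labels as complementary labels: penalize
    # any probability mass the model assigns to either of them.
    perm = torch.randperm(x.size(0))
    keep = y != y[perm]                           # cross-class pairs only
    x_mix = lam * x[keep] + (1.0 - lam) * x[perm][keep]
    p = F.softmax(model(x_mix), dim=-1)
    mass = p.gather(1, y[keep].unsqueeze(1)) + \
           p.gather(1, y[perm][keep].unsqueeze(1))
    return -torch.log1p(-mass.clamp(max=1 - 1e-6)).mean()

# Toy usage:
model = torch.nn.Linear(16, 10)
x, y = torch.randn(32, 16), torch.randint(0, 10, (32,))
loss = cross_class_vicinity_loss(model, x, y)
```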
no code implementations • 19 Jun 2022 • Zhilin Zhao, Longbing Cao, Kun-Yu Lin
To tackle this issue, several state-of-the-art methods add extra OOD samples to training and assign them manually defined labels.
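For concreteness, a common instance of this prior approach is outlier exposure, where the manually defined target is the uniform distribution over classes; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_ood: torch.Tensor) -> torch.Tensor:
    # Cross-entropy between the model's prediction on an extra OOD sample
    # and a manually defined target, here the uniform distribution.
    log_p = F.log_softmax(logits_ood, dim=-1)
    return -log_p.mean()

loss = outlier_exposure_loss(torch.randn(8, 10))
```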
Out-of-Distribution (OOD) Detection
1 code implementation • 19 Jun 2022 • Zhilin Zhao, Longbing Cao, Chang-Dong Wang
We observe that both in- and out-of-distribution samples can almost invariably be ruled out from belonging to certain classes, aside from those corresponding to unreliable ground-truth labels.
1 code implementation • 12 Feb 2022 • Zhilin Zhao, Longbing Cao, Yuanyu Wan
MOOE learns static offline experts from offline intervals and maintains a dynamic online expert for the current online interval.
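A generic sketch of how a pool of static offline experts plus one dynamic online expert could be aggregated, assuming a standard Hedge-style multiplicative-weights update; MOOE's actual aggregation rule may differ:

```python
import numpy as np

def hedge_update(weights: np.ndarray, losses: np.ndarray,
                 eta: float = 0.5) -> np.ndarray:
    # Multiplicative-weights (Hedge) update over the expert pool.
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

# Toy usage: 3 static offline experts + 1 dynamic online expert.
rng = np.random.default_rng(0)
weights = np.full(4, 0.25)
for _ in range(5):
    weights = hedge_update(weights, rng.random(4))  # per-expert losses
```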
1 code implementation • 23 Aug 2021 • Zhilin Zhao, Longbing Cao, Kun-Yu Lin
Based on Shannon entropy, an energy-based implicit generator is inferred from a discriminator without extra training cost.
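One JEM-style reading of this sentence is sketched below: the discriminator's logits define a free energy, and hence an unnormalized density, at no extra training cost; whether the paper uses exactly this form is an assumption.

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # Free energy of the discriminator's logits: E(x) = -T * logsumexp(f(x)/T).
    # The logits implicitly define an unnormalized density p(x) ∝ exp(-E(x)),
    # so no separate generator has to be trained.
    return -T * torch.logsumexp(logits / T, dim=-1)

print(energy_score(torch.randn(8, 10)))  # lower energy = higher implicit density
```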