no code implementations • 11 Jul 2023 • Yi Liao, Yongsheng Gao, Weichuan Zhang
However, all the CAM-based methods (e.g., CAM, Grad-CAM, and Relevance-CAM) can only be used to interpret CNN models that use fully-connected (FC) layers as the classifier.
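The FC-classifier dependence mentioned above comes from how CAM is computed: the class activation map is a weighted sum of the final convolutional feature maps, with the weights taken from the FC classifier for the target class. A minimal sketch, using a toy 2-channel feature map and hypothetical FC weights (no real CNN involved):

```python
# Minimal sketch of Class Activation Mapping (CAM).
# The feature maps and FC weights below are made-up toy values.

def class_activation_map(feature_maps, fc_weights):
    """CAM for one class: sum the final conv feature maps, each
    weighted by that class's FC classifier weight for the channel."""
    depth = len(feature_maps)            # number of channels K
    rows = len(feature_maps[0])
    cols = len(feature_maps[0][0])
    cam = [[0.0] * cols for _ in range(rows)]
    for k in range(depth):
        for i in range(rows):
            for j in range(cols):
                cam[i][j] += fc_weights[k] * feature_maps[k][i][j]
    return cam

# Toy example: two 2x2 feature maps, FC weights for the target class.
features = [
    [[1.0, 0.0],
     [0.0, 0.0]],   # channel 0 fires at top-left
    [[0.0, 0.0],
     [0.0, 2.0]],   # channel 1 fires at bottom-right
]
weights = [0.5, 1.0]  # hypothetical FC weights for the target class
cam = class_activation_map(features, weights)
# cam == [[0.5, 0.0], [0.0, 2.0]]
```

Because the weights come from the FC layer, the same recipe does not transfer directly to architectures without an FC classifier head, which is the limitation the paper targets.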
no code implementations • 14 Feb 2022 • Yu Fu, Shunjie Dong, Yi Liao, Le Xue, Yuanfan Xu, Feng Li, Qianqian Yang, Tianbai Yu, Mei Tian, Cheng Zhuo
18F-fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) imaging usually needs a full-dose radioactive tracer to obtain satisfactory diagnostic results, which raises concerns about the potential health risks of radiation exposure, especially for pediatric patients.
no code implementations • 25 Sep 2021 • Yajie Sun, Miaohua Zhang, Xiaohan Yu, Yi Liao, Yongsheng Gao
Motivated by these issues, this paper proposes a novel compositional feature embedding and similarity metric (CECS).
no code implementations • ACL 2021 • Jie He, Bo Peng, Yi Liao, Qun Liu, Deyi Xiong
Each error is hence manually labeled with comprehensive annotations, including the span of the error, the associated span, the minimal correction to the error, the type of the error, and the rationale behind the error.
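The annotation fields listed above can be sketched as a single record; the field names and example values here are illustrative, not the corpus's actual schema.

```python
# Hedged sketch of one error annotation, mirroring the fields described
# in the text; names and values are hypothetical, not the dataset schema.
from dataclasses import dataclass

@dataclass
class ErrorAnnotation:
    error_span: tuple       # (start, end) character offsets of the error
    associated_span: tuple  # related span (e.g., an antecedent)
    correction: str         # minimal edit that fixes the error
    error_type: str         # e.g., "coreference", "repetition"
    rationale: str          # annotator's explanation of the error

ann = ErrorAnnotation(
    error_span=(42, 47),
    associated_span=(10, 15),
    correction="she",
    error_type="coreference",
    rationale="Pronoun disagrees with the antecedent's gender.",
)
```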
4 code implementations • 26 Apr 2021 • Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, Chen Li, Ziyan Gong, Yifan Yao, Xinjing Huang, Jun Wang, Jianfeng Yu, Qi Guo, Yue Yu, Yan Zhang, Jin Wang, Hengtao Tao, Dasen Yan, Zexuan Yi, Fang Peng, Fangqing Jiang, Han Zhang, Lingfeng Deng, Yehong Zhang, Zhe Lin, Chao Zhang, Shaojie Zhang, Mingyue Guo, Shanzhi Gu, Gaojun Fan, YaoWei Wang, Xuefeng Jin, Qun Liu, Yonghong Tian
To enhance the generalization ability of PanGu-$\alpha$, we collect 1.1TB high-quality Chinese data from a wide range of domains to pretrain the model.
Ranked #1 on Reading Comprehension (One-Shot) on DuReader
3 code implementations • ACL 2020 • Yi Liao, Xin Jiang, Qun Liu
Masked language model and autoregressive language model are two types of language models.
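The two model types differ in their training objectives: an autoregressive LM predicts each token from its left context only, while a masked LM predicts a masked token from context on both sides. A toy contrast, using entirely made-up conditional probabilities in place of a trained model:

```python
# Toy contrast between autoregressive (AR) and masked LM (MLM)
# objectives on a 3-token sentence; all probabilities are hypothetical.
import math

# AR: p(token | left context only).
ar_probs = {
    ("", "the"): 0.5,
    ("the", "cat"): 0.2,
    ("the cat", "sat"): 0.5,
}

def ar_log_likelihood(tokens):
    """Autoregressive LM: factorize left-to-right over the sequence."""
    ll, context = 0.0, ""
    for tok in tokens:
        ll += math.log(ar_probs[(context, tok)])
        context = (context + " " + tok).strip()
    return ll

# MLM: p(masked token | both-side context with a [MASK] slot).
mlm_probs = {
    ("[MASK] cat sat", "the"): 0.6,
    ("the [MASK] sat", "cat"): 0.3,
    ("the cat [MASK]", "sat"): 0.7,
}

def mlm_log_likelihood(tokens):
    """Masked LM: mask one position at a time and predict it from
    bidirectional context (a pseudo-log-likelihood)."""
    ll = 0.0
    for i, tok in enumerate(tokens):
        masked = " ".join("[MASK]" if j == i else t
                          for j, t in enumerate(tokens))
        ll += math.log(mlm_probs[(masked, tok)])
    return ll

tokens = ["the", "cat", "sat"]
```

The AR factorization yields an exact sequence probability but fixes a left-to-right order; the MLM objective sees both sides of each position but only gives a pseudo-likelihood, which is the trade-off the paper's unified model addresses.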
no code implementations • 9 Nov 2019 • Yinpeng Guo, Yi Liao, Xin Jiang, Qing Zhang, Yibo Zhang, Qun Liu
Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention, as the size of high-quality paraphrase corpora is limited.
10 code implementations • 31 Aug 2019 • Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, Qun Liu
Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.
2 code implementations • 29 Jun 2019 • Yi Liao, Yasheng Wang, Qun Liu, Xin Jiang
We present a simple yet effective method for generating high-quality classical Chinese poetry with a Generative Pre-trained Language Model (GPT).
1 code implementation • EMNLP 2018 • Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, Tong Zhang
For example, an input sequence could be a word sequence, such as a review sentence or advertisement text.
no code implementations • IJCNLP 2015 • Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, Rebecca J. Passonneau
We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases.
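The idea of building new sentences from sub-sentential units can be illustrated with a toy sketch that recombines already-extracted noun and verb phrases; the phrases below are invented, and a real system (unlike this sketch) would also score and select among the candidates.

```python
# Toy illustration of constructing candidate sentences from extracted
# noun/verb phrases; phrase extraction and candidate scoring are omitted.
from itertools import product

noun_phrases = ["the new policy", "local residents"]
verb_phrases = ["reduced traffic congestion", "welcomed the change"]

def candidate_sentences(nps, vps):
    """Cross every noun phrase with every verb phrase (NP + VP),
    yielding sentences that need not appear in any source document."""
    return [f"{np} {vp}." for np, vp in product(nps, vps)]

cands = candidate_sentences(noun_phrases, verb_phrases)
# 2 NPs x 2 VPs -> 4 candidate sentences
```

Working at the phrase level is what lets the summarizer produce genuinely new sentences rather than only reusing sentences extracted verbatim from the input documents.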
no code implementations • 28 Apr 2015 • Piji Li, Lidong Bing, Wai Lam, Hang Li, Yi Liao
We propose a new MDS paradigm called reader-aware multi-document summarization (RA-MDS).