no code implementations • Findings (NAACL) 2022 • Jiarun Wu, Qingliang Chen, Zeguan Xiao, Yuliang Gu, Mengsi Sun
Pre-trained language models have shown great success in multiple downstream tasks.
1 code implementation • ECCV 2020 • Runkai Zheng, Yinqi Zhang, Daolang Huang, Qingliang Chen
In recent years, Deep Neural Networks (DNN) have empowered Compressed Sensing (CS) substantially and have achieved high reconstruction quality and speed far exceeding traditional CS methods.
Ranked #1 on Image Compression on BSDS500
no code implementations • 13 Feb 2024 • Zhiyu Xu, Qingliang Chen
Glass-like objects can be seen everywhere in our daily life which are very hard for existing methods to segment them.
no code implementations • 29 Nov 2023 • Zihao Tan, Qingliang Chen, Yongjian Huang, Chen Liang
Most of the existing attack methods focus on inserting manually predefined templates as triggers in the pre-training phase to train the victim model and utilize the same triggers in the downstream task to perform inference, which tends to ignore the transferability and stealthiness of the templates.
1 code implementation • 10 Sep 2023 • Shuangqin Cheng, Qingliang Chen, Qiyi Zhang, Ming Li, Yamuhanmode Alike, Kaile Su, Pengcheng Wen
Computed Tomography (CT) is a medical imaging modality that can generate more informative 3D images than 2D X-rays.
no code implementations • 9 Jun 2023 • Zihao Tan, Qingliang Chen, Wenbin Zhu, Yongjian Huang
Prompt-based learning has been proved to be an effective way in pre-trained language models (PLMs), especially in low-resource scenarios like few-shot settings.
no code implementations • 8 Dec 2021 • Zhenxin Wu, Qingliang Chen, Yifeng Liu, Yinqi Zhang, Chengkai Zhu, Yang Yu
Finally, using the progressive training (P), the features extracted by the model in different stages can be fully utilized and fused with each other.
no code implementations • EMNLP 2021 • Zeguan Xiao, Jiarun Wu, Qingliang Chen, Congjian Deng
Graph-based Aspect-based Sentiment Classification (ABSC) approaches have yielded state-of-the-art results, expecially when equipped with contextual word embedding from pre-training language models (PLMs).
no code implementations • 29 Aug 2018 • Zongjie Ma, Abdul Sattar, Jun Zhou, Qingliang Chen, Kaile Su
Tabu Dropout has no extra parameters compared with the standard Dropout and also it is computationally cheap.
no code implementations • 18 Apr 2016 • Xiaowei Huang, Ji Ruan, Qingliang Chen, Kaile Su
Social norms are powerful formalism in coordinating autonomous agents' behaviour to achieve certain objectives.