no code implementations • EMNLP 2020 • Yijun Wang, Changzhi Sun, Yuanbin Wu, Junchi Yan, Peng Gao, Guotong Xie
In particular, a span encoder is trained to recover a random shuffling of the tokens in a span, and a span-pair encoder is trained with a contrastive loss to distinguish positive pairs (spans from the same sentence) from negative pairs (spans from different sentences).
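The span-pair objective described above reads like a standard InfoNCE-style contrastive loss over span embeddings. A minimal numpy sketch of that idea (the function name, cosine similarity, and temperature are illustrative assumptions, not details from the paper):

```python
import numpy as np

def span_pair_contrastive_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: each anchor span should match the span drawn
    from the same sentence (its positive); spans from the other sentences
    in the batch act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # diagonal entries correspond to the positive (same-sentence) pairs
    return float(-np.mean(np.diag(log_probs)))
```

Spans from the same sentence sit on the diagonal of the similarity matrix; every other span in the batch serves as a negative.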
no code implementations • NAACL (BioNLP) 2021 • Wei Zhu, Yilong He, Ling Chai, Yunxiao Fan, Yuan Ni, Guotong Xie, Xiaoling Wang
First, a RoBERTa model is applied to give a local ranking of the candidate sentences.
no code implementations • EMNLP 2021 • Wei Zhu, Xiaoling Wang, Yuan Ni, Guotong Xie
From this observation, we use mutual learning to improve BERT's early-exiting performance: we ask each exit of a multi-exit BERT to distill knowledge from the other exits.
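Mutual learning among exits can be sketched as an average pairwise KL term, each exit treating every other exit's temperature-softened prediction as a teacher. A hypothetical numpy illustration (the function names and temperature value are assumptions, not the paper's exact formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mutual_learning_loss(exit_logits, temperature=2.0):
    """Average pairwise KL(p_j || p_i): every exit is asked to mimic the
    softened predictions of every other exit in the multi-exit model."""
    probs = [softmax(l / temperature) for l in exit_logits]
    total, pairs = 0.0, 0
    for i, p_i in enumerate(probs):
        for j, p_j in enumerate(probs):
            if i == j:
                continue
            total += np.sum(p_j * (np.log(p_j) - np.log(p_i)), axis=-1).mean()
            pairs += 1
    return float(total / pairs)
```

The loss vanishes when all exits already agree and grows as their predictions diverge, which is what pushes weaker (typically earlier) exits toward the stronger ones.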
1 code implementation • 7 May 2023 • Xiaonan Li, Kai Lv, Hang Yan, Tianyang Lin, Wei Zhu, Yuan Ni, Guotong Xie, Xiaoling Wang, Xipeng Qiu
To train UDR, we cast the training signals of various tasks into a unified list-wise ranking formulation using the language model's feedback.
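One common way to realize such a list-wise formulation is a ListNet-style cross-entropy between the retriever's score distribution over candidate demonstrations and the target distribution induced by the language model's feedback. A hedged numpy sketch of that general idea (an illustration, not UDR's exact loss):

```python
import numpy as np

def log_softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()          # numerical stability
    return z - np.log(np.exp(z).sum())

def listwise_ranking_loss(retriever_scores, lm_feedback):
    """Cross-entropy between the retriever's distribution over candidates
    and the target distribution induced by language-model feedback
    (e.g. log-likelihood of the gold output given each candidate)."""
    target = np.exp(log_softmax(lm_feedback))
    return float(-np.sum(target * log_softmax(retriever_scores)))
```

The loss is minimized when the retriever ranks candidates the same way the language model's feedback does.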
no code implementations • 26 Apr 2023 • Xiaorui Wang, Jun Wang, Xin Tang, Peng Gao, Rui Fang, Guotong Xie
Filter pruning is widely adopted to compress and accelerate the Convolutional Neural Networks (CNNs), but most previous works ignore the relationship between filters and channels in different layers.
no code implementations • 21 Oct 2022 • Jun Wang, Weixun Li, Changyu Hou, Xin Tang, Yixuan Qiao, Rui Fang, Pengyong Li, Peng Gao, Guotong Xie
Contrastive learning has emerged as a powerful tool for graph representation learning.
no code implementations • SemEval (NAACL) 2022 • Changyu Hou, Jun Wang, Yixuan Qiao, Peng Jiang, Peng Gao, Guotong Xie, Qizhi Lin, Xiaopeng Wang, Xiandi Jiang, Benqi Wang, Qifeng Xiao
We adopt a Transformer layer to integrate the strengths of diverse models effectively, assigning different weights to each model for different inputs.
Low Resource Named Entity Recognition
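Input-dependent model weighting of this kind can be illustrated with a simple gating step: some scoring head (here just a stand-in for the Transformer layer the authors use) rates each model per input, and the scores are softmax-normalized to fuse the models' logits. A minimal numpy sketch with hypothetical names:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gated_ensemble(model_logits, gate_scores):
    """model_logits: list of M arrays of shape (B, C), one per model;
    gate_scores: (B, M) input-dependent scores (assumed to come from a
    small attention/Transformer head). Returns fused (B, C) logits."""
    weights = softmax(gate_scores, axis=-1)               # (B, M)
    return np.einsum('bm,mbc->bc', weights, np.stack(model_logits))
```

With uniform gate scores this reduces to plain logit averaging; the learned gate departs from that average per input.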
no code implementations • 18 May 2022 • Yixuan Qiao, Hao Chen, Jun Wang, Yongquan Lai, Tuozhen Liu, Xianbin Ye, Xin Tang, Rui Fang, Peng Gao, Wenfeng Xie, Guotong Xie
This paper describes the PASH participation in TREC 2021 Deep Learning Track.
no code implementations • 11 Mar 2022 • Yang Nan, Fengyi Li, Peng Tang, Guyue Zhang, Caihong Zeng, Guotong Xie, Zhihong Liu, Guang Yang
Recognition of glomeruli lesions is the key for diagnosis and treatment planning in kidney pathology; however, the coexisting glomerular structures such as mesangial regions exacerbate the difficulties of this task.
1 code implementation • Findings (ACL) 2022 • Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, Xipeng Qiu
Previous works usually adopt heuristic metrics such as the entropy of internal outputs to measure instance difficulty, which generalizes poorly and requires threshold tuning.
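The entropy heuristic that prior early-exiting work relies on is easy to state: compute the entropy of the internal classifier's softmax and exit once it falls below a tuned threshold. A small numpy sketch (the threshold value is an arbitrary assumption, which is exactly the tuning burden criticized above):

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy of the softmax distribution over an exit's logits."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def should_exit(logits, threshold=0.2):
    """Heuristic exit criterion: stop at the current internal classifier
    if its predictive entropy (uncertainty) is below the threshold."""
    return predictive_entropy(logits) < threshold
```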
no code implementations • 2 Mar 2022 • Xianbin Ye, Ziliang Li, Fei Ma, Zongbi Yi, Pengyong Li, Jun Wang, Peng Gao, Yixuan Qiao, Guotong Xie
Anti-cancer drug discovery has largely been serendipitous. We therefore present the Open Molecular Graph Learning Benchmark, named CandidateDrug4Cancer, a challenging and realistic benchmark dataset to facilitate scalable, robust, and reproducible graph machine learning research for anti-cancer drug discovery.
no code implementations • 9 Dec 2021 • Jun Wang, Zhoujing Li, Yixuan Qiao, Qiming Qin, Peng Gao, Guotong Xie
This paper presents a novel superpixel-based approach that combines a DNN with a modified segmentation method to detect damaged buildings from VHR imagery.
no code implementations • 26 Oct 2021 • Pengyong Li, Jun Wang, Ziliang Li, Yixuan Qiao, Xianggen Liu, Fei Ma, Peng Gao, Seng Song, Guotong Xie
Self-supervised learning has gradually emerged as a powerful technique for graph representation learning.
no code implementations • 11 Oct 2021 • Xianghua Ye, Dazhou Guo, Chen-Kan Tseng, Jia Ge, Tsung-Min Hung, Ping-Ching Pai, Yanping Ren, Lu Zheng, Xinli Zhu, Ling Peng, Ying Chen, Xiaohua Chen, Chen-Yu Chou, Danni Chen, Jiaze Yu, Yuzhen Chen, Feiran Jiao, Yi Xin, Lingyun Huang, Guotong Xie, Jing Xiao, Le Lu, Senxiang Yan, Dakai Jin, Tsung-Ying Ho
252 patients from institution 1 had a treatment-planning CT (pCT) and a pair of diagnostic FDG PET/CT scans; 354 patients from the other three institutions had only pCT.
no code implementations • 23 Sep 2021 • Fengze Liu, Ke Yan, Adam Harrison, Dazhou Guo, Le Lu, Alan Yuille, Lingyun Huang, Guotong Xie, Jing Xiao, Xianghua Ye, Dakai Jin
In this work, we introduce a fast and accurate method for unsupervised 3D medical image registration.
no code implementations • 20 Sep 2021 • Dazhou Guo, Xianghua Ye, Jia Ge, Xing Di, Le Lu, Lingyun Huang, Guotong Xie, Jing Xiao, Zhongjie Liu, Ling Peng, Senxiang Yan, Dakai Jin
Lymph node station (LNS) delineation from computed tomography (CT) scans is an indispensable step in radiation oncology workflow.
no code implementations • 24 Jun 2021 • Yixuan Qiao, Hao Chen, Jun Wang, Yihao Chen, Xianbin Ye, Ziliang Li, Xianbiao Qi, Peng Gao, Guotong Xie
TextVQA requires models to read and reason about text in images to answer questions about them.
1 code implementation • ACL 2022 • Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei LI, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, Qingcai Chen
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Ranked #1 on Medical Relation Extraction on CMeIE
no code implementations • NAACL 2021 • Wei Zhu, Yuan Ni, Xiaoling Wang, Guotong Xie
In developing an online question-answering system for the medical domains, natural language inference (NLI) models play a central role in question matching and intention detection.
no code implementations • 5 May 2021 • YouBao Tang, Ke Yan, Jinzheng Cai, Lingyun Huang, Guotong Xie, Jing Xiao, JingJing Lu, Gigin Lin, Le Lu
PDNet learns comprehensive and representative deep image features for our tasks and produces more accurate results on both lesion segmentation and RECIST diameter prediction.
1 code implementation • Briefings in Bioinformatics 2021 • Pengyong Li, Jun Wang, Yixuan Qiao, Hao Chen, Yihuan Yu, Xiaojun Yao, Peng Gao, Guotong Xie, Sen Song
In MPG, we propose a powerful GNN for modelling molecular graphs, named MolGNet, and design an effective self-supervised strategy for pre-training the model at both the node and graph level.
no code implementations • 3 May 2021 • YouBao Tang, Jinzheng Cai, Ke Yan, Lingyun Huang, Guotong Xie, Jing Xiao, JingJing Lu, Gigin Lin, Le Lu
Accurately segmenting a variety of clinically significant lesions from whole-body computed tomography (CT) scans is a critical task in precision oncology imaging, denoted as universal lesion segmentation (ULS).
no code implementations • 29 Apr 2021 • Xiao-Yun Zhou, Bolin Lai, Weijian Li, Yirui Wang, Kang Zheng, Fakai Wang, ChiHung Lin, Le Lu, Lingyun Huang, Mei Han, Guotong Xie, Jing Xiao, Kuo Chang-Fu, Adam Harrison, Shun Miao
It first trains a DAG model on the labeled data and then fine-tunes the pre-trained model on the unlabeled data with a teacher-student SSL mechanism.
no code implementations • 12 Apr 2021 • Bowen Li, Xinping Ren, Ke Yan, Le Lu, Lingyun Huang, Guotong Xie, Jing Xiao, Dar-In Tai, Adam P. Harrison
Importantly, ADDLE does not expect multiple raters per image in training, meaning it can readily learn from data mined from hospital archives.
no code implementations • 24 Mar 2021 • Kang Zheng, Yirui Wang, XiaoYun Zhou, Fakai Wang, Le Lu, ChiHung Lin, Lingyun Huang, Guotong Xie, Jing Xiao, Chang-Fu Kuo, Shun Miao
Specifically, we propose a new semi-supervised self-training algorithm that trains the BMD regression model using images coupled with DEXA-measured BMDs and unlabeled images with pseudo BMDs.
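The self-training loop can be sketched generically: fit a teacher on the DEXA-labeled images, pseudo-label the unlabeled pool, and refit the student on the union. The sketch below substitutes a linear least-squares model for the BMD regression network, so everything outside the loop structure is an assumption:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit (a toy stand-in for the regression network)."""
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def predict(w, X):
    return np.c_[X, np.ones(len(X))] @ w

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Teacher labels the unlabeled pool with pseudo targets; the student
    is refit on labeled + pseudo-labeled data each round."""
    w = fit_linear(X_lab, y_lab)
    for _ in range(rounds):
        pseudo = predict(w, X_unlab)                 # pseudo BMDs
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        w = fit_linear(X_all, y_all)
    return w
```

In practice the paper's setting adds the teacher-student machinery of deep networks; the loop above only shows the pseudo-labeling structure.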
no code implementations • 21 Dec 2020 • Pengyong Li, Jun Wang, Yixuan Qiao, Hao Chen, Yihuan Yu, Xiaojun Yao, Peng Gao, Guotong Xie, Sen Song
Here, we propose a novel Molecular Pre-training Graph-based deep learning framework, named MPG, that learns molecular representations from large-scale unlabeled molecules.
no code implementations • 9 Dec 2020 • Jun Wang, Shaoguo Wen, Kaixing Chen, Jianghua Yu, Xin Zhou, Peng Gao, Changsheng Li, Guotong Xie
Active learning generally involves querying the most representative samples for human labeling, which has been widely studied in many fields such as image classification and object detection.
no code implementations • ACL 2021 • Wei Zhu, Xipeng Qiu, Yuan Ni, Guotong Xie
An ablation study demonstrates the necessity of our search-space design and the effectiveness of our search method.
3 code implementations • 4 Sep 2020 • Wei Zhu, Xiaoling Wang, Xipeng Qiu, Yuan Ni, Guotong Xie
Though transformer architectures have shown dominance in many natural language understanding tasks, several issues in training transformer models remain unsolved, especially the need for a principled warm-up scheme, which has proven important for stable training, and the question of whether the task at hand prefers a scaled attention product or not.
no code implementations • WS 2019 • Xiepeng Li, Zhexi Zhang, Wei Zhu, Zheng Li, Yuan Ni, Peng Gao, Junchi Yan, Guotong Xie
We have experimented with both (a) improving the fine-tuning of pre-trained language models on a task with a small dataset by leveraging datasets of similar tasks, and (b) incorporating the distributional representations of a KG into the representations of pre-trained language models, via simple concatenation or multi-head attention.
no code implementations • WS 2019 • Wei Zhu, Xiaofeng Zhou, Keqiang Wang, Xun Luo, Xiepeng Li, Yuan Ni, Guotong Xie
Transfer learning from the NLI task to the RQE task is also explored and proves useful in improving the results of fine-tuning MT-DNN-large.
no code implementations • 18 Apr 2019 • Ying Wang, Xiao Xu, Tao Jin, Xiang Li, Guotong Xie, Jian-Min Wang
In addition, for unordered medical activity sets, existing medical RL methods use a simple pooling strategy, which makes the activities' contributions to learning indistinguishable.
no code implementations • 31 Jul 2017 • Jing Mei, Eryu Xia, Xiang Li, Guotong Xie
Precision medicine requires precise disease risk prediction models.