no code implementations • 5 Jul 2023 • Saisai Ding, Jun Wang, Juncheng Li, Jun Shi
The prototypical Transformer (PT) is developed to reduce redundant instances in bags by integrating prototypical learning into the Transformer architecture.
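A minimal sketch of this idea, assuming a small set of learnable prototype tokens that cross-attend over the bag's instance tokens to pool redundant instances; the class and parameter names below (`PrototypeReducer`, `num_prototypes`) are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class PrototypeReducer(nn.Module):
    """Reduce a bag of N instance tokens to K prototype tokens (sketch)."""
    def __init__(self, dim: int = 512, num_prototypes: int = 16, num_heads: int = 8):
        super().__init__()
        # Learnable prototype queries that summarize the whole bag.
        self.prototypes = nn.Parameter(torch.randn(1, num_prototypes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, instances: torch.Tensor) -> torch.Tensor:
        # instances: (B, N, dim) patch embeddings of one WSI bag.
        B = instances.size(0)
        queries = self.prototypes.expand(B, -1, -1)
        # Each prototype attends over all instances, pooling redundant ones.
        pooled, _ = self.attn(queries, instances, instances)
        return self.norm(pooled)  # (B, K, dim), with K << N

bag = torch.randn(2, 1000, 512)   # 2 bags of 1000 instances each
reduced = PrototypeReducer()(bag)
print(reduced.shape)              # torch.Size([2, 16, 512])
```

Cross-attention from a fixed number of learnable queries is a common way to compress an arbitrarily large bag into a fixed-size summary before further Transformer layers.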
no code implementations • 25 May 2023 • Saisai Ding, Juncheng Li, Jun Wang, Shihui Ying, Jun Shi
The key idea of MEGT is to adopt two independent Efficient Graph-based Transformer (EGT) branches to process the low-resolution and high-resolution patch embeddings (i.e., tokens in a Transformer) of WSIs, respectively, and then fuse these tokens via a multi-scale feature fusion module (MFFM).
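A rough sketch of the two-branch layout, using plain Transformer encoders for the branches and a cross-attention block as a stand-in for the MFFM; all module names and hyperparameters here are assumptions, not the paper's EGT/MFFM design:

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Two independent token branches fused by cross-attention (sketch)."""
    def __init__(self, dim: int = 384, num_heads: int = 6, depth: int = 2):
        super().__init__()
        def make_branch() -> nn.Module:
            return nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, num_heads, batch_first=True),
                num_layers=depth,
            )
        self.low_branch = make_branch()   # processes low-resolution tokens
        self.high_branch = make_branch()  # processes high-resolution tokens
        # Fusion: low-res tokens query the high-res tokens (MFFM stand-in).
        self.fuse = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        low = self.low_branch(low)        # (B, N_low, dim)
        high = self.high_branch(high)     # (B, N_high, dim)
        fused, _ = self.fuse(low, high, high)
        return self.norm(low + fused)     # (B, N_low, dim)

out = TwoBranchFusion()(torch.randn(1, 64, 384), torch.randn(1, 256, 384))
print(out.shape)  # torch.Size([1, 64, 384])
```

Keeping the two resolutions in separate branches lets each scale specialize before fusion, rather than mixing magnifications in one token sequence.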
no code implementations • 31 May 2022 • Jun Shi, Yuanming Zhang, Zheng Li, Xiangmin Han, Saisai Ding, Jun Wang, Shihui Ying
In this work, we propose a pseudo-data-based self-supervised federated learning (FL) framework, named SSL-FT-BT, to improve both the diagnostic accuracy and generalization of CAD models.
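For the federated part only, a minimal FedAvg-style sketch, assuming a placeholder self-supervised pretext task (reconstructing the pseudo data); SSL-FT-BT's actual pseudo-data generation, pretext objective, and training schedule are not reproduced here, and `local_ssl_step`/`fed_avg` are hypothetical helpers:

```python
import copy
import torch
import torch.nn as nn

def local_ssl_step(model: nn.Module, pseudo_batch: torch.Tensor) -> nn.Module:
    """One local client update on pseudo data (placeholder pretext task)."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = nn.functional.mse_loss(model(pseudo_batch), pseudo_batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model

def fed_avg(models: list[nn.Module]) -> nn.Module:
    """Server step: equal-weight average of client parameters."""
    global_model = copy.deepcopy(models[0])
    state = global_model.state_dict()
    for key in state:
        state[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]
        ).mean(dim=0)
    global_model.load_state_dict(state)
    return global_model

net = nn.Linear(32, 32)
clients = [local_ssl_step(net, torch.randn(8, 32)) for _ in range(3)]
global_net = fed_avg(clients)  # aggregated model for the next round
```

Each round, clients train locally on their own (pseudo) data and only model weights travel to the server, which is what keeps the raw medical data on-site.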