no code implementations • 17 Nov 2023 • Xiaorong Wang, Clara Na, Emma Strubell, Sorelle Friedler, Sasha Luccioni
Despite the popularity of the "pre-train then fine-tune" paradigm in the NLP community, existing work quantifying energy costs and associated carbon emissions has largely focused on language model pre-training.
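The abstract does not say which measurement tooling the authors used; as a hedged sketch, the codecarbon library is one common way to estimate the energy and emissions of a fine-tuning run. The model, data loader, and project name below are illustrative assumptions, not this paper's setup.

```python
# Minimal sketch: wrap a fine-tuning loop in a codecarbon EmissionsTracker.
# Everything except the tracker API (start/stop) is a hypothetical placeholder.
from codecarbon import EmissionsTracker

def fine_tune(model, loader, optimizer, loss_fn, epochs=3):
    tracker = EmissionsTracker(project_name="fine-tune-demo")  # assumed name
    tracker.start()
    try:
        for _ in range(epochs):
            for batch, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(batch), labels)
                loss.backward()
                optimizer.step()
    finally:
        emissions_kg = tracker.stop()  # estimated kg CO2-eq for the run
    print(f"Estimated emissions: {emissions_kg:.4f} kg CO2-eq")
```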
no code implementations • 8 Jun 2022 • Fangxin Shang, Yehui Yang, Dalu Yang, Junde Wu, Xiaorong Wang, Yanwu Xu
Pre-training is essential to deep learning model performance, especially in medical image analysis tasks, where training data are often limited.
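As an illustration of the transfer-learning recipe this entry alludes to, the sketch below initializes a classifier from ImageNet weights and swaps in a new head for a small labeled dataset. The backbone choice and freezing strategy are assumptions; the paper's own pre-training scheme may differ.

```python
# Illustrative transfer-learning pattern, not this paper's method:
# start from ImageNet-pretrained weights, replace the classifier head.
import torch.nn as nn
from torchvision import models

def build_pretrained_classifier(num_classes: int) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    # Optionally freeze the backbone when labeled data are scarce.
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # head trains from scratch
    return model
```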
no code implementations • 31 May 2022 • Wenshuo Zhou, Dalu Yang, Binghong Wu, Yehui Yang, Junde Wu, Xiaorong Wang, Lei Wang, Haifeng Huang, Yanwu Xu
Deep learning-based medical imaging classification models usually suffer from the domain shift problem, where classification performance drops when training data and real-world data differ in imaging equipment manufacturer, image acquisition protocol, patient population, and so on.
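The excerpt names the problem but not the remedy the authors propose. One common illustrative technique for reducing domain shift is CORAL (correlation alignment), which penalizes the gap between second-order feature statistics of the two domains; it is shown here only as a sketch, not as this paper's method.

```python
# Hedged sketch: CORAL loss aligns feature covariances across domains.
# Shown as a generic domain-adaptation example, not the authors' approach.
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """source, target: (batch, features) activations from each domain."""
    d = source.size(1)
    cs = torch.cov(source.T)  # source feature covariance (features x features)
    ct = torch.cov(target.T)  # target feature covariance
    return ((cs - ct) ** 2).sum() / (4 * d * d)
```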
1 code implementation • 16 May 2022 • Fangxin Shang, Siqi Wang, Xiaorong Wang, Yehui Yang
Nearly all the top solutions rely on 2D convolutional networks and sequential models (bidirectional GRU or LSTM) to extract intra-slice and inter-slice features, respectively.
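A minimal sketch of that described pattern: a 2D CNN encodes each slice (intra-slice features) and a bidirectional GRU models the slice sequence (inter-slice features). The backbone, hidden size, and pooling are illustrative assumptions.

```python
# Sketch of the 2D-CNN + bidirectional-GRU pattern the entry describes.
# All architecture details here are assumed for illustration.
import torch
import torch.nn as nn
from torchvision import models

class SliceSequenceClassifier(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # yields a 512-d feature per slice
        self.encoder = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, slices, 3, H, W)
        b, s = volume.shape[:2]
        feats = self.encoder(volume.flatten(0, 1)).view(b, s, -1)  # intra-slice
        seq, _ = self.gru(feats)                                   # inter-slice
        return self.head(seq.mean(dim=1))
```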
1 code implementation • 15 Sep 2021 • Binghong Wu, Yehui Yang, Dalu Yang, Junde Wu, Xiaorong Wang, Haifeng Huang, Lei Wang, Yanwu Xu
Based on focal loss with ATSS-R50, our approach achieves 40.5 AP, surpassing the state-of-the-art QFL (Quality Focal Loss, 39.9 AP) and VFL (Varifocal Loss, 40.1 AP).
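For reference, the baseline named here is the standard focal loss (Lin et al., 2017). The sketch below uses the common defaults alpha=0.25, gamma=2.0 from that original formulation, not values reported by this paper.

```python
# Reference sketch of the standard focal loss baseline named in the entry.
# alpha/gamma are the usual defaults, not this paper's settings.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """logits, targets: (N, num_classes); targets are 0/1 labels."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()    # down-weights easy examples
```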