no code implementations • 8 Feb 2023 • Gary Y. Li, Junyu Chen, Se-In Jang, Kuang Gong, Quanzheng Li
Inspired by the recent success of Vision Transformers and advances in multi-modal image analysis, we propose a novel segmentation model, dubbed Cross-Modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions. To validate the effectiveness of the proposed method, we performed experiments on the HECKTOR 2021 challenge dataset and compared it with nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based methods such as UNETR and Swin UNETR.
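The paper's CMA module operates inside the Swin architecture; as a rough illustration of the underlying idea only, here is a minimal numpy sketch of cross-modal attention where queries come from one modality (e.g., PET tokens) and keys/values from the other (e.g., CT tokens). The function name and the random projection weights are illustrative stand-ins for learned parameters, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(pet_tokens, ct_tokens, d_k=32, seed=0):
    """Scaled dot-product cross-attention: queries from one modality (PET),
    keys/values from the other (CT). The projection matrices are random
    stand-ins for learned weights (hypothetical, for illustration)."""
    rng = np.random.default_rng(seed)
    d = pet_tokens.shape[-1]
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q = pet_tokens @ Wq                                # (N_pet, d_k)
    K = ct_tokens @ Wk                                 # (N_ct, d_k)
    V = ct_tokens @ Wv                                 # (N_ct, d_k)
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)    # (N_pet, N_ct)
    return attn @ V, attn
```

Each PET token thus attends over all CT tokens; in SwinCross this kind of exchange happens within shifted windows at multiple resolutions.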
no code implementations • 21 Dec 2022 • Ye Li, Junyu Chen, Se-In Jang, Kuang Gong, Quanzheng Li
Inspired by the recent success of Transformers in Natural Language Processing and Vision Transformers in Computer Vision, many researchers in the medical imaging community have flocked to Transformer-based networks for various mainstream medical tasks such as classification, segmentation, and estimation.
no code implementations • 13 Sep 2022 • Kuang Gong, Keith A. Johnson, Georges El Fakhri, Quanzheng Li, Tinsu Pan
Regional and surface quantification shows that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieves the best performance.
no code implementations • 7 Sep 2022 • Se-In Jang, Tinsu Pan, Ye Li, Pedram Heidari, Junyu Chen, Quanzheng Li, Kuang Gong
In this work, we proposed an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs.
no code implementations • 15 Mar 2022 • Ye Li, Jianan Cui, Junyu Chen, Guodong Zeng, Scott Wollenweber, Floris Jansen, Se-In Jang, Kyungsang Kim, Kuang Gong, Quanzheng Li
Our hypothesis is that by explicitly providing the local relative noise level of the input image to a deep convolutional neural network (DCNN), the DCNN can outperform the same network trained on image appearance alone.
no code implementations • 5 Jan 2022 • Siqi Li, Kuang Gong, Ramsey D. Badawi, Edward J. Kim, Jinyi Qi, Guobao Wang
In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network.
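For context, the kernel method represents the PET image as x = Kα, with the kernel matrix K built from prior-image feature vectors; the paper's contribution is generating the coefficient image α with a convolutional network. Below is a minimal numpy sketch of the kernel-matrix construction only; the Gaussian kernel, kNN sparsification, and row normalization are common choices assumed here, not taken from this specific paper:

```python
import numpy as np

def build_kernel_matrix(features, sigma=1.0, k=8):
    """Gaussian kernel matrix from prior-image feature vectors (one per voxel),
    sparsified to the k largest entries per row and row-normalized, so the
    image can be represented as x = K @ alpha with a coefficient image alpha."""
    n = features.shape[0]
    # pairwise squared distances between feature vectors
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # keep only the k largest entries per row (kNN sparsification)
    idx = np.argsort(K, axis=1)[:, :-k]
    K[np.arange(n)[:, None], idx] = 0.0
    K /= K.sum(axis=1, keepdims=True)   # row-normalize
    return K
```

Because K encodes similarity in the prior images, smoothing implied by x = Kα follows anatomical structure rather than acting uniformly.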
no code implementations • 18 Jun 2021 • Kuang Gong, Ciprian Catana, Jinyi Qi, Quanzheng Li
Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms by combining the PET imaging model and tracer kinetics in an integrated framework.
no code implementations • 14 Sep 2020 • Jianan Cui, Kuang Gong, Paul Han, Huafeng Liu, Quanzheng Li
After the network was trained, the super-resolution (SR) image was generated by supplying the upsampled LR ASL image and corresponding T1-weighted image to the generator of the last layer.
no code implementations • 13 Sep 2020 • Nuobei Xie, Kuang Gong, Ning Guo, Zhixing Qin, Jianan Cui, Zhifang Wu, Huafeng Liu, Quanzheng Li
The Patlak model is widely used in 18F-FDG dynamic positron emission tomography (PET) imaging, where the estimated parametric images reveal important biochemical and physiological information.
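The Patlak graphical analysis linearizes the late-time tissue activity: for t > t*, C_T(t)/C_p(t) = K_i · (∫₀ᵗ C_p(τ)dτ)/C_p(t) + V, so the net influx rate K_i is the slope of the Patlak plot and V its intercept. A self-contained numpy sketch with a toy plasma input function (all curves and constants are illustrative, not from the paper):

```python
import numpy as np

# Patlak plot: for t > t*,  C_T(t)/C_p(t) = Ki * X(t) + V,
# where X(t) = (integral_0^t C_p(tau) dtau) / C_p(t).
t = np.linspace(0.1, 60.0, 200)                      # minutes (illustrative)
Cp = 10.0 * np.exp(-0.1 * t) + 1.0                   # toy plasma input function
# trapezoidal cumulative integral of Cp
int_Cp = np.concatenate(([0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))

Ki_true, V_true = 0.05, 0.3
Ct = Ki_true * int_Cp + V_true * Cp                  # tissue curve from the model

x = int_Cp / Cp                                      # "Patlak time"
y = Ct / Cp
late = t > 20.0                                      # fit the late, linear portion
Ki_est, V_est = np.polyfit(x[late], y[late], 1)      # slope = Ki, intercept = V
```

In practice the same slope/intercept fit is applied voxel-wise to produce the K_i and V parametric images the abstract refers to.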
no code implementations • 16 Dec 2019 • Nuobei Xie, Kuang Gong, Ning Guo, Zhixin Qin, Zhifang Wu, Huafeng Liu, Quanzheng Li
Positron emission tomography (PET) is widely used for clinical diagnosis.
no code implementations • 9 Jun 2019 • Dufan Wu, Kuang Gong, Kyungsang Kim, Quanzheng Li
In this paper, we proposed a training method that learns denoising neural networks from noisy training samples only.
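The principle behind training a denoiser on noisy samples only (in the spirit of Noise2Noise-style methods) can be shown with a toy least-squares "denoiser": when the target is an independent, zero-mean noisy realization of the same scene, the fitted estimator matches the one fitted against the clean target in expectation. A minimal numpy sketch, with data and the scalar model chosen purely for illustration (the paper's actual networks and training scheme are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=100_000)           # underlying signal
x1 = clean + rng.normal(0.0, 0.5, clean.shape)        # noisy input realization
x2 = clean + rng.normal(0.0, 0.5, clean.shape)        # independent noisy target

# Optimal scalar "denoiser" a * x1, fitted by least squares:
a_noisy = (x1 @ x2) / (x1 @ x1)       # trained on noisy targets only
a_clean = (x1 @ clean) / (x1 @ x1)    # trained on clean targets (oracle)
# Because the target noise is independent and zero-mean, a_noisy ~= a_clean.
```

The same cancellation argument is what lets a full denoising network be trained without any clean reference images.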
no code implementations • 4 Jul 2018 • Kuang Gong, Kyungsang Kim, Jianan Cui, Ning Guo, Ciprian Catana, Jinyi Qi, Quanzheng Li
The representation is expressed using a deep neural network with the patient's prior images as network input.
no code implementations • 17 Dec 2017 • Kuang Gong, Jaewon Yang, Kyungsang Kim, Georges El Fakhri, Youngho Seo, Quanzheng Li
With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis of forty patient data sets shows it is superior to other Dixon-based methods.
1 code implementation • 9 Oct 2017 • Kuang Gong, Jiahui Guan, Kyungsang Kim, Xuezhu Zhang, Georges El Fakhri, Jinyi Qi, Quanzheng Li
An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool.