1 code implementation • 16 Mar 2023 • Mingliang Dai, Zhizhong Huang, Jiaqi Gao, Hongming Shan, Junping Zhang
To alleviate the negative impact of noisy annotations, we propose a novel crowd counting model with one convolutional head and one transformer head, in which the two heads supervise each other in noisy areas, a scheme we term Cross-Head Supervision.
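As a hedged illustration of this idea, the sketch below shows one way two such heads could supervise each other: each head follows the annotation where it is believed to be clean and the other head's detached prediction where it is flagged as noisy. The head outputs, the noisy-region mask, and the MSE criterion are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of cross-head supervision for crowd counting (illustrative
# only): each head is supervised by the ground-truth density in clean regions
# and by the other head's detached prediction in regions flagged as noisy.
import torch
import torch.nn.functional as F

def cross_head_loss(conv_pred, trans_pred, gt_density, noisy_mask):
    """All arguments are tensors of shape (B, 1, H, W); noisy_mask holds 1
    where the annotation is considered unreliable."""
    clean = 1.0 - noisy_mask
    loss_conv = (F.mse_loss(conv_pred * clean, gt_density * clean)
                 + F.mse_loss(conv_pred * noisy_mask, trans_pred.detach() * noisy_mask))
    loss_trans = (F.mse_loss(trans_pred * clean, gt_density * clean)
                  + F.mse_loss(trans_pred * noisy_mask, conv_pred.detach() * noisy_mask))
    return loss_conv + loss_trans
```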
no code implementations • 13 Mar 2023 • Zhizhong Huang, Junping Zhang, Hongming Shan
In this paper, we present TCL, a novel twin contrastive learning model to learn robust representations and handle noisy labels for classification.
Ranked #18 on Image Classification on mini WebVision 1.0
no code implementations • 21 Feb 2023 • Zhihao Chen, Chuang Niu, Ge Wang, Hongming Shan
Here, we propose to link in-plane and through-plane transformers for simultaneous in-plane denoising and through-plane deblurring, termed LIT-Former, which can efficiently synergize in-plane and through-plane sub-tasks for 3D CT imaging and enjoys the advantages of both convolution and transformer networks.
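A rough, unofficial sketch of what linking in-plane and through-plane attention could look like on a 3D CT volume is given below; the block layout, residual fusion, and use of standard multi-head self-attention are assumptions, not the LIT-Former architecture itself.

```python
# Illustrative sketch (not the official LIT-Former): apply multi-head
# self-attention separately along the in-plane (H*W) and through-plane (D)
# token axes of a 3D volume, then fuse the two branches with a residual sum.
import torch
import torch.nn as nn

class InThroughPlaneAttention(nn.Module):
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.in_plane = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.through_plane = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        # in-plane branch: the H*W positions of each slice are tokens
        t_in = x.permute(0, 2, 3, 4, 1).reshape(b * d, h * w, c)
        t_in, _ = self.in_plane(t_in, t_in, t_in)
        t_in = t_in.reshape(b, d, h, w, c).permute(0, 4, 1, 2, 3)
        # through-plane branch: the D slices at each (H, W) location are tokens
        t_th = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, d, c)
        t_th, _ = self.through_plane(t_th, t_th, t_th)
        t_th = t_th.reshape(b, h, w, d, c).permute(0, 4, 3, 1, 2)
        return x + t_in + t_th                  # residual fusion of both branches

# Example usage: InThroughPlaneAttention(channels=64)(x) for x of shape (B, 64, D, H, W).
```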
no code implementations • 15 Jan 2023 • Yiming Lei, Zilong Li, Yangyang Li, Junping Zhang, Hongming Shan
However, the manifold of the resultant feature representations does not maintain the intrinsic ordinal relations of interest, which hinders the effectiveness of image ordinal estimation.
1 code implementation • 14 Nov 2022 • Jiaxin Ye, Xin-Cheng Wen, Yujie Wei, Yong Xu, KunHong Liu, Hongming Shan
Specifically, TIM-Net first employs temporal-aware blocks to learn temporal affective representation, then integrates complementary information from the past and the future to enrich contextual representations, and finally, fuses multiple time scale features for better adaptation to the emotional variation.
Ranked #1 on Speech Emotion Recognition on EMOVO
no code implementations • 21 Oct 2022 • Jingqi Li, Jiaqi Gao, Yuzhen Zhang, Hongming Shan, Junping Zhang
Specifically, we first extract the motion features from the encoded motion sequences in the shallow layer.
no code implementations • 17 Oct 2022 • Zhizhong Huang, Junping Zhang, Hongming Shan
Extensive experimental results on five benchmark cross-age datasets demonstrate that MTLFace yields superior performance for both AIFR and FAS.
no code implementations • 24 Jul 2022 • Zilong Li, Qi Gao, Yaping Wu, Chuang Niu, Junping Zhang, Meiyun Wang, Ge Wang, Hongming Shan
The presence of high-density objects such as metal implants and dental fillings can introduce severe streak-like artifacts into computed tomography (CT) images, greatly hampering subsequent diagnosis.
1 code implementation • 9 May 2022 • Weiyi Yu, Zhizhong Huang, Junping Zhang, Hongming Shan
To tackle this issue, we introduce a self-adaptive normalization network, termed SAN-Net, to achieve adaptive generalization on unseen sites for stroke lesion segmentation.
no code implementations • 6 May 2022 • Jiaqi Gao, Jingqi Li, Hongming Shan, Yanyun Qu, James Z. Wang, Fei-Yue Wang, Junping Zhang
Crowd counting has important applications in public safety and pandemic control.
no code implementations • 29 Mar 2022 • Wenjun Xia, Hongming Shan, Ge Wang, Yi Zhang
Since 2016, deep learning (DL) has advanced tomographic imaging with remarkable successes, especially in low-dose computed tomography (LDCT) imaging.
no code implementations • 22 Mar 2022 • Rodrigo de Barros Vimieiro, Chuang Niu, Hongming Shan, Lucas Rodrigues Borges, Ge Wang, Marcelo Andrade da Costa Vieira
To accurately control the network operating point, in terms of the noise and blur of the restored image, we propose a loss function that minimizes the signal bias and matches the residual noise between the input and the output.
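One hedged way to express such an objective is sketched below: a term that penalizes the mean signal bias against the full-dose reference plus a term that pulls the output's residual-noise level toward the input's. The reference image, the standard-deviation noise measure, and the weight lam are assumptions, not the paper's exact loss.

```python
# Hedged sketch of a bias-minimizing, noise-matching loss; the noise measure
# (standard deviation of the residual) and the weight lam are assumptions.
import torch

def bias_noise_matching_loss(restored, low_dose, full_dose, lam=1.0):
    bias = (restored - full_dose).mean().abs()      # mean signal bias vs. reference
    noise_in = (low_dose - full_dose).std()         # residual noise of the input
    noise_out = (restored - full_dose).std()        # residual noise of the output
    return bias + lam * (noise_out - noise_in).abs()
```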
no code implementations • 15 Mar 2022 • Yiming Lei, Haiping Zhu, Junping Zhang, Hongming Shan
To improve model generalization with ordinal information, we propose a novel meta ordinal regression forest (MORF) method for medical image classification with ordinal labels, which learns the ordinal relationship through the combination of convolutional neural network and differential forest in a meta-learning framework.
1 code implementation • 16 Feb 2022 • Aaron Babier, Rafid Mahmood, Binghao Zhang, Victor G. L. Alves, Ana Maria Barragán-Montero, Joel Beaudry, Carlos E. Cardenas, Yankui Chang, Zijie Chen, Jaehee Chun, Kelly Diaz, Harold David Eraso, Erik Faustmann, Sibaji Gaj, Skylar Gay, Mary Gronberg, Bingqi Guo, Junjun He, Gerd Heilemann, Sanchit Hira, Yuliang Huang, Fuxin Ji, Dashan Jiang, Jean Carlo Jimenez Giraldo, Hoyeon Lee, Jun Lian, Shuolin Liu, Keng-Chi Liu, José Marrugo, Kentaro Miki, Kunio Nakamura, Tucker Netherton, Dan Nguyen, Hamidreza Nourzadeh, Alexander F. I. Osman, Zhao Peng, José Darío Quinto Muñoz, Christian Ramsl, Dong Joo Rhee, Juan David Rodriguez, Hongming Shan, Jeffrey V. Siebers, Mumtaz H. Soomro, Kay Sun, Andrés Usuga Hoyos, Carlos Valderrama, Rob Verbeek, Enpei Wang, Siri Willems, Qi Wu, Xuanang Xu, Sen yang, Lulin Yuan, Simeng Zhu, Lukas Zimmermann, Kevin L. Moore, Thomas G. Purdie, Andrea L. McNiven, Timothy C. Y. Chan
The dose predictions were input to four optimization models to form 76 unique KBP pipelines that generated 7600 plans.
1 code implementation • 23 Nov 2021 • Zhizhong Huang, Jie Chen, Junping Zhang, Hongming Shan
The strengths of ProPos are that it avoids the class collision issue and yields uniform representations, well-separated clusters, and within-cluster compactness.
Ranked #1 on Image Clustering on Imagenet-dog-15
1 code implementation • 12 Nov 2021 • Hongming Shan, Rodrigo de Barros Vimieiro, Lucas Rodrigues Borges, Marcelo Andrade da Costa Vieira, Ge Wang
Results showed that the perceptual loss function (PL4) is able to achieve virtually the same noise level as a full-dose acquisition, while introducing less signal bias than other loss functions.
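As a point of reference, a generic perceptual loss compares feature maps of the restored and full-dose images extracted by a fixed pretrained network; the VGG-16 backbone, the layer cutoff, and the single-channel-to-RGB replication below are assumptions and not necessarily how PL4 is defined in the paper.

```python
# Generic perceptual-loss sketch: compare fixed VGG-16 feature maps of the
# restored and full-dose images; backbone and layer index are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(torch.nn.Module):
    def __init__(self, layer_idx=16):              # feature depth is an assumption
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer_idx].eval()
        for p in self.features.parameters():
            p.requires_grad = False                 # keep the feature extractor frozen

    def forward(self, restored, target):
        # single-channel images are repeated to 3 channels to match VGG's input
        restored = restored.repeat(1, 3, 1, 1)
        target = target.repeat(1, 3, 1, 1)
        return F.mse_loss(self.features(restored), self.features(target))
```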
1 code implementation • 24 Aug 2021 • Zhizhong Huang, Junping Zhang, Yi Zhang, Hongming Shan
To better regularize the LDCT denoising model, this paper proposes a novel method, termed DU-GAN, which leverages U-Net based discriminators in the GAN framework to learn both global and local differences between the denoised and normal-dose images in the image and gradient domains.
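The gradient-domain branch can be illustrated with a simple Sobel operator whose output would feed a second U-Net-based discriminator, while the first discriminator sees the image itself; the Sobel kernel below is an assumption, not necessarily the gradient operator used in DU-GAN.

```python
# Hedged sketch: compute an image-gradient map (Sobel) so one discriminator
# can judge the gradient domain while another judges the image domain.
import torch
import torch.nn.functional as F

def sobel_gradient(img):                            # img: (B, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx.to(img), padding=1)
    gy = F.conv2d(img, ky.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

# d_img(denoised) and d_grad(sobel_gradient(denoised)) would then be the two
# U-Net-based discriminators trained adversarially against the generator.
```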
1 code implementation • 15 May 2021 • Zhizhong Huang, Shouzhen Chen, Junping Zhang, Hongming Shan
Age progression and regression aim to synthesize photorealistic appearance of a given face image with aging and rejuvenation effects, respectively.
1 code implementation • 27 Mar 2021 • Yiqun Liu, Yi Zeng, Jian Pu, Hongming Shan, Peiyang He, Junping Zhang
In this work, we propose a self-supervised gait recognition method, termed SelfGait, which takes advantage of the massive, diverse, unlabeled gait data as a pre-training process to improve the representation abilities of spatiotemporal backbones.
no code implementations • 24 Mar 2021 • Zexin Lu, Wenjun Xia, Yongqiang Huang, Hongming Shan, Hu Chen, Jiliu Zhou, Yi Zhang
Recent advances in neural architecture search (NAS) have shown that the network architecture has a dramatic effect on model performance, indicating that current network architectures for LDCT may be sub-optimal.
1 code implementation • 17 Mar 2021 • Chuang Niu, Hongming Shan, Ge Wang
In this paper, we present a Semantic Pseudo-labeling-based Image ClustEring (SPICE) framework, which divides the clustering network into a feature model for measuring the instance-level similarity and a clustering head for identifying the cluster-level discrepancy.
Ranked #1 on Image Clustering on CIFAR-100
1 code implementation • CVPR 2021 • Zhizhong Huang, Junping Zhang, Hongming Shan
We further validate MTLFace on two popular general face recognition datasets, showing competitive performance for face recognition in the wild.
Ranked #1 on Age-Invariant Face Recognition on FG-NET
no code implementations • 1 Feb 2021 • Zhizhong Huang, Junping Zhang, Hongming Shan
Although impressive results have been achieved for age progression and regression, two major issues remain in generative adversarial network (GAN)-based methods: 1) conditional GAN (cGAN)-based methods can learn various effects between any two age groups in a single model, but are insufficient to characterize some specific patterns because the convolution filters are completely shared; and 2) GAN-based methods that use several models to learn effects independently can capture such specific patterns, but they are cumbersome and require age labels in advance.
no code implementations • 31 Jan 2021 • Yiming Lei, Hongming Shan, Junping Zhang
In this paper, we propose a Meta Ordinal Weighting Network (MOW-Net) to explicitly align each training sample with a meta ordinal set (MOS) containing a few samples from all classes.
2 code implementations • 7 Dec 2020 • Zhizhong Huang, Shouzhen Chen, Junping Zhang, Hongming Shan
Although impressive results have been achieved with conditional generative adversarial networks (cGANs), existing cGAN-based methods typically use a single network to learn various aging effects between any two different age groups.
no code implementations • 7 Dec 2020 • Yiming Lei, Haiping Zhu, Junping Zhang, Hongming Shan
Recently, an unsure data model (UDM) was proposed to incorporate those unsure nodules by formulating this problem as an ordinal regression, showing better performance than traditional binary classification.
2 code implementations • 16 Aug 2020 • Hanqing Chao, Hongming Shan, Fatemeh Homayounieh, Ramandeep Singh, Ruhani Doda Khera, Hengtao Guo, Timothy Su, Ge Wang, Mannudeep K. Kalra, Pingkun Yan
Cancer patients have a higher risk of cardiovascular disease (CVD) mortality than the general population.
no code implementations • 7 Aug 2020 • Haiping Zhu, Hongming Shan, Yuheng Zhang, Lingfu Che, Xiaoyang Xu, Junping Zhang, Jianbo Shi, Fei-Yue Wang
We propose a novel ordinal regression approach, termed Convolutional Ordinal Regression Forest or CORF, for image ordinal estimation, which can integrate ordinal regression and differentiable decision trees with a convolutional neural network for obtaining precise and stable global ordinal relationships.
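A minimal sketch of a differentiable (soft) decision tree that could sit on top of CNN features is shown below; the tree depth, the sigmoid routing, and the softmax leaf distributions are illustrative assumptions rather than the exact CORF construction.

```python
# Minimal soft decision-tree head over CNN features (illustrative): samples
# are routed probabilistically through sigmoid splits, and the prediction is
# a routing-weighted mixture of the leaf distributions.
import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    def __init__(self, in_dim, depth=3, num_classes=5):
        super().__init__()
        self.depth = depth
        self.num_leaves = 2 ** depth
        self.splits = nn.Linear(in_dim, self.num_leaves - 1)   # one split per inner node
        self.leaves = nn.Parameter(torch.randn(self.num_leaves, num_classes))

    def forward(self, feats):                                  # feats: (B, in_dim)
        probs = torch.sigmoid(self.splits(feats))              # split probabilities
        mu = feats.new_ones(feats.size(0), 1)                  # probability of reaching the root
        begin = 0
        for level in range(self.depth):
            n = 2 ** level
            p = probs[:, begin:begin + n]
            # each node sends its mass left with prob p and right with prob 1 - p
            mu = torch.stack([mu * p, mu * (1.0 - p)], dim=2).flatten(1)
            begin += n
        return mu @ torch.softmax(self.leaves, dim=1)          # (B, num_classes)
```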
no code implementations • 4 Aug 2020 • Weiwen Wu, Dianlin Hu, Wenxiang Cong, Hongming Shan, Shao-Yu Wang, Chuang Niu, Pingkun Yan, Hengyong Yu, Varut Vardhanabhuti, Ge Wang
ACID synergizes a deep reconstruction network trained on big data, kernel awareness from CS-inspired processing, and iterative refinement to minimize the data residual relative to real measurement.
no code implementations • 8 Jul 2020 • Chuang Niu, Wenxiang Cong, Fenglei Fan, Hongming Shan, Mengzhou Li, Jimin Liang, Ge Wang
Deep neural network based methods have achieved promising results for CT metal artifact reduction (MAR), most of which use many synthesized paired images for training.
no code implementations • 23 Jun 2020 • Qing Lyu, Hongming Shan, Yibin Xie, Debiao Li, Ge Wang
As compared to computed tomography (CT), MRI, however, requires a long scan time, which inevitably induces motion artifacts and causes patient discomfort.
1 code implementation • 9 Dec 2019 • Huidong Xie, Hongming Shan, Wenxiang Cong, Chi Liu, Xiaohua Zhang, Shaohua Liu, Ruola Ning, Ge Wang
Breast CT provides image volumes with isotropic resolution in high contrast, enabling detection of small calcification (down to a few hundred microns in size) and subtle density differences.
no code implementations • 13 Nov 2019 • Huidong Xie, Hongming Shan, Ge Wang
Few-view CT image reconstruction is one of the main ways to minimize radiation dose and potentially allow a stationary CT architecture.
1 code implementation • 24 Oct 2019 • Haiping Zhu, Zhizhong Huang, Hongming Shan, Junping Zhang
Face aging is of great importance for cross-age recognition and entertainment-related applications.
1 code implementation • 13 Oct 2019 • Yu Gong, Hongming Shan, Yueyang Teng, Ning Tu, Ming Li, Guodong Liang, Ge Wang, Shan-Shan Wang
The contributions of this paper are twofold: i) a PT-WGAN framework is designed to denoise low-dose PET images without compromising structural details, and ii) a task-specific initialization based on transfer learning is developed to train PT-WGAN using trainable parameters transferred from a pretrained model, which significantly improves the training efficiency of PT-WGAN.
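A hedged sketch of such a task-specific initialization is shown below: parameters from a pretrained checkpoint are transferred into the target network wherever the names and shapes match. The function, the checkpoint path, and the loading policy are placeholders, not the released PT-WGAN code.

```python
# Hedged sketch: task-specific initialization by transferring matching
# parameters from a pretrained checkpoint into a new model.
import torch

def transfer_init(model, checkpoint_path):
    pretrained = torch.load(checkpoint_path, map_location="cpu")
    own = model.state_dict()
    # keep only parameters whose names and shapes match the target model
    matched = {k: v for k, v in pretrained.items()
               if k in own and v.shape == own[k].shape}
    own.update(matched)
    model.load_state_dict(own)
    return model
```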
no code implementations • 25 Sep 2019 • Wenxiang Cong, Hongming Shan, Xiaohua Zhang, Shaohua Liu, Ruola Ning, Ge Wang
In this study, we propose a deep-learning-based method to establish a residual neural network model for image reconstruction, which is applied to few-view breast CT to produce high-quality breast CT images.
no code implementations • 5 Aug 2019 • Qing Lyu, Hongming Shan, Ge Wang
Our experimental results demonstrate that the proposed networks can produce MRI super-resolution images with good image quality and outperform other multi-contrast super-resolution methods in terms of structural similarity and peak signal-to-noise ratio.
no code implementations • 23 Jul 2019 • Hongming Shan, Christopher Wiedeman, Ge Wang, Yang Yang
Photoacoustic tomography seeks to reconstruct an acoustic initial pressure distribution from the measurement of the ultrasound waveforms.
no code implementations • 18 Jul 2019 • Yuan Cao, Qiuying Li, Hongming Shan, Zhizhong Huang, Lei Chen, Leiming Ma, Junping Zhang
Precipitation nowcasting, which aims to precisely predict the short-term rainfall intensity of a local region, is gaining increasing attention in the artificial intelligence community.
no code implementations • 6 Jul 2019 • Qing Lyu, Hongming Shan, Ge Wang
Finally, a convolutional neural network is used for ensemble learning that synergizes the outputs of GANs into the final MR super-resolution images.
no code implementations • 2 Jul 2019 • Huidong Xie, Hongming Shan, Wenxiang Cong, Xiaohua Zhang, Shaohua Liu, Ruola Ning, Ge Wang
Few-view CT image reconstruction is an important topic to reduce the radiation dose.
no code implementations • 27 May 2019 • Haiping Zhu, Yuheng Zhang, Guohao Li, Junping Zhang, Hongming Shan
This paper proposes an ordinal distribution regression with a global and local convolutional neural network for gait-based age estimation.
1 code implementation • 17 Jan 2019 • Fenglei Fan, Hongming Shan, Mannudeep K. Kalra, Ramandeep Singh, Guhan Qian, Matthew Getzin, Yueyang Teng, Juergen Hahn, Ge Wang
Inspired by the complexity and diversity of biological neurons, our group proposed quadratic neurons by replacing the inner product in current artificial neurons with a quadratic operation on the input data, thereby enhancing the capability of an individual neuron.
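A minimal sketch of a quadratic neuron layer is given below, where the inner product of a conventional neuron is replaced by a quadratic function of the input; this particular parameterization (a product of two linear terms plus a term on the squared inputs) is one common form and may differ from the paper's exact definition.

```python
# Minimal quadratic-neuron layer sketch: the usual inner product is replaced
# by a quadratic function of the input; parameterization is an assumption.
import torch
import torch.nn as nn

class QuadraticLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.wr = nn.Linear(in_dim, out_dim)    # first linear term
        self.wg = nn.Linear(in_dim, out_dim)    # second linear term
        self.wb = nn.Linear(in_dim, out_dim)    # weights on the squared inputs

    def forward(self, x):
        return self.wr(x) * self.wg(x) + self.wb(x ** 2)
```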
1 code implementation • 8 Nov 2018 • Hongming Shan, Atul Padole, Fatemeh Homayounieh, Uwe Kruger, Ruhani Doda Khera, Chayanin Nitiwarangkul, Mannudeep K. Kalra, Ge Wang
Here we design a novel neural network architecture for low-dose CT (LDCT) and compare it with commercial iterative reconstruction methods used for standard of care CT.
no code implementations • 30 Oct 2018 • Yiming Lei, Yukun Tian, Hongming Shan, Junping Zhang, Ge Wang, Mannudeep Kalra
Therefore, CAM and Grad-CAM cannot provide optimal interpretation for the lung nodule categorization task in low-dose CT images, because fine-grained pathological clues, such as the discrete and irregular shapes and margins of nodules, can enhance the sensitivity and specificity of CNN-based nodule classification.
no code implementations • 16 Oct 2018 • Qing Lyu, Chenyu You, Hongming Shan, Ge Wang
Magnetic resonance imaging (MRI) is extensively used for diagnosis and image-guided therapeutics.
Medical Physics
no code implementations • 10 Aug 2018 • Chenyu You, Guang Li, Yi Zhang, Xiaoliu Zhang, Hongming Shan, Shenghong Ju, Zhen Zhao, Zhuiyang Zhang, Wenxiang Cong, Michael W. Vannier, Punam K. Saha, Ge Wang
Specifically, with the generative adversarial network (GAN) as the building block, we enforce the cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs.
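For illustration, a cycle-consistency term between an LR-to-HR generator and an HR-to-LR generator can be sketched as below; the L1 round-trip penalty is an assumption, and the Wasserstein (critic) terms of the full objective are omitted.

```python
# Hedged sketch of a cycle-consistency term between two generators; the L1
# penalty is an assumption and the adversarial (critic) terms are omitted.
import torch.nn.functional as F

def cycle_consistency_loss(g_lr2hr, g_hr2lr, lr_batch, hr_batch):
    # reconstruct each domain after a round trip through both generators
    return (F.l1_loss(g_hr2lr(g_lr2hr(lr_batch)), lr_batch) +
            F.l1_loss(g_lr2hr(g_hr2lr(hr_batch)), hr_batch))
```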
no code implementations • 2 May 2018 • Chenyu You, Qingsong Yang, Hongming Shan, Lars Gjesteby, Guang Li, Shenghong Ju, Zhuiyang Zhang, Zhen Zhao, Yi Zhang, Wenxiang Cong, Ge Wang
However, the radiation dose reduction compromises the signal-to-noise ratio (SNR), leading to strong noise and artifacts that degrade CT image quality.
no code implementations • 15 Feb 2018 • Hongming Shan, Yi Zhang, Qingsong Yang, Uwe Kruger, Mannudeep K. Kalra, Ling Sun, Wenxiang Cong, Ge Wang
Based on the transfer learning from 2D to 3D, the 3D network converges faster and achieves a better denoising performance than that trained from scratch.
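One common way to transfer 2D convolution weights into a 3D network is to replicate ("inflate") them along the new depth axis, as sketched below; this is shown only as an illustration of 2D-to-3D transfer and may differ from the paper's actual scheme.

```python
# Hedged sketch of 2D-to-3D weight transfer by inflating 2D convolution
# kernels along the depth axis; normalization by depth preserves the response.
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d, depth=3):
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```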