Search Results for author: Sen Jia

Found 30 papers, 9 papers with code

Knowledge-driven deep learning for fast MR imaging: undersampled MR image reconstruction from supervised to un-supervised learning

no code implementations 5 Feb 2024 Shanshan Wang, Ruoyou Wu, Sen Jia, Alou Diakite, Cheng Li, Qiegen Liu, Leslie Ying

The traits and trends of these techniques are also summarized, showing a shift from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods.

Image Reconstruction Image Restoration

Physics-Informed DeepMRI: Bridging the Gap from Heat Diffusion to k-Space Interpolation

no code implementations 30 Aug 2023 Zhuo-Xu Cui, Congcong Liu, Xiaohong Fan, Chentao Cao, Jing Cheng, Qingyong Zhu, Yuanyuan Liu, Sen Jia, Yihang Zhou, Haifeng Wang, Yanjie Zhu, Jianping Zhang, Qiegen Liu, Dong Liang

To enhance interpretability and overcome acceleration limitations, this paper introduces an interpretable framework that unifies $k$-space interpolation techniques and image-domain methods, grounded in the physical principles of heat diffusion equations.
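
For background on the heat-diffusion connection mentioned above (a standard Fourier-analysis fact, not a result specific to the paper): evolving an image $u$ under the heat equation $\partial_t u = \Delta u$ corresponds, up to constants depending on the Fourier convention, to a simple multiplicative filter in $k$-space,

$$\partial_t \hat{u}(k,t) = -\|k\|^2\, \hat{u}(k,t) \;\;\Longrightarrow\;\; \hat{u}(k,t) = e^{-\|k\|^2 t}\, \hat{u}(k,0),$$

which is one way to see how image-domain smoothing models and $k$-space weighting or interpolation can describe the same underlying process.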

SPIRiT-Diffusion: Self-Consistency Driven Diffusion Model for Accelerated MRI

no code implementations 11 Apr 2023 Zhuo-Xu Cui, Chentao Cao, Jing Cheng, Sen Jia, Hairong Zheng, Dong Liang, Yanjie Zhu

Diffusion models are a leading method for image generation and have been successfully applied in magnetic resonance imaging (MRI) reconstruction.

Image Generation MRI Reconstruction

K-UNN: k-Space Interpolation With Untrained Neural Network

1 code implementation 11 Aug 2022 Zhuo-Xu Cui, Sen Jia, Qingyong Zhu, Congcong Liu, Zhilang Qiu, Yuanyuan Liu, Jing Cheng, Haifeng Wang, Yanjie Zhu, Dong Liang

Recently, untrained neural networks (UNNs) have shown satisfactory performance for MR image reconstruction on random sampling trajectories without using additional fully sampled training data.

Image Reconstruction
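
The entry above refers to reconstructing an image by fitting an untrained network directly to the undersampled measurements of a single scan, rather than training on a dataset. Below is a minimal, hypothetical PyTorch-style sketch of that general (Deep-Image-Prior-like) idea; it is not the K-UNN architecture from the paper, and the generator, sampling mask, and measured data are placeholders.

import torch
import torch.nn as nn

# Toy generator mapping a fixed random code to a 2-channel (real/imag) image.
# Placeholder only; the paper's K-UNN uses a different, specially designed network.
net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 2, 3, padding=1),
)

z = torch.randn(1, 32, 256, 256)                          # fixed random input code
mask = torch.rand(256, 256) < 0.3                         # hypothetical random sampling mask
y = torch.randn(256, 256, dtype=torch.complex64) * mask   # stand-in for measured k-space

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    out = net(z)                                  # (1, 2, H, W)
    img = torch.complex(out[0, 0], out[0, 1])     # complex-valued image estimate
    kspace = torch.fft.fft2(img)                  # forward model: 2D Fourier transform
    loss = (torch.abs((kspace - y) * mask) ** 2).mean()   # data consistency on sampled points only
    opt.zero_grad()
    loss.backward()
    opt.step()

The reconstruction is read off as the network output after this single-scan fit; no fully sampled training data is involved, which is the point the snippet makes.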

SelfCoLearn: Self-supervised collaborative learning for accelerating dynamic MR imaging

no code implementations 8 Aug 2022 Juan Zou, Cheng Li, Sen Jia, Ruoyou Wu, Tingrui Pei, Hairong Zheng, Shanshan Wang

Lately, deep learning has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress achieved.

Data Augmentation Image Reconstruction

Attention mechanism-based generative adversarial networks for cloud removal in Landsat images

no code implementations Remote Sensing of Environment 2022 Meng Xu, Furong Deng, Sen Jia, Xiuping Jia, Antonio J. Plaza

First, attention maps of the input cloudy images are generated to extract the cloud distributions and features through an attentive recurrent network.

Cloud Removal

Multiscale Convolutional Transformer with Center Mask Pretraining for Hyperspectral Image Classification

no code implementations 9 Mar 2022 Sen Jia, Yifan Wang

However, CNN-based methods struggle to capture long-range dependencies and require a large amount of labeled data for model training. Moreover, most self-supervised training methods in the field of HSI classification are based on reconstructing the input samples, which makes it difficult to use unlabeled samples effectively.

Classification Hyperspectral Image Classification

Equilibrated Zeroth-Order Unrolled Deep Networks for Accelerated MRI

no code implementations 18 Dec 2021 Zhuo-Xu Cui, Jing Cheng, Qingyong Zhu, Yuanyuan Liu, Sen Jia, Kankan Zhao, Ziwen Ke, Wenqi Huang, Haifeng Wang, Yanjie Zhu, Dong Liang

Specifically, focusing on accelerated MRI, we unroll a zeroth-order algorithm, of which the network module represents the regularizer itself, so that the network output can be still covered by the regularization model.

MRI Reconstruction Rolling Shutter Correction
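
One common reading of "unrolling an algorithm whose network module represents the regularizer" is an iterative scheme that alternates a data-consistency gradient step with a learned regularization update. The sketch below illustrates that generic pattern in PyTorch; it is not the equilibrated zeroth-order scheme of the paper, and the sub-networks, step size, and iteration count are assumptions.

import torch
import torch.nn as nn

class UnrolledReconSketch(nn.Module):
    """Generic unrolled MRI reconstruction: data-consistency step + learned regularizer."""
    def __init__(self, n_iters=8):
        super().__init__()
        # A small CNN per iteration, standing in for the regularizer module.
        self.reg_nets = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 2, 3, padding=1))
            for _ in range(n_iters)
        ])
        self.step = nn.Parameter(torch.tensor(0.5))   # learned step size

    def forward(self, x0, y, mask):
        # x0: zero-filled image (N, 2, H, W); y: measured k-space; mask: sampling mask
        x = x0
        for reg in self.reg_nets:
            k = torch.fft.fft2(torch.complex(x[:, 0], x[:, 1]))
            grad_img = torch.fft.ifft2((k - y) * mask)            # data-term gradient
            grad = torch.stack([grad_img.real, grad_img.imag], dim=1)
            x = x - self.step * grad                              # gradient step on the data term
            x = x + reg(x)                                        # residual update from the learned regularizer
        return x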

A Survey: Deep Learning for Hyperspectral Image Classification with Few Labeled Samples

1 code implementation 3 Dec 2021 Sen Jia, Shuguo Jiang, Zhijie Lin, Nanying Li, Meng Xu, Shiqi Yu

In general, deep learning models often contain many trainable parameters and require a massive number of labeled samples to achieve optimal performance.

Active Learning Few-Shot Learning +2

Simpler Does It: Generating Semantic Labels with Objectness Guidance

no code implementations 20 Oct 2021 Md Amirul Islam, Matthew Kowal, Sen Jia, Konstantinos G. Derpanis, Neil D. B. Bruce

Extensive experiments demonstrate the high quality of our generated pseudo-labels and effectiveness of the proposed framework in a variety of domains.

Multi-Task Learning Object +1

Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs

1 code implementation ICCV 2021 Md Amirul Islam, Matthew Kowal, Sen Jia, Konstantinos G. Derpanis, Neil D. B. Bruce

In this paper, we challenge the common assumption that collapsing the spatial dimensions of a 3D (spatial-channel) tensor in a convolutional neural network (CNN) into a vector via global pooling removes all spatial information.

Data Augmentation Position +2
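
The assumption being challenged above concerns the standard global-pooling operation, which averages an (N, C, H, W) activation tensor over its spatial dimensions and keeps one value per channel. The snippet below just shows that textbook operation for reference; the paper's probing experiments are not reproduced here.

import torch
import torch.nn.functional as F

feats = torch.randn(4, 512, 7, 7)           # activations from a CNN backbone
pooled = F.adaptive_avg_pool2d(feats, 1)    # (4, 512, 1, 1): spatial dimensions collapsed
vec = pooled.flatten(1)                     # (4, 512): one scalar per channel
# The paper's finding is that this channel-wise vector can still encode position,
# contrary to the usual assumption that averaging removes all spatial information.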

Deep Amended Gradient Descent for Efficient Spectral Reconstruction from Single RGB Images

1 code implementation 12 Aug 2021 Zhiyu Zhu, Hui Liu, Junhui Hou, Sen Jia, Qingfu Zhang

Then, we design a lightweight neural network with a multi-stage architecture to mimic the formed amended gradient descent process, in which efficient convolution and novel spectral zero-mean normalization are proposed to effectively extract spatial-spectral features for regressing an initialization, a basic gradient, and an incremental gradient.

Spectral Reconstruction
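
The snippet above describes a multi-stage network that mimics an amended gradient-descent process by regressing an initialization, a basic gradient, and an incremental gradient. The sketch below is only a schematic rendering of that control flow; the sub-networks, stage count, and update rule are assumptions, and the paper's efficient convolutions and spectral zero-mean normalization are not reproduced.

import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch):
    # Placeholder sub-network standing in for the paper's lightweight modules.
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, out_ch, 3, padding=1))

class AmendedGDSketch(nn.Module):
    def __init__(self, bands=31, stages=3):
        super().__init__()
        self.init_net = small_cnn(3, bands)   # regress an initial hyperspectral estimate from RGB
        self.basic_grads = nn.ModuleList([small_cnn(bands + 3, bands) for _ in range(stages)])
        self.incr_grads = nn.ModuleList([small_cnn(bands, bands) for _ in range(stages)])

    def forward(self, rgb):
        x = self.init_net(rgb)
        for basic, incr in zip(self.basic_grads, self.incr_grads):
            g = basic(torch.cat([x, rgb], dim=1))   # "basic" gradient informed by the RGB input
            g = g + incr(x)                         # "incremental" gradient that amends it
            x = x - g                               # one amended gradient-descent step
        return x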

Multi-Attention Generative Adversarial Network for Remote Sensing Image Super-Resolution

no code implementations 14 Jul 2021 Meng Xu, Zhihao Wang, Jiasong Zhu, Xiuping Jia, Sen Jia

The main body of the generator contains two blocks; one is the pyramidal convolution in the residual-dense block (PCRDB), and the other is the attention-based upsample (AUP) block.

Generative Adversarial Network Image Super-Resolution

Shape or Texture: Understanding Discriminative Features in CNNs

no code implementations 27 Jan 2021 Md Amirul Islam, Matthew Kowal, Patrick Esser, Sen Jia, Bjorn Ommer, Konstantinos G. Derpanis, Neil Bruce

Contrasting the previous evidence that neurons in the later layers of a Convolutional Neural Network (CNN) respond to complex object shapes, recent studies have shown that CNNs actually exhibit a 'texture bias': given an image with both texture and shape cues (e.g., a stylized image), a CNN is biased towards predicting the category corresponding to the texture.

Boundary Effects in CNNs: Feature or Bug?

no code implementations 1 Jan 2021 Md Amirul Islam, Matthew Kowal, Sen Jia, Konstantinos G. Derpanis, Neil Bruce

Finally, we demonstrate the implications of these findings on a number of real-world tasks to show that position information can act as a feature or a bug.

Position

Shape or Texture: Disentangling Discriminative Features in CNNs

no code implementations ICLR 2021 Md Amirul Islam, Matthew Kowal, Patrick Esser, Sen Jia, Björn Ommer, Konstantinos G. Derpanis, Neil Bruce

Contrasting the previous evidence that neurons in the later layers of a Convolutional Neural Network (CNN) respond to complex object shapes, recent studies have shown that CNNs actually exhibit a 'texture bias': given an image with both texture and shape cues (e.g., a stylized image), a CNN is biased towards predicting the category corresponding to the texture.

Deep Learning based Monocular Depth Prediction: Datasets, Methods and Applications

no code implementations 9 Nov 2020 Qing Li, Jiasong Zhu, Jun Liu, Rui Cao, Qingquan Li, Sen Jia, Guoping Qiu

Despite the rapid progress in this topic, a comprehensive review is still lacking, one that summarizes the current progress and provides future directions.

Depth Prediction Indoor Localization +2

Deep Low-rank plus Sparse Network for Dynamic MR Imaging

1 code implementation 26 Oct 2020 Wenqi Huang, Ziwen Ke, Zhuo-Xu Cui, Jing Cheng, Zhilang Qiu, Sen Jia, Leslie Ying, Yanjie Zhu, Dong Liang

However, the parameters of L+S are selected empirically and the achievable acceleration rate is limited, which are common failings of iterative compressed sensing MR imaging (CS-MRI) reconstruction methods.

MRI Reconstruction
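
For context, the classical low-rank plus sparse (L+S) model that the entry above builds on reconstructs a dynamic image series by splitting it into a low-rank background $L$ and a sparse dynamic component $S$, typically via a convex problem of roughly the form

$$\min_{L,\,S}\; \tfrac{1}{2}\,\| E(L+S) - d \|_2^2 \;+\; \lambda_L \| L \|_{*} \;+\; \lambda_S \| T S \|_{1},$$

where $E$ is the undersampled encoding operator (coil sensitivities plus Fourier sampling), $d$ the acquired k-space data, $T$ a sparsifying transform (often a temporal Fourier transform), and $\lambda_L$, $\lambda_S$ the empirically tuned weights the snippet refers to. This is the standard formulation only; the paper's contribution is a learned network version of it.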

Revisiting Saliency Metrics: Farthest-Neighbor Area Under Curve

1 code implementation CVPR 2020 Sen Jia, Neil D. B. Bruce

Our experiments show that FN-AUC can measure spatial biases, both central and peripheral, more effectively than S-AUC without penalizing the fixation locations.

Quantization Saliency Detection

How Much Position Information Do Convolutional Neural Networks Encode?

1 code implementation ICLR 2020 Md Amirul Islam, Sen Jia, Neil D. B. Bruce

In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent.

Position

Richer and Deeper Supervision Network for Salient Object Detection

no code implementations 8 Jan 2019 Sen Jia, Neil D. B. Bruce

Recent Salient Object Detection (SOD) systems are mostly based on Convolutional Neural Networks (CNNs).

Object object-detection +2

DIMENSION: Dynamic MR Imaging with Both K-space and Spatial Prior Knowledge Obtained via Multi-Supervised Network Training

no code implementations 30 Sep 2018 Shan-Shan Wang, Ziwen Ke, Huitao Cheng, Sen Jia, Ying Leslie, Hairong Zheng, Dong Liang

Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability in reducing scan time.

Image Reconstruction

Right for the Right Reason: Training Agnostic Networks

no code implementations 16 Jun 2018 Sen Jia, Thomas Lansdall-Welfare, Nello Cristianini

We consider the problem of a neural network being requested to classify images (or other inputs) without making implicit use of a "protected concept", that is, a concept that should not play any role in the decision of the network.

Domain Adaptation
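
A widely used recipe for keeping a classifier from relying on a protected concept is adversarial training with a gradient-reversal layer: an auxiliary head tries to predict the protected concept from the shared features, and its reversed gradient pushes the encoder to discard that information. The sketch below shows this generic recipe; it is not necessarily the training scheme used in the paper, and all shapes and module names are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.alpha * grad_out, None   # reverse the gradient flowing back into the encoder

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())   # toy encoder for 28x28 inputs
task_head = nn.Linear(256, 10)          # main classification task
protected_head = nn.Linear(256, 2)      # tries to recover the protected concept

def agnostic_loss(x, y_task, y_protected, alpha=1.0):
    feats = encoder(x)
    task_loss = F.cross_entropy(task_head(feats), y_task)
    # The protected head is trained normally, but the reversed gradient
    # encourages the encoder to become uninformative about the protected concept.
    adv_logits = protected_head(GradReverse.apply(feats, alpha))
    return task_loss + F.cross_entropy(adv_logits, y_protected)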

EML-NET: An Expandable Multi-Layer NETwork for Saliency Prediction

no code implementations 2 May 2018 Sen Jia, Neil D. B. Bruce

Furthermore, the encoder can contain more than one CNN model to extract features, and models can have different architectures or be pre-trained on different datasets.

Saliency Prediction Scene Understanding
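
The expandable encoder described above combines features from more than one backbone, possibly with different architectures or pre-training. A minimal, hypothetical version of that idea is sketched below: two torchvision backbones whose convolutional features are concatenated before a small decoder. The choice of backbones, the feature resizing, and the decoder are assumptions, not the EML-NET design.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Two backbones with different architectures (they could also be pre-trained on different datasets).
backbone_a = models.resnet18(weights=None)
backbone_b = models.mobilenet_v2(weights=None)
enc_a = nn.Sequential(*list(backbone_a.children())[:-2])    # conv features, 512 channels
enc_b = backbone_b.features                                 # conv features, 1280 channels

decoder = nn.Sequential(
    nn.Conv2d(512 + 1280, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 1, 1),                                    # single-channel saliency map
)

def predict_saliency(x):
    fa, fb = enc_a(x), enc_b(x)
    # The backbones may disagree on spatial size; resize one feature map to match the other.
    fb = F.interpolate(fb, size=fa.shape[-2:], mode="bilinear", align_corners=False)
    return torch.sigmoid(decoder(torch.cat([fa, fb], dim=1)))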
