LEARN-IMG performs motion correction on mGRE images and relies on the subsequent analysis for the estimation of $R_2^\ast$ maps, while LEARN-BIO directly performs motion- and $B_0$-inhomogeneity-corrected $R_2^\ast$ estimation.
Internet video delivery has undergone tremendous growth over the past few years.
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets.
The plug-and-play priors (PnP) and regularization by denoising (RED) methods have become widely used for solving inverse problems by leveraging pre-trained deep denoisers as image priors.
Deep unfolding networks have recently gained popularity in the context of solving imaging inverse problems.
Graph Reasoning has recently shown great potential in modeling long-range dependencies, which are crucial for various computer vision tasks.
Cal-RED extends the traditional RED methodology to imaging problems that require the calibration of the measurement operator.
Deep learning-based image denoising approaches have been extensively studied in recent years and dominate many public benchmark datasets.
Regularization by denoising (RED) is a recently developed framework for solving inverse problems by integrating advanced denoisers as image priors.
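As a minimal sketch of the RED idea (not the specific algorithm of any one paper), the snippet below runs the RED gradient update, which combines the data-fit gradient with a residual term $\tau(x - D(x))$, on a toy 1-D denoising problem. A simple moving-average filter stands in for an advanced denoiser; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def red_step(x, y, denoise, gamma=0.1, tau=1.0):
    """One RED gradient step for a denoising data term (A = I):
    x <- x - gamma * [(x - y) + tau * (x - D(x))]."""
    return x - gamma * ((x - y) + tau * (x - denoise(x)))

# Toy 1-D problem: piecewise-constant ground truth plus Gaussian noise.
rng = np.random.default_rng(0)
n = 64
x_true = np.sign(np.sin(np.linspace(0.1, 6.0, n)))
y = x_true + 0.3 * rng.standard_normal(n)

def denoise(x):
    # Moving-average smoother standing in for a deep denoiser.
    return np.convolve(x, np.ones(5) / 5, mode="same")

x = y.copy()
for _ in range(300):
    x = red_step(x, y, denoise)

mse_noisy = np.mean((y - x_true) ** 2)
mse_red = np.mean((x - x_true) ** 2)
```

With a denoiser that matches the signal class, the iteration reduces the error relative to the noisy input; a trained deep denoiser would replace the moving-average filter.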
One of the key limitations of conventional deep learning-based image reconstruction is the need for registered pairs of training images that include a set of high-quality ground-truth images.
Plug-and-play priors (PnP) is a methodology for regularized image reconstruction that specifies the prior through an image denoiser.
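To make the denoiser-as-prior idea concrete, here is a minimal PnP proximal-gradient sketch on a toy 1-D inpainting problem: a gradient step on the data term followed by a denoising step. The moving-average smoother and all parameters are illustrative stand-ins, not the method of any particular paper.

```python
import numpy as np

# Toy inpainting: observe a random ~70% of a smooth signal.
rng = np.random.default_rng(1)
n = 64
x_true = np.sin(np.linspace(0.0, 6.0, n))
mask = rng.random(n) < 0.7
y = np.where(mask, x_true, 0.0)          # unobserved entries set to 0

def denoise(x):
    # Moving-average smoother standing in for a deep denoiser.
    return np.convolve(x, np.ones(5) / 5, mode="same")

# PnP proximal-gradient iteration: x <- D(x - gamma * A^T (A x - y)),
# where A is the diagonal sampling mask and gamma = 1.
x = y.copy()
for _ in range(150):
    grad = mask * x - y                  # A^T (A x - y)
    x = denoise(x - grad)

mse_init = np.mean((y - x_true) ** 2)
mse_pnp = np.mean((x - x_true) ** 2)
```

The denoising step fills in the missing samples from their observed neighbors, so the reconstruction error drops well below that of the zero-filled input.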
To further promote research on ship detection, we introduce a new fine-grained ship detection dataset, named FGSD.
We introduce a new algorithm for regularized reconstruction of multispectral (MS) images from noisy linear measurements.
Extracting entities from images is a crucial part of many OCR applications, such as entity recognition for cards, invoices, and receipts.
Most existing text reading benchmarks make it difficult to evaluate the performance of more advanced deep learning models in large vocabularies due to the limited amount of training data.
Most previous image matting methods require a roughly specified trimap as input and estimate fractional alpha values for all pixels in the unknown region of the trimap.
Specifically, we propose an end-to-end trainable style retention network (SRNet) that consists of three modules: a text conversion module, a background inpainting module, and a fusion module.
In this paper we present a new data-driven method for robust skin detection from a single human portrait image.
In this work, we develop a new block coordinate RED algorithm that decomposes a large-scale estimation problem into a sequence of updates over a small subset of the unknown variables.
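A simplified, self-contained sketch of the block-coordinate idea follows: at each iteration, the update is applied only on a small random subset of the coordinates of a toy RED denoising problem. This illustration computes the full RED gradient and masks the update for brevity; the practical appeal of block-coordinate schemes lies in also restricting the per-iteration computation to the active block. The moving-average denoiser and all parameters are illustrative assumptions.

```python
import numpy as np

# Toy 1-D problem: piecewise-constant ground truth plus Gaussian noise.
rng = np.random.default_rng(0)
n = 64
x_true = np.sign(np.sin(np.linspace(0.1, 6.0, n)))
y = x_true + 0.3 * rng.standard_normal(n)

def denoise(x):
    # Moving-average smoother standing in for a deep denoiser.
    return np.convolve(x, np.ones(5) / 5, mode="same")

# Block-coordinate RED: each iteration updates only a random block
# of 16 of the 64 coordinates.
x = y.copy()
for _ in range(400):
    idx = rng.choice(n, size=16, replace=False)
    g = (x - y) + (x - denoise(x))       # RED gradient with tau = 1
    x[idx] -= 0.2 * g[idx]               # update only the active block

mse_noisy = np.mean((y - x_true) ** 2)
mse_bcred = np.mean((x - x_true) ** 2)
```

Because every coordinate is revisited often enough, the randomized block updates converge to the same fixed point as the full-gradient iteration.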
Compared with image inpainting, performing this task on video presents new challenges, such as how to preserve temporal consistency and spatial details, and how to handle arbitrary input video sizes and lengths quickly and efficiently.
In this paper, we present new data pre-processing and augmentation techniques for DNN-based raw image denoising.
However, text in the wild is usually perspectively distorted or curved, which cannot be easily handled by existing approaches.
Reading text from images remains challenging due to multi-orientation, perspective distortion and especially the curved nature of irregular text.
In the past decade, sparsity-driven regularization has led to significant improvements in image reconstruction.