In dynamic tomography, the object undergoes changes while projections are being acquired sequentially in time. In particular, only a single projection at a single view angle may be available at each time instant, making the reconstruction problem severely ill-posed.
The Cramér-Rao bound (CRB), a well-known lower bound on the variance of any unbiased parameter estimator, has been used to study a wide variety of estimation problems.
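As a minimal illustration of the bound (a standard textbook case, not drawn from the work above): for $N$ i.i.d. samples $y_n \sim \mathcal{N}(\theta, \sigma^2)$ with known $\sigma^2$, the CRB and Fisher information are

```latex
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)}, \qquad
I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\ln p(\mathbf{y};\theta)\right)^{2}\right]
\;=\; \frac{N}{\sigma^{2}},
```

so no unbiased estimator can have variance below $\sigma^2/N$; the sample mean attains this bound.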
Unlike previous works, they incorporate both an initial reconstruction of the object of interest and the scatter-corrupted measurements related to it, using a deep neural network architecture and cost function specifically tailored to the problem.
We believe that the proposed velocity filtering method has the potential to pave the way to clinical translation of ULM.
In this paper, we propose a supervised dimensionality reduction method that learns linear embeddings jointly for two feature vectors representing data of different modalities or data from distinct types of entities.
Furthermore, we relate the denoising performance improvement obtained by combining multiple models to the relationships among the underlying image models.
The model could be pre-learned from training datasets, or learned simultaneously with the reconstruction, i.e., blind CS (BCS).
A Generative Adversarial Network (GAN) whose generator $G$ is trained to model the prior distribution of images has been shown to outperform sparsity-based regularizers in ill-posed inverse problems.
Multichannel blind deconvolution is the problem of recovering an unknown signal $f$ and multiple unknown channels $x_i$ from circular convolutional measurements $y_i = x_i \circledast f$, $i = 1, 2, \dots, N$.
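The circular-convolution measurement model can be sketched numerically; the signal length, channel count, and random test data below are illustrative assumptions, not part of any particular algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                                   # signal length (assumed for illustration)
N = 3                                   # number of channels
f = rng.standard_normal(L)              # unknown shared signal
x = rng.standard_normal((N, L))         # unknown per-channel filters

# circular convolution y_i = x_i (*) f, computed via the DFT:
# pointwise multiplication in the Fourier domain equals circular convolution in time
y = np.real(np.fft.ifft(np.fft.fft(x, axis=1) * np.fft.fft(f)))

# direct time-domain check for channel 0: (x (*) f)[n] = sum_k x[k] f[(n-k) mod L]
y0 = np.array([sum(x[0, k] * f[(n - k) % L] for k in range(L)) for n in range(L)])
assert np.allclose(y[0], y0)
```

The blind problem is to recover both $f$ and all $x_i$ from the $y_i$ alone, which is only possible up to inherent scaling and shift ambiguities.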
Recent works on adaptive sparse and low-rank signal modeling have demonstrated their usefulness in various image and video processing applications.
We show that many existing transform and analysis sparse representations can be viewed as filter banks, thus linking the local properties of patch-based models to the global properties of a convolutional model.
We also show that our power iteration algorithms for BGPC compare favorably with competing algorithms in adversarial conditions, e.g., with noisy measurements or a poor initial estimate.
Transform learning methods involve cheap computations and have been demonstrated to perform well in applications such as image denoising and medical image reconstruction.
In this work, we propose a novel video denoising method, based on an online tensor reconstruction scheme with a joint adaptive sparse and low-rank model, dubbed SALT.
Features based on sparse representation, especially using the synthesis dictionary model, have been heavily exploited in signal processing and computer vision.
In this work, we focus on blind compressed sensing (BCS), where the underlying sparse signal model is a priori unknown, and propose a framework to simultaneously reconstruct the underlying image as well as the unknown model from highly undersampled measurements.
Natural signals and images are well known to be approximately sparse in transform domains such as wavelets and the discrete cosine transform (DCT).
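A quick numerical sketch of this transform-domain sparsity (the synthetic smooth signal and the choice to keep 10 coefficients are illustrative assumptions): a smooth signal is represented almost exactly by a handful of its largest DCT coefficients.

```python
import numpy as np
from scipy.fft import dct, idct

n = np.arange(256)
signal = np.exp(-((n - 128.0) / 40.0) ** 2)    # smooth test signal (Gaussian bump)

c = dct(signal, norm='ortho')                  # orthonormal DCT-II coefficients

# hard-threshold: keep only the 10 largest-magnitude coefficients
thresh = np.sort(np.abs(c))[-10]
c_sparse = np.where(np.abs(c) >= thresh, c, 0.0)
approx = idct(c_sparse, norm='ortho')

# the 10-of-256 coefficient approximation is nearly exact for this smooth signal
rel_err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
assert rel_err < 0.05
```

Exploiting such sparsity as a prior is what makes the reconstruction methods above tractable from few or noisy measurements.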
Many applications in signal processing benefit from the sparsity of signals in a certain transform domain or dictionary.