Investigating Task-driven Latent Feasibility for Nonconvex Image Modeling

18 Oct 2019  ·  Risheng Liu, Pan Mu, Jian Chen, Xin Fan, Zhongxuan Luo ·

Properly modeling latent image distributions plays an important role in a variety of image-related vision problems. Most existing approaches formulate this problem as an optimization model (e.g., Maximum A Posteriori, MAP) with handcrafted priors. In recent years, various CNN modules have also been adopted as deep priors to regularize the image modeling process. However, these explicit regularization techniques require a deep understanding of the problem and elaborate mathematical skills. In this work, we provide a new perspective, named Task-driven Latent Feasibility (TLF), that incorporates task-specific information to narrow down the solution space of the optimization-based image modeling problem. Thanks to the flexibility of TLF, both designed and trained constraints can be embedded into the optimization process. By introducing control mechanisms based on monotonicity and boundedness conditions, we can also rigorously prove the convergence of our proposed inference process. We demonstrate that different types of image modeling problems, such as image deblurring and rain streak removal, can all be appropriately addressed within our TLF framework. Extensive experiments verify the theoretical results and show the advantages of our method against existing state-of-the-art approaches.
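To make the setting concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of MAP-style image modeling with a feasibility constraint: projected gradient descent on a data-fidelity term plus a smoothness prior, where a simple box projection onto [0, 1] stands in for the designed or trained feasibility operator that TLF would use. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

def neg_laplacian(x):
    # Gradient of the smoothness prior 0.5 * sum_i (x[i+1] - x[i])^2
    # (Neumann boundaries via edge padding).
    xp = np.pad(x, 1, mode="edge")
    return 2.0 * x - xp[:-2] - xp[2:]

def project_feasible(x):
    # Stand-in feasibility operator: clip to the valid intensity range.
    # In TLF this would be a designed or trained constraint module.
    return np.clip(x, 0.0, 1.0)

def map_with_feasibility(y, lam=0.5, step=0.2, iters=200):
    # Minimize 0.5*||x - y||^2 + 0.5*lam*||Dx||^2 subject to x feasible,
    # by gradient descent followed by projection at each iteration.
    x = y.copy()
    for _ in range(iters):
        g = (x - y) + lam * neg_laplacian(x)  # gradient of the objective
        x = project_feasible(x - step * g)    # descend, then project
    return x

rng = np.random.default_rng(0)
clean = np.clip(0.5 * np.sin(np.linspace(0, 3 * np.pi, 100)) + 0.5, 0, 1)
noisy = np.clip(clean + 0.1 * rng.standard_normal(100), 0, 1)
restored = map_with_feasibility(noisy)
```

With a step size below the inverse Lipschitz constant of the gradient (here L ≤ 3, step = 0.2), each projected-gradient step monotonically decreases the objective, which mirrors the monotonicity condition the paper uses in its convergence analysis.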
