1 code implementation • 3 Apr 2024 • Woo Kyoung Han, Sunghoon Im, Jaedeok Kim, Kyong Hwan Jin
We propose a practical approach to JPEG image decoding, utilizing a local implicit neural representation with continuous cosine formulation.
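The key property behind a continuous cosine formulation is that DCT basis functions are defined for any real coordinate, so JPEG block coefficients can in principle be decoded at arbitrary sub-pixel positions rather than only at the 8 integer pixel centers. A minimal numpy sketch of this idea (names, normalization, and the single-coefficient example are illustrative, following the standard orthonormal inverse DCT):

```python
import numpy as np

N = 8  # JPEG block size

def idct_continuous(coeffs, x):
    """Evaluate an N-point orthonormal inverse DCT at a continuous coordinate x in [0, N)."""
    k = np.arange(N)
    scale = np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
    return np.sum(scale * coeffs * np.cos(np.pi * (x + 0.5) * k / N))

coeffs = np.zeros(N)
coeffs[1] = 1.0  # a single low-frequency coefficient

on_grid = [idct_continuous(coeffs, i) for i in range(N)]            # pixel centers
between = [idct_continuous(coeffs, i + 0.5) for i in range(N - 1)]  # sub-pixel samples
```

At integer coordinates this reproduces the usual inverse DCT; the same expression evaluated between pixel centers is what allows decoding at continuous resolution.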
2 code implementations • 26 Mar 2024 • Donghoon Ahn, Hyoungwon Cho, Jaewon Min, Wooseok Jang, Jungwoo Kim, SeonHwa Kim, Hyun Hee Park, Kyong Hwan Jin, Seungryong Kim
These techniques are often not applicable in unconditional generation or in various downstream tasks such as image restoration.
no code implementations • 20 Sep 2023 • Minsu Kim, Giseop Kim, Kyong Hwan Jin, Sunwook Choi
The method boosts the camera branch's depth-estimation learning and induces accurate localization of dense camera features in BEV space.
1 code implementation • 4 Sep 2023 • Minsu Kim, Jaewon Lee, Byeonghun Lee, Sunghoon Im, Kyong Hwan Jin
Existing image-stitching frameworks often produce visually plausible stitching results.
1 code implementation • 4 Sep 2023 • Minsu Kim, Yongjun Lee, Woo Kyoung Han, Kyong Hwan Jin
Recent learning-based elastic warps enable deep image stitching to align images with large parallax errors.
1 code implementation • CVPR 2023 • Woo Kyoung Han, Byeonghun Lee, Sang Hyun Park, Kyong Hwan Jin
Modern displays and content support images and video with bit depths greater than 8 bits.
1 code implementation • CVPR 2023 • Byeonghyun Pak, Jaewon Lee, Kyong Hwan Jin
Our network outperforms both a transformer-based reconstruction method and an implicit Fourier representation method at almost every upscaling factor, thanks to the positive constraint and compact support of the B-spline basis.
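The two B-spline properties this claim rests on are easy to verify directly: the cubic B-spline basis is non-negative everywhere (the positive constraint) and vanishes outside [-2, 2] (compact support). A minimal numpy check using the standard cubic B-spline formula:

```python
import numpy as np

def cubic_bspline(x):
    """Standard cubic B-spline basis function."""
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3 / 2,
           np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

xs = np.linspace(-3, 3, 601)
vals = cubic_bspline(xs)

assert np.all(vals >= 0)                          # positivity
assert np.all(vals[np.abs(xs) > 2] == 0)          # compact support
assert np.isclose(vals.sum() * 0.01, 1.0, atol=1e-3)  # unit mass
```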
1 code implementation • 5 Jul 2022 • Jaewon Lee, Kwang Pyo Choi, Kyong Hwan Jin
In this paper, we propose a local texture estimator for image warping (LTEW) followed by an implicit neural representation to deform images into continuous shapes.
1 code implementation • CVPR 2022 • Jaewon Lee, Kyong Hwan Jin
Recent works on implicit neural functions have shed light on representing images at arbitrary resolutions.
Ranked #6 on Image Super-Resolution on Set5 - 3x upscaling
no code implementations • 25 Nov 2019 • Teaghan O'Briain, Kyong Hwan Jin, Hongyoon Choi, Erika Chin, Magdalena Bazalova-Carter, Kwang Moo Yi
We aim to reduce the tedious nature of developing and evaluating methods for aligning PET-CT scans from multiple patient visits.
1 code implementation • 3 Oct 2019 • Jaejun Yoo, Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, Michael Unser
The key ingredients of our method are threefold: 1) a fixed low-dimensional manifold that encodes the temporal variations of images; 2) a network that maps the manifold into a more expressive latent space; and 3) a convolutional neural network that generates a dynamic series of MRI images from the latent variables and that favors their consistency with the measurements in k-space.
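The three ingredients above can be sketched end-to-end in a few lines. This is a deliberately tiny, hypothetical numpy mock-up, not the paper's architecture: all shapes, layer sizes, and the circular manifold are illustrative assumptions, and the trained networks are replaced by random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, L, H, W = 8, 2, 16, 8, 8  # frames, manifold dim, latent dim, image size

# (1) fixed low-dimensional manifold: points on a circle encode temporal variation
t = np.linspace(0, 2 * np.pi, T, endpoint=False)
manifold = np.stack([np.cos(t), np.sin(t)], axis=1)           # (T, D)

# (2) mapping network: one hidden-layer MLP from the manifold into a latent space
W1, W2 = rng.standard_normal((D, 32)), rng.standard_normal((32, L))
latent = np.tanh(manifold @ W1) @ W2                          # (T, L)

# (3) generator: a linear "decoder" producing the dynamic image series
G = rng.standard_normal((L, H * W))
frames = (latent @ G).reshape(T, H, W)

# data consistency: compare the generated k-space (FFT) with (simulated) measurements
kspace_meas = np.fft.fft2(frames) + 0.01 * rng.standard_normal((T, H, W))
loss = np.mean(np.abs(np.fft.fft2(frames) - kspace_meas) ** 2)
```

In the actual method the mapping network and generator are trained so that this k-space consistency loss is minimized over the measured samples.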
no code implementations • 14 Jan 2019 • Kyong Hwan Jin, Michael Unser, Kwang Moo Yi
The reconstruction network is trained to give the highest reconstruction quality, given the MCTS sampling pattern.
2 code implementations • 11 Oct 2017 • Michael T. McCann, Kyong Hwan Jin, Michael Unser
In this survey paper, we review recent uses of convolutional neural networks (CNNs) to solve inverse problems in imaging.
no code implementations • 6 Sep 2017 • Harshit Gupta, Kyong Hwan Jin, Ha Q. Nguyen, Michael T. McCann, Michael Unser
When the projector is replaced with a CNN, we propose a relaxed PGD, which always converges.
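A relaxed PGD step averages the new (projected) iterate with the old one, which is what makes convergence guarantees possible even when the projector is imperfect. The following is only an illustrative sketch: the trained CNN projector is replaced by a simple box projection, and the forward model, step sizes, and relaxation weight are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((30, 50)) / np.sqrt(30)   # underdetermined forward model
x_true = np.clip(rng.standard_normal(50), -1, 1)
y = H @ x_true                                    # measurements

def projector(x):
    # stand-in for the trained CNN: project onto the box [-1, 1]
    return np.clip(x, -1.0, 1.0)

gamma = 1.0 / np.linalg.norm(H, 2) ** 2           # safe gradient step size
alpha = 0.5                                       # relaxation weight

x = np.zeros(50)
for _ in range(200):
    z = projector(x - gamma * H.T @ (H @ x - y))  # plain PGD update
    x = (1 - alpha) * x + alpha * z               # relaxed (averaged) update

residual = np.linalg.norm(H @ x - y)
```

With the convex box projector the relaxed iteration is a convex combination of non-expansive updates, so the iterates stay bounded and the data residual decreases from its initial value.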
no code implementations • 20 Apr 2017 • Hongyoon Choi, Kyong Hwan Jin
For effective treatment of Alzheimer disease (AD), it is important to identify subjects who are most likely to exhibit rapid cognitive decline.
no code implementations • 11 Nov 2016 • Kyong Hwan Jin, Michael T. McCann, Emmanuel Froustey, Michael Unser
The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution.
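This observation is easy to verify numerically in 1-D: when H is a convolution with kernel h, the normal operator H*H is itself a convolution (with the autocorrelation of h), so one unrolled gradient step is exactly "filtering followed by a point-wise non-linearity", i.e. one CNN layer. A minimal sketch using circular convolutions via the FFT (kernel, signal length, and step size are illustrative assumptions):

```python
import numpy as np

def conv(x, h):
    """Circular convolution H x via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

def conv_adjoint(y, h):
    """Adjoint H* y: circular convolution with the conjugate-reversed kernel."""
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(h, len(y)))))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = np.array([0.25, 0.5, 0.25])  # blur kernel

# H*H x computed two ways: as two passes, and as a single convolution
normal_two_pass = conv_adjoint(conv(x, h), h)
autocorr = np.real(np.fft.ifft(np.abs(np.fft.fft(h, 64)) ** 2))
normal_one_conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(autocorr)))
assert np.allclose(normal_two_pass, normal_one_conv)

# one unrolled gradient step = linear filtering + ReLU: the shape of a CNN layer
step = np.maximum(x - 0.1 * normal_two_pass, 0.0)
```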
1 code implementation • 19 Oct 2015 • Kyong Hwan Jin, Jong Chul Ye
The new approach, which we call robust ALOHA, is motivated by the observation that an image corrupted by impulse noise still has intact pixels; the impulse noise can therefore be modeled as a sparse component, while the underlying image can still be modeled using a low-rank Hankel structured matrix.
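The modeling assumption behind this decomposition can be checked numerically: a signal built from a few complex exponentials yields a low-rank Hankel matrix, while sparse impulses raise that rank. The recovery algorithm itself (the sparse + low-rank split) is omitted here; signal length, frequencies, and impulse count are illustrative assumptions.

```python
import numpy as np

def hankel(x, rows):
    """Hankel matrix H[i, j] = x[i + j]."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

n, rows = 64, 16
t = np.arange(n)
# clean signal: sum of 2 complex exponentials -> Hankel matrix of rank 2
clean = np.exp(2j * np.pi * 0.11 * t) + 0.7 * np.exp(2j * np.pi * 0.31 * t)

rng = np.random.default_rng(0)
impulses = np.zeros(n, dtype=complex)
idx = rng.choice(n, size=5, replace=False)       # sparse outlier locations
impulses[idx] = 10 * rng.standard_normal(5)
corrupted = clean + impulses

rank_clean = np.linalg.matrix_rank(hankel(clean, rows), tol=1e-8)
rank_corrupted = np.linalg.matrix_rank(hankel(corrupted, rows), tol=1e-8)
# rank_clean is 2, while the sparse impulses inflate the Hankel rank
```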