no code implementations • 25 Aug 2024 • Xin Zhang, Teodor Boyadzhiev, Jinglei Shi, Jufeng Yang
In this paper, we leverage image complexity as a prior for refining segmentation features to achieve accurate real-time semantic segmentation.
no code implementations • 30 Mar 2024 • Duosheng Chen, Shihao Zhou, Jinshan Pan, Jinglei Shi, Lishen Qu, Jufeng Yang
This attention module contains radial strip windows to reweight image features in polar coordinates, which preserves more useful information about both rotational and translational motion for better recovery of sharp images.
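To give a concrete picture of what attention over "radial strip windows in polar coordinates" could look like, here is a minimal, hypothetical sketch (resampling a feature map onto a polar grid with grid_sample and running plain dot-product attention inside each angular strip); it is an illustration of the general idea, not the authors' actual module.

```python
import math
import torch
import torch.nn.functional as F

def to_polar(feat, n_r=32, n_theta=64):
    """Resample a (B, C, H, W) feature map onto a polar grid.

    Rows of the output index the angle theta, columns index the radius r,
    so a horizontal band of rows covers one angular sector over all radii
    (a 'radial strip'). Hypothetical helper, not the paper's implementation.
    """
    B, C, H, W = feat.shape
    r = torch.linspace(0, 1, n_r)                     # normalized radius
    theta = torch.linspace(0, 2 * math.pi, n_theta)   # angle
    tt, rr = torch.meshgrid(theta, r, indexing="ij")  # (n_theta, n_r)
    # Sampling positions in normalized [-1, 1] coordinates around the center.
    x = rr * torch.cos(tt)
    y = rr * torch.sin(tt)
    grid = torch.stack([x, y], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    return F.grid_sample(feat, grid.to(feat.device), align_corners=True)

def radial_strip_attention(feat, strip=8):
    """Scaled dot-product self-attention inside each radial strip window."""
    polar = to_polar(feat)                            # (B, C, n_theta, n_r)
    B, C, T, R = polar.shape
    windows = polar.view(B, C, T // strip, strip, R)  # split angles into strips
    tokens = windows.permute(0, 2, 3, 4, 1).reshape(B * (T // strip), strip * R, C)
    attn = torch.softmax(tokens @ tokens.transpose(1, 2) / C ** 0.5, dim=-1)
    out = attn @ tokens                               # reweighted features
    return out.view(B, T // strip, strip, R, C).permute(0, 4, 1, 2, 3).reshape(B, C, T, R)
```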
1 code implementation • CVPR 2024 • Shihao Zhou, Duosheng Chen, Jinshan Pan, Jinglei Shi, Jufeng Yang
Meanwhile, FRFN employs an enhance-and-ease scheme to eliminate feature redundancy across channels, enhancing the restoration of clear latent images.
no code implementations • 12 Jul 2023 • Jinglei Shi, Yihong Xu, Christine Guillemot
A light field is a type of image data that captures 3D scene information by recording the light rays emitted by a scene in various directions.
no code implementations • 13 Apr 2023 • Jinglei Shi, Yihong Xu, Christine Guillemot
Light fields are a type of image data that capture both spatial and angular scene information by recording the light rays emitted by a scene in different directions.
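To make the "spatial and angular" terminology concrete, a two-plane-parameterized light field can be stored as a 4D array indexed by angular coordinates (u, v) and spatial coordinates (s, t); the toy sketch below (illustrative only, with made-up dimensions) shows that a sub-aperture image and an epipolar-plane image (EPI) are simply slices of that array.

```python
import numpy as np

# A toy 4D light field: 9x9 angular views, each a 256x256 RGB image.
# Axes: (u, v, s, t, color) = (angular row, angular col, spatial row, spatial col, channel).
U, V, S, T = 9, 9, 256, 256
light_field = np.zeros((U, V, S, T, 3), dtype=np.float32)

# A sub-aperture image is the view seen from one fixed angular position (u, v).
center_view = light_field[U // 2, V // 2]   # shape (256, 256, 3)

# An epipolar-plane image (EPI) fixes one angular and one spatial coordinate:
# scene depth appears as the slope of lines in this 2D slice.
epi = light_field[U // 2, :, S // 2, :, :]  # shape (9, 256, 3)
```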
1 code implementation • CVPR 2023 • Xi Wang, Robin Courant, Jinglei Shi, Eric Marchand, Marc Christie
This paper presents JAWS, an optimization-driven approach that achieves the robust transfer of visual cinematic features from a reference in-the-wild video clip to a newly generated clip.
no code implementations • 30 Jul 2022 • Jinglei Shi, Christine Guillemot
While existing compression methods encode the set of light field sub-aperture images, our proposed method learns an implicit scene representation in the form of a Neural Radiance Field (NeRF), which also enables view synthesis.
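As a rough, toy-level sketch of what an implicit scene representation means here (a stand-in, not the authors' NeRF-based model): a small network maps a ray's two-plane coordinates to a color, so storing and entropy-coding the network weights replaces storing the sub-aperture images, while querying new rays yields synthesized views.

```python
import torch
import torch.nn as nn

class TinyNeuralLightField(nn.Module):
    """Toy implicit representation: (u, v, s, t) ray coordinates -> RGB.

    Compressing the light field then amounts to storing (and coding) these
    weights instead of the images. Hypothetical sketch; the paper's
    NeRF-based representation is more involved.
    """
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rays):      # rays: (N, 4) with (u, v, s, t) in [0, 1]
        return self.net(rays)     # (N, 3) RGB

# Fitting: regress the predicted colors to the ground-truth colors of sampled rays.
model = TinyNeuralLightField()
rays = torch.rand(1024, 4)
target_rgb = torch.rand(1024, 3)
loss = torch.mean((model(rays) - target_rgb) ** 2)
```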
no code implementations • 11 Mar 2021 • Zhaolin Xiao, Jinglei Shi, Xiaoran Jiang, Christine Guillemot
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
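Refocusing from a light field is classically done by shift-and-sum: each sub-aperture image is translated in proportion to its angular offset and the results are averaged, so points at one chosen depth add up coherently and appear sharp. The minimal sketch below (assuming a (U, V, S, T, 3) NumPy light field and a hypothetical slope parameter) illustrates the operation the abstract refers to; it is not the paper's method.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, slope):
    """Shift-and-sum refocusing of a (U, V, S, T, 3) light field.

    `slope` selects the refocus depth: view (u, v) is translated by
    slope * (u - u_c, v - v_c) before averaging, so scene points at the
    corresponding depth align across views and come out sharp.
    """
    U, V = light_field.shape[:2]
    u_c, v_c = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros(light_field.shape[2:], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            d = (slope * (u - u_c), slope * (v - v_c), 0.0)  # no shift over color axis
            acc += nd_shift(light_field[u, v], d, order=1, mode="nearest")
    return acc / (U * V)
```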
no code implementations • CVPR 2020 • Jinglei Shi, Xiaoran Jiang, Christine Guillemot
In this paper, we present a learning-based framework for light field view synthesis from a subset of input views.