1 code implementation • CVPR 2023 • Tim Seizinger, Marcos V. Conde, Manuel Kolmet, Tom E. Bishop, Radu Timofte
Our method can render bokeh from an all-in-focus image, or transform the bokeh of one lens into the effect of another lens, without harming the sharp in-focus foreground regions of the image.
1 code implementation • CVPRW 2023 • Marcos V. Conde, Manuel Kolmet, Tim Seizinger, Tom E. Bishop, Radu Timofte, Xiangyu Kong, Dafeng Zhang, Jinlong Wu, Fan Wang, Juewen Peng, Zhiyu Pan, Chengxin Liu, Xianrui Luo, Huiqiang Sun, Liao Shen, Zhiguo Cao, Ke Xian, Chaowei Liu, Zigeng Chen, Xingyi Yang, Songhua Liu, Yongcheng Jing, Michael Bi Mi, Xinchao Wang, Zhihao Yang, Wenyi Lian, Siyuan Lai, Haichuan Zhang, Trung Hoang, Amirsaeed Yazdani, Vishal Monga, Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön, Yuxuan Zhao, Baoliang Chen, Yiqing Xu, JiXiang Niu
We present the new Bokeh Effect Transformation Dataset (BETD), and review the proposed solutions for this novel task at the NTIRE 2023 Bokeh Effect Transformation Challenge.
no code implementations • 24 Oct 2022 • Tzu-Jui Julius Wang, Jorma Laaksonen, Tomas Langer, Heikki Arponen, Tom E. Bishop
Moreover, on the other V-L downstream tasks considered, our WFH models are on par with models trained on paired V-L data, demonstrating the utility of unpaired data.
8 code implementations • CVPR 2020 • Mohamed Yousef, Tom E. Bishop
On IAM we even surpass single-line methods that use accurate localization information during training.
no code implementations • 11 May 2020 • Heikki Arponen, Tom E. Bishop
We address (ii) via a differentiable estimate of the KL divergence between network outputs and a binary target distribution, resulting in minimal information loss when the features are rounded to binary.
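The paper's exact KL estimator is not reproduced here; as a rough illustration under assumed details (a histogram over the unit interval and a two-spike "binary" target concentrating mass near 0 and 1), the idea of penalizing activation distributions that are far from binary can be sketched as:

```python
import numpy as np

def activation_histogram(x, bins=10):
    # Hypothetical helper: empirical distribution of feature activations
    # over [0, 1]. (The paper uses a differentiable estimate; a plain
    # histogram is used here purely for illustration.)
    hist, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def kl_to_binary_target(x, bins=10, eps=1e-8):
    # KL(p || q) between the activation distribution p and an assumed
    # "binary" target q that puts (almost) all mass in the outermost bins,
    # i.e. near 0 and near 1.
    p = activation_histogram(x, bins)
    q = np.full(bins, eps)
    q[0] = q[-1] = 0.5
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / q)))

# Activations already close to {0, 1} incur a small penalty;
# diffuse activations incur a large one.
near_binary = np.concatenate([np.full(50, 0.02), np.full(50, 0.98)])
diffuse = np.linspace(0.05, 0.95, 100)
assert kl_to_binary_target(near_binary) < kl_to_binary_target(diffuse)
```

Minimizing such a penalty pushes activations toward the endpoints of [0, 1], so rounding them to binary codes at retrieval time discards little information.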
no code implementations • 12 Aug 2019 • Heikki Arponen, Tom E. Bishop
Using class labels as a proxy for class similarity is a typical approach to training deep hashing systems for retrieval: pairs of samples from the same class are assigned a binary similarity of 1, and pairs from different classes a similarity of 0.
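This label-derived supervision can be made concrete with a small sketch (the array names are illustrative, not from the paper): the pairwise similarity matrix is simply an equality test on labels.

```python
import numpy as np

# Illustrative class labels for five samples.
labels = np.array([0, 0, 1, 2, 1])

# Binary pairwise similarity: S[i, j] = 1 iff samples i and j share a class.
S = (labels[:, None] == labels[None, :]).astype(int)

# Samples 0 and 1 share class 0, so they are "similar";
# samples 0 and 2 do not, so they are "dissimilar".
assert S[0, 1] == 1 and S[0, 2] == 0
# The matrix is symmetric with ones on the diagonal.
assert np.array_equal(S, S.T) and np.all(np.diag(S) == 1)
```

Such a matrix collapses all intra-class pairs to 1 and all inter-class pairs to 0, which is exactly the coarse similarity signal the abstract describes.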