no code implementations • 11 Jul 2024 • Jinxiu Liang, Bohan Yu, Yixin Yang, Yiming Han, Boxin Shi
Event cameras, mimicking the human retina, capture brightness changes with unparalleled temporal resolution and dynamic range.
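For readers unfamiliar with the event data format, the sketch below accumulates an asynchronous event stream of (x, y, timestamp, polarity) tuples into a simple 2D frame; the sensor resolution, time window, and random test data are illustrative assumptions rather than anything specific to this paper.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate signed event polarities within a time window into a 2D frame.

    events: array of shape (N, 4) with columns (x, y, t, polarity),
            polarity in {-1, +1}. Shapes and window are illustrative.
    """
    x, y, t, p = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 2], events[:, 3]
    mask = (t >= t_start) & (t < t_end)
    frame = np.zeros((height, width), dtype=np.float32)
    # np.add.at handles repeated pixel coordinates correctly (unbuffered add).
    np.add.at(frame, (y[mask], x[mask]), p[mask])
    return frame

# Example: 1000 random events on a 180x240 sensor over 10 ms.
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 240, 1000),
               rng.integers(0, 180, 1000),
               rng.uniform(0.0, 0.01, 1000),
               rng.choice([-1.0, 1.0], 1000)], axis=1)
print(events_to_frame(ev, 180, 240, 0.0, 0.01).shape)  # (180, 240)
```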
1 code implementation • 9 Apr 2024 • Yixin Yang, Jiangxin Dong, Jinhui Tang, Jinshan Pan
To explore this property for better spatial and temporal feature utilization, we develop a local attention module to aggregate the features from adjacent frames in a spatial-temporal neighborhood.
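As a rough illustration of aggregating features from adjacent frames within a spatio-temporal neighborhood, here is a minimal single-head local-attention sketch in PyTorch; the channel count, window size, and residual formulation are assumptions for illustration, not the paper's actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalSpatioTemporalAttention(nn.Module):
    """Aggregate features from adjacent frames inside a local spatial window.

    A minimal sketch of spatio-temporal local attention; sizes and the
    single-head formulation are illustrative assumptions.
    """

    def __init__(self, channels, window=3):
        super().__init__()
        self.window = window
        self.to_q = nn.Conv2d(channels, channels, 1)
        self.to_k = nn.Conv2d(channels, channels, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)

    def forward(self, center, neighbors):
        # center: (B, C, H, W); neighbors: (B, T, C, H, W) from adjacent frames.
        b, t, c, h, w = neighbors.shape
        q = self.to_q(center).view(b, c, h * w)                       # (B, C, HW)
        k = self.to_k(neighbors.reshape(b * t, c, h, w))
        v = self.to_v(neighbors.reshape(b * t, c, h, w))
        # Gather a window x window spatial neighborhood around every pixel of each neighbor frame.
        pad = self.window // 2
        k = F.unfold(k, self.window, padding=pad).view(b, t, c, self.window**2, h * w)
        v = F.unfold(v, self.window, padding=pad).view(b, t, c, self.window**2, h * w)
        k = k.permute(0, 4, 1, 3, 2).reshape(b, h * w, t * self.window**2, c)
        v = v.permute(0, 4, 1, 3, 2).reshape(b, h * w, t * self.window**2, c)
        q = q.permute(0, 2, 1).unsqueeze(2)                           # (B, HW, 1, C)
        attn = torch.softmax((q @ k.transpose(-1, -2)) / c**0.5, dim=-1)
        out = (attn @ v).squeeze(2).permute(0, 2, 1).reshape(b, c, h, w)
        return center + out                                           # residual aggregation

x = torch.randn(1, 16, 32, 32)
nbrs = torch.randn(1, 2, 16, 32, 32)
print(LocalSpatioTemporalAttention(16)(x, nbrs).shape)  # torch.Size([1, 16, 32, 32])
```

Restricting attention to a small window keeps the cost manageable while still letting each pixel borrow information from nearby positions in neighboring frames.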
1 code implementation • 17 Feb 2024 • Yixin Yang, Zheng Li, Qingxiu Dong, Heming Xia, Zhifang Sui
Understanding the deep semantics of images is essential in the era dominated by social media.
no code implementations • CVPR 2024 • Yixin Yang, Jinxiu Liang, Bohan Yu, Yan Chen, Jimmy S. Ren, Boxin Shi
Event cameras, with their high temporal resolution, dynamic range, and low power consumption, are particularly well suited to time-sensitive applications such as deblurring and frame interpolation.
no code implementations • 13 Dec 2023 • Peiqi Duan, Boyu Li, Yixin Yang, Hanyue Lou, Minggui Teng, Yi Ma, Boxin Shi
Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
1 code implementation • CVPR 2023 • Yixin Yang, Jin Han, Jinxiu Liang, Imari Sato, Boxin Shi
Limited by the trade-off between frame rate and exposure time when capturing moving scenes with conventional cameras, frame-based HDR video reconstruction suffers from scene-dependent exposure ratio balancing and ghosting artifacts.
no code implementations • CVPR 2023 • Hanyue Lou, Minggui Teng, Yixin Yang, Boxin Shi
Given an RGB image focused at an arbitrary distance, we exploit the high temporal resolution of event streams, from which we automatically select refocusing timestamps and reconstruct the corresponding refocused images from events to form a focal stack.
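One common building block behind event-based image reconstruction is direct integration in the log-intensity domain: each event adds or subtracts roughly one contrast threshold at its pixel. The sketch below propagates an image from a reference time to a target (refocusing) timestamp in this way; the contrast value and the absence of denoising or learned refinement are simplifying assumptions, so this is a toy version of the general idea rather than the paper's reconstruction.

```python
import numpy as np

def reconstruct_at_timestamp(gray, events, t_ref, t_target, contrast=0.2):
    """Propagate an intensity image from t_ref to t_target by integrating events.

    gray:   (H, W) grayscale image captured at time t_ref, values in (0, 1].
    events: (N, 4) array with columns (x, y, t, polarity in {-1, +1}).
    The contrast threshold is an illustrative assumption.
    """
    log_img = np.log(np.clip(gray, 1e-3, None))
    lo, hi = sorted((t_ref, t_target))
    sel = events[(events[:, 2] >= lo) & (events[:, 2] < hi)]
    x, y, p = sel[:, 0].astype(int), sel[:, 1].astype(int), sel[:, 3]
    sign = 1.0 if t_target >= t_ref else -1.0   # integrate forward or backward in time
    np.add.at(log_img, (y, x), sign * contrast * p)
    return np.exp(log_img)

# A focal stack could then be formed by calling this once per selected
# refocusing timestamp while the lens sweeps focus.
```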
1 code implementation • ICCV 2023 • Jinxiu Liang, Yixin Yang, Boyu Li, Peiqi Duan, Yong Xu, Boxin Shi
With frame-based cameras, capturing fast-moving scenes without suffering from blur often comes at the cost of low SNR and low contrast.
no code implementations • 5 Dec 2022 • Yixin Yang, Zhongzheng Peng, Xiaoyu Du, Zhulin Tao, Jinhui Tang, Jinshan Pan
To overcome this problem, we further develop a mixed-expert block that extracts semantic information for modeling object boundaries across frames, so that the semantic image prior can better guide the colorization process.
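For intuition, a mixed-expert (mixture-of-experts) block typically runs several expert branches in parallel and blends their outputs with learned gating weights. The sketch below is a generic PyTorch version under assumed layer sizes and gating design; it is not the block proposed in the paper.

```python
import torch
import torch.nn as nn

class MixedExpertBlock(nn.Module):
    """Route features through several expert branches and blend them with
    learned per-pixel gating weights (generic mixture-of-experts sketch)."""

    def __init__(self, channels, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(num_experts)
        )
        # Gating network predicts one weight map per expert.
        self.gate = nn.Conv2d(channels, num_experts, 1)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=1)                  # (B, E, H, W)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)    # (B, E, C, H, W)
        return (weights.unsqueeze(2) * outputs).sum(dim=1) + x        # residual blend

print(MixedExpertBlock(32)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```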
no code implementations • ICCV 2021 • Jin Han, Yixin Yang, Chu Zhou, Chao Xu, Boxin Shi
To reconstruct high-resolution intensity images from event data, in this paper we propose EvIntSR-Net, which converts event data into multiple latent intensity frames to achieve super-resolution on intensity images.
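Schematically, the pipeline goes from an event representation to several latent intensity frames, which are then fused and upsampled. The toy PyTorch sketch below follows that outline with assumed layer choices, latent-frame count, and a x2 PixelShuffle upscaler; it is not the actual EvIntSR-Net architecture.

```python
import torch
import torch.nn as nn

class EventsToLatentFramesSR(nn.Module):
    """Toy pipeline in the spirit of events -> latent intensity frames -> SR.

    All layer choices and hyperparameters here are illustrative assumptions.
    """

    def __init__(self, event_bins=5, latent_frames=3, scale=2):
        super().__init__()
        # Stage 1: map a stack of event bins to several latent intensity frames.
        self.to_latent = nn.Sequential(
            nn.Conv2d(event_bins, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, latent_frames, 3, padding=1), nn.Sigmoid())
        # Stage 2: fuse the latent frames and upsample with PixelShuffle.
        self.fuse_sr = nn.Sequential(
            nn.Conv2d(latent_frames, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, event_voxel):
        latent = self.to_latent(event_voxel)   # (B, latent_frames, H, W)
        return self.fuse_sr(latent)            # (B, 1, H*scale, W*scale)

net = EventsToLatentFramesSR()
print(net(torch.randn(1, 5, 32, 32)).shape)  # torch.Size([1, 1, 64, 64])
```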
no code implementations • 23 Nov 2020 • Abel Díaz Berenguer, Hichem Sahli, Boris Joukovsky, Maryna Kvasnytsia, Ine Dirks, Mitchel Alioscha-Perez, Nikos Deligiannis, Panagiotis Gonidakis, Sebastián Amador Sánchez, Redona Brahimetaj, Evgenia Papavasileiou, Jonathan Cheung-Wai Chana, Fei Li, Shangzhen Song, Yixin Yang, Sofie Tilborghs, Siri Willems, Tom Eelbode, Jeroen Bertels, Dirk Vandermeulen, Frederik Maes, Paul Suetens, Lucas Fidon, Tom Vercauteren, David Robben, Arne Brys, Dirk Smeets, Bart Ilsen, Nico Buls, Nina Watté, Johan de Mey, Annemiek Snoeckx, Paul M. Parizel, Julien Guiot, Louis Deprez, Paul Meunier, Stefaan Gryspeerdt, Kristof De Smet, Bart Jansen, Jef Vandemeulebroucke
Our motivating application is a real-world problem: COVID-19 classification from CT imaging, for which we present an explainable deep learning approach based on a semi-supervised classification pipeline that employs variational autoencoders to extract efficient feature embeddings.
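As a rough sketch of the general recipe (a variational autoencoder producing a latent embedding that also feeds a classifier, trained with labels only where available), the PyTorch code below may help; the layer sizes, flattened-input assumption, and loss weighting are illustrative and do not reflect the actual pipeline in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEFeatureClassifier(nn.Module):
    """Encode an input into a latent embedding with a VAE and classify from it.

    Layer sizes and the flattened 1D input are illustrative assumptions.
    """

    def __init__(self, in_dim=4096, latent_dim=64, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim))
        self.cls = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar, self.cls(mu)

def semi_supervised_loss(model, x, y=None, beta=1e-3):
    """Reconstruction + KL on all samples; cross-entropy only when labels exist."""
    recon, mu, logvar, logits = model(x)
    loss = F.mse_loss(recon, x) + beta * (-0.5 * (1 + logvar - mu**2 - logvar.exp()).mean())
    if y is not None:
        loss = loss + F.cross_entropy(logits, y)
    return loss

model = VAEFeatureClassifier()
print(semi_supervised_loss(model, torch.randn(8, 4096), torch.randint(0, 2, (8,))).item())
```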