1 code implementation • 20 Apr 2025 • Zheng Chen, Kai Liu, Jue Gong, Jingkai Wang, Lei Sun, Zongwei Wu, Radu Timofte, Yulun Zhang, Xiangyu Kong, Xiaoxuan Yu, Hyunhee Park, Suejin Han, Hakjae Jeon, Dafeng Zhang, Hyung-Ju Chun, Donghun Ryou, Inju Ha, Bohyung Han, Lu Zhao, Yuyi Zhang, Pengyu Yan, Jiawei Hu, Pengwei Liu, Fengjun Guo, Hongyuan Yu, Pufan Xu, Zhijuan Huang, Shuyuan Cui, Peng Guo, Jiahui Liu, Dongkai Zhang, Heng Zhang, Huiyuan Fu, Huadong Ma, Yanhui Guo, Sisi Tian, Xin Liu, Jinwen Liang, Jie Liu, Jie Tang, Gangshan Wu, Zeyu Xiao, Zhuoyuan Li, Yinxiang Zhang, Wenxuan Cai, Vijayalaxmi Ashok Aralikatti, Nikhil Akalwadi, G Gyaneshwar Rao, Chaitra Desai, Ramesh Ashok Tabib, Uma Mudenagudi, Marcos V. Conde, Alejandro Merino, Bruno Longarela, Javier Abad, Weijun Yuan, Zhan Li, Zhanglu Chen, Boyang Yao, Aagam Jain, Milan Kumar Singh, Ankit Kumar, Shubh Kawa, Divyavardhan Singh, Anjali Sarvaiya, Kishor Upla, Raghavendra Ramachandra, Chia-Ming Lee, Yu-Fan Lin, Chih-Chung Hsu, Risheek V Hiremath, Yashaswini Palani, YuXuan Jiang, Qiang Zhu, Siyue Teng, Fan Zhang, Shuyuan Zhu, Bing Zeng, David Bull, Jingwei Liao, Yuqing Yang, Wenda Shao, Junyi Zhao, Qisheng Xu, Kele Xu, Sunder Ali Khowaja, Ik Hyun Lee, Snehal Singh Tomar, Rajarshi Ray, Klaus Mueller, Sachin Chaudhary, Surya Vashisth, Akshay Dudhane, Praful Hambarde, Satya Naryan Tazi, Prashant Patil, Santosh Kumar Vipparthi, Subrahmanyam Murala, Bilel Benjdira, Anas M. Ali, Wadii Boulila, Zahra Moammeri, Ahmad Mahmoudi-Aznaveh, Ali Karbasi, Hossein Motamednia, Liangyan Li, Guanhua Zhao, Kevin Le, Yimo Ning, Haoxuan Huang, Jun Chen
This paper presents the NTIRE 2025 image super-resolution ($\times$4) challenge, one of the associated competitions of the 10th NTIRE Workshop at CVPR 2025.
1 code implementation • 17 Apr 2025 • Xin Li, Kun Yuan, Bingchen Li, Fengbin Guan, Yizhen Shao, Zihao Yu, Xijun Wang, Yiting Lu, Wei Luo, Suhang Yao, Ming Sun, Chao Zhou, Zhibo Chen, Radu Timofte, Yabin Zhang, Ao-Xiang Zhang, Tianwu Zhi, Jianzhao Liu, Yang Li, Jingwen Xu, Yiting Liao, Yushen Zuo, Mingyang Wu, Renjie Li, Shengyun Zhong, Zhengzhong Tu, Yufan Liu, Xiangguang Chen, Zuowei Cao, Minhao Tang, Shan Liu, Kexin Zhang, Jingfen Xie, Yan Wang, Kai Chen, Shijie Zhao, Yunchen Zhang, Xiangkai Xu, Hong Gao, Ji Shi, Yiming Bao, Xiugang Dong, Xiangsheng Zhou, Yaofeng Tu, Ying Liang, Yiwen Wang, Xinning Chai, Yuxuan Zhang, Zhengxue Cheng, Yingsheng Qin, Yucai Yang, Rong Xie, Li Song, Wei Sun, Kang Fu, Linhan Cao, Dandan Zhu, Kaiwei Zhang, Yucheng Zhu, ZiCheng Zhang, Menghan Hu, Xiongkuo Min, Guangtao Zhai, Zhi Jin, Jiawei Wu, Wei Wang, Wenjian Zhang, Yuhai Lan, Gaoxiong Yi, Hengyuan Na, Wang Luo, Di wu, MingYin Bai, Jiawang Du, Zilong Lu, Zhenyu Jiang, Hui Zeng, Ziguan Cui, Zongliang Gan, Guijin Tang, Xinglin Xie, Kehuan Song, Xiaoqiang Lu, Licheng Jiao, Fang Liu, Xu Liu, Puhua Chen, Ha Thu Nguyen, Katrien De Moor, Seyed Ali Amirshahi, Mohamed-Chaker Larabi, Qi Tang, Linfeng He, Zhiyong Gao, Zixuan Gao, Guohua Zhang, Zhiye Huang, Yi Deng, Qingmiao Jiang, Lu Chen, Yi Yang, Xi Liao, Nourine Mohammed Nadir, YuXuan Jiang, Qiang Zhu, Siyue Teng, Fan Zhang, Shuyuan Zhu, Bing Zeng, David Bull, Meiqin Liu, Chao Yao, Yao Zhao
This paper presents a review for the NTIRE 2025 Challenge on Short-form UGC Video Quality Assessment and Enhancement.
no code implementations • 16 Apr 2025 • Joanne Lin, Crispian Morris, Ruirui Lin, Fan Zhang, David Bull, Nantheera Anantrasirichai
Low-light conditions pose significant challenges for both human and machine annotation.
3 code implementations • 14 Apr 2025 • Bin Ren, Hang Guo, Lei Sun, Zongwei Wu, Radu Timofte, Yawei Li, Yao Zhang, Xinning Chai, Zhengxue Cheng, Yingsheng Qin, Yucai Yang, Li Song, Hongyuan Yu, Pufan Xu, Cheng Wan, Zhijuan Huang, Peng Guo, Shuyuan Cui, Chenjun Li, Xuehai Hu, Pan Pan, Xin Zhang, Heng Zhang, Qing Luo, Linyan Jiang, Haibo Lei, Qifang Gao, Yaqing Li, Weihua Luo, Tsing Li, Qing Wang, Yi Liu, Yang Wang, Hongyu An, Liou Zhang, Shijie Zhao, Lianhong Song, Long Sun, Jinshan Pan, Jiangxin Dong, Jinhui Tang, Jing Wei, Mengyang Wang, Ruilong Guo, Qian Wang, Qingliang Liu, Yang Cheng, Davinci, Enxuan Gu, Pinxin Liu, Yongsheng Yu, Hang Hua, Yunlong Tang, Shihao Wang, ZhiYu Zhang, Yukun Yang, Jiyu Wu, Jiancheng Huang, Yifan Liu, Yi Huang, Shifeng Chen, Rui Chen, Yi Feng, Mingxi Li, Cailu Wan, XiangJi Wu, Zibin Liu, Jinyang Zhong, Kihwan Yoon, Ganzorig Gankhuyag, Shengyun Zhong, Mingyang Wu, Renjie Li, Yushen Zuo, Zhengzhong Tu, Zongang Gao, Guannan Chen, Yuan Tian, Wenhui Chen, Weijun Yuan, Zhan Li, Yihang Chen, Yifan Deng, Ruting Deng, Yilin Zhang, Huan Zheng, Yanyan Wei, Wenxuan Zhao, Suiyi Zhao, Fei Wang, Kun Li, Yinggan Tang, Mengjie Su, Jae-Hyeon Lee, Dong-Hyeop Son, Ui-Jin Choi, Tiancheng Shao, Yuqing Zhang, Mengcheng Ma, Donggeun Ko, Youngsang Kwak, Jiun Lee, Jaehwa Kwak, YuXuan Jiang, Qiang Zhu, Siyue Teng, Fan Zhang, Shuyuan Zhu, Bing Zeng, David Bull, Jing Hu, Hui Deng, Xuan Zhang, Lin Zhu, Qinrui Fan, Weijian Deng, Junnan Wu, Wenqin Deng, Yuquan Liu, Zhaohong Xu, Jameer Babu Pinjari, Kuldeep Purohit, Zeyu Xiao, Zhuoyuan Li, Surya Vashisth, Akshay Dudhane, Praful Hambarde, Sachin Chaudhary, Satya Naryan Tazi, Prashant Patil, Santosh Kumar Vipparthi, Subrahmanyam Murala, Wei-Chen Shen, I-Hsiang Chen, Yunzhe Xu, Chen Zhao, Zhizhou Chen, Akram Khatami-Rizi, Ahmad Mahmoudi-Aznaveh, Alejandro Merino, Bruno Longarela, Javier Abad, Marcos V. Conde, Simone Bianco, Luca Cogo, Gianmarco Corti
This paper presents a comprehensive review of the NTIRE 2025 Challenge on Single-Image Efficient Super-Resolution (ESR).
no code implementations • 25 Mar 2025 • Ge Gao, Siyue Teng, Tianhao Peng, Fan Zhang, David Bull
While video compression based on implicit neural representations (INRs) has recently demonstrated great potential, existing INR-based video codecs still cannot achieve state-of-the-art (SOTA) performance compared to their conventional or autoencoder-based counterparts given the same coding configuration.
no code implementations • 17 Mar 2025 • YuXuan Jiang, Chengxi Zeng, Siyue Teng, Fan Zhang, Xiaoqing Zhu, Joel Sole, David Bull
Our approach is based on a two-stage training methodology and a hierarchical encoding mechanism.
1 code implementation • 24 Jan 2025 • Guoxi Huang, Nantheera Anantrasirichai, Fei Ye, Zipeng Qi, Ruirui Lin, Qirui Yang, David Bull
In image enhancement tasks, such as low-light and underwater image enhancement, a degraded image can correspond to multiple plausible target images due to dynamic photography conditions, such as variations in illumination.
no code implementations • 6 Jan 2025 • Nantheera Anantrasirichai, Fan Zhang, David Bull
This paper explores the significant technological shifts since our previous review in 2022, highlighting how these developments have expanded creative opportunities and efficiency.
no code implementations • 4 Dec 2024 • YuXuan Jiang, Ho Man Kwan, Tianhao Peng, Ge Gao, Fan Zhang, Xiaoqing Zhu, Joel Sole, David Bull
Recent advances in implicit neural representations (INRs) have shown significant promise in modeling visual signals for various low-vision tasks including image super-resolution (ISR).
no code implementations • 20 Nov 2024 • YuXuan Jiang, Jakub Nawała, Chen Feng, Fan Zhang, Xiaoqing Zhu, Joel Sole, David Bull
To address this issue, this paper proposes a low-complexity SR method, RTSR, designed to enhance the visual quality of compressed video content, focusing on resolution up-scaling from a) 360p to 1080p and from b) 540p to 4K.
1 code implementation • 17 Nov 2024 • Ge Gao, Adrian Azzarelli, Ho Man Kwan, Nantheera Anantrasirichai, Fan Zhang, Oliver Moolan-Feroze, David Bull
However, the development and validation of efficient 3D data compression methods are constrained by the lack of comprehensive and high-quality volumetric video datasets, which typically require much more effort to acquire and consume increased resources compared to 2D image and video databases.
1 code implementation • 2 Oct 2024 • Haoran Wang, Nantheera Anantrasirichai, Fan Zhang, David Bull
3D Gaussian splatting (3DGS) offers the capability to achieve real-time high quality 3D scene rendering.
no code implementations • 11 Sep 2024 • Ho Man Kwan, Ge Gao, Fan Zhang, Andrew Gower, David Bull
In this paper, rather than focusing on representation architectures as in many existing works, we propose a novel INR-based video compression framework, Neural Video Representation Compression (NVRC), targeting compression of the representation.
no code implementations • 3 Sep 2024 • Paul Hill, Nantheera Anantrasirichai, Alin Achim, David Bull
The influence of atmospheric turbulence on acquired imagery makes image interpretation and scene analysis extremely difficult and reduces the effectiveness of conventional approaches for classifying and tracking objects of interest in the scene.
no code implementations • 2 Sep 2024 • Ge Gao, Ho Man Kwan, Fan Zhang, David Bull
Neural video compression has recently demonstrated significant potential to compete with conventional video codecs in terms of rate-quality performance.
no code implementations • 13 Aug 2024 • Zihao Qi, Chen Feng, Fan Zhang, Xiaozhong Xu, Shan Liu, David Bull
Based on this collected subjective data, we benchmarked the performance of 10 full-reference and 11 no-reference quality metrics.
no code implementations • 9 Aug 2024 • Siyue Teng, YuXuan Jiang, Ge Gao, Fan Zhang, Thomas Davis, Zoe Liu, David Bull
Recent advances in video compression have seen significant coding performance improvements with the development of new standards and learning-based video codecs.
1 code implementation • 6 Aug 2024 • Jakub Nawała, YuXuan Jiang, Fan Zhang, Xiaoqing Zhu, Joel Sole, David Bull
Deep learning is now playing an important role in enhancing the performance of conventional hybrid video codecs.
1 code implementation • 16 Jul 2024 • Xinyi Wang, Angeliki Katsenou, David Bull
In this paper, we propose ReLaX-VQA, a novel No-Reference Video Quality Assessment (NR-VQA) model that aims to address the challenges of evaluating the quality of diverse video content without reference to the original uncompressed videos.
Ranked #1 on Video Quality Assessment on LIVE-VQC (using extra training data).
no code implementations • 1 Jul 2024 • Crispian Morris, Nantheera Anantrasirichai, Fan Zhang, David Bull
In many real-world scenarios, recorded videos suffer from accidental focus blur, and while video deblurring methods exist, most specifically target motion blur.
no code implementations • 31 May 2024 • Chen Feng, Duolikun Danier, Fan Zhang, Alex Mackin, Andrew Collins, David Bull
In this paper, we propose a Multiple Visual Artifact Detector, MVAD, for video streaming, which for the first time can detect multiple artifacts using a single framework that does not rely on video quality assessment models.
no code implementations • 14 May 2024 • Tianhao Peng, Chen Feng, Duolikun Danier, Fan Zhang, Benoit Vallade, Alex Mackin, David Bull
The proposed method, RMT-BVQA, has been evaluated on the VDPVE (VQA Dataset for Perceptual Video Enhancement) database through a five-fold cross validation.
no code implementations • 15 Apr 2024 • YuXuan Jiang, Chen Feng, Fan Zhang, David Bull
Knowledge distillation (KD) has emerged as a promising technique in deep learning, typically employed to enhance a compact student network by learning from its high-performance but more complex teacher variant.
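As background for the entry above, the classic soft-target KD loss popularised by Hinton et al. can be sketched in a few lines. This is a generic illustration, not the specific distillation scheme proposed in the paper; the temperature `T` and the pure-Python `softmax` are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's, scaled by T^2 so gradient magnitudes are preserved.
    p = softmax(teacher_logits, T)  # teacher "soft targets"
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

In practice this term is mixed with a standard task loss on the ground-truth labels.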
no code implementations • 4 Mar 2024 • Ruirui Lin, Nantheera Anantrasirichai, Alexandra Malyugina, David Bull
Distortions caused by low-light conditions are not only visually unpleasant but also degrade the performance of computer vision tasks.
no code implementations • 28 Feb 2024 • Joanne Lin, Nantheera Anantrasirichai, David Bull
Instance segmentation for low-light imagery remains largely unexplored due to the challenges imposed by such conditions, for example, shot noise due to low photon count, color distortions and reduced contrast.
1 code implementation • 10 Feb 2024 • Angeliki Katsenou, Xinyi Wang, Daniel Schien, David Bull
Adaptive video streaming is a key enabler for optimising the delivery of offline encoded video content.
1 code implementation • 3 Feb 2024 • Nantheera Anantrasirichai, Ruirui Lin, Alexandra Malyugina, David Bull
Low-light videos often exhibit spatiotemporal incoherent noise, leading to poor visibility and compromised performance across various computer vision applications.
1 code implementation • 2 Feb 2024 • Ho Man Kwan, Fan Zhang, Andrew Gower, David Bull
In this paper, we extend their application for the first time to immersive (multi-view) videos by proposing MV-HiNeRV, a new INR-based immersive video codec.
no code implementations • 31 Dec 2023 • YuXuan Jiang, Jakub Nawala, Fan Zhang, David Bull
Deep learning techniques have been applied in the context of image super-resolution (SR), achieving remarkable advances in terms of reconstruction performance.
no code implementations • 19 Dec 2023 • Zihao Qi, Chen Feng, Duolikun Danier, Fan Zhang, Xiaozhong Xu, Shan Liu, David Bull
In this work, we observe that existing full-/no-reference quality metrics fail to accurately predict the perceptual quality difference between transcoded UGC content and the corresponding unpristine references.
1 code implementation • 19 Dec 2023 • Angeliki Katsenou, Xinyi Wang, Daniel Schien, David Bull
The environmental impact of video streaming services has been discussed as part of the strategies towards sustainable information and communication technologies.
no code implementations • 14 Dec 2023 • Chen Feng, Duolikun Danier, Haoran Wang, Fan Zhang, Benoit Vallade, Alex Mackin, David Bull
Deep learning-based video quality assessment (deep VQA) has demonstrated significant potential in surpassing conventional metrics, with promising improvements in terms of correlation with human perception.
no code implementations • 14 Dec 2023 • Chen Feng, Duolikun Danier, Fan Zhang, Alex Mackin, Andy Collins, David Bull
Professionally generated content (PGC) streamed online can contain visual artefacts that degrade the quality of user experience.
no code implementations • 5 Dec 2023 • Tianhao Peng, Ge Gao, Heming Sun, Fan Zhang, David Bull
In recent years, end-to-end learnt video codecs have demonstrated their potential to compete with conventional coding algorithms in terms of compression efficiency.
no code implementations • 16 Sep 2023 • Alexandra Malyugina, Nantheera Anantrasirichai, David Bull
Despite extensive research conducted in the field of image denoising, many algorithms still heavily depend on supervised learning and their effectiveness primarily relies on the quality and diversity of training data.
1 code implementation • 13 Aug 2023 • Xinyi Wang, Angeliki Katsenou, David Bull
Preliminary results indicate that high correlations are achieved by using only deep features, while adding saliency does not always boost performance.
1 code implementation • NeurIPS 2023 • Ho Man Kwan, Ge Gao, Fan Zhang, Andrew Gower, David Bull
Learning-based video compression is currently a popular research topic, offering the potential to compete with conventional standard video codecs.
Ranked #1 on Video Reconstruction on UVG.
2 code implementations • 16 Mar 2023 • Duolikun Danier, Fan Zhang, David Bull
Existing works on video frame interpolation (VFI) mostly employ deep neural networks that are trained by minimizing the L1, L2, or deep feature space distance (e.g. VGG loss) between their outputs and ground-truth frames.
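The pixel-space distortion losses named in the entry above can be sketched on flattened frames; the VGG feature-space distance is omitted since it requires a pretrained network. A minimal illustration, not the paper's training code:

```python
def l1_loss(pred, target):
    # Mean absolute error between two flattened frames.
    assert len(pred) == len(target)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    # Mean squared error between two flattened frames.
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```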
2 code implementations • 3 Oct 2022 • Duolikun Danier, Fan Zhang, David Bull
In order to narrow this research gap, we have developed a new video quality database named BVI-VFI, which contains 540 distorted sequences generated by applying five commonly used VFI algorithms to 36 diverse source videos with various spatial resolutions and frame rates.
no code implementations • 9 Aug 2022 • Alexandra Malyugina, Nantheera Anantrasirichai, David Bull
The loss function is a combination of $\ell_1$ or $\ell_2$ losses with the new persistence-based topological loss.
1 code implementation • 18 Jul 2022 • Chen Feng, Zihao Qi, Duolikun Danier, Fan Zhang, Xiaozhong Xu, Shan Liu, David Bull
In this work, we modify the MFRNet network architecture to enable multiple frame processing, and the new network, multi-frame MFRNet, has been integrated into the EBDA framework using two Versatile Video Coding (VVC) host codecs: VTM 16.2 and the Fraunhofer Versatile Video Encoder (VVenC 1.4.0).
no code implementations • 17 Jul 2022 • Duolikun Danier, Fan Zhang, David Bull
Video frame interpolation (VFI) serves as a useful tool for many video processing applications.
no code implementations • 19 May 2022 • Duolikun Danier, Chen Feng, Fan Zhang, David Bull
This paper describes a CNN-based multi-frame post-processing approach based on a perceptually-inspired Generative Adversarial Network architecture, CVEGAN.
no code implementations • 4 Mar 2022 • Odysseas Pappas, Juliet Biggs, David Bull, Alin Achim, Nantheera Anantrasirichai
Monitoring of ground movement close to the rail corridor, such as that associated with landslips caused by ground subsidence and/or uplift, is of great interest for the detection and prevention of possible railway faults.
no code implementations • 25 Feb 2022 • Angeliki Katsenou, Fan Zhang, David Bull
In recent years, resolution adaptation based on deep neural networks has enabled significant performance gains for conventional (2D) video codecs.
no code implementations • 17 Feb 2022 • Chen Feng, Duolikun Danier, Fan Zhang, David Bull
In recent years, deep learning techniques have shown significant potential for improving video quality assessment (VQA), achieving higher correlation with subjective opinions compared to conventional approaches.
no code implementations • 15 Feb 2022 • Duolikun Danier, Fan Zhang, David Bull
This paper presents a new deformable convolution-based video frame interpolation (VFI) method, using a coarse to fine 3D CNN to enhance the multi-flow prediction.
no code implementations • 15 Feb 2022 • Duolikun Danier, Fan Zhang, David Bull
Video frame interpolation (VFI) is one of the fundamental research areas in video processing and there has been extensive research on novel and enhanced interpolation algorithms.
no code implementations • 30 Nov 2021 • Chen Feng, Duolikun Danier, Charlie Tan, Fan Zhang, David Bull
This paper presents a deep learning-based video compression framework (ViSTRA3).
3 code implementations • CVPR 2022 • Duolikun Danier, Fan Zhang, David Bull
Video frame interpolation (VFI) is currently a very active research topic, with applications spanning computer vision, post production and video encoding.
Ranked #1 on Video Frame Interpolation on SNU-FILM (easy).
no code implementations • 1 Jun 2021 • Annika Wong, Nantheera Anantrasirichai, Thanarat H. Chalidabhongse, Duangdao Palasuwan, Attakorn Palasuwan, David Bull
This paper presents an automated process that utilises the advantages of machine learning to increase the capacity and standardisation of cell abnormality detection, and analyses its performance.
no code implementations • 18 Mar 2021 • Alex Mackin, Di Ma, Fan Zhang, David Bull
Bit depth adaptation, where the bit depth of a video sequence is reduced before transmission and up-sampled during display, can potentially reduce data rates with limited impact on perceptual quality.
no code implementations • 10 Mar 2021 • Fan Zhang, Angeliki Katsenou, Christos Bampis, Lukas Krasula, Zhi Li, David Bull
VMAF is a machine learning based video quality assessment method, originally designed for streaming applications, which combines multiple quality metrics and video features through SVM regression.
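As a rough illustration of the metric fusion described above, the sketch below combines elementary per-frame quality scores linearly and mean-pools them over frames. Real VMAF uses a trained nu-SVR over specific elementary features (VIF, DLM, motion), so the linear form and the `weights` and `bias` values here are purely hypothetical stand-ins:

```python
def fuse_metrics(features, weights, bias):
    # Linear fusion of per-frame elementary metric scores; an
    # illustrative stand-in for VMAF's trained SVM regressor.
    return sum(w * f for w, f in zip(weights, features)) + bias

def pooled_score(per_frame_features, weights, bias):
    # Frame-level prediction followed by temporal mean pooling,
    # clipped to the 0-100 scale that VMAF reports.
    scores = [fuse_metrics(f, weights, bias) for f in per_frame_features]
    mean = sum(scores) / len(scores)
    return max(0.0, min(100.0, mean))
```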
no code implementations • 26 Feb 2021 • Duolikun Danier, David Bull
Our study shows that video texture has a significant impact on the performance of frame interpolation models, and that it is beneficial to have separate models specifically adapted to these texture classes rather than training a single model that tries to learn generic motion.
no code implementations • 5 Jan 2021 • N. Anantrasirichai, David Bull
Experimental results show that our method outperforms existing approaches in terms of subjective quality and that it is robust to variations in brightness levels and noise.
no code implementations • 3 Oct 2020 • Fan Zhang, David Hall, Tao Xu, Stephen Boyle, David Bull
Methods for environmental image capture, 3D reconstruction (photogrammetry) and the creation of foreground assets are presented along with a flexible and user-friendly simulation interface.
no code implementations • 24 Jul 2020 • Nantheera Anantrasirichai, David Bull
We therefore conclude that, in the context of creative industries, maximum benefit from AI will be derived where its focus is human centric -- where it is designed to augment, rather than replace, human creativity.
no code implementations • 7 May 2020 • Nantheera Anantrasirichai, Juliet Biggs, Krisztina Kelevitz, Zahra Sadeghi, Tim Wright, James Thompson, Alin Achim, David Bull
The large volumes of Sentinel-1 data produced over Europe are being used to develop pan-national ground motion services.
no code implementations • 22 Dec 2019 • Jing Gao, N. Anantrasirichai, David Bull
This paper describes a novel deep learning-based method for mitigating the effects of atmospheric distortion.
1 code implementation • 17 May 2019 • Nantheera Anantrasirichai, Juliet Biggs, Fabien Albino, David Bull
As only a small proportion of volcanoes are deforming and atmospheric noise is ubiquitous, the use of machine learning for detecting volcanic unrest is more challenging.
1 code implementation • 1 Apr 2019 • N. Anantrasirichai, David Bull
As a data-driven method, the performance of deep convolutional neural networks (CNN) relies heavily on training data.
no code implementations • 10 Aug 2018 • N. Anantrasirichai, Alin Achim, David Bull
This paper describes a new method for mitigating the effects of atmospheric distortion on observed sequences that include large moving objects.