no code implementations • ECCV 2020 • Yujun Cai, Lin Huang, Yiwei Wang, Tat-Jen Cham, Jianfei Cai, Junsong Yuan, Jun Liu, Xu Yang, Yiheng Zhu, Xiaohui Shen, Ding Liu, Jing Liu, Nadia Magnenat Thalmann
Last, in order to incorporate a general motion space for high-quality prediction, we build a memory-based dictionary, which aims to preserve the global motion patterns in training data to guide the predictions.
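No code accompanies this entry, so purely as an illustration of the idea, the sketch below shows a memory-based dictionary read out by soft attention to retrieve stored motion patterns; the module name, slot count, and feature size are assumptions for the example, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionMemory(nn.Module):
    """Hypothetical memory-based dictionary: learnable slots store global
    motion patterns, and an encoded motion feature reads them via soft attention."""
    def __init__(self, num_slots=256, feat_dim=128):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.values = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.query_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, x):                      # x: (batch, feat_dim)
        q = self.query_proj(x)
        attn = F.softmax(q @ self.keys.t() / self.keys.size(1) ** 0.5, dim=-1)
        read = attn @ self.values              # retrieved motion prior
        return x + read                        # fuse the prior with the input feature

memory = MotionMemory()
print(memory(torch.randn(4, 128)).shape)       # torch.Size([4, 128])
```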
no code implementations • 9 Jan 2025 • Shuo Tong, Han Liu, Runyuan Guo, Xueqiong Tian, Wenqing Wang, Ding Liu, Youmin Zhang
To address these challenges, we propose a general framework named LLM-TKESS (large language model for text-based knowledge-embedded soft sensing), harnessing the powerful general problem-solving capabilities, cross-modal knowledge transfer abilities, and few-shot capabilities of LLM for enhanced soft sensing modeling.
no code implementations • 6 Jan 2025 • Shuo Tong, Han Liu, Runyuan Guo, Wenqing Wang, Xueqiong Tian, Lingyun Wei, Lin Zhang, Huayong Wu, Ding Liu, Youmin Zhang
To achieve this, we propose a novel framework called the Few-shot Uncertainty-aware and self-Explaining Soft Sensor (LLM-FUESS), which includes the Zero-shot Auxiliary Variable Selector (LLM-ZAVS) and the Uncertainty-aware Few-shot Soft Sensor (LLM-UFSS).
1 code implementation • 11 Nov 2024 • Zhuorui Cui, Shengze Dong, Ding Liu
Our White-Box Diffusion Transformer combines the generative capabilities of diffusion models with the mathematical interpretability of the white-box transformer.
no code implementations • 4 Nov 2024 • Kumara Kahatapitiya, Haozhe Liu, Sen He, Ding Liu, Menglin Jia, Chenyang Zhang, Michael S. Ryoo, Tian Xie
Generating temporally-consistent high-fidelity videos can be computationally expensive, especially over longer temporal spans.
no code implementations • 26 Oct 2024 • Haozhe Liu, Shikun Liu, Zijian Zhou, Mengmeng Xu, Yanping Xie, Xiao Han, Juan C. Pérez, Ding Liu, Kumara Kahatapitiya, Menglin Jia, Jui-Chieh Wu, Sen He, Tao Xiang, Jürgen Schmidhuber, Juan-Manuel Pérez-Rúa
We introduce MarDini, a new family of video diffusion models that integrate the advantages of masked auto-regression (MAR) into a unified diffusion model (DM) framework.
1 code implementation • 9 Apr 2024 • Shengze Dong, Zhuorui Cui, Ding Liu, Jinzhi Lei
The method is a neural network built on Denoising Diffusion Probabilistic Models (DDPMs) and Diffusion Transformers (DiTs).
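As a hedged illustration of the DDPM objective that DiT-style denoisers optimize, the sketch below implements the standard noise-prediction training step; the linear noise schedule, the model(x_t, t) signature, and the toy denoiser are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ddpm_training_step(model, x0, T=1000):
    """One DDPM training step with the noise-prediction objective."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (x0.size(0),))                  # a random timestep per sample
    a_bar = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise    # forward process q(x_t | x_0)
    return F.mse_loss(model(x_t, t), noise)                 # the network predicts the noise

class ToyDenoiser(nn.Module):                               # stand-in for a DiT
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Linear(dim + 1, dim)
    def forward(self, x, t):
        return self.net(torch.cat([x, t.float().unsqueeze(-1) / 1000], dim=-1))

print(ddpm_training_step(ToyDenoiser(), torch.randn(8, 64)))
```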
no code implementations • 24 May 2023 • Zhe Wang, ZhiJie He, Ding Liu
The article introduces a new method for applying Quantum Clustering to graph structures.
1 code implementation • NeurIPS 2023 • Zheng Chen, Yulun Zhang, Ding Liu, Bin Xia, Jinjin Gu, Linghe Kong, Xin Yuan
Specifically, we perform the DM in a highly compacted latent space to generate the prior feature for the deblurring process.
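As a rough, wiring-level sketch only (not the released implementation), the example below shows the pattern the snippet describes: a compact latent prior, here produced by a plain linear stand-in in place of the diffusion model, modulating the features of a small deblurring backbone; all module names and sizes are assumed.

```python
import torch
import torch.nn as nn

class LatentPriorDeblur(nn.Module):
    """Wiring sketch: a compact latent prior modulates a deblurring backbone."""
    def __init__(self, latent_dim=64, channels=48):
        super().__init__()
        self.latent_encoder = nn.Sequential(                # blurry image -> compact latent
            nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(3 * 8 * 8, latent_dim))
        self.diffusion_prior = nn.Linear(latent_dim, latent_dim)   # stand-in for the DM
        self.to_modulation = nn.Linear(latent_dim, channels)
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.tail = nn.Sequential(nn.ReLU(), nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, blurry):
        z = self.latent_encoder(blurry)                      # (B, latent_dim)
        prior = self.diffusion_prior(z)                      # refined latent prior
        scale = self.to_modulation(prior)[:, :, None, None]  # channel-wise modulation
        feat = self.head(blurry) * (1 + scale)               # inject the prior into features
        return blurry + self.tail(feat)                      # residual restoration

print(LatentPriorDeblur()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```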
2 code implementations • 3 Feb 2023 • Lanqing Guo, Siyu Huang, Ding Liu, Hao Cheng, Bihan Wen
It is still challenging for the deep shadow removal model to exploit the global contextual correlation between shadow and non-shadow regions.
Ranked #1 on Shadow Removal on ISTD
3 code implementations • 23 Aug 2022 • Ren Yang, Radu Timofte, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu Li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei LI, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng
The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.
1 code implementation • 18 Jul 2022 • Yuhao Huang, Hang Dong, Jinshan Pan, Chao Zhu, Yu Guo, Ding Liu, Lean Fu, Fei Wang
We develop two simple yet effective plug-and-play methods to improve the performance of existing local and non-local propagation-based VSR algorithms on widely-used public videos.
no code implementations • 12 Jul 2022 • Yiming Cui, Linjie Yang, Ding Liu
Object detection is a fundamental computer vision task that localizes and categorizes objects in a given image.
2 code implementations • 16 May 2022 • Fangyuan Kong, Mingxi Li, Songwei Liu, Ding Liu, Jingwen He, Yang Bai, Fangmin Chen, Lean Fu
Moreover, we revisit the popular contrastive loss and observe that the selection of intermediate features from its feature extractor has a great influence on the performance.
2 code implementations • 11 May 2022 • Yawei Li, Kai Zhang, Radu Timofte, Luc van Gool, Fangyuan Kong, Mingxi Li, Songwei Liu, Zongcai Du, Ding Liu, Chenhui Zhou, Jingyi Chen, Qingrui Han, Zheyuan Li, Yingqi Liu, Xiangyu Chen, Haoming Cai, Yu Qiao, Chao Dong, Long Sun, Jinshan Pan, Yi Zhu, Zhikai Zong, Xiaoxiao Liu, Zheng Hui, Tao Yang, Peiran Ren, Xuansong Xie, Xian-Sheng Hua, Yanbo Wang, Xiaozhong Ji, Chuming Lin, Donghao Luo, Ying Tai, Chengjie Wang, Zhizhong Zhang, Yuan Xie, Shen Cheng, Ziwei Luo, Lei Yu, Zhihong Wen, Qi Wu, Youwei Li, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Yuanfei Huang, Meiguang Jin, Hua Huang, Jing Liu, Xinjian Zhang, Yan Wang, Lingshun Long, Gen Li, Yuanfan Zhang, Zuowei Cao, Lei Sun, Panaetov Alexander, Yucong Wang, Minjie Cai, Li Wang, Lu Tian, Zheyuan Wang, Hongbing Ma, Jie Liu, Chao Chen, Yidong Cai, Jie Tang, Gangshan Wu, Weiran Wang, Shirui Huang, Honglei Lu, Huan Liu, Keyan Wang, Jun Chen, Shi Chen, Yuchun Miao, Zimo Huang, Lefei Zhang, Mustafa Ayazoğlu, Wei Xiong, Chengyi Xiong, Fei Wang, Hao Li, Ruimian Wen, Zhijing Yang, Wenbin Zou, Weixin Zheng, Tian Ye, Yuncheng Zhang, Xiangzhen Kong, Aditya Arora, Syed Waqas Zamir, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Dandan Gao, Dengwen Zhou, Qian Ning, Jingzhu Tang, Han Huang, YuFei Wang, Zhangheng Peng, Haobo Li, Wenxue Guan, Shenghua Gong, Xin Li, Jun Liu, Wanjun Wang, Dengwen Zhou, Kun Zeng, Hanjiang Lin, Xinyu Chen, Jinsheng Fang
The aim was to design a network for single image super-resolution that improved efficiency, measured by several metrics including runtime, parameters, FLOPs, activations, and memory consumption, while maintaining at least 29.00 dB PSNR on the DIV2K validation set.
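For readers reproducing the efficiency protocol, the sketch below shows one plausible way to measure two of the listed metrics, parameter count and average runtime, in PyTorch; the official challenge scoring is stricter (fixed hardware, dedicated FLOPs and activation counters), so treat this only as an approximation.

```python
import time
import torch
import torch.nn as nn

def count_parameters(model):
    """Total number of learnable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def average_runtime(model, input_shape=(1, 3, 256, 256), repeats=20):
    """Rough wall-clock runtime per forward pass on the model's device."""
    device = next(model.parameters()).device
    x = torch.randn(*input_shape, device=device)
    model.eval()
    for _ in range(3):                                   # warm-up passes
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(repeats):
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.time() - start) / repeats

toy_sr_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(16, 3, 3, padding=1))
print(count_parameters(toy_sr_net), average_runtime(toy_sr_net))
```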
1 code implementation • 18 Apr 2022 • Zongcai Du, Ding Liu, Jie Liu, Jie Tang, Gangshan Wu, Lean Fu
Besides, FMEN-S achieves the lowest memory consumption and the second shortest runtime in NTIRE 2022 challenge on efficient super-resolution.
2 code implementations • ACM 2021 • Xiaotong Luo, Qiuyuan Liang, Ding Liu, Yanyun Qu
The joint-distillation learning consists of internal self-distillation and external mutual learning.
no code implementations • 19 May 2021 • Giovanni Bonetta, Rossella Cancelliere, Ding Liu, Paul Vozila
Transformer-based models have demonstrated excellent capabilities of capturing patterns and structures in natural language generation and achieved state-of-the-art results in many tasks.
no code implementations • 8 May 2021 • Zhanghao Zhouyin, Ding Liu
Understanding the informative behaviour of deep neural networks is challenged by misused estimators and the complexity of network structure, which lead to inconsistent observations and diverse interpretations.
2 code implementations • 12 Apr 2021 • Xiaoyu Xiang, Ding Liu, Xiao Yang, Yiheng Zhu, Xiaohui Shen, Jan P. Allebach
In this paper, we explore open-domain sketch-to-photo translation, which aims to synthesize a realistic photo from a freehand sketch with its class label, even if the sketches of that class are missing in the training data.
Ranked #1 on Sketch-to-Image Translation on Scribble
1 code implementation • CVPR 2021 • Xueyan Zou, Linjie Yang, Ding Liu, Yong Jae Lee
To achieve this goal, it is necessary to find correspondences from neighbouring frames to faithfully hallucinate the unknown content.
no code implementations • ICCV 2021 • Yujun Cai, Yiwei Wang, Yiheng Zhu, Tat-Jen Cham, Jianfei Cai, Junsong Yuan, Jun Liu, Chuanxia Zheng, Sijie Yan, Henghui Ding, Xiaohui Shen, Ding Liu, Nadia Magnenat Thalmann
Notably, by considering this problem as a conditional generation process, we estimate a parametric distribution of the missing regions based on the input conditions, from which to sample and synthesize the full motion series.
1 code implementation • 7 Dec 2020 • Yang Fu, Linjie Yang, Ding Liu, Thomas S. Huang, Humphrey Shi
Video instance segmentation is a complex task in which we need to detect, segment, and track each object for any given video.
Ranked #32 on Video Instance Segmentation on YouTube-VIS validation
1 code implementation • NeurIPS 2020 • Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Yun Fu, Ding Liu, Thomas S. Huang
Inspired by the robustness and efficiency of sparse representation in sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
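As a simple, hedged probe of the neuron sparsity discussed here (not the paper's measurement protocol), the sketch below counts the fraction of exactly-zero ReLU activations in a network for a given input; the toy network is an assumption.

```python
import torch
import torch.nn as nn

def relu_sparsity(model, x):
    """Fraction of exactly-zero outputs across all ReLU layers for input x."""
    stats, hooks = [], []
    def hook(_module, _inputs, output):
        stats.append((output == 0).float().mean().item())
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return sum(stats) / max(len(stats), 1)

net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
print(relu_sparsity(net, torch.randn(1, 3, 32, 32)))   # roughly 0.5 at random init
```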
no code implementations • 8 Jun 2020 • Ding Liu, Hui Li
This approach, called Quantum Clustering (QC), deals with unlabeled data processing and constructs a potential function to find the centroids of clusters and the outliers.
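For context, the sketch below evaluates the standard Quantum Clustering potential of Horn and Gottlieb, on which this approach builds: cluster centroids appear as minima of V(x); the kernel width sigma and the toy data are illustrative assumptions.

```python
import numpy as np

def qc_potential(points, grid, sigma=0.5):
    """Quantum Clustering potential evaluated at each grid location:
    psi(x) = sum_i exp(-|x - x_i|^2 / (2 sigma^2)),
    V(x)   = -d/2 + sum_i |x - x_i|^2 exp(-|x - x_i|^2 / (2 sigma^2)) / (2 sigma^2 psi),
    shifted so that min(V) = 0. Cluster centroids appear as minima of V."""
    d = points.shape[1]
    diff = grid[:, None, :] - points[None, :, :]      # (G, N, d) pairwise differences
    sq = (diff ** 2).sum(-1)                          # squared distances (G, N)
    g = np.exp(-sq / (2 * sigma ** 2))
    psi = g.sum(1) + 1e-12                            # Parzen-window wavefunction
    V = -d / 2 + (sq * g).sum(1) / (2 * sigma ** 2 * psi)
    return V - V.min()

# toy usage: two Gaussian blobs in 2-D; the potential is lowest near the blob centers
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(qc_potential(pts, pts)[:5].round(2))
```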
1 code implementation • 15 May 2020 • Ding Liu, Jiaqi Yao, Zekun Yao, Quan Zhang
In this work, we propose the quantum-classical hybrid tensor networks (HTN) which combine tensor networks with classical neural networks in a uniform deep learning framework to overcome the limitations of regular tensor networks in machine learning.
no code implementations • 6 May 2020 • Wei Xiong, Ding Liu, Xiaohui Shen, Chen Fang, Jiebo Luo
In this paper, we tackle the problem of enhancing real-world low-light images with significant noise in an unsupervised fashion.
2 code implementations • 28 Apr 2020 • Yiqun Mei, Yuchen Fan, Yulun Zhang, Jiahui Yu, Yuqian Zhou, Ding Liu, Yun Fu, Thomas S. Huang, Humphrey Shi
Self-similarity refers to the image prior, widely used in image restoration algorithms, that small but similar patterns tend to recur at different locations and scales.
1 code implementation • 19 Dec 2019 • Yuchen Fan, Jiahui Yu, Ding Liu, Thomas S. Huang
In this paper, we show that properly modeling scale-invariance into neural networks can bring significant benefits to image restoration performance.
no code implementations • 7 Dec 2019 • Junru Wu, Xiang Yu, Ding Liu, Manmohan Chandraker, Zhangyang Wang
To train and evaluate on more diverse blur severity levels, we propose a Challenging DVD dataset generated from the raw DVD video set by pooling frames with different temporal windows.
no code implementations • 9 Nov 2019 • Junjie Li, Ding Liu
In recent years, many studies have attempted to open the black box of deep neural networks and proposed a variety of theories to understand it.
8 code implementations • 17 Jun 2019 • Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data?
no code implementations • 26 Mar 2019 • Zheng-Zhi Sun, Cheng Peng, Ding Liu, Shi-Ju Ran, Gang Su
By investigating the distances in the many-body Hilbert space, we find that (a) the samples naturally cluster in such a space; and (b) bounding the bond dimensions of the TNs to finite values corresponds to removing redundant information in the image recognition task.
1 code implementation • 6 Sep 2018 • Ding Liu, Bihan Wen, Jianbo Jiao, Xian-Ming Liu, Zhangyang Wang, Thomas S. Huang
Second, we propose a deep neural network solution that cascades two modules for image denoising and various high-level tasks, respectively, and uses the joint loss for updating only the denoising network via back-propagation.
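As a minimal sketch of the cascaded setup described here, assuming toy modules and a hypothetical weighting alpha, the example below combines a reconstruction loss with a high-level (classification) loss on the denoised output and updates only the denoiser, while the high-level module stays fixed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1))
classifier = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
for p in classifier.parameters():                     # high-level module stays fixed
    p.requires_grad = False

optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def train_step(noisy, clean, labels, alpha=0.1):      # alpha: hypothetical loss weight
    denoised = noisy + denoiser(noisy)                # residual denoising
    loss = F.mse_loss(denoised, clean) \
         + alpha * F.cross_entropy(classifier(denoised), labels)   # joint loss
    optimizer.zero_grad()
    loss.backward()                                   # gradients reach only the denoiser
    optimizer.step()
    return loss.item()

print(train_step(torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32),
                 torch.tensor([1, 3])))
```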
1 code implementation • 29 Jul 2018 • Ramakrishna Prabhu, Xiaojing Yu, Zhangyang Wang, Ding Liu, Anxiao Jiang
This paper studies the challenging problem of fingerprint image denoising and inpainting.
1 code implementation • NeurIPS 2018 • Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, Thomas S. Huang
The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood.
Ranked #1 on Grayscale Image Denoising on Set12 sigma30
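As a hedged illustration only, the sketch below implements a generic embedded-Gaussian non-local block with a residual connection; the paper's module additionally restricts attention to a neighbourhood around each location, which is omitted here for brevity, and the channel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Generic embedded-Gaussian non-local block with a residual connection."""
    def __init__(self, channels, inner=32):
        super().__init__()
        self.theta = nn.Conv2d(channels, inner, 1)
        self.phi = nn.Conv2d(channels, inner, 1)
        self.g = nn.Conv2d(channels, inner, 1)
        self.out = nn.Conv2d(inner, channels, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = F.softmax(q @ k / k.size(1) ** 0.5, dim=-1)   # pairwise feature correlation
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)

print(NonLocalBlock(16)(torch.randn(1, 16, 24, 24)).shape)   # torch.Size([1, 16, 24, 24])
```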
1 code implementation • CVPR 2018 • Wei Han, Shiyu Chang, Ding Liu, Mo Yu, Michael Witbrock, Thomas S. Huang
Advances in image super-resolution (SR) have recently benefited significantly from rapid developments in deep neural networks.
Ranked #48 on Image Super-Resolution on BSD100 - 4x upscaling
no code implementations • 19 Apr 2018 • Yuqian Zhou, Ding Liu, Thomas Huang
However, previously proposed models are mostly trained and tested on good-quality images, which is not always the case in practical applications such as surveillance systems.
no code implementations • 16 Apr 2018 • Hongyu Xu, Zhangyang Wang, Haichuan Yang, Ding Liu, Ji Liu
The thresholded feature has recently emerged as an extremely efficient, yet rough, empirical approximation of the time-consuming sparse coding inference process.
no code implementations • ICLR 2018 • Ding Liu, Shi-Ju Ran, Peter Wittek, Cheng Peng, Raul Blázquez García, Gang Su, Maciej Lewenstein
The resemblance between the methods used in studying quantum-many body physics and in machine learning has drawn considerable attention.
no code implementations • 20 Dec 2017 • Ding Liu, Bowen Cheng, Zhangyang Wang, Haichao Zhang, Thomas S. Huang
Visual recognition under adverse conditions is a very important and challenging problem of high practical value, due to the ubiquitous existence of quality distortions during image acquisition, transmission, or storage.
2 code implementations • 4 Dec 2017 • Zhiqiang Shen, Honghui Shi, Jiahui Yu, Hai Phan, Rogerio Feris, Liangliang Cao, Ding Liu, Xinchao Wang, Thomas Huang, Marios Savvides
In this paper, we present a simple and parameter-efficient drop-in module for one-stage object detectors like SSD when learning from scratch (i.e., without pre-trained models).
3 code implementations • ICLR 2018 • Ding Liu, Shi-Ju Ran, Peter Wittek, Cheng Peng, Raul Blázquez García, Gang Su, Maciej Lewenstein
We study the quantum features of the TN states, including quantum entanglement and fidelity.
no code implementations • ICCV 2017 • Ding Liu, Zhaowen Wang, Yuchen Fan, Xian-Ming Liu, Zhangyang Wang, Shiyu Chang, Thomas Huang
Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network that is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner.
no code implementations • 10 Sep 2017 • Bowen Cheng, Zhangyang Wang, Zhaobin Zhang, Zhu Li, Ding Liu, Jianchao Yang, Shuai Huang, Thomas S. Huang
Emotion recognition from facial expressions is tremendously useful, especially when coupled with smart devices and wireless multimedia applications.
no code implementations • 27 Jul 2017 • Zixing Zhang, Ding Liu, Jing Han, Kun Qian, Björn Schuller
Extensive evaluation on a large-size acoustic event database is performed, and the empirical results demonstrate that the learnt audio sequence representation outperforms other state-of-the-art hand-crafted sequence features for AEC by a large margin.
2 code implementations • 14 Jun 2017 • Ding Liu, Bihan Wen, Xianming Liu, Zhangyang Wang, Thomas S. Huang
Conventionally, image denoising and high-level vision tasks are handled separately in computer vision.
5 code implementations • 27 Feb 2017 • Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell
This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation.
Ranked #20 on Semantic Segmentation on PASCAL VOC 2012 test
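As a small illustration of the hybrid dilated convolution remedy for the gridding issue, the sketch below stacks 3x3 dilated convolutions with cycling dilation rates (1, 2, 3) instead of a constant rate; the channel count is an assumption.

```python
import torch
import torch.nn as nn

def hdc_block(channels, dilations=(1, 2, 3)):
    """Hybrid dilated convolution: cycling the dilation rates (1, 2, 3) keeps the
    stacked receptive field gap-free, unlike a constant rate such as (2, 2, 2)."""
    layers = []
    for d in dilations:
        layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

x = torch.randn(1, 16, 64, 64)
print(hdc_block(16)(x).shape)    # spatial size preserved: torch.Size([1, 16, 64, 64])
```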
no code implementations • 3 Jan 2017 • Ding Liu, Zhaowen Wang, Nasser Nasrabadi, Thomas Huang
This paper proposes the method of learning a mixture of SR inference modules in a unified framework to tackle this problem.
1 code implementation • journals 2016 • Ding Liu, Zhaowen Wang, Bihan Wen, Jianchao Yang, Wei Han, Thomas S. Huang
We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data.
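As a hedged sketch of how sparse coding inference can be unfolded into a network, the example below implements a LISTA-style model (learned ISTA) with a soft-threshold nonlinearity; the dimensions, step count, and thresholds are assumptions, not the paper's SCN configuration.

```python
import torch
import torch.nn as nn

class LISTA(nn.Module):
    """Learned ISTA: sparse-coding inference unfolded into a fixed number of
    layers with learnable weights and a soft-threshold (shrinkage) nonlinearity."""
    def __init__(self, input_dim=64, code_dim=128, steps=3):
        super().__init__()
        self.We = nn.Linear(input_dim, code_dim, bias=False)      # encoder
        self.S = nn.Linear(code_dim, code_dim, bias=False)        # recurrent refinement
        self.theta = nn.Parameter(torch.full((code_dim,), 0.1))   # learnable thresholds
        self.steps = steps

    def shrink(self, z):
        return torch.sign(z) * torch.relu(z.abs() - self.theta)

    def forward(self, x):
        b = self.We(x)
        z = self.shrink(b)
        for _ in range(self.steps - 1):
            z = self.shrink(b + self.S(z))        # iterative update of the sparse code
        return z

codes = LISTA()(torch.randn(8, 64))
print((codes == 0).float().mean())                # fraction of zero coefficients
```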
no code implementations • CVPR 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • 16 Jan 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain ($\mathbf{D^3}$) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • 16 Jan 2016 • Zhangyang Wang, Shiyu Chang, Florin Dolcos, Diane Beck, Ding Liu, Thomas S. Huang
Image aesthetics assessment has been challenging due to its subjective nature.
no code implementations • CVPR 2016 • Zhangyang Wang, Shiyu Chang, Yingzhen Yang, Ding Liu, Thomas S. Huang
Visual recognition research often assumes a sufficient resolution of the region of interest (ROI).
no code implementations • ICCV 2015 • Zhaowen Wang, Ding Liu, Jianchao Yang, Wei Han, Thomas Huang
We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end.
no code implementations • CVPR 2014 • Ding Liu, Xida Chen, Yee-Hong Yang
3D reconstruction of transparent and specular objects is a very challenging topic in computer vision.