1 code implementation • 25 Sep 2022 • Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, Liang Wan
Continual Learning (CL) learns new tasks sequentially, as humans do, with the goal of achieving better Stability (S, remembering past tasks) and Plasticity (P, adapting to new tasks).
no code implementations • 29 Sep 2021 • Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, Liang Wan
Traditionally, the primary goal of lifelong learning (LL) is to achieve a trade-off between Stability (remembering past tasks) and Plasticity (adapting to new tasks).
no code implementations • 23 Jun 2021 • Hua Huang, Fanhua Shang, Yuanyuan Liu, Hongying Liu
Unlike existing FL methods, our IGFL can be applied to both client and server optimization.
no code implementations • 22 Jun 2021 • Lin Kong, Wei Sun, Fanhua Shang, Yuanyuan Liu, Hongying Liu
Recently, the study of the learned iterative shrinkage thresholding algorithm (LISTA) has attracted increasing attention.
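For context, LISTA unrolls the classical iterative shrinkage thresholding algorithm (ISTA) into a feed-forward network with learned weights. The sketch below shows plain ISTA with the fixed matrices that LISTA-style layers replace by learned parameters; the dictionary, step size, and toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(z, theta):
    # Element-wise shrinkage operator used in ISTA and in LISTA layers.
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def ista(D, y, lam=0.1, n_iters=50):
    # Classical ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1.
    # LISTA replaces the fixed matrices W and S below with learned per-layer weights.
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the smooth part
    W = D.T / L                             # fixed "filter" matrix
    S = np.eye(D.shape[1]) - D.T @ D / L    # fixed "mutual inhibition" matrix
    x = np.zeros(D.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(S @ x + W @ y, lam / L)
    return x

# Toy usage with a random dictionary and a 3-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]
print(ista(D, D @ x_true)[:5])
```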
1 code implementation • 18 Jun 2021 • Qigong Sun, Xiufang Li, Fanhua Shang, Hongying Liu, Kang Yang, Licheng Jiao, Zhouchen Lin
The training of deep neural networks (DNNs) always requires intensive resources for both computation and data storage.
no code implementations • 22 Mar 2021 • Hongying Liu, Peng Zhao, Zhubo Ruan, Fanhua Shang, Yuanyuan Liu
In this paper, we propose a novel deep neural network with Dual Subnet and Multi-stage Communicated Upsampling (DSMC) for super-resolution of videos with large motion.
no code implementations • 9 Mar 2021 • Qigong Sun, Yan Ren, Licheng Jiao, Xiufang Li, Fanhua Shang, Fang Liu
Inspired by the characteristics of images in the frequency domain, we propose a novel multiscale wavelet quantization (MWQ) method.
no code implementations • 4 Mar 2021 • Qigong Sun, Licheng Jiao, Yan Ren, Xiufang Li, Fanhua Shang, Fang Liu
Since model quantization helps to reduce the model size and computation latency, it has been successfully applied in many applications on mobile phones, embedded devices, and smart chips.
no code implementations • 29 Nov 2020 • Pengtao Xu, Jian Cao, Fanhua Shang, Wenyu Sun, Pu Li
For layer pruning, we convert the convolutional layers of the network into ResConv layers with a layer scaling factor.
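A minimal sketch of how a residual convolution with a learnable layer scaling factor can support layer pruning: if the scaling factor is driven toward zero (e.g., by a sparsity penalty during training), the whole branch can be removed. The module name, PyTorch framework, and block details are assumptions for illustration, not the paper's exact ResConv design.

```python
import torch
import torch.nn as nn

class ResConv(nn.Module):
    # Sketch: a convolutional branch wrapped in a residual connection and scaled
    # by a learnable layer scaling factor gamma (details are assumptions).
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.gamma = nn.Parameter(torch.ones(1))  # layer scaling factor

    def forward(self, x):
        # When gamma is (near) zero, the branch contributes nothing and the
        # layer can be pruned without changing the network's output.
        return x + self.gamma * torch.relu(self.bn(self.conv(x)))
```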
no code implementations • 31 Oct 2020 • Tao Xu, Fanhua Shang, Yuanyuan Liu, Hongying Liu, Longjie Shen, Maoguo Gong
For smooth convex loss functions with (non-)smooth regularization, we propose the first differentially private ADMM (DP-ADMM) algorithm with a performance guarantee of $(\epsilon,\delta)$-differential privacy ($(\epsilon,\delta)$-DP).
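For context, a common way to obtain an $(\epsilon,\delta)$-DP guarantee in iterative optimization is to perturb each released update with the Gaussian mechanism. The sketch below illustrates that generic idea on a single ADMM-style primal step; the variable names, the plain gradient step, and the sensitivity bound are assumptions, not the paper's DP-ADMM algorithm.

```python
import numpy as np

def gaussian_mechanism(update, sensitivity, epsilon, delta, rng):
    # Generic Gaussian mechanism: noise is calibrated to the L2-sensitivity of
    # the update so that releasing it satisfies (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return update + rng.normal(scale=sigma, size=update.shape)

def noisy_primal_step(x, z, u, grad_f, rho, eta, sens, eps, delta, rng):
    # One (hypothetical) noisy primal step for min_x f(x) + g(z) s.t. x = z:
    # a gradient step on the augmented Lagrangian, then Gaussian perturbation.
    step = x - eta * (grad_f(x) + rho * (x - z + u))
    return gaussian_mechanism(step, sens, eps, delta, rng)
```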
no code implementations • 21 Oct 2020 • Hongying Liu, Zhenyu Zhou, Fanhua Shang, Xiaoyu Qi, Yuanyuan Liu, Licheng Jiao
Existing white-box attack algorithms can generate powerful adversarial examples.
2 code implementations • 24 Aug 2020 • Hongying Liu, Zhubo Ruan, Chaowei Fang, Peng Zhao, Fanhua Shang, Yuanyuan Liu, Lijun Wang
Spherical videos, also known as 360° (panorama) videos, can be viewed with various virtual reality devices such as computers and head-mounted displays.
no code implementations • 25 Jul 2020 • Hongying Liu, Zhubo Ruan, Peng Zhao, Chao Dong, Fanhua Shang, Yuanyuan Liu, Linlin Yang, Radu Timofte
To the best of our knowledge, this work is the first systematic review of VSR tasks; it is expected to contribute to the development of research in this area and to deepen our understanding of deep learning based VSR techniques.
no code implementations • 19 Apr 2020 • Yang Hu, Xiaying Bai, Pan Zhou, Fanhua Shang, ShengMei Shen
Pedestrian attribute recognition is an important multi-label classification problem.
1 code implementation • 27 Feb 2020 • Mohammad Nikzad, Aaron Nicolson, Yongsheng Gao, Jun Zhou, Kuldip K. Paliwal, Fanhua Shang
Motivated by this, we propose the residual-dense lattice network (RDL-Net), a new CNN for speech enhancement that employs both residual and dense aggregations without over-allocating parameters for feature re-use.
Ranked #11 on Speech Enhancement on VoiceBank + DEMAND
no code implementations • 2 Dec 2019 • Fanhua Shang, Bingkun Wei, Hongying Liu, Yuanyuan Liu, Jiacheng Zhuo
Large-scale non-convex sparsity-constrained problems have recently gained extensive attention.
no code implementations • 25 Sep 2019 • Bingkun Wei, Yangyang Li, Fanhua Shang, Yuanyuan Liu, Hongying Liu, ShengMei Shen
To address this issue, we propose a novel hard thresholding algorithm, called Semi-stochastic Block Coordinate Descent Hard Thresholding Pursuit (SBCD-HTP).
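The sketch below shows the generic stochastic hard-thresholding step that algorithms of this family build on: a mini-batch gradient step followed by projection onto the s-sparse constraint. It is a simplified illustration, not the exact SBCD-HTP update (which additionally works block-coordinate-wise with semi-stochastic gradients).

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries of x and zero out the rest.
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -s)[-s:]
    out[idx] = x[idx]
    return out

def stochastic_ht_step(x, minibatch_grad, eta, s):
    # Gradient step on a mini-batch objective, then hard thresholding to
    # enforce the sparsity constraint ||x||_0 <= s.
    return hard_threshold(x - eta * minibatch_grad(x), s)
```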
no code implementations • 25 Sep 2019 • Fanhua Shang, Lin Kong, Yuanyuan Liu, Hua Huang, Hongying Liu
Moreover, our theoretical analysis shows that AVR-SExtraGD enjoys the best-known convergence rates and oracle complexities of stochastic first-order algorithms such as Katyusha for both strongly convex and non-strongly convex problems.
1 code implementation • 21 Jul 2019 • Dong Wang, Yicheng Liu, Wenwo Tang, Fanhua Shang, Hongying Liu, Qigong Sun, Licheng Jiao
In this paper, we propose a new first-order gradient-based algorithm to train deep neural networks.
no code implementations • 17 Jul 2019 • Hongying Liu, Xiongjie Shen, Fanhua Shang, Fei Wang
This paper proposes a novel cascaded U-Net for brain tumor segmentation.
no code implementations • 31 May 2019 • Qigong Sun, Fanhua Shang, Kang Yang, Xiufang Li, Yan Ren, Licheng Jiao
The training of deep neural networks (DNNs) requires intensive resources for both computation and storage.
no code implementations • 11 Oct 2018 • Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-Quan Luo, Zhouchen Lin
The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and image alignment.
no code implementations • 8 Oct 2018 • Qigong Sun, Fanhua Shang, Xiufang Li, Kang Yang, Peizhuo Lv, Licheng Jiao
Deep neural networks require extensive computing resources and cannot be efficiently applied to embedded devices such as mobile phones, which seriously limits their applicability.
no code implementations • 7 Oct 2018 • Fanhua Shang, Licheng Jiao, Kaiwen Zhou, James Cheng, Yan Ren, Yufei Jin
This paper proposes an accelerated proximal stochastic variance reduced gradient (ASVRG) method, in which we design a simple and effective momentum acceleration trick.
no code implementations • 26 Jul 2018 • Yuzhe Ma, Ran Chen, Wei Li, Fanhua Shang, Wenjian Yu, Minsik Cho, Bei Yu
To address this issue, various approximation techniques have been investigated, which seek a lightweight network with little performance degradation in exchange for a smaller model size or faster inference.
no code implementations • ICML 2018 • Kaiwen Zhou, Fanhua Shang, James Cheng
Recent years have witnessed exciting progress in the study of stochastic variance reduced gradient methods (e.g., SVRG, SAGA), their accelerated variants (e.g., Katyusha) and their extensions in many different settings (e.g., online, sparse, asynchronous, distributed).
no code implementations • 28 Feb 2018 • Fanhua Shang, Yuanyuan Liu, James Cheng
The Schatten quasi-norm was introduced to bridge the gap between the trace norm and rank function.
1 code implementation • 26 Feb 2018 • Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao
In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).
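Since VR-SGD is presented as a simple variant of SVRG, the sketch below shows the standard SVRG scheme it builds on: a full gradient at a snapshot point plus variance-reduced inner updates. The callback-based interface and default parameters are illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, n, x0, eta, n_epochs=10, m=None, rng=None):
    # Standard SVRG: each epoch computes a full gradient at a snapshot x_tilde,
    # then runs m inner steps with the variance-reduced estimator
    #   g = grad_i(x, i) - grad_i(x_tilde, i) + full_grad.
    rng = rng or np.random.default_rng(0)
    m = m or 2 * n
    x = x0.copy()
    for _ in range(n_epochs):
        x_tilde = x.copy()
        full_grad = sum(grad_i(x_tilde, i) for i in range(n)) / n
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(x, i) - grad_i(x_tilde, i) + full_grad
            x = x - eta * g
    return x
```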
no code implementations • 26 Feb 2018 • Fanhua Shang, Yuanyuan Liu, Kaiwen Zhou, James Cheng, Kelvin K. W. Ng, Yuichi Yoshida
To ensure sufficient decrease in stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient decrease versions of stochastic variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.
no code implementations • NeurIPS 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, Licheng Jiao
In this paper, we propose an accelerated first-order method for geodesically convex optimization, which generalizes the standard Nesterov accelerated method from Euclidean space to nonlinear Riemannian space.
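For reference, the Euclidean update being generalized is, in one common presentation of Nesterov's accelerated method (momentum weight $\beta_k$ and step size $\eta$ as usual):

$$y_k = x_k + \beta_k (x_k - x_{k-1}), \qquad x_{k+1} = y_k - \eta \nabla f(y_k).$$

In the Riemannian setting, the linear combination and the gradient step are replaced by their manifold counterparts (e.g., exponential maps and Riemannian gradients).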
no code implementations • 11 Jul 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng
Besides having a per-iteration complexity as low as that of existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from O(1/T) to O(1/T^2).
no code implementations • 17 Apr 2017 • Fanhua Shang
This setting allows us to use much larger learning rates or step sizes than SVRG, e.g., 3/(7L) for VR-SGD vs. 1/(10L) for SVRG, and also makes our convergence analysis more challenging.
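As a quick comparison of the two step sizes quoted above (with $L$ the usual Lipschitz constant of the component gradients):

$$\frac{3/(7L)}{1/(10L)} = \frac{30}{7} \approx 4.3,$$

i.e., the VR-SGD step size is over four times larger than the SVRG one.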
no code implementations • 23 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo
Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems).
no code implementations • 20 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Kelvin Kai Wing Ng, Yuichi Yoshida
To ensure sufficient decrease in stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient decrease versions of variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.
no code implementations • 4 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng
In this paper, we first define two tractable Schatten quasi-norms, i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove that they are in essence the Schatten-2/3 and Schatten-1/2 quasi-norms, respectively, which leads to the design of very efficient algorithms that only need to update two much smaller factor matrices.
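Since the Frobenius norm is the Schatten-2 norm and the nuclear norm is the Schatten-1 norm, these two cases follow from the $1/p = 1/p_1 + 1/p_2$ relation stated in the next entry:

$$\text{Frobenius/nuclear hybrid: } \tfrac{1}{p} = \tfrac{1}{2} + \tfrac{1}{1} = \tfrac{3}{2} \Rightarrow p = \tfrac{2}{3}, \qquad \text{bi-nuclear: } \tfrac{1}{p} = \tfrac{1}{1} + \tfrac{1}{1} = 2 \Rightarrow p = \tfrac{1}{2}.$$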
no code implementations • 2 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng
In this paper, we rigorously prove that for any $p, p_1, p_2 > 0$ satisfying $1/p = 1/p_1 + 1/p_2$, the Schatten-$p$ quasi-norm of any matrix is equivalent to minimizing the product of the Schatten-$p_1$ norm (or quasi-norm) and the Schatten-$p_2$ norm (or quasi-norm) of its two factor matrices.
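Written out as a formula (with the factorization $X = UV^{\top}$ as in standard presentations), the claim reads:

$$\|X\|_{S_p} = \min_{X = UV^{\top}} \|U\|_{S_{p_1}} \, \|V\|_{S_{p_2}}, \qquad \frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2}, \quad p_1, p_2 > 0.$$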
no code implementations • 26 Dec 2015 • Fanhua Shang, James Cheng, Hong Cheng
We first derive the equivalence relation between the Schatten $p$-norm ($0 < p < \infty$) of a low multi-linear rank tensor and that of its core tensor.
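A hedged sketch of why such an equivalence holds, assuming a Tucker form $\mathcal{X} = \mathcal{C} \times_1 U_1 \cdots \times_N U_N$ with column-orthonormal factor matrices: multiplying by column-orthonormal matrices preserves the nonzero singular values of each mode-$n$ unfolding, so

$$\|\mathbf{X}_{(n)}\|_{S_p} = \|\mathbf{C}_{(n)}\|_{S_p}, \qquad 0 < p < \infty.$$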
no code implementations • NeurIPS 2014 • Yuanyuan Liu, Fanhua Shang, Wei Fan, James Cheng, Hong Cheng
Then the Schatten 1-norm of the core tensor is used to replace that of the whole tensor, which leads to a much smaller-scale matrix SNM problem.
no code implementations • 3 Sep 2014 • Fanhua Shang, Yuanyuan Liu, Hanghang Tong, James Cheng, Hong Cheng
In this paper, we propose a scalable, provable structured low-rank matrix factorization method to recover low-rank and sparse matrices from missing and grossly corrupted data, i.e., robust matrix completion (RMC) problems, or from incomplete and grossly corrupted measurements, i.e., compressive principal component pursuit (CPCP) problems.
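For context, one common convex formulation of the RMC problem mentioned here (a generic form, not necessarily the paper's exact model) is

$$\min_{L,\,S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(L + S) = \mathcal{P}_{\Omega}(M),$$

where $\Omega$ indexes the observed entries, $L$ is the low-rank component, and $S$ absorbs gross corruptions.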
no code implementations • 5 Jul 2014 • Fanhua Shang, Yuanyuan Liu, James Cheng
To address these problems, we first propose a parallel trace norm regularized tensor decomposition method, and formulate it as a convex optimization problem.
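A generic convex form of such a trace-norm-regularized tensor decomposition (assumed here for illustration, not necessarily the paper's exact formulation) penalizes the trace norms of the mode-$n$ unfoldings of the estimate $\mathcal{X}$ while fitting the observed entries of $\mathcal{T}$:

$$\min_{\mathcal{X}} \; \sum_{n=1}^{N} \alpha_n \|\mathbf{X}_{(n)}\|_* + \frac{\lambda}{2} \|\mathcal{P}_{\Omega}(\mathcal{X} - \mathcal{T})\|_F^2.$$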