Search Results for author: Fanhua Shang

Found 45 papers, 8 papers with code

Generalized Higher-Order Tensor Decomposition via Parallel ADMM

no code implementations • 5 Jul 2014 • Fanhua Shang, Yuanyuan Liu, James Cheng

To address these problems, we first propose a parallel trace norm regularized tensor decomposition method, and formulate it as a convex optimization problem.
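
A model of this general form (a sketch based on standard trace-norm-regularized tensor decomposition formulations; the paper's exact weights and constraints may differ) penalizes the trace norms of the mode-$n$ unfoldings, and the separability of the terms is what makes a parallel ADMM splitting natural:

$$\min_{\mathcal{X}} \ \sum_{n=1}^{N} \alpha_n \big\| \mathbf{X}_{(n)} \big\|_* \quad \text{s.t.} \quad \mathcal{P}_\Omega(\mathcal{X}) = \mathcal{P}_\Omega(\mathcal{T}),$$

where $\mathbf{X}_{(n)}$ is the mode-$n$ unfolding of $\mathcal{X}$, $\|\cdot\|_*$ is the trace (nuclear) norm, and $\mathcal{P}_\Omega$ keeps the observed entries of $\mathcal{T}$.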

Computational Efficiency • Tensor Decomposition

Structured Low-Rank Matrix Factorization with Missing and Grossly Corrupted Observations

no code implementations • 3 Sep 2014 • Fanhua Shang, Yuanyuan Liu, Hanghang Tong, James Cheng, Hong Cheng

In this paper, we propose a scalable, provable structured low-rank matrix factorization method to recover low-rank and sparse matrices from missing and grossly corrupted data, i.e., robust matrix completion (RMC) problems, or from incomplete and grossly corrupted measurements, i.e., compressive principal component pursuit (CPCP) problems.
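
For reference, the two problem classes named here are commonly written as follows (standard convex formulations; the paper's structured factorized variants differ):

$$\text{RMC:} \quad \min_{L,S} \ \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad \mathcal{P}_\Omega(L+S) = \mathcal{P}_\Omega(M),$$

$$\text{CPCP:} \quad \min_{L,S} \ \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad \mathcal{A}(L+S) = b,$$

where $L$ is low-rank, $S$ is sparse, $\mathcal{P}_\Omega$ restricts to the observed entries, and $\mathcal{A}$ is a compressive linear measurement operator.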

Matrix Completion

Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion

no code implementations • NeurIPS 2014 • Yuanyuan Liu, Fanhua Shang, Wei Fan, James Cheng, Hong Cheng

Then the Schatten 1-norm of the core tensor is used to replace that of the whole tensor, which leads to a much smaller-scale matrix Schatten norm minimization (SNM) problem.

Tensor Decomposition

Regularized Orthogonal Tensor Decompositions for Multi-Relational Learning

no code implementations • 26 Dec 2015 • Fanhua Shang, James Cheng, Hong Cheng

We first establish the equivalence between the Schatten $p$-norm ($0<p<\infty$) of a low multi-linear rank tensor and that of its core tensor.
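
One reading of this relation (a sketch, assuming a Tucker decomposition with column-orthonormal factor matrices, which preserve the singular values of each unfolding):

$$\mathcal{X} = \mathcal{G} \times_1 U_1 \times_2 \cdots \times_N U_N, \quad U_n^\top U_n = I \ \Rightarrow\ \|\mathcal{X}\|_{S_p} = \|\mathcal{G}\|_{S_p},$$

so the Schatten $p$-norm of the full tensor can be evaluated on the much smaller core tensor $\mathcal{G}$.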

Relational Reasoning

Unified Scalable Equivalent Formulations for Schatten Quasi-Norms

no code implementations • 2 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng

In this paper, we rigorously prove that for any $p, p_1, p_2 > 0$ satisfying $1/p = 1/p_1 + 1/p_2$, the Schatten-$p$ quasi-norm of any matrix is equivalent to minimizing the product of the Schatten-$p_1$ norm (or quasi-norm) and Schatten-$p_2$ norm (or quasi-norm) of its two factor matrices.
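
In symbols, the stated equivalence is

$$\|X\|_{S_p} = \min_{X = U V^\top} \ \|U\|_{S_{p_1}} \, \|V\|_{S_{p_2}}, \qquad \frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2},$$

with the minimum taken over all factorizations $X = UV^\top$ of compatible sizes, so a quasi-norm of the full matrix can be minimized through two much smaller factors.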

Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization

no code implementations • 4 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng

In this paper, we first define two tractable Schatten quasi-norms, i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively, which lead to the design of very efficient algorithms that only need to update two much smaller factor matrices.
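
Using the product form from the companion paper above, the two cases follow from $1/1 + 1/1 = 1/(1/2)$ and $1/2 + 1/1 = 1/(2/3)$ (a sketch; the paper may state equivalent sum-based definitions):

$$\|X\|_{S_{1/2}} = \min_{X = UV^\top} \|U\|_* \|V\|_*, \qquad \|X\|_{S_{2/3}} = \min_{X = UV^\top} \|U\|_F \|V\|_*.$$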

Matrix Completion

Guaranteed Sufficient Decrease for Variance Reduced Stochastic Gradient Descent

no code implementations • 20 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Kelvin Kai Wing Ng, Yuichi Yoshida

To ensure sufficient decrease in stochastic optimization, we design a new sufficient-decrease criterion, which yields sufficient-decrease versions of variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.
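
The snippet does not spell out the criterion itself; as a rough illustration (the function names and the Armijo-style constant `c` are assumptions, not the paper's exact SVRG-SD rule), a sufficient-decrease test bolted onto an SVRG-style step might look like this:

```python
import numpy as np

def svrg_sd_step(x, snapshot, mu, grad_i, f, lr=0.1, c=1e-4):
    """One SVRG step guarded by an illustrative sufficient-decrease test.

    grad_i(z): stochastic gradient of one sampled component at z
    mu       : full gradient at the snapshot point
    f        : full objective, used only in the acceptance test
    The acceptance rule and constant c are illustrative, not the exact
    SVRG-SD criterion from the paper.
    """
    v = grad_i(x) - grad_i(snapshot) + mu        # variance-reduced gradient
    x_new = x - lr * v
    # Armijo-like sufficient decrease: keep the step only if the objective
    # drops by at least c * ||v||^2; otherwise retain the old iterate.
    if f(x) - f(x_new) >= c * float(np.dot(v, v)):
        return x_new
    return x
```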

Stochastic Optimization

Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning

no code implementations • 23 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo

Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems).

BIG-bench Machine Learning • regression

Larger is Better: The Effect of Learning Rates Enjoyed by Stochastic Optimization with Progressive Variance Reduction

no code implementations • 17 Apr 2017 • Fanhua Shang

This setting allows us to use much larger learning rates or step sizes than SVRG, e.g., 3/(7L) for VR-SGD vs. 1/(10L) for SVRG, and also makes our convergence analysis more challenging.

Stochastic Optimization

Accelerated Variance Reduced Stochastic ADMM

no code implementations • 11 Jul 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng

Besides having a per-iteration complexity as low as that of existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from O(1/T) to O(1/T^2).
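
For context, deterministic ADMM for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$ alternates the updates below; stochastic variants such as ASVRG-ADMM replace the $x$-subproblem's gradient with a (variance-reduced) stochastic estimate:

$$x^{k+1} = \arg\min_x \ f(x) + \tfrac{\rho}{2}\|Ax + Bz^k - c + u^k\|_2^2,$$
$$z^{k+1} = \arg\min_z \ g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^k\|_2^2,$$
$$u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c.$$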

Accelerated First-order Methods for Geodesically Convex Optimization on Riemannian Manifolds

no code implementations • NeurIPS 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, Licheng Jiao

In this paper, we propose an accelerated first-order method for geodesically convex optimization, which is the generalization of the standard Nesterov's accelerated method from Euclidean space to nonlinear Riemannian space.
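
For reference, the Euclidean scheme being generalized is Nesterov's two-step iteration; the Riemannian version replaces these linear combinations with exponential-map and geodesic analogues:

$$y^{k} = x^{k} + \beta_k \,(x^{k} - x^{k-1}), \qquad x^{k+1} = y^{k} - \eta \,\nabla f(y^{k}).$$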

VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning

1 code implementation • 26 Feb 2018 • Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao

In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).
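
A minimal sketch of the plain SVRG template that VR-SGD modifies (shown for context; the snapshot and initialization choices that distinguish VR-SGD are not reproduced here):

```python
import numpy as np

def svrg(grads, full_grad, x0, lr=0.1, epochs=10, m=100, seed=0):
    """Plain SVRG template (Johnson & Zhang style); VR-SGD is a variant
    of this scheme with different snapshot/initialization choices.

    grads[i](x) : gradient of the i-th component function at x
    full_grad(x): gradient of the full objective at x
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        mu = full_grad(snapshot)                 # full gradient at snapshot
        for _ in range(m):
            i = rng.integers(len(grads))
            # variance-reduced stochastic gradient estimator
            v = grads[i](x) - grads[i](snapshot) + mu
            x = x - lr * v
    return x
```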

BIG-bench Machine Learning

Guaranteed Sufficient Decrease for Stochastic Variance Reduced Gradient Optimization

no code implementations • 26 Feb 2018 • Fanhua Shang, Yuanyuan Liu, Kaiwen Zhou, James Cheng, Kelvin K. W. Ng, Yuichi Yoshida

To ensure sufficient decrease in stochastic optimization, we design a new sufficient-decrease criterion, which yields sufficient-decrease versions of stochastic variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.

Stochastic Optimization

Tractable and Scalable Schatten Quasi-Norm Approximations for Rank Minimization

no code implementations • 28 Feb 2018 • Fanhua Shang, Yuanyuan Liu, James Cheng

The Schatten quasi-norm was introduced to bridge the gap between the trace norm and rank function.
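
Concretely, for singular values $\sigma_i(X)$, the Schatten-$p$ quasi-norm interpolates between the two:

$$\|X\|_{S_p} = \Big(\sum_i \sigma_i^p(X)\Big)^{1/p}, \qquad \|X\|_{S_1} = \|X\|_* \ \text{(trace norm)}, \qquad \lim_{p \to 0^+} \sum_i \sigma_i^p(X) = \mathrm{rank}(X).$$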

A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates

no code implementations • ICML 2018 • Kaiwen Zhou, Fanhua Shang, James Cheng

Recent years have witnessed exciting progress in the study of stochastic variance reduced gradient methods (e.g., SVRG, SAGA), their accelerated variants (e.g., Katyusha) and their extensions in many different settings (e.g., online, sparse, asynchronous, distributed).

A Unified Approximation Framework for Compressing and Accelerating Deep Neural Networks

no code implementations • 26 Jul 2018 • Yuzhe Ma, Ran Chen, Wei Li, Fanhua Shang, Wenjian Yu, Minsik Cho, Bei Yu

To address this issue, various approximation techniques have been investigated, which seek a lightweight network with little performance degradation in exchange for a smaller model size or faster inference.

General Classification • Image Classification • +1

ASVRG: Accelerated Proximal SVRG

no code implementations • 7 Oct 2018 • Fanhua Shang, Licheng Jiao, Kaiwen Zhou, James Cheng, Yan Ren, Yufei Jin

This paper proposes an accelerated proximal stochastic variance reduced gradient (ASVRG) method, in which we design a simple and effective momentum acceleration trick.

Efficient Computation of Quantized Neural Networks by {−1, +1} Encoding Decomposition

no code implementations • 8 Oct 2018 • Qigong Sun, Fanhua Shang, Xiufang Li, Kang Yang, Peizhuo Lv, Licheng Jiao

Deep neural networks require extensive computing resources and cannot be efficiently deployed on embedded devices such as mobile phones, which seriously limits their applicability.
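
The titled technique expresses a multi-bit quantized weight as a weighted sum of {−1, +1} tensors so that inner products decompose into cheap binary operations; a small sketch of that idea (the bit allocation and shifting below are illustrative assumptions, not necessarily the paper's exact encoding):

```python
import numpy as np

def encode_pm1(v, bits):
    """Split odd integers v in [-(2**bits - 1), 2**bits - 1] into 'bits'
    {-1, +1} planes b_k with v = sum_k 2**k * b_k. Illustrative encoding;
    the paper's exact scheme may differ."""
    t = (v.astype(np.int64) + (1 << bits) - 1) // 2   # shift to [0, 2**bits - 1]
    return [((t >> k) & 1) * 2 - 1 for k in range(bits)]

def decode_pm1(planes):
    """Reassemble v = sum_k 2**k * b_k."""
    return sum((1 << k) * b for k, b in enumerate(planes))

# A dot product with the weights then reduces to {-1, +1} (XNOR/popcount
# friendly) inner products: <v, x> = sum_k 2**k * <b_k, x>.
v = np.array([3, -5, 7, -1])                 # odd 3-bit quantized weights
planes = encode_pm1(v, bits=3)
assert np.array_equal(decode_pm1(planes), v)
```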

Image Classification • Model Compression • +2

Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications

no code implementations • 11 Oct 2018 • Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-Quan Luo, Zhouchen Lin

The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo and image alignment.

Moving Object Detection • object-detection

Multi-Precision Quantized Neural Networks via Encoding Decomposition of -1 and +1

no code implementations • 31 May 2019 • Qigong Sun, Fanhua Shang, Kang Yang, Xiufang Li, Yan Ren, Licheng Jiao

The training of deep neural networks (DNNs) requires intensive resources, both for computation and for storage.

Image Classification • Model Compression • +2

signADAM: Learning Confidences for Deep Neural Networks

1 code implementation • 21 Jul 2019 • Dong Wang, Yicheng Liu, Wenwo Tang, Fanhua Shang, Hongying Liu, Qigong Sun, Licheng Jiao

In this paper, we propose a new first-order gradient-based algorithm to train deep neural networks.
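
The snippet does not describe the update rule itself; purely as an illustration of what a sign-based, Adam-style method can look like (a guess consistent with the name, not the paper's signADAM algorithm), one can apply Adam's moment machinery to the gradient sign:

```python
import numpy as np

def sign_adam_step(x, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style step applied to sign(g). Illustrative only --
    a guess at a 'signADAM'-flavored rule, NOT the paper's algorithm."""
    s = np.sign(g)                        # keep only the gradient directions
    m = b1 * m + (1 - b1) * s             # first moment of the signs
    v = b2 * v + (1 - b2) * s * s         # second moment (s*s is 0/1 valued)
    m_hat = m / (1 - b1 ** t)             # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v
```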

Efficient High-Dimensional Data Representation Learning via Semi-Stochastic Block Coordinate Descent Methods

no code implementations • 25 Sep 2019 • Bingkun Wei, Yangyang Li, Fanhua Shang, Yuanyuan Liu, Hongying Liu, ShengMei Shen

To address this issue, we propose a novel hard thresholding algorithm, called Semi-stochastic Block Coordinate Descent Hard Thresholding Pursuit (SBCD-HTP).
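
For context, the hard-thresholding primitive that pursuit methods of this kind build on is standard (shown below; the semi-stochastic block-coordinate scheduling is the paper's contribution):

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest
    (the standard hard-thresholding operator behind HTP-type methods)."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]   # indices of the top-k magnitudes
    out[idx] = x[idx]
    return out

x = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(hard_threshold(x, 2))                      # [ 0. -3.  0.  2.  0.]
```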

Face Recognition Representation Learning

Accelerated Variance Reduced Stochastic Extragradient Method for Sparse Machine Learning Problems

no code implementations • 25 Sep 2019 • Fanhua Shang, Lin Kong, Yuanyuan Liu, Hua Huang, Hongying Liu

Moreover, our theoretical analysis shows that AVR-SExtraGD enjoys the best-known convergence rates and oracle complexities of stochastic first-order algorithms such as Katyusha for both strongly convex and non-strongly convex problems.

BIG-bench Machine Learning • Face Recognition • +1

Deep Residual-Dense Lattice Network for Speech Enhancement

2 code implementations • 27 Feb 2020 • Mohammad Nikzad, Aaron Nicolson, Yongsheng Gao, Jun Zhou, Kuldip K. Paliwal, Fanhua Shang

Motivated by this, we propose the residual-dense lattice network (RDL-Net), which is a new CNN for speech enhancement that employs both residual and dense aggregations without over-allocating parameters for feature re-usage.

Speech Enhancement

Video Super Resolution Based on Deep Learning: A Comprehensive Survey

no code implementations • 25 Jul 2020 • Hongying Liu, Zhubo Ruan, Peng Zhao, Chao Dong, Fanhua Shang, Yuanyuan Liu, Linlin Yang, Radu Timofte

To the best of our knowledge, this work is the first systematic review of VSR tasks; it is expected to contribute to the development of recent studies in this area and to deepen our understanding of deep-learning-based VSR techniques.

speech-recognition • Speech Recognition • +1

A Single Frame and Multi-Frame Joint Network for 360-degree Panorama Video Super-Resolution

2 code implementations • 24 Aug 2020 • Hongying Liu, Zhubo Ruan, Chaowei Fang, Peng Zhao, Fanhua Shang, Yuanyuan Liu, Lijun Wang

Spherical videos, also known as 360° (panorama) videos, can be viewed with various virtual reality devices such as computers and head-mounted displays.

Video Super-Resolution

Differentially Private ADMM Algorithms for Machine Learning

no code implementations • 31 Oct 2020 • Tao Xu, Fanhua Shang, Yuanyuan Liu, Hongying Liu, Longjie Shen, Maoguo Gong

For smooth convex loss functions with (non)-smooth regularization, we propose the first differentially private ADMM (DP-ADMM) algorithm with performance guarantee of $(\epsilon,\delta)$-differential privacy ($(\epsilon,\delta)$-DP).
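
For reference, a randomized mechanism $\mathcal{M}$ is $(\epsilon,\delta)$-differentially private if, for all neighboring datasets $D, D'$ (differing in one record) and all measurable output sets $S$:

$$\Pr[\mathcal{M}(D) \in S] \ \le\ e^{\epsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta.$$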

BIG-bench Machine Learning

Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks

no code implementations • 29 Nov 2020 • Pengtao Xu, Jian Cao, Fanhua Shang, Wenyu Sun, Pu Li

For layer pruning, we convert the convolutional layers of a network into ResConv layers with a layer scaling factor.

Effective and Fast: A Novel Sequential Single Path Search for Mixed-Precision Quantization

no code implementations • 4 Mar 2021 • Qigong Sun, Licheng Jiao, Yan Ren, Xiufang Li, Fanhua Shang, Fang Liu

Since model quantization helps to reduce model size and computation latency, it has been successfully applied in many applications on mobile phones, embedded devices, and smart chips.

Quantization

MWQ: Multiscale Wavelet Quantized Neural Networks

no code implementations • 9 Mar 2021 • Qigong Sun, Yan Ren, Licheng Jiao, Xiufang Li, Fanhua Shang, Fang Liu

Inspired by the characteristics of images in the frequency domain, we propose a novel multiscale wavelet quantization (MWQ) method.

Model Compression • Quantization

Large Motion Video Super-Resolution with Dual Subnet and Multi-Stage Communicated Upsampling

no code implementations • 22 Mar 2021 • Hongying Liu, Peng Zhao, Zhubo Ruan, Fanhua Shang, Yuanyuan Liu

In this paper, we propose a novel deep neural network with Dual Subnet and Multi-stage Communicated Upsampling (DSMC) for super-resolution of videos with large motion.

Motion Compensation • Motion Estimation • +1

Learned Interpretable Residual Extragradient ISTA for Sparse Coding

no code implementations • 22 Jun 2021 • Lin Kong, Wei Sun, Fanhua Shang, Yuanyuan Liu, Hongying Liu

Recently, the study of the learned iterative shrinkage-thresholding algorithm (LISTA) has attracted increasing attention.
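
For context, ISTA solves $\min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1$ by iterating a gradient step followed by soft-thresholding; LISTA unrolls a fixed number of these iterations into a network and learns the matrices and thresholds:

$$x^{k+1} = \mathrm{soft}_{\lambda/L}\!\Big(x^k - \tfrac{1}{L} A^\top (A x^k - b)\Big), \qquad \mathrm{soft}_\theta(z) = \mathrm{sign}(z)\max(|z| - \theta,\, 0),$$

where $L$ is a Lipschitz constant of the gradient (e.g., the largest eigenvalue of $A^\top A$).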

Rethinking Rehearsal in Lifelong Learning: Does An Example Contribute the Plasticity or Stability?

no code implementations • 29 Sep 2021 • Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, Liang Wan

Traditionally, the primary goal of lifelong learning (LL) is to achieve a trade-off between Stability (remembering past tasks) and Plasticity (adapting to new tasks).

Multi-Task Learning

Exploring Example Influence in Continual Learning

1 code implementation • 25 Sep 2022 • Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, Liang Wan

Continual Learning (CL) sequentially learns new tasks like human beings, with the goal to achieve better Stability (S, remembering past tasks) and Plasticity (P, adapting to new tasks).

Continual Learning

Measuring Asymmetric Gradient Discrepancy in Parallel Continual Learning

no code implementations • ICCV 2023 • Fan Lyu, Qing Sun, Fanhua Shang, Liang Wan, Wei Feng

In Parallel Continual Learning (PCL), multiple parallel tasks start and end training unpredictably, which gives rise to training conflicts and catastrophic forgetting.

Continual Learning

Boosting Adversarial Transferability by Achieving Flat Local Maxima

2 code implementations • NeurIPS 2023 • Zhijin Ge, Hongying Liu, Xiaosen Wang, Fanhua Shang, Yuanyuan Liu

Extensive experimental results on the ImageNet-compatible dataset show that the proposed method generates adversarial examples in flat local regions and significantly improves adversarial transferability on both normally trained and adversarially trained models compared with state-of-the-art attacks.

Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer

2 code implementations • 21 Aug 2023 • Zhijin Ge, Fanhua Shang, Hongying Liu, Yuanyuan Liu, Liang Wan, Wei Feng, Xiaosen Wang

Deep neural networks are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on clean inputs.
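
The standard threat model behind such attacks: find a bounded perturbation that maximizes the loss,

$$\max_{\|\delta\|_\infty \le \epsilon} \ \mathcal{L}\big(f(x + \delta),\, y\big),$$

with $\epsilon$ small enough that $x + \delta$ remains visually indistinguishable from $x$.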

Domain Generalization • Style Transfer

Long-Tailed Learning as Multi-Objective Optimization

no code implementations • 31 Oct 2023 • Weiqi Li, Fan Lyu, Fanhua Shang, Liang Wan, Wei Feng

Real-world data is extremely imbalanced and presents a long-tailed distribution, resulting in models that are biased towards classes with sufficient samples and perform poorly on rare classes.

Elastic Multi-Gradient Descent for Parallel Continual Learning

no code implementations • 2 Jan 2024 • Fan Lyu, Wei Feng, Yuepan Li, Qing Sun, Fanhua Shang, Liang Wan, Liang Wang

The goal of Continual Learning (CL) is to continuously learn from new data streams and accomplish the corresponding tasks.

Continual Learning
