no code implementations • 24 Nov 2022 • Yuanyuan Liu, Wenbin Wang, Yibing Zhan, Zhe Chen, Shaoze Feng, Kejun Liu
Self-supervised facial representation has recently attracted increasing attention due to its ability to perform face understanding without relying heavily on large-scale annotated datasets.
no code implementations • 12 Oct 2022 • Yuanyuan Liu, Chengjiang Long, Zhaoxuan Zhang, Bokai Liu, Qiang Zhang, BaoCai Yin, Xin Yang
3D scene graph generation (SGG) has been of high interest in computer vision.
no code implementations • 2 Sep 2022 • Zheng Liu, Sijing Zhan, Yaowu Zhao, Yuanyuan Liu, Renjie Chen, Ying He
Motivated by the essential interplay between point cloud denoising and normal filtering, we revisit point cloud denoising from a multitask perspective, and propose an end-to-end network, named PCDNF, to denoise point clouds via joint normal filtering.
1 code implementation • 11 Aug 2022 • Zhuo-Xu Cui, Sen Jia, Qingyong Zhu, Congcong Liu, Zhilang Qiu, Yuanyuan Liu, Jing Cheng, Haifeng Wang, Yanjie Zhu, Dong Liang
Recently, untrained neural networks (UNNs) have shown satisfactory performance for MR image reconstruction on random sampling trajectories without using additional fully-sampled training data.
no code implementations • 1 Aug 2022 • Yuanyuan Liu, Wei Dai, Chuanxu Feng, Wenbin Wang, Guanghao Yin, Jiabei Zeng, Shiguang Shan
To the best of our knowledge, MAFW is the first in-the-wild multi-modal database with compound emotion annotations and emotion-related captions.
no code implementations • 17 Jul 2022 • Yuanyuan Liu, Dong Liang, Zhuo-Xu Cui, Yuxin Yang, Chentao Cao, Qingyong Zhu, Jing Cheng, Caiyun Shi, Haifeng Wang, Yanjie Zhu
Prospective reconstruction results further demonstrate the capability of the SMART method in accelerating MR T1ρ imaging.
no code implementations • 13 Jul 2022 • Yuanyuan Liu
In the present paper, a novel vector field decomposition based approach for constructing Lyapunov functions is proposed.
no code implementations • 13 Jul 2022 • Yuanyuan Liu
If a global minimum is kept in the remaining region of each iteration, then it can be located with an arbitrary precision.
no code implementations • 17 Jan 2022 • Jie Song, Huawei Yi, Wenqian Xu, Xiaohui Li, Bo Li, Yuanyuan Liu
Perceptual loss was proposed to address the over-smoothing that per-pixel difference loss functions cause in reconstructed images, and it has brought significant progress to the field of single image super-resolution reconstruction.
no code implementations • 18 Dec 2021 • Zhuo-Xu Cui, Jing Cheng, Qingyong Zhu, Yuanyuan Liu, Sen Jia, Kankan Zhao, Ziwen Ke, Wenqi Huang, Haifeng Wang, Yanjie Zhu, Dong Liang
Specifically, focusing on accelerated MRI, we unroll a zeroth-order algorithm, of which the network module represents the regularizer itself, so that the network output can be still covered by the regularization model.
no code implementations • 1 Dec 2021 • Yanjie Zhu, Haoxiang Li, Yuanyuan Liu, Muzi Guo, Guanxun Cheng, Gang Yang, Haifeng Wang, Dong Liang
Methods: The proposed framework consists of a reconstruction module and a generative module.
no code implementations • 17 Sep 2021 • Yuanyuan Liu, Wenbin Wang, Chuanxu Feng, Haoyu Zhang, Zhe Chen, Yibing Zhan
To this end, we propose to decompose each video into a series of expression snippets, each of which contains a small number of facial movements, and attempt to augment the Transformer's ability for modeling intra-snippet and inter-snippet visual relations, respectively, obtaining the Expression snippet Transformer (EST).
1 code implementation • 16 Aug 2021 • Yuanyuan Liu, Nelly Penttilä, Tiina Ihalainen, Juulia Lintula, Rachel Convey, Okko Räsänen
Experimental results on a Finnish PD speech corpus demonstrate the efficacy and reliability of the proposed automatic method in deriving VAI, VSA, FCR and F2i/F2u (the second formant ratio for vowels /i/ and /u/).
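The articulatory metrics named here have standard definitions in the acoustic-phonetics literature (VAI from corner-vowel formants, FCR as its reciprocal, VSA as the area spanned by the corner vowels). A minimal sketch of how they can be computed; the formant values and function names below are illustrative assumptions, not taken from the paper's Finnish corpus:

```python
def vai(f1a, f2a, f1i, f2i, f1u, f2u):
    # Vowel Articulation Index (higher = less centralized vowels)
    return (f2i + f1a) / (f1i + f1u + f2u + f2a)

def fcr(f1a, f2a, f1i, f2i, f1u, f2u):
    # Formant Centralization Ratio: the reciprocal of VAI
    return (f1i + f1u + f2u + f2a) / (f2i + f1a)

def tvsa(a, i, u):
    # Triangular vowel space area from (F1, F2) corner vowels,
    # computed with the shoelace formula
    (x1, y1), (x2, y2), (x3, y3) = a, i, u
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

# Illustrative formant values in Hz (typical adult male vowels)
f1a, f2a = 730.0, 1090.0
f1i, f2i = 270.0, 2290.0
f1u, f2u = 300.0, 870.0

print(vai(f1a, f2a, f1i, f2i, f1u, f2u))   # > 1 for uncentralized vowels
print(f2i / f2u)                           # the F2i/F2u ratio
print(tvsa((f1a, f2a), (f1i, f2i), (f1u, f2u)))
```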
no code implementations • 23 Jun 2021 • Hua Huang, Fanhua Shang, Yuanyuan Liu, Hongying Liu
Unlike existing FL methods, our IGFL can be applied to both client and server optimization.
no code implementations • 22 Jun 2021 • Lin Kong, Wei Sun, Fanhua Shang, Yuanyuan Liu, Hongying Liu
Recently, the learned iterative shrinkage thresholding algorithm (LISTA) has attracted increasing attention.
no code implementations • 22 Mar 2021 • Hongying Liu, Peng Zhao, Zhubo Ruan, Fanhua Shang, Yuanyuan Liu
In this paper, we propose a novel deep neural network with Dual Subnet and Multi-stage Communicated Upsampling (DSMC) for super-resolution of videos with large motion.
no code implementations • 31 Oct 2020 • Tao Xu, Fanhua Shang, Yuanyuan Liu, Hongying Liu, Longjie Shen, Maoguo Gong
For smooth convex loss functions with (non)-smooth regularization, we propose the first differentially private ADMM (DP-ADMM) algorithm with performance guarantee of $(\epsilon,\delta)$-differential privacy ($(\epsilon,\delta)$-DP).
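The paper's DP-ADMM analysis is its own; what follows is only the generic Gaussian-mechanism calibration that (ε,δ)-DP first-order methods of this kind typically build on, with illustrative function names and a noisy primal step as a sketch:

```python
import numpy as np

def gaussian_sigma(sensitivity, eps, delta):
    # Classical Gaussian-mechanism calibration (valid for eps < 1):
    # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / eps
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

def private_primal_update(x, grad, lr, sensitivity, eps, delta, rng):
    # One noisy gradient-style primal step: perturb the gradient with
    # calibrated Gaussian noise before descending
    sigma = gaussian_sigma(sensitivity, eps, delta)
    noise = rng.normal(0.0, sigma, size=x.shape)
    return x - lr * (grad + noise)

rng = np.random.default_rng(0)
x = private_primal_update(np.zeros(3), np.ones(3), 0.1, 1.0, 0.5, 1e-5, rng)
print(x)
```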
no code implementations • 21 Oct 2020 • Hongying Liu, Zhenyu Zhou, Fanhua Shang, Xiaoyu Qi, Yuanyuan Liu, Licheng Jiao
Existing white-box attack algorithms can generate powerful adversarial examples.
2 code implementations • 24 Aug 2020 • Hongying Liu, Zhubo Ruan, Chaowei Fang, Peng Zhao, Fanhua Shang, Yuanyuan Liu, Lijun Wang
Spherical videos, also known as 360° (panorama) videos, can be viewed with various virtual reality devices such as computers and head-mounted displays.
no code implementations • 25 Jul 2020 • Hongying Liu, Zhubo Ruan, Peng Zhao, Chao Dong, Fanhua Shang, Yuanyuan Liu, Linlin Yang, Radu Timofte
To the best of our knowledge, this work is the first systematic review of VSR tasks; we expect it to contribute to the development of this area and to deepen our understanding of deep learning-based VSR techniques.
no code implementations • 2 Dec 2019 • Fanhua Shang, Bingkun Wei, Hongying Liu, Yuanyuan Liu, Jiacheng Zhuo
Large-scale non-convex sparsity-constrained problems have recently gained extensive attention.
no code implementations • 25 Sep 2019 • Bingkun Wei, Yangyang Li, Fanhua Shang, Yuanyuan Liu, Hongying Liu, ShengMei Shen
To address this issue, we propose a novel hard thresholding algorithm, called Semi-stochastic Block Coordinate Descent Hard Thresholding Pursuit (SBCD-HTP).
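At the core of any hard thresholding pursuit is the projection onto the sparsity constraint: keep the k largest-magnitude entries and zero the rest. A minimal sketch of that operator (the semi-stochastic block coordinate machinery of SBCD-HTP is in the paper itself):

```python
import numpy as np

def hard_threshold(x, k):
    # Project x onto the set of k-sparse vectors: keep the k entries
    # with largest absolute value, zero out everything else
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

x = np.array([3.0, -1.0, 0.5, -4.0])
print(hard_threshold(x, 2))   # keeps 3.0 and -4.0
```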
no code implementations • 25 Sep 2019 • Fanhua Shang, Lin Kong, Yuanyuan Liu, Hua Huang, Hongying Liu
Moreover, our theoretical analysis shows that AVR-SExtraGD enjoys the best-known convergence rates and oracle complexities of stochastic first-order algorithms such as Katyusha for both strongly convex and non-strongly convex problems.
no code implementations • 24 May 2019 • Yuanyuan Liu, Jiyao Peng, Jiabei Zeng, Shiguang Shan
Multi-view facial expression recognition (FER) is a challenging task because the appearance of an expression varies in poses.
no code implementations • 4 Dec 2018 • Rui Luo, Qiang Zhang, Yuanyuan Liu
We propose a new sampler that integrates the protocol of parallel tempering with the Nosé-Hoover (NH) dynamics.
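The NH dynamics is the paper's contribution; the exchange step in parallel tempering, however, follows the standard Metropolis criterion for swapping replicas at inverse temperatures β_i, β_j with energies E_i, E_j. A sketch of that acceptance probability (not the paper's full sampler):

```python
import math

def swap_prob(beta_i, beta_j, e_i, e_j):
    # Metropolis acceptance probability for exchanging two replicas:
    # min(1, exp((beta_i - beta_j) * (e_i - e_j)))
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

# A hotter replica (smaller beta) sitting at lower energy is
# always accepted for a swap:
print(swap_prob(1.0, 0.5, 2.0, 1.0))
```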
no code implementations • 8 Nov 2018 • Qiang Zhang, Rui Luo, Yaodong Yang, Yuanyuan Liu
As an indicator of the level of risk and the degree of variation, volatility is important for analysing financial markets, and it is taken into consideration in various financial decision-making processes.
no code implementations • 11 Oct 2018 • Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-Quan Luo, Zhouchen Lin
The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven effective priors for many applications such as background modeling, photometric stereo and image alignment.
no code implementations • 28 Feb 2018 • Fanhua Shang, Yuanyuan Liu, James Cheng
The Schatten quasi-norm was introduced to bridge the gap between the trace norm and rank function.
no code implementations • 26 Feb 2018 • Fanhua Shang, Yuanyuan Liu, Kaiwen Zhou, James Cheng, Kelvin K. W. Ng, Yuichi Yoshida
In order to make sufficient decrease for stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient decrease versions of stochastic variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.
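SVRG-SD and SAGA-SD build on the standard SVRG variance-reduced gradient estimator g = ∇f_i(x) − ∇f_i(x̃) + ∇F(x̃), where x̃ is a snapshot point. A minimal least-squares sketch of that estimator (the sufficient-decrease criterion itself is the paper's contribution and is not reproduced here); note it is unbiased and collapses to the exact full gradient at the snapshot:

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(5, 3)), rng.normal(size=5)
n = len(b)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i @ x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Gradient of F(x) = (1/n) * sum_i f_i(x)
    return A.T @ (A @ x - b) / n

def svrg_gradient(x, snapshot, mu, i):
    # Variance-reduced estimator: g = grad_i(x) - grad_i(snapshot) + mu,
    # where mu = full_grad(snapshot)
    return grad_i(x, i) - grad_i(snapshot, i) + mu

snapshot = rng.normal(size=3)
mu = full_grad(snapshot)
x = snapshot + 0.1
print(np.mean([svrg_gradient(x, snapshot, mu, i) for i in range(n)], axis=0))
print(full_grad(x))   # the two agree: the estimator is unbiased
```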
no code implementations • NeurIPS 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, Licheng Jiao
In this paper, we propose an accelerated first-order method for geodesically convex optimization, which is the generalization of the standard Nesterov's accelerated method from Euclidean space to nonlinear Riemannian space.
no code implementations • 11 Jul 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng
Besides having a low per-iteration complexity as existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from O(1/T) to O(1/T^2).
no code implementations • 16 Jun 2017 • Yaodong Yang, Rui Luo, Yuanyuan Liu
Mixed models with random effects account for the covariance structure related to the grouping hierarchy in the data.
no code implementations • 23 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo
Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems).
no code implementations • 20 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Kelvin Kai Wing Ng, Yuichi Yoshida
In order to make sufficient decrease for stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient decrease versions of variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.
no code implementations • 4 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng
In this paper, we first define two tractable Schatten quasi-norms, i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively, which lead to the design of very efficient algorithms that only need to update two much smaller factor matrices.
no code implementations • 2 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng
In this paper, we rigorously prove that for any p, p1, p2>0 satisfying 1/p=1/p1+1/p2, the Schatten-p quasi-norm of any matrix is equivalent to minimizing the product of the Schatten-p1 norm (or quasi-norm) and Schatten-p2 norm (or quasi-norm) of its two factor matrices.
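The identity can be checked numerically: if X = UΣVᵀ, the factorization A = UΣ^{p/p1}, B = Σ^{p/p2}Vᵀ attains the stated minimum, since the product of factor norms then equals (Σᵢ σᵢᵖ)^{1/p1 + 1/p2} = (Σᵢ σᵢᵖ)^{1/p}. A small sketch for the bi-nuclear case p1 = p2 = 1, p = 1/2 (variable names are illustrative):

```python
import numpy as np

def schatten(X, p):
    # Schatten-p (quasi-)norm: (sum of sigma_i^p)^(1/p)
    s = np.linalg.svd(X, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

p1, p2 = 1.0, 1.0                  # bi-nuclear case
p = 1.0 / (1.0 / p1 + 1.0 / p2)    # so p = 1/2

# The minimizing factorization splits each singular value as
# sigma^(p/p1) * sigma^(p/p2):
A = U @ np.diag(s ** (p / p1))
B = np.diag(s ** (p / p2)) @ Vt

print(schatten(A, p1) * schatten(B, p2))   # equals ...
print(schatten(X, p))                      # ... the Schatten-1/2 quasi-norm of X
```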
no code implementations • NeurIPS 2014 • Yuanyuan Liu, Fanhua Shang, Wei Fan, James Cheng, Hong Cheng
Then the Schatten 1-norm of the core tensor is used to replace that of the whole tensor, which leads to a much smaller-scale matrix SNM problem.
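The core-for-whole substitution rests on the fact that, with orthonormal factor matrices, each mode unfolding of the full tensor shares its singular values (hence its Schatten 1-norm, the nuclear norm) with the corresponding unfolding of the Tucker core. A small numerical check of that fact (illustrative, not the paper's algorithm):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring the chosen axis to the front, flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def orth(m, r, rng):
    # Matrix with r orthonormal columns, via QR
    q, _ = np.linalg.qr(rng.normal(size=(m, r)))
    return q

rng = np.random.default_rng(0)
C = rng.normal(size=(2, 3, 2))                       # small Tucker core
U1, U2, U3 = orth(5, 2, rng), orth(6, 3, rng), orth(4, 2, rng)

# Full tensor X = C x1 U1 x2 U2 x3 U3 (mode products written as one contraction)
X = np.einsum('abc,ia,jb,kc->ijk', C, U1, U2, U3)

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()   # Schatten 1-norm
print(nuc(unfold(X, 0)), nuc(unfold(C, 0)))   # equal up to rounding
```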
no code implementations • 3 Sep 2014 • Fanhua Shang, Yuanyuan Liu, Hanghang Tong, James Cheng, Hong Cheng
In this paper, we propose a scalable, provable structured low-rank matrix factorization method to recover low-rank and sparse matrices from missing and grossly corrupted data, i.e., robust matrix completion (RMC) problems, or incomplete and grossly corrupted measurements, i.e., compressive principal component pursuit (CPCP) problems.
no code implementations • 5 Jul 2014 • Fanhua Shang, Yuanyuan Liu, James Cheng
To address these problems, we first propose a parallel trace norm regularized tensor decomposition method, and formulate it as a convex optimization problem.