
no code implementations • 1 Dec 2022 • YiSi Luo, XiLe Zhao, Zhemin Li, Michael K. Ng, Deyu Meng

To break this barrier, we propose a low-rank tensor function representation (LRTFR), which can continuously represent data beyond meshgrid with infinite resolution.

no code implementations • 16 Nov 2022 • Yihang Gao, Ka Chun Cheung, Michael K. Ng

Physics-informed neural networks (PINNs) have attracted significant attention for solving partial differential equations (PDEs) in recent years because they alleviate the curse of dimensionality that appears in traditional methods.

no code implementations • 25 Oct 2022 • Junren Chen, Michael K. Ng

Moreover, we develop a variant of QWF that can effectively utilize a pure quaternion prior (e.g., for color images) by incorporating a quaternion phase factor estimate into the QWF iterations.

no code implementations • 12 Oct 2022 • Yihang Gao, Michael K. Ng

The cubic regularization method (CR) and its adaptive version (ARC) are popular Newton-type methods for solving unconstrained non-convex optimization problems, due to their global convergence to local minima under mild conditions.

no code implementations • 27 Sep 2022 • Yihang Gao, Man-Chung Yue, Michael K. Ng

In this paper, we propose and analyze a novel CRS solver based on an approximate secular equation, which requires only some of the Hessian eigenvalues and is therefore much more efficient.

1 code implementation • 19 Aug 2022 • William T. Ng, K. Siu, Albert C. Cheung, Michael K. Ng

We further visualize the embedded dynamic graphs to illustrate the graph representation power of TSAT.

Multivariate Time Series Forecasting • Representation Learning

no code implementations • 17 Aug 2022 • Xiongjun Zhang, Michael K. Ng

In this paper, we propose a sparse nonnegative Tucker decomposition and completion method for the recovery of underlying nonnegative data under noisy observations.

no code implementations • 28 Jul 2022 • Junjun Pan, Michael K. Ng

To determine the source factor matrix in quaternion space, we propose a heuristic algorithm called quaternion successive projection algorithm (QSPA) inspired by the successive projection algorithm.
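As background, the classical real-valued successive projection algorithm that QSPA generalizes can be sketched in a few lines: under the separability assumption it greedily picks the data columns that act as factor columns, projecting out each selected direction. This is a minimal NumPy sketch of the standard SPA, not the quaternion variant proposed in the paper; the toy matrices are illustrative.

```python
import numpy as np

def spa(M, r):
    """Classical successive projection algorithm: under separability,
    greedily pick r columns of M that serve as the 'pure' factor columns."""
    R = M.astype(float)
    picked = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # column of largest norm
        picked.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                     # project out its direction
    return picked

# Separable toy data: columns 0 and 1 of M are exactly the factor columns of W.
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])                 # 3 x 2
H = np.array([[1.0, 0.0, 0.5, 0.3], [0.0, 1.0, 0.5, 0.7]])        # 2 x 4
M = W @ H
print(spa(M, 2))  # → [0, 1]
```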

no code implementations • 27 May 2022 • Junren Chen, Michael K. Ng

In NGPR, we show an error bound of $O\big(\|\eta\|\frac{\sqrt{d}}{n}\big)$ for arbitrary $\eta$.

no code implementations • 23 May 2022 • Yihang Gao, Huafeng Liu, Michael K. Ng, Mingjie Zhou

Wide applications of differentiable two-player sequential games (e.g., image generation by GANs) have attracted much interest from researchers in studying efficient and fast algorithms.

no code implementations • 26 Feb 2022 • Junren Chen, Cheng-Long Wang, Michael K. Ng, Di Wang

As canonical examples, the quantization scheme is applied to three estimation problems: sparse covariance matrix estimation, sparse linear regression, and matrix completion.
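As an illustration of the style of quantization scheme involved, the following is a minimal sketch of uniform quantization with random dithering, a standard device that makes the quantizer unbiased in expectation. The step size `delta` and the toy signal are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def dithered_quantize(x, delta, rng):
    """Uniform quantizer with uniform dither on [-delta/2, delta/2];
    the dither makes the quantizer unbiased: E[Q(x)] = x."""
    tau = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return delta * np.round((x + tau) / delta)

rng = np.random.default_rng(0)
x = 0.3
q = dithered_quantize(np.full(100_000, x), delta=0.5, rng=rng)
print(abs(q.mean() - x) < 0.01)  # → True: empirically unbiased
```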

no code implementations • 4 Feb 2022 • Junren Chen, Michael K. Ng

To fill the theoretical vacancy, we obtain the error bound in both clean and corrupted regimes, which relies on some new results of quaternion matrices.

no code implementations • 25 Nov 2021 • Wen Yu, Baiying Lei, Yanyan Shen, Shuqiang Wang, Yong Liu, Zhiguang Feng, Yong Hu, Michael K. Ng

In this work, a novel Multidirectional Perception Generative Adversarial Network (MP-GAN) is proposed to visualize the morphological features indicating the severity of AD for patients of different stages.

no code implementations • 18 Sep 2021 • Lina Zhuang, Michael K. Ng

This paper introduces a fast and parameter-free hyperspectral image mixed noise removal method (termed FastHyMix), which characterizes the complex distribution of mixed noise by using a Gaussian mixture model and exploits two main characteristics of hyperspectral data, namely low-rankness in the spectral domain and high correlation in the spatial domain.

no code implementations • 8 Sep 2021 • Junren Chen, Michael K. Ng

Given a measurement matrix $\textbf{A}\in \mathbb{C}^{m\times d}$, this paper studies the phase-only reconstruction problem where the aim is to recover a complex signal $\textbf{x}$ in $\mathbb{C}^d$ from the phase of $\textbf{Ax}$.
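To make the setting concrete, the sketch below simulates phase-only measurements under a complex Gaussian measurement matrix and applies a simple correlation-type estimate $\hat{\textbf{x}} = \frac{1}{m}\textbf{A}^*\,\mathrm{sign}(\textbf{Ax})$. This is an illustrative baseline under assumed Gaussian measurements, not necessarily the reconstruction procedure analyzed in the paper; note that only the direction of $\textbf{x}$ can be recovered, since the phases carry no magnitude information.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 2000, 10
A = (rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))) / np.sqrt(2)
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)

b = np.exp(1j * np.angle(A @ x))   # only the phases of Ax are observed
x_hat = A.conj().T @ b / m         # simple correlation-based direction estimate

# Alignment with the true signal (the norm of x is unrecoverable from phases).
corr = abs(np.vdot(x_hat, x)) / (np.linalg.norm(x_hat) * np.linalg.norm(x))
print(corr > 0.95)  # → True: the estimate is well aligned with x
```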

no code implementations • 2 Sep 2021 • Junjun Pan, Michael K. Ng

It aims to find a low-rank approximation of nonnegative data M as a product of two nonnegative matrices W and H. In general, NMF is NP-hard to solve, but it can be solved efficiently under the separability assumption, which requires that the columns of the factor matrix be equal to columns of the input matrix.

1 code implementation • 30 Aug 2021 • Yihang Gao, Michael K. Ng

In this paper, we study a physics-informed algorithm for Wasserstein Generative Adversarial Networks (WGANs) for uncertainty quantification in solutions of partial differential equations.

no code implementations • 29 May 2021 • Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yi Chang, Michael K. Ng, Chao Li

Recently, transform-based tensor nuclear norm minimization methods have been considered for capturing low-rank tensor structures when recovering third-order tensors in multi-dimensional image processing applications.

no code implementations • 18 Mar 2021 • Yihang Gao, Michael K. Ng, Mingjie Zhou

According to our theoretical results, WGANs place a higher requirement on the capacity of the discriminator than on that of the generator, which is consistent with some existing theories.

no code implementations • 16 Dec 2020 • Guang-Jing Song, Michael K. Ng, Xiongjun Zhang

The main aim of this paper is to study $n_1 \times n_2 \times n_3$ third-order tensor completion based on transformed tensor singular value decomposition, and provide a bound on the number of required sample entries.

no code implementations • 17 Nov 2020 • Zhigang Jia, Qiyu Jin, Michael K. Ng, XiLe Zhao

A new patch group based NSS prior scheme is proposed to learn explicit NSS models of natural color images.

no code implementations • 26 Sep 2020 • Tai-Xiang Jiang, Xi-Le Zhao, Hao Zhang, Michael K. Ng

In this paper, we propose a novel tensor learning and coding model for third-order data completion.

no code implementations • 2 Sep 2020 • Guangjing Song, Michael K. Ng, Tai-Xiang Jiang

In this paper, we develop a new alternating projection method to compute nonnegative low rank matrix approximation for nonnegative matrices.
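A bare-bones version of alternating projection for this problem can be sketched as follows: alternate between projecting onto the set of rank-$r$ matrices (via a truncated SVD) and onto the nonnegative orthant (entrywise clipping). The paper's algorithm may differ in its projections and acceleration, so treat this as a generic sketch on illustrative data.

```python
import numpy as np

def alt_proj(M, r, iters=100):
    """Alternate between the nearest rank-<=r matrix (truncated SVD)
    and the nearest nonnegative matrix (entrywise clipping)."""
    X = M.astype(float)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]   # project onto rank-r matrices
        X = np.maximum(X, 0.0)            # project onto the nonnegative orthant
    return X

rng = np.random.default_rng(0)
M = rng.random((20, 3)) @ rng.random((3, 15))   # nonnegative and exactly rank 3
X = alt_proj(M, r=3)
print(np.linalg.norm(X - M) < 1e-8)  # → True: M already lies in both sets
```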

no code implementations • 3 Aug 2020 • Wen Yu, Baiying Lei, Michael K. Ng, Albert C. Cheung, Yanyan Shen, Shuqiang Wang

To the best of our knowledge, the proposed Tensor-train, High-pooling and Semi-supervised learning based GAN (THS-GAN) is the first work to deal with classification on MRI images for AD diagnosis.

no code implementations • 28 Jul 2020 • Tai-Xiang Jiang, Michael K. Ng, Junjun Pan, Guangjing Song

The main aim of this paper is to develop a new algorithm for computing nonnegative low rank tensor approximation for nonnegative tensors that arise in many multi-dimensional imaging applications.

no code implementations • 21 Jul 2020 • Xiongjun Zhang, Michael K. Ng

We propose to minimize the sum of the maximum likelihood estimation for the observations with nonnegativity constraints and the tensor $\ell_0$ norm for the sparse factor.

no code implementations • 29 Apr 2020 • Meng Ding, Ting-Zhu Huang, Xi-Le Zhao, Michael K. Ng, Tian-Hui Ma

TT rank minimization combined with \emph{ket augmentation}, which transforms a lower-order tensor (e.g., visual data) into a higher-order tensor, suffers from serious block artifacts.

no code implementations • 21 Oct 2019 • Junjun Pan, Michael K. Ng, Ye Liu, Xiongjun Zhang, Hong Yan

In this paper, we study the nonnegative tensor data and propose an orthogonal nonnegative Tucker decomposition (ONTD).

no code implementations • 18 Oct 2019 • Jie Zhang, Yuping Duan, Yue Lu, Michael K. Ng, Huibin Chang

In this paper, we propose new operator-splitting algorithms for the total variation regularized infimal convolution (TV-IC) model [4] in order to remove mixed Poisson-Gaussian (MPG) noise.

no code implementations • 16 Sep 2019 • Tai-Xiang Jiang, Michael K. Ng, Xi-Le Zhao, Ting-Zhu Huang

In the literature, the tensor nuclear norm can be computed using the tensor singular value decomposition based on the discrete Fourier transform matrix, and tensor completion can be performed by minimizing the tensor nuclear norm, which is the relaxation of the sum of the matrix ranks of all Fourier-transformed frontal slices.
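For reference, the Fourier-based tensor nuclear norm described here can be computed in a few lines: take a DFT along the third mode and sum the matrix nuclear norms of the resulting frontal slices. This sketch assumes the common convention of dividing by $n_3$; the example tensor is illustrative.

```python
import numpy as np

def tensor_nuclear_norm(X):
    """Tensor nuclear norm of an n1 x n2 x n3 tensor: the (1/n3-scaled) sum of
    matrix nuclear norms of the frontal slices after a DFT along the third mode."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xf[:, :, k], ord='nuc')
               for k in range(X.shape[2])) / X.shape[2]

# If every frontal slice equals A, the DFT concentrates all energy in the
# zero-frequency slice, and the tensor nuclear norm reduces to ||A||_*.
A = np.diag([3.0, 2.0, 1.0])
X = np.stack([A, A], axis=2)                 # 3 x 3 x 2
print(round(tensor_nuclear_norm(X), 6))      # → 6.0
```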

no code implementations • 2 Jul 2019 • Guangjing Song, Michael K. Ng, Xiongjun Zhang

In this paper, we study robust tensor completion by using the transformed tensor singular value decomposition (SVD), which employs unitary transform matrices instead of the discrete Fourier transform matrix used in the traditional tensor SVD.

no code implementations • 28 Dec 2017 • Jonathan Q. Jiang, Michael K. Ng

Tensor robust principal component analysis (TRPCA) has received a substantial amount of attention in various fields.

no code implementations • 2 Aug 2017 • Jonathan Q. Jiang, Michael K. Ng

This paper conducts a rigorous analysis of provable estimation of multidimensional arrays, in particular third-order tensors, from a random subset of their corrupted entries.

no code implementations • 16 Jan 2017 • Xiaowei Zhang, Delin Chu, Li-Zhi Liao, Michael K. Ng

Our algorithm is based on a relationship between kernel CCA and least squares.

no code implementations • CVPR 2015 • Liping Jing, Liu Yang, Jian Yu, Michael K. Ng

The SLRM model takes advantage of nuclear norm regularization on the mapping to effectively capture the label correlations.
