no code implementations • 27 May 2024 • Meng Ding, Kaiyi Ji, Di Wang, Jinhui Xu
In this paper, we provide a general theoretical analysis of forgetting in the linear regression model via Stochastic Gradient Descent (SGD), applicable to both the underparameterized and overparameterized regimes.
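For intuition, here is a minimal sketch of the setting: a linear model is trained sequentially on two regression tasks with SGD, and forgetting is measured as the rise in task-1 risk after training on task 2. Dimensions, noise level, and step size are arbitrary; this is only an illustration of the setting, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr = 20, 200, 0.01

def make_task():
    X = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d)
    return X, X @ w_star + 0.1 * rng.standard_normal(n)

def sgd(w, X, y, epochs=5):
    for _ in range(epochs):
        for i in rng.permutation(n):
            w -= lr * (X[i] @ w - y[i]) * X[i]   # single-sample gradient step
    return w

def risk(w, X, y):
    return np.mean((X @ w - y) ** 2)

X1, y1 = make_task()
X2, y2 = make_task()
w = sgd(np.zeros(d), X1, y1)
loss_before = risk(w, X1, y1)
w = sgd(w, X2, y2)                               # continue training on task 2
print("task-1 forgetting:", risk(w, X1, y1) - loss_before)
```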
no code implementations • 11 Oct 2023 • Liyang Zhu, Meng Ding, Vaneet Aggarwal, Jinhui Xu, Di Wang
To address these issues, we first consider the problem in the $\epsilon$ non-interactive LDP model and provide a lower bound of $\Omega(\frac{\sqrt{dk\log d}}{\sqrt{n}\epsilon})$ on the $\ell_2$-norm estimation error for sub-Gaussian data, where $n$ is the sample size and $d$ is the dimension of the space.
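As background, a minimal sketch of the $\epsilon$ non-interactive LDP model itself: each user perturbs their own report once with Laplace noise calibrated to its $\ell_1$-sensitivity, and the server only ever sees the noisy reports. This illustrates the privacy model, not the paper's estimator or its error rate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 1000, 5, 1.0
data = rng.uniform(-1, 1, size=(n, d))       # each coordinate bounded in [-1, 1]

sensitivity = 2.0 * d                          # l1-sensitivity of one user's report
reports = data + rng.laplace(scale=sensitivity / eps, size=(n, d))

estimate = reports.mean(axis=0)                # server-side aggregation of noisy reports
print("estimation error:", np.linalg.norm(estimate - data.mean(axis=0)))
```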
no code implementations • 3 Oct 2022 • Meng Ding, Mingxi Lei, Yunwen Lei, Di Wang, Jinhui Xu
In this paper, we conduct a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization problem.
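For illustration, here is a first-order bilevel sketch in the FOMAML spirit: the outer variable is an initialization, the inner problem runs a few SGD steps on the training loss, and the outer update uses the validation gradient at the adapted point as a first-order hypergradient. This is a generic example, not the specific setting analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10
X_tr, y_tr = rng.standard_normal((50, d)), rng.standard_normal(50)
X_val, y_val = rng.standard_normal((50, d)), rng.standard_normal(50)

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)          # least-squares gradient

w0 = np.zeros(d)                               # outer variable: initialization
for _ in range(100):                           # outer loop
    w = w0.copy()
    for _ in range(5):                         # inner loop: a few SGD steps
        w -= 0.05 * grad(w, X_tr, y_tr)
    w0 -= 0.05 * grad(w, X_val, y_val)         # first-order hypergradient update
```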
no code implementations • 8 May 2022 • Meng Ding, Xiao Fu, Xi-Le Zhao
However, existing LL1-based HU algorithms use a three-factor parameterization of the tensor (i.e., the hyperspectral image cube), which leads to a number of challenges including high per-iteration complexity, slow convergence, and difficulties in incorporating structural prior information.
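For concreteness, the three-factor parameterization amounts to reconstructing the cube as a sum of rank-$(L_r, L_r, 1)$ blocks, each built from two matrix factors and one spectral vector. A minimal sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K, R, L = 30, 25, 20, 3, 4               # illustrative dimensions only

A = [rng.standard_normal((I, L)) for _ in range(R)]   # spatial factor 1
B = [rng.standard_normal((J, L)) for _ in range(R)]   # spatial factor 2
C = [rng.standard_normal(K) for _ in range(R)]        # spectral signature

# Y ~= sum_r (A_r B_r^T) outer c_r: one rank-(L, L, 1) block per term
Y = sum(np.einsum('ij,k->ijk', A_r @ B_r.T, c_r)
        for A_r, B_r, c_r in zip(A, B, C))
print(Y.shape)                                  # (30, 25, 20)
```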
1 code implementation • 9 Apr 2022 • Jiangyun Li, Sen Zha, Chen Chen, Meng Ding, Tianxiang Zhang, Hong Yu
First, commonly used upsampling methods in the decoder, such as interpolation and deconvolution, are limited by a local receptive field and therefore cannot encode global context.
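Both upsampling styles operate only on a small local neighborhood of the feature map, which is the limitation referred to above. A toy comparison using PyTorch, with illustrative shapes only:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)                       # N, C, H, W feature map

interp = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
deconv = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)

print(interp(x).shape)    # torch.Size([1, 64, 32, 32])
print(deconv(x).shape)    # torch.Size([1, 64, 32, 32])
```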
1 code implementation • 29 Mar 2022 • Jiangyun Li, Hong Yu, Chen Chen, Meng Ding, Sen Zha
In this model, we design a Supervised Attention Module (SAM) based on the attention mechanism, which can capture more accurate and stable long-range dependencies in feature maps without introducing much computational cost.
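By way of illustration, a generic self-attention block over 2D feature maps might look like the sketch below (PyTorch; the class name and all sizes are hypothetical, and this is not the paper's SAM):

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Generic non-local style attention over spatial positions (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)          # N, HW, C//8
        k = self.k(x).flatten(2)                           # N, C//8, HW
        v = self.v(x).flatten(2).transpose(1, 2)           # N, HW, C
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(n, c, h, w)
        return x + out                                     # residual connection

y = SpatialSelfAttention(64)(torch.randn(1, 64, 16, 16))
```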
2 code implementations • 7 Mar 2021 • Wenxuan Wang, Chen Chen, Meng Ding, Jiangyun Li, Hong Yu, Sen Zha
To capture local 3D context information, the encoder first utilizes a 3D CNN to extract volumetric spatial feature maps (a minimal sketch follows this entry).
Ranked #1 on Brain Tumor Segmentation on BRATS 2019 (Dice Score metric)
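A minimal sketch of such a 3D CNN encoder stage on volumetric input; the channel counts, normalization choice, and input size are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv3d(4, 16, kernel_size=3, padding=1),              # e.g. 4 MRI modalities in
    nn.InstanceNorm3d(16), nn.ReLU(inplace=True),
    nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),   # downsample by 2
    nn.InstanceNorm3d(32), nn.ReLU(inplace=True),
)

vol = torch.randn(1, 4, 128, 128, 128)                        # N, C, D, H, W
print(encoder(vol).shape)                                     # torch.Size([1, 32, 64, 64, 64])
```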
no code implementations • 18 Jun 2020 • Meng Ding, Xiao Fu, Ting-Zhu Huang, Jun Wang, Xi-Le Zhao
This work employs an idea that models spectral images as tensors following the block-term decomposition model with multilinear rank-$(L_r, L_r, 1)$ terms (i.e., the LL1 model) and formulates the HSR problem as a coupled LL1 tensor decomposition problem.
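For context, a coupled LL1 objective of the kind described might take the following form, where $P_1, P_2$ denote spatial degradation operators and $P_3$ the spectral response; this notation is a hedged sketch rather than the paper's exact formulation:

$$
\min_{\{A_r,B_r,c_r\}}\;
\Big\|\underline{Y}_{\mathrm H}-\sum_{r=1}^{R}\big((P_1A_r)(P_2B_r)^{\top}\big)\circ c_r\Big\|_F^2
+\Big\|\underline{Y}_{\mathrm M}-\sum_{r=1}^{R}\big(A_rB_r^{\top}\big)\circ(P_3c_r)\Big\|_F^2 ,
$$

where $\underline{Y}_{\mathrm H}$ and $\underline{Y}_{\mathrm M}$ are the observed hyperspectral and multispectral images and each term $(A_rB_r^{\top})\circ c_r$ has multilinear rank $(L_r, L_r, 1)$.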
no code implementations • 16 Jun 2020 • Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin, Pengtao Xie
To address this problem, we propose federated neural architecture search (FNAS), where different parties collectively search for a differentiable architecture by exchanging gradients of architecture variables without exposing their data to other parties.
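A hedged sketch of that exchange pattern, assuming a DARTS-style shared architecture variable `alpha` and a placeholder local objective; `local_arch_grad` and all shapes are my own names for illustration, not the paper's code or API:

```python
import torch

alpha = torch.zeros(8, 5, requires_grad=True)        # shared architecture variables

def local_arch_grad(alpha, private_batch):
    # Placeholder local objective; in a differentiable-NAS setting this would be
    # the validation loss of the weight-shared supernet under `alpha`.
    loss = (private_batch @ alpha).pow(2).mean()
    return torch.autograd.grad(loss, alpha)[0]

parties = [torch.randn(32, 8) for _ in range(4)]      # private data never leaves a party
avg_grad = sum(local_arch_grad(alpha, b) for b in parties) / len(parties)

with torch.no_grad():
    alpha -= 0.01 * avg_grad                          # update from exchanged gradients only
```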
no code implementations • 14 May 2020 • Meng Ding, Ting-Zhu Huang, Xi-Le Zhao, Tian-Hui Ma
Key words: nonconvex optimization, tensor ring rank, logdet function, tensor completion, alternating direction method of multipliers.
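For reference, the logdet function named in the keywords is a standard smooth nonconvex surrogate for matrix rank, applied to unfoldings of the tensor; the exact variant used in the paper may differ from this common form:

$$
\mathrm{rank}(X)\;\approx\;\log\det\big((XX^{\top})^{1/2}+\varepsilon I\big)\;=\;\sum_i \log\big(\sigma_i(X)+\varepsilon\big),
$$

which penalizes large singular values far less than the nuclear norm and is typically minimized within an alternating direction method of multipliers (ADMM) scheme.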
no code implementations • 29 Apr 2020 • Meng Ding, Ting-Zhu Huang, Xi-Le Zhao, Michael K. Ng, Tian-Hui Ma
TT rank minimization combined with ket augmentation, which transforms a lower-order tensor (e.g., visual data) into a higher-order tensor, suffers from serious block artifacts.
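A simplified sketch of the augmentation idea, assuming a $2^n \times 2^n$ image and pairing row/column bits scale by scale so that each pair becomes one mode of size 4; the exact KA indexing used in the paper may differ:

```python
import numpy as np

def ket_augment(img):
    n = int(np.log2(img.shape[0]))
    t = img.reshape([2] * (2 * n))                   # split each spatial axis into 2s
    order = [i for pair in zip(range(n), range(n, 2 * n)) for i in pair]
    t = t.transpose(order)                           # interleave row/column bits
    return t.reshape([4] * n)                        # each row/col bit pair -> one mode of size 4

img = np.arange(16 * 16, dtype=float).reshape(16, 16)   # 2^4 x 2^4 image
print(ket_augment(img).shape)                            # (4, 4, 4, 4)
```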
1 code implementation • 6 Apr 2019 • Chen Chen, Xiaopeng Liu, Meng Ding, Junfeng Zheng, Jiangyun Li
In this work, we aim to segment brain MRI volumes.
no code implementations • 5 Mar 2018 • Meng Ding, Guoliang Fan
We present a novel parametric 3D shape representation, Generalized sum of Gaussians (G-SoG), which is particularly suitable for pose estimation of articulated objects.
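As a rough illustration of a sum-of-Gaussians shape representation, the body is approximated by a sum of anisotropic Gaussian blobs that can be evaluated at query points; all means, covariances, and weights below are placeholders, not the G-SoG parameterization:

```python
import numpy as np

def sog_density(points, means, covs, weights):
    density = np.zeros(len(points))
    for mu, cov, w in zip(means, covs, weights):
        diff = points - mu
        inv = np.linalg.inv(cov)
        norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
        density += w * np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff)) / norm
    return density

pts = np.random.default_rng(0).standard_normal((100, 3))
means = [np.zeros(3), np.array([0.0, 0.0, 1.0])]
covs = [np.diag([0.2, 0.2, 0.5])] * 2              # elongated blobs, e.g. along a limb
print(sog_density(pts, means, covs, [1.0, 1.0]).shape)   # (100,)
```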