no code implementations • 25 Mar 2024 • Eli Chien, Haoyu Wang, Ziang Chen, Pan Li
Our approach achieves a similar utility under the same privacy constraint while using $2\%$ and $10\%$ of the gradient computations compared with the state-of-the-art gradient-based approximate unlearning methods for mini-batch and full-batch settings, respectively.
no code implementations • 14 Feb 2024 • Ziang Chen, Rong Ge
In this work, we study the mean-field flow for learning subspace-sparse polynomials using stochastic gradient descent and two-layer neural networks, where the input distribution is standard Gaussian and the output only depends on the projection of the input onto a low-dimensional subspace.
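The setting described above can be written, in standard notation (the symbols below are illustrative, not taken from the paper), as a two-layer network in mean-field scaling trained on a subspace-sparse target:

```latex
% Two-layer network in mean-field scaling
f(x;\theta) = \frac{1}{m}\sum_{i=1}^{m} a_i\,\sigma(\langle w_i, x\rangle),
\qquad x \sim \mathcal{N}(0, I_d),
% Subspace-sparse target: the label depends only on a low-dimensional projection
y = g(Vx), \qquad V \in \mathbb{R}^{k\times d},\ k \ll d.
```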
no code implementations • 11 Feb 2024 • Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin
Graph neural networks (GNNs) have been widely used to predict properties and heuristics of mixed-integer linear programs (MILPs) and hence accelerate MILP solvers.
no code implementations • 18 Jan 2024 • Eli Chien, Haoyu Wang, Ziang Chen, Pan Li
We propose Langevin unlearning, an unlearning framework based on noisy gradient descent with privacy guarantees for approximate unlearning problems.
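A minimal sketch of the update such a framework builds on: gradient descent with injected Gaussian noise, where unlearning a data point means continuing the noisy dynamics on the reduced dataset instead of retraining from scratch. All hyperparameters and the toy regression task below are illustrative assumptions, not the paper's implementation, and the projection step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data (names and hyperparameters here are
# illustrative, not taken from the paper).
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)

def noisy_gd(X, y, w0, steps=500, lr=0.01, sigma=0.05):
    """Noisy gradient descent: the core update behind Langevin-type
    unlearning; sigma trades utility for privacy."""
    w = w0.copy()
    n = len(y)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad + np.sqrt(2 * lr) * sigma * rng.normal(size=w.shape)
    return w

# Learn on the full data, then "unlearn" the first point by continuing
# the noisy dynamics on the remaining data from the current iterate.
w = noisy_gd(X, y, np.zeros(5))
w_unlearned = noisy_gd(X[1:], y[1:], w, steps=100)
```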
1 code implementation • 19 Oct 2022 • Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
While mixed-integer linear programming (MILP) is NP-hard in general, practical MILP solvers have achieved roughly a 100-fold speedup over the past twenty years.
1 code implementation • 25 Sep 2022 • Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin
In particular, the graph neural network (GNN) is considered a suitable ML model for optimization problems whose variables and constraints are permutation-invariant, for example, the linear program (LP).
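The permutation invariance mentioned above can be checked on a toy stand-in for a GNN: encode an LP $\min c^\top x$ s.t. $Ax \le b$ as a bipartite graph between constraint and variable nodes, run one round of sum-aggregation message passing, and verify that relabeling variables and constraints does not change the output. This is only an illustration, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# An LP min c^T x s.t. Ax <= b, viewed as a bipartite graph between
# m constraint nodes (features b) and n variable nodes (features c).
m, n = 3, 4
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
c = rng.normal(size=n)

def gnn_readout(A, b, c):
    """One round of sum-aggregation message passing followed by a
    permutation-invariant readout: a minimal stand-in for a GNN."""
    h_var = c + A.T @ np.tanh(b)      # constraint -> variable messages
    h_con = b + A @ np.tanh(h_var)    # variable -> constraint messages
    return float(h_var.sum() + h_con.sum())

out = gnn_readout(A, b, c)

# Relabeling variables and constraints permutes rows/columns of (A, b, c)
# but leaves the readout unchanged.
pv, pc = rng.permutation(n), rng.permutation(m)
out_perm = gnn_readout(A[np.ix_(pc, pv)], b[pc], c[pv])
assert np.isclose(out, out_perm)
```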
no code implementations • 11 Mar 2022 • Ziang Chen
We introduce dual reparametrized variational mechanisms for the variational autoencoder (VAE) to tighten the evidence lower bound (ELBO) of the model, and prove the improved performance analytically.
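For reference, the standard ELBO that such mechanisms aim to tighten is the variational lower bound on the log-likelihood:

```latex
\log p_\theta(x) \;\ge\; \mathrm{ELBO}(x)
= \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
- \mathrm{KL}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right),
```

where $q_\phi(z\mid x)$ is the encoder, $p_\theta(x\mid z)$ the decoder, and $p(z)$ the prior; the gap equals $\mathrm{KL}(q_\phi(z\mid x)\,\|\,p_\theta(z\mid x))$.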
no code implementations • 25 Jan 2022 • Ziang Chen, Jianfeng Lu, Yulong Lu, Shengxuan Zhou
Spectral Barron spaces have received considerable interest recently, as they form the natural function spaces for the approximation theory of two-layer neural networks with a dimension-free convergence rate.
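One common definition of the spectral Barron norm (conventions vary; the exponent $s$ and normalization below are one standard choice, not necessarily the paper's) is

```latex
\|f\|_{\mathcal{B}^s} = \int_{\mathbb{R}^d} \bigl(1+|\xi|\bigr)^{s}\,\bigl|\hat f(\xi)\bigr|\,d\xi,
```

where $\hat f$ is the Fourier transform of $f$; taking $s = 1$ recovers Barron's classical finiteness condition $\int |\xi|\,|\hat f(\xi)|\,d\xi < \infty$.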
no code implementations • NeurIPS 2021 • Ziang Chen, Jianfeng Lu, Yulong Lu
Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments.
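A common formulation in this line of work (one of several, shown only as an illustrative example) trains a neural-network ansatz $u_\theta$ by minimizing the PDE residual in expectation:

```latex
\min_{\theta}\;\mathbb{E}_{x\sim\rho}\,\bigl|\mathcal{L}u_\theta(x) - f(x)\bigr|^{2},
```

for a PDE $\mathcal{L}u = f$ on a domain sampled under $\rho$, with boundary conditions typically enforced through an additional penalty term.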