Search Results for author: Ziang Chen

Found 9 papers, 2 papers with code

Stochastic Gradient Langevin Unlearning

no code implementations25 Mar 2024 Eli Chien, Haoyu Wang, Ziang Chen, Pan Li

Our approach achieves similar utility under the same privacy constraint while using only $2\%$ and $10\%$ of the gradient computations required by state-of-the-art gradient-based approximate unlearning methods in the mini-batch and full-batch settings, respectively.

Machine Unlearning

Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input

no code implementations14 Feb 2024 Ziang Chen, Rong Ge

In this work, we study the mean-field flow for learning subspace-sparse polynomials using stochastic gradient descent and two-layer neural networks, where the input distribution is standard Gaussian and the output only depends on the projection of the input onto a low-dimensional subspace.
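A minimal sketch of this setting, with notation assumed here rather than taken from the paper: the target depends on the input only through a projection onto a low-dimensional subspace, and it is learned by a two-layer network whose infinite-width (mean-field) limit replaces the finite sum over neurons by an integral against a parameter distribution.

```latex
% Assumed notation: x ~ N(0, I_d), V in R^{k x d} with k << d,
% g an unknown low-dimensional polynomial, sigma an activation,
% and a width-m two-layer network with parameters (a_i, w_i, b_i).
\[
  y = g(Vx), \qquad x \sim \mathcal{N}(0, I_d),
\]
\[
  f_m(x) = \frac{1}{m}\sum_{i=1}^{m} a_i\,\sigma(w_i^\top x + b_i)
  \;\xrightarrow[m\to\infty]{}\;
  f_\mu(x) = \int a\,\sigma(w^\top x + b)\,\mathrm{d}\mu(a, w, b),
\]
```

where $\mu$ is the distribution of neuron parameters, and the mean-field flow tracks how $\mu$ evolves under (stochastic) gradient descent on the population loss.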

Rethinking the Capacity of Graph Neural Networks for Branching Strategy

no code implementations11 Feb 2024 Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin

Graph neural networks (GNNs) have been widely used to predict properties and heuristics of mixed-integer linear programs (MILPs) and hence accelerate MILP solvers.

Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning

no code implementations18 Jan 2024 Eli Chien, Haoyu Wang, Ziang Chen, Pan Li

We propose Langevin unlearning, an unlearning framework based on noisy gradient descent with privacy guarantees for approximate unlearning problems.

Machine Unlearning
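As a rough illustration only, and not the authors' implementation: the basic projected noisy gradient descent (Langevin-style) update that such frameworks build on. The step size, noise scale, and projection radius below are placeholder assumptions.

```python
import numpy as np

def noisy_gd_step(theta, grad_fn, eta=0.1, sigma=0.05, radius=1.0, rng=None):
    """One projected noisy gradient descent (Langevin-style) step.

    theta   -- current parameter vector
    grad_fn -- callable returning the loss gradient at theta
    eta     -- step size (placeholder value)
    sigma   -- std of the injected Gaussian noise (placeholder value)
    radius  -- radius of the ball to project back onto (placeholder value)
    """
    rng = rng if rng is not None else np.random.default_rng()
    # gradient step plus Gaussian noise
    theta = theta - eta * grad_fn(theta) + sigma * rng.standard_normal(theta.shape)
    # project back onto the constraint set (a Euclidean ball here)
    norm = np.linalg.norm(theta)
    if norm > radius:
        theta = theta * (radius / norm)
    return theta

# toy usage with a quadratic loss L(theta) = 0.5 * ||theta - target||^2
target = np.array([0.3, -0.2])
theta = np.zeros(2)
for _ in range(100):
    theta = noisy_gd_step(theta, grad_fn=lambda t: t - target)
```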

On Representing Mixed-Integer Linear Programs by Graph Neural Networks

1 code implementation19 Oct 2022 Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin

While mixed-integer linear programming (MILP) is NP-hard in general, practical MILP solving has seen a roughly 100-fold speedup over the past twenty years.
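For reference, a generic MILP can be written in the standard textbook form below (this formulation is not lifted from the paper):

```latex
\[
  \min_{x \in \mathbb{R}^n} \; c^\top x
  \quad \text{s.t.} \quad A x \le b, \;\; \ell \le x \le u, \;\;
  x_j \in \mathbb{Z} \;\; \text{for } j \in I,
\]
```

where $I \subseteq \{1, \dots, n\}$ indexes the integer-constrained variables; dropping the integrality constraints yields an ordinary linear program (LP).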

On Representing Linear Programs by Graph Neural Networks

1 code implementation25 Sep 2022 Ziang Chen, Jialin Liu, Xinshang Wang, Jianfeng Lu, Wotao Yin

In particular, the graph neural network (GNN) is considered a suitable ML model for optimization problems whose variables and constraints are permutation-invariant, for example, the linear program (LP).
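A minimal sketch of the commonly used bipartite encoding that such GNN approaches operate on: one node per variable, one node per constraint, and an edge carrying each nonzero coefficient. The class and field names below are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class LPGraph:
    """Bipartite graph view of an LP: min c^T x  s.t.  A x <= b."""
    var_features: list = field(default_factory=list)  # one entry per variable, e.g. its objective coefficient c_j
    con_features: list = field(default_factory=list)  # one entry per constraint, e.g. its right-hand side b_i
    edges: list = field(default_factory=list)         # (constraint i, variable j, coefficient A_ij)

def lp_to_graph(c, A, b):
    g = LPGraph()
    g.var_features = [[cj] for cj in c]
    g.con_features = [[bi] for bi in b]
    for i, row in enumerate(A):
        for j, aij in enumerate(row):
            if aij != 0.0:  # keep only nonzero coefficients as edges
                g.edges.append((i, j, aij))
    return g

# Example: min x0 + 2*x1  s.t.  x0 + x1 <= 1,  x0 <= 0.5
graph = lp_to_graph(c=[1.0, 2.0], A=[[1.0, 1.0], [1.0, 0.0]], b=[1.0, 0.5])
```

Because reordering variables or constraints only permutes the two node sets, a message-passing GNN applied to this graph is automatically permutation-invariant, which is the property highlighted above.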

Dual reparametrized Variational Generative Model for Time-Series Forecasting

no code implementations11 Mar 2022 Ziang Chen

We introduce dual reparametrized variational mechanisms on the variational autoencoder (VAE) to tighten the evidence lower bound (ELBO) of the model, and we prove the performance improvement analytically.

Denoising Time Series +1
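For context, the standard single-latent ELBO that such reparametrized VAE variants aim to tighten is shown below (textbook form; the paper's dual construction is not reproduced here):

```latex
\[
  \log p_\theta(x) \;\ge\; \mathrm{ELBO}(x)
  \;=\; \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]
  \;-\; \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr),
\]
```

with the expectation typically estimated via the reparametrization $z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon$, $\epsilon \sim \mathcal{N}(0, I)$.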

A Regularity Theory for Static Schrödinger Equations on $\mathbb{R}^d$ in Spectral Barron Spaces

no code implementations25 Jan 2022 Ziang Chen, Jianfeng Lu, Yulong Lu, Shengxuan Zhou

Spectral Barron spaces have received considerable interest recently, as they are the natural function spaces for the approximation theory of two-layer neural networks with a dimension-free convergence rate.
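One common way to define a spectral Barron space, via a weighted $L^1$ norm of the Fourier transform, is sketched below; the exact convention (weight exponent, normalization) may differ from the one used in the paper.

```latex
\[
  \|f\|_{\mathcal{B}^s} \;=\; \int_{\mathbb{R}^d} \bigl(1 + |\xi|\bigr)^{s}\, \bigl|\hat{f}(\xi)\bigr| \,\mathrm{d}\xi,
  \qquad
  \mathcal{B}^s = \bigl\{ f : \|f\|_{\mathcal{B}^s} < \infty \bigr\},
\]
```

and functions with finite Barron norm admit width-$m$ two-layer network approximations with error decaying like $O(m^{-1/2})$, independent of the dimension $d$.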

On the Representation of Solutions to Elliptic PDEs in Barron Spaces

no code implementations NeurIPS 2021 Ziang Chen, Jianfeng Lu, Yulong Lu

Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments.
