Search Results for author: Toan N. Nguyen

Found 5 papers, 0 papers with code

Generalizing DP-SGD with Shuffling and Batch Clipping

no code implementations12 Dec 2022 Marten van Dijk, Phuong Ha Nguyen, Toan N. Nguyen, Lam M. Nguyen

Classical differentially private DP-SGD implements individual (per-example) clipping with random subsampling, which forces a mini-batch SGD approach.
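For contrast with the batch-clipping generalization the paper studies, here is a minimal NumPy sketch of the classical per-example recipe the abstract refers to (the function name, clipping norm, and noise scale are illustrative choices, not taken from the paper):

```python
import numpy as np

def dp_sgd_step(grads, C=1.0, sigma=1.0, rng=None):
    """One classical DP-SGD step with individual (per-example) clipping.

    grads: array of shape (batch, dim), one gradient per sampled example.
    Each per-example gradient is clipped to L2 norm at most C, the clipped
    gradients are summed, and Gaussian noise of scale sigma*C is added
    before averaging (the standard per-example-clipping recipe).
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, sigma * C, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(grads)
```

Batch clipping, by contrast, clips a single aggregated mini-batch gradient rather than each example's gradient, which is what lets the paper move beyond the mini-batch-SGD constraint.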

Proactive DP: A Multiple Target Optimization Framework for DP-SGD

no code implementations17 Feb 2021 Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Phuong Ha Nguyen

Generally, DP-SGD is $(\epsilon\leq 1/2,\delta=1/N)$-DP if $\sigma=\sqrt{2(\epsilon +\ln(1/\delta))/\epsilon}$, with $T$ at least $\approx 2k^2/\epsilon$ and $(2/e)^2k^2-1/2\geq \ln(N)$, where $T$ is the total number of rounds and $K=kN$ is the total number of gradient computations, with $k$ measuring $K$ in epochs over the local data set of size $N$.
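As a sanity check on the stated bound, a small sketch (the helper name and the numeric inputs are mine, not from the paper) that computes $\sigma$ for $\delta = 1/N$ and evaluates the two side conditions:

```python
import math

def proactive_dp_params(eps, N, k):
    """Evaluate the stated DP-SGD bound (illustrative helper).

    Returns (sigma, T_min, cond) where
      sigma = sqrt(2*(eps + ln(1/delta)) / eps) with delta = 1/N,
      T_min ~ 2*k^2/eps is the approximate minimum number of rounds,
      cond checks (2/e)^2 * k^2 - 1/2 >= ln(N),
    with K = k*N total gradient computations (k epochs of size N).
    """
    delta = 1.0 / N
    sigma = math.sqrt(2.0 * (eps + math.log(1.0 / delta)) / eps)
    T_min = 2.0 * k**2 / eps
    cond = (2.0 / math.e) ** 2 * k**2 - 0.5 >= math.log(N)
    return sigma, T_min, cond
```

For example, with $\epsilon=1/2$, $N=10{,}000$, and $k=5$ epochs, the condition on $k$ holds and the bound yields $\sigma \approx 6.23$.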

Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes

no code implementations27 Oct 2020 Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen

We consider big-data analysis where training data is distributed among local data sets in a heterogeneous way, and we wish to move SGD computations to the local compute nodes where the data resides.
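The title's "linearly increasing mini-batch sizes" can be sketched as follows; this is a toy single-node least-squares illustration of the batch schedule, with all names, the learning rate, and the objective chosen by me rather than taken from the paper:

```python
import numpy as np

def local_sgd(data, w0, rounds, lr=0.05, base_batch=2):
    """Local SGD sketch with a linearly increasing mini-batch schedule.

    In round t the mini-batch size grows as base_batch*(t+1), capped at
    the local data set size; a least-squares gradient stands in for an
    arbitrary local objective.
    """
    rng = np.random.default_rng(0)
    X, y = data
    w = w0.copy()
    for t in range(rounds):
        b = min(base_batch * (t + 1), len(y))  # linearly increasing batch size
        idx = rng.choice(len(y), size=b, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / b
        w -= lr * grad
    return w
```

Early rounds take cheap, noisy steps on small batches; later rounds average over larger batches, reducing gradient variance as the iterate approaches a solution.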
