no code implementations • 21 Jul 2023 • Toan N. Nguyen, Phuong Ha Nguyen, Lam M. Nguyen, Marten van Dijk
In this paper, we propose a new adaptive layerwise clipping (ALC) method and provide rigorous DP proofs for both batch clipping (BC) and ALC.
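Since the abstract's acronyms are compact, a minimal sketch of the two clipping styles may help. It assumes BC clips the aggregated batch gradient once and ALC keeps a separate clipping constant per layer; the function names and the NumPy setting are illustrative, not the paper's code:

```python
import numpy as np

def clip(v, C):
    """Scale v down so its L2 norm is at most C."""
    norm = np.linalg.norm(v)
    return v if norm <= C else v * (C / norm)

def batch_clip(per_example_grads, C):
    """Batch clipping (BC): average first, then clip the aggregate once."""
    g = np.mean(per_example_grads, axis=0)
    return clip(g, C)

def layerwise_clip(layer_grads, layer_Cs):
    """Layerwise clipping: each layer's gradient gets its own clipping
    constant. An *adaptive* variant (ALC) would update layer_Cs during
    training, e.g. from observed gradient norms."""
    return [clip(g, C) for g, C in zip(layer_grads, layer_Cs)]
```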
no code implementations • 12 Dec 2022 • Marten van Dijk, Phuong Ha Nguyen, Toan N. Nguyen, Lam M. Nguyen
Classical differentially private SGD (DP-SGD) implements individual clipping with random subsampling, which forces a mini-batch SGD approach.
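A minimal sketch of that individual-clipping-with-random-subsampling step, following the standard DP-SGD recipe rather than this paper's exact formulation (`grad_fn`, the `1e-12` norm guard, and the noise placement are assumptions):

```python
import numpy as np

def dp_sgd_step(params, grad_fn, data, batch_size, C, sigma, lr, rng):
    """One DP-SGD step with individual (per-example) clipping.
    grad_fn(params, example) -> per-example gradient (flat numpy array)."""
    # Random subsampling: each step trains on a fresh random mini-batch.
    idx = rng.choice(len(data), size=batch_size, replace=False)
    # Individual clipping: bound each example's gradient norm by C.
    clipped = []
    for i in idx:
        g = grad_fn(params, data[i])
        g = g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
        clipped.append(g)
    # Gaussian noise scaled to the clipping bound C gives the DP guarantee.
    noise = rng.normal(0.0, sigma * C, size=params.shape)
    noisy_sum = np.sum(clipped, axis=0) + noise
    return params - lr * noisy_sum / batch_size
```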
no code implementations • 17 Feb 2021 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Phuong Ha Nguyen
Generally, DP-SGD is $(\epsilon\leq 1/2,\delta=1/N)$-DP if $\sigma=\sqrt{2(\epsilon +\ln(1/\delta))/\epsilon}$, with $T$ at least $\approx 2k^2/\epsilon$ and $(2/e)^2k^2-1/2\geq \ln(N)$, where $T$ is the total number of rounds and $K=kN$ is the total number of gradient computations, with $k$ measuring $K$ in epochs over the local data set of size $N$.
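To make the conditions concrete, a small sketch that plugs hypothetical values ($N = 10^4$, $k = 5$, $\epsilon = 0.5$; these numbers are illustrative, not from the paper) into the abstract's formulas:

```python
import math

def dpsgd_parameters(eps, N, k):
    """Evaluate the abstract's conditions for (eps <= 1/2, delta = 1/N)-DP."""
    delta = 1.0 / N
    sigma = math.sqrt(2 * (eps + math.log(1 / delta)) / eps)
    T_min = 2 * k**2 / eps                       # required rounds (approx.)
    k_condition = (2 / math.e)**2 * k**2 - 0.5 >= math.log(N)
    return sigma, T_min, k_condition

# Hypothetical numbers: N = 10**4 local samples, k = 5 epochs, eps = 0.5.
sigma, T_min, ok = dpsgd_parameters(0.5, 10**4, 5)
print(f"sigma = {sigma:.2f}, T >= ~{T_min:.0f}, k-condition holds: {ok}")
# prints: sigma = 6.23, T >= ~100, k-condition holds: True
```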
no code implementations • 27 Oct 2020 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
We consider big data analysis where training data is distributed among local data sets in a heterogeneous way, and we wish to move SGD computations to local compute nodes where the local data resides.
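A minimal sketch of moving SGD to where the data lives, using generic synchronous local SGD with periodic model averaging; the paper's actual scheme may organize rounds and mini-batches differently, so treat `local_sgd` and its arguments as assumptions:

```python
import numpy as np

def local_sgd(global_params, local_datasets, grad_fn, lr, local_steps, rng):
    """One communication round: each node runs SGD on its own (heterogeneous)
    local data, then the server averages the resulting models."""
    updated = []
    for data in local_datasets:           # each node keeps its data in place
        w = global_params.copy()
        for _ in range(local_steps):      # SGD computations happen locally
            x = data[rng.integers(len(data))]
            w -= lr * grad_fn(w, x)
        updated.append(w)
    return np.mean(updated, axis=0)       # only models cross the network
```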
no code implementations • 17 Jul 2020 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
The feasibility of federated learning is highly constrained by the server-client infrastructure in terms of network communication.