1 code implementation • 12 Feb 2024 • Edwige Cyffers, Aurélien Bellet, Jalaj Upadhyay
Federated learning owes its popularity to its potential for better scalability and to the fact that participants retain control of their data, improving data security and sovereignty.
no code implementations • 18 Jul 2023 • Monika Henzinger, Jalaj Upadhyay, Sarvagya Upadhyay
We give a constructive proof of an almost exact upper bound on the $\gamma_2$ and $\gamma_F$ norms and an almost tight lower bound on the $\gamma_2$ norm for a large class of lower-triangular matrices.
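For orientation, the $\gamma_2$ norm of a matrix $M$ is the minimum, over factorizations $M = LR$, of the largest $\ell_2$ row norm of $L$ times the largest $\ell_2$ column norm of $R$, so any explicit factorization certifies an upper bound. The sketch below is an illustration of that fact (not the paper's proof): it bounds $\gamma_2$ for the all-ones lower-triangular counting matrix via its lower-triangular Toeplitz square root, whose first-column entries are the Taylor coefficients of $1/\sqrt{1-x}$.

```python
import numpy as np
from math import comb

def sqrt_counting_factor(n):
    # First column of the lower-triangular Toeplitz square root B of the
    # n x n counting matrix C (all-ones lower triangular): the entries are
    # the Taylor coefficients of 1/sqrt(1-x), i.e. comb(2k, k) / 4^k.
    c = np.array([comb(2 * k, k) / 4**k for k in range(n)])
    B = np.zeros((n, n))
    for i in range(n):
        B[i, : i + 1] = c[: i + 1][::-1]   # B[i, j] = c_{i-j}
    return B

n = 256
C = np.tril(np.ones((n, n)))               # counting (prefix-sum) matrix
B = sqrt_counting_factor(n)
assert np.allclose(B @ B, C)               # B really is a square root of C

# Any factorization C = L @ R certifies
# gamma_2(C) <= (max row norm of L) * (max column norm of R); take L = R = B.
bound = np.linalg.norm(B, axis=1).max() * np.linalg.norm(B, axis=0).max()
print(f"n = {n}: gamma_2(C) <= {bound:.3f}  (1 + ln(n)/pi = {1 + np.log(n) / np.pi:.3f})")
```

The printed bound grows roughly like $1 + \ln(n)/\pi$, consistent with the known logarithmic behavior of $\gamma_2$ for the counting matrix.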
no code implementations • 9 Nov 2022 • Monika Henzinger, Jalaj Upadhyay, Sarvagya Upadhyay
Our lower bound applies to any continual counting mechanism and is the first tight lower bound on continual counting under approximate differential privacy.
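For context, the classical upper-bound counterpart that such lower bounds are measured against is the binary-tree mechanism for continual counting (Dwork et al.; Chan, Shi, and Song). Below is a minimal pure-$\epsilon$ Laplace-noise sketch of it, with constants chosen for illustration rather than optimality; the paper's lower bound concerns the approximate-DP analogue.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_mechanism(stream, eps):
    # Binary-tree mechanism for continual counting: every prefix sum is
    # assembled from at most L noisy dyadic-block sums, and every stream
    # element touches at most L blocks, so Lap(L / eps) noise per block
    # suffices for eps-DP.
    T = len(stream)
    L = T.bit_length() + 1            # number of tree levels
    scale = L / eps                   # per-block Laplace scale
    alpha = np.zeros(L)               # exact sums of currently open dyadic blocks
    alpha_hat = np.zeros(L)           # their noisy, published counterparts
    outputs = []
    for t, x in enumerate(stream, start=1):
        i = (t & -t).bit_length() - 1      # least-significant set bit of t
        alpha[i] = alpha[:i].sum() + x     # merge closed lower blocks into level i
        alpha[:i] = 0
        alpha_hat[:i] = 0
        alpha_hat[i] = alpha[i] + rng.laplace(scale=scale)
        # prefix sum estimate = sum of noisy blocks for every set bit of t
        outputs.append(sum(alpha_hat[j] for j in range(L) if (t >> j) & 1))
    return outputs

stream = rng.integers(0, 2, size=1000)     # bit stream x_1, ..., x_T
est = binary_mechanism(stream, eps=1.0)
print("true count:", stream.sum(), " private estimate:", round(est[-1], 1))
```

Each output aggregates only polylogarithmically many noisy terms, which is where the polylog-in-$T$ additive error of this mechanism comes from.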
no code implementations • 4 Apr 2022 • Arun Ganesh, Abhradeep Thakurta, Jalaj Upadhyay
In this paper we provide an algorithmic framework based on Langevin diffusion (LD) and its corresponding discretizations that allows us to simultaneously obtain: i) an algorithm for sampling from the exponential mechanism, whose privacy analysis does not depend on convexity and which can be stopped at any time without compromising privacy, and ii) tight uniform stability guarantees for the exponential mechanism.
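A minimal sketch of the underlying idea, with placeholder constants: the exponential mechanism samples $\theta$ with density proportional to $\exp(-\tfrac{\epsilon}{2\Delta} L(\theta))$, and the unadjusted Langevin discretization below takes gradient steps plus Gaussian noise toward that density. Calibrating the step size, iteration count, and sensitivity $\Delta$ into an actual privacy guarantee is precisely what requires the paper's analysis; treat the loss, `eta`, and `steps` here as hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_exponential_mechanism(grad_loss, d, eps, sensitivity,
                                   eta=1e-3, steps=5_000):
    # Unadjusted Langevin algorithm targeting the exponential-mechanism
    # density pi(theta) ~ exp(-(eps / (2 * sensitivity)) * loss(theta)).
    # Illustration only: the privacy of the discretized, finite-time chain
    # is what the paper's analysis establishes; constants are placeholders.
    beta = eps / (2.0 * sensitivity)
    theta = np.zeros(d)
    for _ in range(steps):
        theta = (theta
                 - eta * beta * grad_loss(theta)                 # drift toward low loss
                 + np.sqrt(2.0 * eta) * rng.standard_normal(d))  # diffusion term
    return theta  # the chain can be stopped after any number of steps

# toy usage (hypothetical setup): quadratic loss around bounded data
data = rng.uniform(-1.0, 1.0, size=(100, 2))
grad = lambda th: (th - data).sum(axis=0)   # gradient of 0.5 * sum ||th - x_i||^2
print(langevin_exponential_mechanism(grad, d=2, eps=1.0, sensitivity=2.0))
```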
no code implementations • 23 Feb 2022 • Hendrik Fichtenberger, Monika Henzinger, Jalaj Upadhyay
Finally, we note that our result can be used to get a fine-grained error bound for non-interactive local learning and the first lower bounds on the additive error for $(\epsilon,\delta)$-differentially private counting under continual observation.
no code implementations • 6 Sep 2020 • Jalaj Upadhyay, Sarvagya Upadhyay
We give the first efficient $o(W)$-space differentially private algorithms for spectral approximation, principal component analysis, and linear regression.
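To make the space constraint concrete, here is a hedged baseline, not the paper's $o(W)$-space algorithm and ignoring any window-expiry aspect: stream the rows, keep only the $d \times d$ Gram matrix, and release it once with symmetric Gaussian noise in the style of "Analyze Gauss". Space is $O(d^2)$, independent of the stream parameter $W$.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_gram_stream(rows, d, eps, delta, row_bound=1.0):
    # Baseline sketch (not the paper's algorithm): one pass over the rows,
    # O(d^2) memory, then Gaussian-mechanism release. Adding or removing one
    # row of l2 norm <= row_bound changes A^T A by at most row_bound**2 in
    # Frobenius norm, which calibrates sigma.
    C = np.zeros((d, d))
    for a in rows:                       # the stream itself is never stored
        C += np.outer(a, a)
    sigma = row_bound**2 * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    N = rng.normal(scale=sigma, size=(d, d))
    return C + np.triu(N) + np.triu(N, 1).T   # symmetric noise matrix

d = 5
stream = (x / max(1.0, np.linalg.norm(x))          # clip rows to norm <= 1
          for x in rng.standard_normal((10_000, d)))
C_priv = dp_gram_stream(stream, d, eps=1.0, delta=1e-6)
print("top eigenvalue of noisy Gram:", np.linalg.eigvalsh(C_priv)[-1].round(2))
```

The noisy Gram matrix then supports downstream spectral approximation, PCA, or ridge-style regression without touching the data again.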
no code implementations • NeurIPS 2019 • Raman Arora, Jalaj Upadhyay
In this paper, we study private sparsification of graphs.
no code implementations • NeurIPS 2018 • Raman Arora, Vladimir Braverman, Jalaj Upadhyay
In this paper, we study the following robust low-rank matrix approximation problem: given a matrix $A \in \mathbb{R}^{n \times d}$, find a rank-$k$ matrix $B$, while satisfying differential privacy, such that $\| A - B \|_p \leq \alpha \cdot \mathsf{OPT}_k(A) + \tau$, where $\| M \|_p$ is the entry-wise $\ell_p$-norm and $\mathsf{OPT}_k(A) := \min_{\mathsf{rank}(X) \leq k} \| A - X \|_p$.
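For orientation, a hedged baseline for the $p = 2$ (Frobenius) case only, not the paper's entrywise-$\ell_p$ algorithm: perturb the input with Gaussian noise calibrated to a per-entry sensitivity and project to rank $k$. This reproduces the $\alpha \cdot \mathsf{OPT}_k(A) + \tau$ error shape with constant $\alpha$ and $\tau$ growing with the Frobenius norm of the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_low_rank_frobenius(A, k, eps, delta, entry_bound=1.0):
    # Input-perturbation baseline (p = 2 only, not the paper's algorithm):
    # add Gaussian noise assuming neighboring matrices differ in one entry
    # by at most entry_bound, then truncate the SVD to rank k.
    sigma = entry_bound * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = A + rng.normal(scale=sigma, size=A.shape)
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]     # best rank-k fit to the noisy matrix

A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 30))  # rank-15 input
B = dp_low_rank_frobenius(A, k=10, eps=1.0, delta=1e-6)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
opt = np.linalg.norm(A - (U[:, :10] * s[:10]) @ Vt[:10], 'fro')     # OPT_10(A)
print(f"OPT_10(A) = {opt:.1f},  private rank-10 error = {np.linalg.norm(A - B, 'fro'):.1f}")
```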
no code implementations • NeurIPS 2018 • Jalaj Upadhyay
Even though these settings are well studied without privacy, surprisingly, there are no private algorithms for them (except when a matrix is updated row by row).