no code implementations • 20 Apr 2023 • Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen
This article measures how sparsity can make neural networks more robust to membership inference attacks.
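The listing does not detail the experimental pipeline, but the two ingredients it names can be sketched: magnitude pruning to make a network sparse, and a standard loss-threshold membership inference attack whose AUC measures how well an adversary distinguishes training members from non-members. This is a hedged illustration, not the paper's actual protocol; `magnitude_prune` and `loss_threshold_mia_auc` are hypothetical helper names.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights
    (a common sparsification baseline; the paper may use another scheme)."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

def loss_threshold_mia_auc(train_losses, test_losses):
    """AUC of the classic loss-threshold membership inference attack:
    members (training points) tend to have lower loss, so the attack
    scores each point by its negated loss.  AUC 0.5 = attack no better
    than chance; closer to 0.5 suggests more privacy."""
    scores = np.concatenate([-np.asarray(train_losses), -np.asarray(test_losses)])
    labels = np.concatenate([np.ones(len(train_losses)), np.zeros(len(test_losses))])
    order = np.argsort(scores)                      # ascending scores
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)    # 1-based ranks
    n_pos, n_neg = len(train_losses), len(test_losses)
    # Mann-Whitney U statistic normalized to an AUC
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Comparing this AUC before and after pruning, at matched accuracy, is one simple way to quantify a robustness/sparsity trade-off of the kind the abstract describes.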
1 code implementation • 28 Jul 2022 • Léon Zheng, Gilles Puy, Elisa Riccietti, Patrick Pérez, Rémi Gribonval
We introduce a regularization loss based on kernel mean embeddings with rotation-invariant kernels on the hypersphere (also known as dot-product kernels) for self-supervised learning of image representations.
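A minimal sketch of the idea: embed normalized representations on the hypersphere, and penalize the distance (in a kernel mean embedding / MMD sense) between the batch distribution and the uniform distribution on the sphere, using a kernel that depends only on the dot product. The odd-degree monomial kernel below is an illustrative assumption, not the paper's kernel; for such a kernel the uniform-distribution terms of the squared MMD vanish by symmetry, so the loss reduces to the mean pairwise kernel value.

```python
import numpy as np

def dot_product_kernel(X, Y, degree=3):
    # Rotation-invariant kernel on the sphere: k(x, y) = <x, y>^degree.
    # Any kernel depending only on <x, y> would fit the same scheme.
    return (X @ Y.T) ** degree

def kme_uniformity_loss(Z, degree=3):
    """MMD^2 between the batch's kernel mean embedding and that of the
    uniform distribution on the hypersphere.  For an odd-degree
    monomial kernel, E[k(x, u)] over uniform u is 0 by antipodal
    symmetry, so only the batch-batch term remains.  Sketch only."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # project to sphere
    n = Z.shape[0]
    K = dot_product_kernel(Z, Z, degree)
    return K.sum() / (n * n)
```

Minimizing this term encourages representations to spread over the sphere (e.g. antipodal pairs achieve loss 0, while collapsed representations are maximally penalized), which is the uniformity-style regularization the abstract alludes to.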
no code implementations • 4 Oct 2021 • Léon Zheng, Elisa Riccietti, Rémi Gribonval
In particular, for fixed-support sparse matrix factorization, we give a general sufficient condition for identifiability based on rank-one matrix completability. From it we derive a completion algorithm that can verify whether this sufficient condition is satisfied and, when it is, recover the entries of the two sparse factors.
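The key primitive named in the abstract, rank-one matrix completability, can be illustrated in isolation: in a rank-one matrix every 2x2 minor vanishes, so a missing entry can be filled from any "rectangle" of three observed entries with a nonzero corner. The sketch below (a hypothetical helper, not the paper's algorithm) propagates this rule greedily.

```python
import numpy as np

def complete_rank_one(M, mask):
    """Complete a rank-one matrix from its observed entries.
    mask[i, j] is True where M[i, j] is observed.  Assumes the observed
    support is completable: each missing (i, j) eventually closes a
    rectangle with observed entries (i, l), (k, l), (k, j), M[k, l] != 0.
    Then rank one forces M[i, j] = M[i, l] * M[k, j] / M[k, l]."""
    M, mask = M.astype(float).copy(), mask.copy()
    changed = True
    while changed and not mask.all():
        changed = False
        for i, j in zip(*np.where(~mask)):
            for k in np.where(mask[:, j])[0]:          # rows observed in column j
                ls = np.where(mask[i] & mask[k])[0]    # columns observed in rows i and k
                ls = ls[M[k, ls] != 0]
                if ls.size:
                    l = ls[0]
                    M[i, j] = M[i, l] * M[k, j] / M[k, l]
                    mask[i, j] = True
                    changed = True
                    break
    return M, mask
```

When the loop terminates with `mask.all()`, the observed support was (greedily) completable; the abstract's sufficient condition plays an analogous role for the blocks arising in fixed-support factorization.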
1 code implementation • 4 Oct 2021 • Léon Zheng, Elisa Riccietti, Rémi Gribonval
Our main contribution is to prove that any $N \times N$ matrix with the so-called butterfly structure admits an essentially unique factorization into $J$ butterfly factors (where $N = 2^{J}$), and that these factors can be recovered by a hierarchical factorization method that recursively factorizes the considered matrix into two factors.
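A concrete instance of the butterfly structure referenced above: the $2^J \times 2^J$ Sylvester-Hadamard matrix is the product of $J$ butterfly factors, each of the form $I_{2^{j}} \otimes H_2 \otimes I_{2^{J-1-j}}$, so each factor has exactly two nonzeros per row and per column. The sketch below only exhibits such a factorization and checks it; it does not implement the paper's hierarchical recovery algorithm.

```python
import numpy as np

H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])

def butterfly_factors(J):
    """The J butterfly factors I_{2^j} kron H2 kron I_{2^(J-1-j)},
    j = 0..J-1.  Each is 2-sparse per row and per column, and their
    product is the 2^J x 2^J Sylvester-Hadamard matrix."""
    return [np.kron(np.kron(np.eye(2 ** j), H2), np.eye(2 ** (J - 1 - j)))
            for j in range(J)]

def hadamard(J):
    """Sylvester construction: H_{2N} = H2 kron H_N."""
    H = np.array([[1.0]])
    for _ in range(J):
        H = np.kron(H2, H)
    return H
```

The identity behind the check is the mixed-product rule for Kronecker products: the $J$ factors act on disjoint "tensor legs", so their product (in any order) equals $H_2^{\otimes J}$. The paper's result goes in the reverse direction: given the product, the factors are essentially unique and recoverable by recursive two-factor splits.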