no code implementations • 13 Nov 2023 • Jingtong Su, Ya Shi Zhang, Nikolaos Tsilivis, Julia Kempe
Neural Collapse refers to the curious phenomenon at the end of training of a neural network, where feature vectors and classification weights converge to a very simple geometrical arrangement (a simplex).
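For reference, the simplex geometry in question can be sketched as follows (this is the standard simplex equiangular tight frame description associated with Neural Collapse, not a detail quoted from this paper): after centering and rescaling, the $C$ class-mean feature vectors $\tilde{\mu}_1, \dots, \tilde{\mu}_C$ satisfy

$$ \langle \tilde{\mu}_i, \tilde{\mu}_j \rangle = \frac{C\,\delta_{ij} - 1}{C - 1}, $$

i.e., they have equal norms and a common pairwise angle with cosine $-1/(C-1)$, with the classifier weights aligned to the same frame.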
no code implementations • 5 Jul 2023 • Francesco Cagnetta, Deborah Oliveira, Mahalakshmi Sabanayagam, Nikolaos Tsilivis, Julia Kempe
Lecture notes from the course given by Professor Julia Kempe at the summer school "Statistical physics of Machine Learning" in Les Houches.
no code implementations • 19 Apr 2023 • Jingtong Su, Julia Kempe
2) Replacing the front-end VOneBlock with an off-the-shelf, parameter-free Scatternet followed by simple uniform Gaussian noise can achieve substantially greater adversarial robustness without adversarial training.
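A minimal sketch of this recipe, assuming a generic frozen front-end and backbone (the class name, arguments, and noise scale below are illustrative, not details from the paper):

```python
import torch
import torch.nn as nn

class NoisyFixedFrontEnd(nn.Module):
    """Hypothetical sketch: a frozen, parameter-free front-end (e.g. a
    scattering transform) followed by additive Gaussian noise, prepended
    to a standard classifier backbone."""

    def __init__(self, front_end: nn.Module, backbone: nn.Module, noise_std: float = 0.1):
        super().__init__()
        self.front_end = front_end    # fixed transform with no trainable weights
        self.backbone = backbone      # e.g. a ResNet trained on the transformed features
        self.noise_std = noise_std    # assumed noise scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.front_end(x)                          # parameter-free features
        z = z + self.noise_std * torch.randn_like(z)   # uniform additive Gaussian noise
        return self.backbone(z)
```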
1 code implementation • 11 Oct 2022 • Nikolaos Tsilivis, Julia Kempe
The adversarial vulnerability of neural networks, and the subsequent techniques to create robust models, have attracted significant attention; yet we still lack a full understanding of this phenomenon.
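As a concrete illustration of this vulnerability, a standard one-step FGSM attack (Goodfellow et al., 2015) can be sketched as follows; this is a generic attack, not this paper's method, and the budget `eps` is an assumed value:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 8 / 255) -> torch.Tensor:
    """One-step FGSM: move the input in the direction of the sign of the
    loss gradient, within an assumed L-infinity budget eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in [0, 1]
```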
no code implementations • 5 Oct 2022 • Dhrupad Bhardwaj, Julia Kempe, Artem Vysogorets, Angela M. Teng, Evaristus C. Ezekwem
Starting from existing work on network masking (Wortsman et al., 2020), we show that simply learning a linear combination of a small number of task-specific supermasks (impressions) on a randomly initialized backbone network is sufficient both to retain accuracy on previously learned tasks and to achieve high accuracy on unseen tasks.
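A hypothetical sketch of this idea for a single layer; the class name, the softmax mixing, and the use of one linear layer are illustrative assumptions rather than details from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupermaskCombiner(nn.Module):
    """Sketch: freeze a randomly initialized weight tensor and learn only the
    coefficients that mix a small set of task-specific binary masks
    ("impressions")."""

    def __init__(self, weight: torch.Tensor, masks: list):
        super().__init__()
        self.weight = nn.Parameter(weight, requires_grad=False)     # frozen random backbone weights
        self.register_buffer("masks", torch.stack(masks).float())   # (num_tasks, out, in), binary
        self.alpha = nn.Parameter(torch.zeros(len(masks)))          # learned mixing coefficients

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mix = torch.softmax(self.alpha, dim=0)                      # weights over stored impressions
        combined_mask = torch.einsum("t,toi->oi", mix, self.masks)  # linear combination of masks
        return F.linear(x, self.weight * combined_mask)
```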
1 code implementation • 24 Jul 2022 • Nikolaos Tsilivis, Jingtong Su, Julia Kempe
In parallel, we revisit prior work that also focused on the problem of data optimization for robust classification (Ilyas et al., 2019), and show that being robust to adversarial attacks after standard (gradient descent) training on a suitable dataset is more challenging than previously thought.
no code implementations • 29 Sep 2021 • Nikolaos Tsilivis, Julia Kempe
In particular, in the regime where Neural Tangent Kernel theory holds, we derive a simple but powerful strategy for attacking models which, in contrast to prior work, does not require any access to the model under attack, or to any trained replica of it for that matter.
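A rough sketch of the underlying idea, not the paper's exact procedure: in the NTK regime, the trained network's output is approximated by kernel regression with the tangent kernel $\Theta$ fixed at initialization, so an attacker who knows only the architecture and the training data $(X, Y)$ can attack this surrogate directly,

$$ f_{\mathrm{lin}}(x) = \Theta(x, X)\,\Theta(X, X)^{-1}\,Y, \qquad x_{\mathrm{adv}} = x + \epsilon\,\operatorname{sign}\!\big(\nabla_x\, \ell(f_{\mathrm{lin}}(x), y)\big), $$

without ever querying the trained model.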
1 code implementation • 5 Jul 2021 • Artem Vysogorets, Julia Kempe
Neural network pruning is a fruitful area of research with surging interest in high-sparsity regimes.
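As a minimal illustration of such a high-sparsity regime (the model choice and the 90% sparsity level are assumptions for the example, not values from the paper), global magnitude pruning with PyTorch's pruning utilities looks like this:

```python
import torch
import torch.nn.utils.prune as prune
import torchvision

# Globally prune 90% of the conv/linear weights of a ResNet-18 by magnitude.
model = torchvision.models.resnet18()
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.9)

# Report the resulting global sparsity.
zeros = sum((m.weight == 0).sum().item() for m, _ in to_prune)
total = sum(m.weight.numel() for m, _ in to_prune)
print(f"global sparsity: {zeros / total:.2%}")
```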
1 code implementation • 13 Mar 2003 • Julia Kempe
This article aims to provide an introductory survey on quantum random walks.
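For concreteness, the discrete-time walk on the line that such surveys typically begin with (standard notation, not specific to this article): a two-dimensional coin is flipped with the Hadamard matrix and the walker is shifted conditioned on the coin state,

$$ H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad S = \sum_{x \in \mathbb{Z}} \Big( |{\uparrow}\rangle\langle{\uparrow}| \otimes |x{+}1\rangle\langle x| + |{\downarrow}\rangle\langle{\downarrow}| \otimes |x{-}1\rangle\langle x| \Big), \qquad U = S\,(H \otimes I), $$

so that $t$ steps of the walk apply $U^t$ to an initial coin-position state.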
Quantum Physics • Data Structures and Algorithms
no code implementations • 18 Dec 2000 • Dorit Aharonov, Andris Ambainis, Julia Kempe, Umesh Vazirani
We set the ground for a theory of quantum walks on graphs, the generalization of random walks on finite graphs to the quantum world.
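In one standard formulation (sketched here with the common Grover coin, an illustrative choice rather than a detail taken from this abstract), a coined walk on a $d$-regular graph acts on the coin $\otimes$ position space as

$$ U = S\,(C \otimes I), \qquad C = \frac{2}{d}\,J_d - I_d, $$

where $J_d$ is the $d \times d$ all-ones matrix and $S$ shifts the walker along the edge indexed by the coin state.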
Quantum Physics
no code implementations • 15 Aug 2000 • Dave Bacon, Andrew M. Childs, Isaac L. Chuang, Julia Kempe, Debbie W. Leung, Xinlan Zhou
Although the conditions for performing arbitrary unitary operations to simulate the dynamics of a closed quantum system are well understood, the same is not true of the more general class of quantum operations (also known as superoperators) corresponding to the dynamics of open quantum systems.
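For reference, the operator-sum (Kraus) form in which such quantum operations are standardly written (general textbook form, not a result of this paper): a completely positive, trace-preserving map acts on a density matrix $\rho$ as

$$ \mathcal{E}(\rho) = \sum_k E_k\, \rho\, E_k^{\dagger}, \qquad \sum_k E_k^{\dagger} E_k = I, $$

and simulating such a map generally requires coupling the system to an ancilla rather than applying a single unitary.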
Quantum Physics