no code implementations • 2 Feb 2024 • Samuel Stevens, Emily Wenger, Cathy Li, Niklas Nolte, Eshika Saxena, François Charton, Kristin Lauter
Our architecture improvements enable scaling to larger-dimension LWE problems: this work is the first instance of ML attacks recovering sparse binary secrets in dimension $n=1024$, the smallest dimension used in practice for homomorphic encryption applications of LWE where sparse binary secrets are proposed.
no code implementations • 8 Dec 2023 • Ouail Kitouni, Niklas Nolte, James Hensman, Bhaskar Mitra
We introduce Diffusion Models of Structured Knowledge (DiSK), a new architecture and training approach specialized for structured data.
1 code implementation • 14 Jul 2023 • Ouail Kitouni, Niklas Nolte, Michael Williams
The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior.
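One classic way to bake such a bias directly into the architecture is to constrain the weights touching the monotone inputs to be non-negative and use a monotone activation; this is a minimal illustrative sketch of that idea, not necessarily the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network whose weights are forced non-negative.
# With a monotone activation (ReLU), the output is guaranteed to be
# non-decreasing in every input -- a hard monotonic inductive bias.
W1 = np.abs(rng.normal(size=(4, 2)))  # non-negative weights, layer 1
b1 = rng.normal(size=4)
W2 = np.abs(rng.normal(size=(1, 4)))  # non-negative weights, layer 2
b2 = rng.normal(size=1)

def monotone_net(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU preserves monotonicity
    return (W2 @ h + b2)[0]

x_lo = np.array([0.3, -1.0])
x_hi = np.array([0.9, -1.0])  # first input increased, second held fixed
assert monotone_net(x_hi) >= monotone_net(x_lo)
```

Non-negativity guarantees monotonicity but limits expressivity, which is exactly the trade-off that more refined constructions (such as Lipschitz-constrained residual terms) aim to overcome.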
no code implementations • 9 Jun 2023 • Ouail Kitouni, Niklas Nolte, Sokratis Trifinopoulos, Subhash Kantamneni, Mike Williams
We introduce Nuclear Co-Learned Representations (NuCLR), a deep learning model that predicts various nuclear observables, including binding and decay energies, and nuclear charge radii.
no code implementations • 30 Sep 2022 • Ouail Kitouni, Niklas Nolte, Mike Williams
We present a new direction for this architecture: using the Kantorovich-Rubinstein duality to estimate the Wasserstein metric (Earth Mover's Distance) from optimal transport, enabling its use in geometric fitting applications.
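The Kantorovich-Rubinstein duality states $W_1(P, Q) = \sup_{\|f\|_L \le 1} \mathbb{E}_P[f] - \mathbb{E}_Q[f]$, which is what lets a Lipschitz-constrained network estimate the distance by maximizing that gap. The 1D toy below (sample values chosen for illustration, not from the paper) checks a dual estimate against the primal sorted-samples formula:

```python
import numpy as np

# Kantorovich-Rubinstein duality: W1(P, Q) = sup over 1-Lipschitz f of
# E_P[f] - E_Q[f]. In 1D we can verify a dual estimate against the
# primal formula in closed form.
p = np.array([0.0, 1.0, 2.0])   # samples from P
q = np.array([0.5, 1.5, 2.5])   # samples from Q

# Primal: for equal-size 1D samples, W1 is the mean absolute
# difference of the sorted samples.
w1_primal = np.mean(np.abs(np.sort(p) - np.sort(q)))

# Dual: f(x) = -x is 1-Lipschitz, and it is optimal here because every
# P sample sits to the left of its matched Q sample.
f = lambda x: -x
w1_dual = f(p).mean() - f(q).mean()

assert np.isclose(w1_primal, w1_dual)  # both equal 0.5
```

In the network setting, `f` becomes a Lipschitz-constrained model trained to maximize the dual objective, so the achieved gap lower-bounds (and at the optimum equals) the true distance.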
1 code implementation • 20 May 2022 • Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, Mike Williams
We aim to understand grokking, a phenomenon where models generalize long after overfitting their training set.
no code implementations • 30 Nov 2021 • Ouail Kitouni, Niklas Nolte, Mike Williams
The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric for assessing the robustness of the model.
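The standard starting point for bounding this metric is the product of the spectral norms of the weight matrices, which upper-bounds the Lipschitz constant of any feed-forward network with 1-Lipschitz activations such as ReLU. A minimal sketch of that naive bound (the paper's approach is more refined):

```python
import numpy as np

# Naive Lipschitz upper bound for a ReLU network: the product of the
# spectral norms of its weight matrices. Any finite-difference slope
# of the network must lie below this bound.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=(1, 8))

lip_bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

def net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)  # one hidden ReLU layer

# Empirical slopes never exceed the bound.
x = rng.normal(size=3)
y = rng.normal(size=3)
slope = np.linalg.norm(net(x) - net(y)) / np.linalg.norm(x - y)
assert slope <= lip_bound
```

This product bound is typically loose, which is why tighter certified estimates of the Lipschitz constant are an active research direction for robustness assessment.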