Search Results for author: Niklas Nolte

Found 7 papers, 2 papers with code

Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors

no code implementations • 2 Feb 2024 Samuel Stevens, Emily Wenger, Cathy Li, Niklas Nolte, Eshika Saxena, François Charton, Kristin Lauter

Our architecture improvements enable scaling to larger-dimension LWE problems: this work is the first instance of ML attacks recovering sparse binary secrets in dimension $n=1024$, the smallest dimension used in practice for homomorphic encryption applications of LWE where sparse binary secrets are proposed.

Math
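
To make the attack setting concrete, here is a minimal sketch of generating Learning With Errors samples $(A, b = As + e \bmod q)$ with a sparse binary secret, the data an ML attack trains on to recover $s$. All parameter values (modulus, noise width, Hamming weight) are illustrative rather than taken from the paper, and the paper's angular embeddings, which encode residues mod $q$ so the modular wrap-around is respected, are not shown.

```python
import numpy as np

def lwe_samples(n, m, q, hamming_weight, sigma=3.2, seed=None):
    """Generate LWE samples (A, b = A @ s + e mod q) with a sparse binary secret.

    n: secret dimension, m: number of samples, q: modulus,
    hamming_weight: number of ones in the secret, sigma: Gaussian error width.
    Illustrative parameters only; practical instances use much larger n and q.
    """
    rng = np.random.default_rng(seed)
    # Sparse binary secret: exactly `hamming_weight` coordinates set to 1.
    s = np.zeros(n, dtype=np.int64)
    s[rng.choice(n, size=hamming_weight, replace=False)] = 1
    # Uniform random matrix and small rounded-Gaussian error.
    A = rng.integers(0, q, size=(m, n), dtype=np.int64)
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_samples(n=64, m=128, q=3329, hamming_weight=8, seed=0)  # toy sizes
print(A.shape, b.shape, int(s.sum()))
```

An attack is considered successful if it recovers $s$ given only $(A, b)$.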

DiSK: A Diffusion Model for Structured Knowledge

no code implementations • 8 Dec 2023 Ouail Kitouni, Niklas Nolte, James Hensman, Bhaskar Mitra

We introduce Diffusion Models of Structured Knowledge (DiSK) - a new architecture and training approach specialized for structured data.

Imputation • Inductive Bias

Expressive Monotonic Neural Networks

1 code implementation • 14 Jul 2023 Ouail Kitouni, Niklas Nolte, Michael Williams

The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior.

Fairness • Inductive Bias
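
A minimal sketch of the Lipschitz-plus-residual idea behind this line of work, under simplifying assumptions: constrain a network $g$ to be 1-Lipschitz in the $\ell_1$ sense (so $|\partial g/\partial x_i| \le 1$ for every input $i$), then output $\lambda\,(g(x) + \sum_i x_i)$ over the monotone inputs, which forces every such partial derivative to be non-negative. The ReLU activation and the exact normalization are simplifications; the paper's construction is more careful about norm choices and uses gradient-preserving activations.

```python
import torch
import torch.nn as nn

class LipschitzMonotonic(nn.Module):
    """f(x) = lam * (g(x) + sum of monotone inputs), g 1-Lipschitz w.r.t. l1.

    Since |dg/dx_i| <= 1 in every coordinate, df/dx_i >= 0 for each monotone
    input i. A simplified sketch of the Lipschitz-residual idea only.
    """

    def __init__(self, n_in, monotone_idx, hidden=64, lam=1.0):
        super().__init__()
        self.lam = lam
        self.monotone_idx = list(monotone_idx)
        self.layers = nn.ModuleList([
            nn.Linear(n_in, hidden), nn.Linear(hidden, hidden), nn.Linear(hidden, 1)
        ])

    def _normalized(self, layer):
        w = layer.weight
        # Induced l1 operator norm = max absolute column sum; rescale if > 1.
        norm = w.abs().sum(dim=0).max()
        return w / torch.clamp(norm, min=1.0)

    def forward(self, x):
        h = x
        for i, layer in enumerate(self.layers):
            h = nn.functional.linear(h, self._normalized(layer), layer.bias)
            if i < len(self.layers) - 1:
                h = torch.relu(h)  # 1-Lipschitz activation (a simplification)
        residual = x[:, self.monotone_idx].sum(dim=1, keepdim=True)
        return self.lam * (h + residual)

model = LipschitzMonotonic(n_in=4, monotone_idx=[0, 2])
print(model(torch.randn(8, 4)).shape)  # torch.Size([8, 1])
```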

NuCLR: Nuclear Co-Learned Representations

no code implementations • 9 Jun 2023 Ouail Kitouni, Niklas Nolte, Sokratis Trifinopoulos, Subhash Kantamneni, Mike Williams

We introduce Nuclear Co-Learned Representations (NuCLR), a deep learning model that predicts various nuclear observables, including binding and decay energies, and nuclear charge radii.
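
The general shape of such a multi-task model can be sketched as follows: learned embeddings for the proton and neutron counts feed a shared trunk, with one regression head per observable so the tasks co-learn a common representation. Everything here (layer sizes, observable names, the plain MLP trunk) is a hypothetical illustration, not the NuCLR architecture itself.

```python
import torch
import torch.nn as nn

class NuclearMultiTask(nn.Module):
    """Hypothetical sketch: embed (Z, N), share a trunk, one head per observable."""

    def __init__(self, max_z=120, max_n=180, dim=64,
                 observables=("binding_energy", "charge_radius")):
        super().__init__()
        self.z_emb = nn.Embedding(max_z, dim)  # proton number embedding
        self.n_emb = nn.Embedding(max_n, dim)  # neutron number embedding
        self.trunk = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(),
                                   nn.Linear(256, 256), nn.ReLU())
        self.heads = nn.ModuleDict({name: nn.Linear(256, 1) for name in observables})

    def forward(self, z, n):
        h = self.trunk(torch.cat([self.z_emb(z), self.n_emb(n)], dim=-1))
        return {name: head(h).squeeze(-1) for name, head in self.heads.items()}

model = NuclearMultiTask()
out = model(torch.tensor([26, 82]), torch.tensor([30, 126]))  # Fe-56, Pb-208
print({k: v.shape for k, v in out.items()})
```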

Finding NEEMo: Geometric Fitting using Neural Estimation of the Energy Mover's Distance

no code implementations • 30 Sep 2022 Ouail Kitouni, Niklas Nolte, Mike Williams

We present a new and interesting direction for this architecture: estimation of the Wasserstein metric (Earth Mover's Distance) in optimal transport by employing the Kantorovich-Rubinstein duality to enable its use in geometric fitting applications.
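
The Kantorovich-Rubinstein duality rewrites the Wasserstein-1 distance as $W_1(P, Q) = \sup_{\|f\|_L \le 1} \mathbb{E}_P[f] - \mathbb{E}_Q[f]$, so a Lipschitz-constrained "critic" network trained to maximize that gap yields an Earth Mover's Distance estimate. A minimal sketch using per-layer spectral normalization for the constraint; this is not the paper's architecture, which builds on the authors' provably Lipschitz networks.

```python
import torch
import torch.nn as nn

# W1(P, Q) = sup_{||f||_L <= 1} E_{x~P}[f(x)] - E_{y~Q}[f(y)]
# Train a Lipschitz-constrained critic f to maximize the dual objective;
# the achieved gap is a lower-bound estimate of the Earth Mover's Distance.
sn = nn.utils.spectral_norm  # per-layer spectral norm <= 1 => f is 1-Lipschitz
critic = nn.Sequential(sn(nn.Linear(1, 128)), nn.ReLU(),
                       sn(nn.Linear(128, 128)), nn.ReLU(),
                       sn(nn.Linear(128, 1)))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

p = torch.randn(1024, 1)        # samples from P: N(0, 1)
q = torch.randn(1024, 1) + 2.0  # samples from Q: N(2, 1); true W1 is 2

for step in range(2000):
    opt.zero_grad()
    loss = critic(q).mean() - critic(p).mean()  # minimizing maximizes the gap
    loss.backward()
    opt.step()

print("estimated W1:", (critic(p).mean() - critic(q).mean()).item())
```

A spectrally normalized ReLU critic tends to under-estimate the true distance because gradient norms shrink through the layers; gradient-norm-preserving Lipschitz architectures are designed to avoid exactly that attenuation.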

Robust and Provably Monotonic Networks

no code implementations • 30 Nov 2021 Ouail Kitouni, Niklas Nolte, Mike Williams

The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric for assessing the robustness of the model.

Fairness
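
A standard way to turn that observation into a computable quantity: for a feed-forward network with 1-Lipschitz activations, the product of per-layer operator norms upper-bounds the network's Lipschitz constant. A minimal sketch of that generic (often loose) bound, independent of the paper's specific weight constraints:

```python
import torch
import torch.nn as nn

def lipschitz_upper_bound(model):
    """Upper bound on the l2 Lipschitz constant of a feed-forward network.

    For 1-Lipschitz activations (ReLU, tanh), Lip(f) <= product of the
    spectral norms of the weight matrices. Standard, and often loose.
    """
    bound = 1.0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            bound *= torch.linalg.matrix_norm(module.weight, ord=2).item()
    return bound

net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
print("Lipschitz bound:", lipschitz_upper_bound(net))
```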
