Search Results for author: Alexander Mathiasen

Found 8 papers, 3 papers with code

Reducing the Cost of Quantum Chemical Data By Backpropagating Through Density Functional Theory

no code implementations • 6 Feb 2024 • Alexander Mathiasen, Hatem Helal, Paul Balanca, Adam Krzywaniak, Ali Parviz, Frederik Hvilshøj, Blazej Banaszewski, Carlo Luschi, Andrew William Fitzgibbon

For comparison, Schütt et al. (2019) spent 626 hours creating a dataset on which they trained their NN for 160h, for a total of 786h; our method achieves comparable performance within 31h.

Generating QM1B with PySCF_IPU

2 code implementations • NeurIPS 2023 • Alexander Mathiasen, Hatem Helal, Kerstin Klaser, Paul Balanca, Josef Dean, Carlo Luschi, Dominique Beaini, Andrew Fitzgibbon, Dominic Masters

Similar benefits are yet to be unlocked for quantum chemistry, where the potential of deep learning is constrained by comparatively small datasets with 100k to 20M training examples.

One Reflection Suffice

no code implementations • 30 Sep 2020 • Alexander Mathiasen, Frederik Hvilshøj

Orthogonal weight matrices are used in many areas of deep learning.
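For context, a standard way to build an orthogonal matrix is a Householder reflection, H = I − 2vvᵀ/‖v‖²; products of such reflections can represent any orthogonal matrix. A minimal NumPy sketch of this textbook construction (illustrative only, not the paper's method):

```python
import numpy as np

def householder(v):
    """Householder reflection H = I - 2 v v^T / ||v||^2 (orthogonal and symmetric)."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

# Any such reflection is orthogonal: H @ H.T == I (up to rounding).
H = householder(np.random.randn(4))
assert np.allclose(H @ H.T, np.eye(4))
```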

Backpropagating through Fréchet Inception Distance

no code implementations • 29 Sep 2020 • Alexander Mathiasen, Frederik Hvilshøj

Using FID as an additional loss for Generative Adversarial Networks improves their FID.
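FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images: ‖μ₁ − μ₂‖² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy/SciPy sketch of that standard formula follows; note this plain version is not differentiable as written, and making the computation backpropagation-friendly is the paper's contribution:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID applies this to the means/covariances of Inception features
    computed over real vs. generated images.
    """
    covmean = linalg.sqrtm(sigma1 @ sigma2)  # matrix square root of the covariance product
    if np.iscomplexobj(covmean):             # discard tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```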

Optimal Minimal Margin Maximization with Boosting

no code implementations • 30 Jan 2019 • Allan Grønlund, Kasper Green Larsen, Alexander Mathiasen

A common goal in a long line of research is to maximize the smallest margin using as few base hypotheses as possible, culminating with the AdaBoostV algorithm of Rätsch and Warmuth [JMLR'04].
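For reference, the quantity being maximized is the minimum normalized margin over the training set, in standard boosting notation (not necessarily the paper's):

```latex
\mathrm{margin}(x_i, y_i) = \frac{y_i \sum_t \alpha_t h_t(x_i)}{\sum_t \alpha_t},
\qquad \alpha_t \ge 0,\; h_t(x_i) \in \{-1, +1\}
```

The objective is to maximize the minimum of this quantity over all examples i while using as few base hypotheses h_t as possible.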
