no code implementations • 30 Sep 2020 • Alexander Mathiasen, Frederik Hvilshøj
Orthogonal weight matrices are used in many areas of deep learning.
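One standard way to obtain an orthogonal weight matrix (not necessarily the construction used in this paper; a minimal sketch) is to parameterize it as the matrix exponential of a skew-symmetric matrix, which always lands in the orthogonal group:

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_from_skew(a):
    """Map an arbitrary square matrix to an orthogonal one.

    exp(S) is orthogonal whenever S is skew-symmetric (S = -S^T),
    so we first skew-symmetrize the input.
    """
    s = a - a.T
    return expm(s)

rng = np.random.default_rng(0)
q = orthogonal_from_skew(rng.standard_normal((5, 5)))

# q^T q should be the identity up to floating-point error
print(np.allclose(q.T @ q, np.eye(5)))  # True
```

In deep-learning use, the skew-symmetric entries would be the trainable parameters, so gradient updates preserve orthogonality by construction.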
1 code implementation • NeurIPS 2020 • Alexander Mathiasen, Frederik Hvilshøj, Jakob Rødsgaard Jørgensen, Anshul Nasery, Davide Mottin
We present an algorithm that is fast enough to speed up several matrix operations.
no code implementations • 29 Sep 2020 • Alexander Mathiasen, Frederik Hvilshøj
Using FID as an additional loss for Generative Adversarial Networks improves their FID.
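For context, the Fréchet Inception Distance compares two Gaussians fitted to feature statistics via $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}\big(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2}\big)$. A minimal sketch of that formula (not the paper's implementation, which would additionally need the distance to be fast and differentiable to serve as a loss):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# identical statistics give a distance of zero
mu, sigma = np.zeros(4), np.eye(4)
print(round(fid(mu, sigma, mu, sigma), 6))  # 0.0
```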
no code implementations • NeurIPS 2019 • Allan Grønlund, Lior Kamma, Kasper Green Larsen, Alexander Mathiasen, Jelani Nelson
To date, the strongest known generalization (upper bound) is the $k$th margin bound of Gao and Zhou (2013).
no code implementations • 30 Jan 2019 • Allan Grønlund, Kasper Green Larsen, Alexander Mathiasen
A common goal in a long line of research is to maximize the smallest margin using as few base hypotheses as possible, culminating in the AdaBoostV algorithm of Rätsch and Warmuth [JMLR'04].
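In this setting, the margin of an example is its label times the weighted vote of the base hypotheses; the smallest margin over the training set is the quantity being maximized. A minimal sketch of computing margins (illustrative only, with hypothetical `preds`/`labels` data, not code from the paper):

```python
import numpy as np

def margins(hypothesis_preds, weights, labels):
    """Margins of a weighted voting classifier.

    hypothesis_preds: (num_hypotheses, num_examples) array of +/-1 predictions
    weights: nonnegative hypothesis weights (normalized to sum to 1 here)
    labels: (num_examples,) array of +/-1 true labels
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    votes = w @ hypothesis_preds   # weighted vote per example, in [-1, 1]
    return labels * votes          # positive margin = correct confident vote

preds = np.array([[1, 1, -1],
                  [1, -1, -1]])
labels = np.array([1, 1, -1])
print(margins(preds, [0.5, 0.5], labels).min())  # 0.0 (second example is a tie)
```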
1 code implementation • 25 Jan 2017 • Allan Grønlund, Kasper Green Larsen, Alexander Mathiasen, Jesper Sindahl Nielsen, Stefan Schneider, Mingzhou Song
We present all the existing work that has been overlooked and compare the various solutions theoretically.