no code implementations • 6 Feb 2024 • Alexander Mathiasen, Hatem Helal, Paul Balanca, Adam Krzywaniak, Ali Parviz, Frederik Hvilshøj, Blazej Banaszewski, Carlo Luschi, Andrew William Fitzgibbon
For comparison, Schütt et al. (2019) spent 626 hours creating a dataset on which they trained their NN for 160h, for a total of 786h; our method achieves comparable performance within 31h.
1 code implementation • 30 Oct 2021 • Frederik Hvilshøj, Alexandros Iosifidis, Ira Assent
As counterfactual examples become increasingly popular for explaining decisions of deep learning models, it is essential to understand which properties quantitative evaluation metrics capture and, equally importantly, which they do not.
1 code implementation • 25 Mar 2021 • Frederik Hvilshøj, Alexandros Iosifidis, Ira Assent
Counterfactual examples identify how inputs can be altered to change the predicted class of a classifier, thus opening up the black-box nature of, e.g., deep neural networks.
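The idea can be illustrated with a minimal sketch (this is not the paper's generative method; the toy classifier, weights, and step size below are all assumed for illustration): for a linear model, walk the input toward the decision boundary until the predicted class flips.

```python
import numpy as np

# Toy linear classifier; w, b, and the step size are illustrative assumptions.
w, b = np.array([1.0, -2.0]), 0.5
predict = lambda x: int(w @ x + b > 0)

x = np.array([2.0, 0.0])   # original input, predicted class 1
cf = x.copy()
while predict(cf) == predict(x):
    # step against the weight vector, i.e. toward the decision boundary
    cf = cf - 0.05 * w / np.linalg.norm(w)

print(predict(x), predict(cf))  # 1 0 -> the altered input changes the class
```

The resulting `cf` is a counterfactual: a nearby input with a different predicted class.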
no code implementations • 30 Sep 2020 • Alexander Mathiasen, Frederik Hvilshøj
Orthogonal weight matrices are used in many areas of deep learning.
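As a quick illustration of why such matrices are attractive (a generic sketch, not the paper's construction): an orthogonal matrix, e.g. obtained from the QR decomposition of a random Gaussian matrix, preserves vector norms, which helps keep gradients well-conditioned.

```python
import numpy as np

# Obtain an orthogonal matrix via QR of a random Gaussian matrix (illustrative).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# Orthogonality: Q.T @ Q is the identity, so multiplying by Q preserves norms.
print(np.allclose(Q.T @ Q, np.eye(4)))                               # True
x = rng.standard_normal(4)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))          # True
```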
no code implementations • 29 Sep 2020 • Alexander Mathiasen, Frederik Hvilshøj
Using FID as an additional loss for Generative Adversarial Networks improves their FID.
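For reference, FID is the Fréchet distance between two Gaussians fitted to feature statistics. A minimal sketch of the formula, restricted to diagonal covariances so the matrix square root reduces to an elementwise one (the toy statistics below are assumed):

```python
import numpy as np

def fid_diag(mu1, var1, mu2, var2):
    # Fréchet distance ||mu1-mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^{1/2})
    # for diagonal covariances, where the trace term is elementwise.
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(var1 + var2 - 2 * np.sqrt(var1 * var2)))

mu1, var1 = np.zeros(3), np.ones(3)
mu2, var2 = 0.5 * np.ones(3), np.ones(3)
print(fid_diag(mu1, var1, mu2, var2))  # 0.75: same covariance, mean shift only
```

The general case replaces the elementwise square root with a matrix square root of the covariance product.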
1 code implementation • NeurIPS 2020 • Alexander Mathiasen, Frederik Hvilshøj, Jakob Rødsgaard Jørgensen, Anshul Nasery, Davide Mottin
We present an algorithm that is fast enough to speed up several matrix operations.
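One such operation can be sketched as follows (a hedged illustration of the underlying structure, not the paper's algorithm itself): a product of Householder reflections represents an orthogonal matrix, and applying it to a vector costs O(nd) without ever forming the n×n matrix.

```python
import numpy as np

def apply_householders(V, x):
    # V: (d, n) array of reflection vectors; applies H_d ... H_1 @ x,
    # where H_i = I - 2 v_i v_i^T / (v_i^T v_i), one cheap update per reflection.
    for v in V:
        x = x - 2.0 * (v @ x) / (v @ v) * v
    return x

rng = np.random.default_rng(0)
V = rng.standard_normal((3, 5))
x = rng.standard_normal(5)
y = apply_householders(V, x)
print(np.isclose(np.linalg.norm(y), np.linalg.norm(x)))  # True: orthogonal map
```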
1 code implementation • 12 Sep 2020 • Tiago Botari, Frederik Hvilshøj, Rafael Izbicki, Andre C. P. L. F. de Carvalho
Additionally, we introduce modifications to standard training algorithms for local interpretable models that foster more robust explanations and even allow producing counterfactual examples.
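To illustrate the kind of local interpretable model involved (a generic LIME-style sketch under assumed toy data, not the paper's modified training algorithm): fit a linear surrogate to a black-box model on perturbations around a point of interest; its weights give a local explanation.

```python
import numpy as np

# Toy black box; the model, anchor point, and noise scale are assumptions.
black_box = lambda X: (X[:, 0] ** 2 + X[:, 1] > 1).astype(float)

rng = np.random.default_rng(0)
x0 = np.array([0.8, 0.2])                       # point to explain
X = x0 + 0.1 * rng.standard_normal((200, 2))    # local neighbourhood samples
y = black_box(X)

# Least-squares linear surrogate with intercept; its slope approximates the
# local decision boundary normal and serves as the explanation.
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w[:2])  # local feature weights: both features push toward class 1
```

Moving `x0` along the surrogate's weight direction until the black-box label flips is one simple way such a surrogate can yield counterfactual examples.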