no code implementations • 4 Jun 2023 • Celine Wald, Lukas Pfahler
This work exploits the biases learned by language models to explore the biases of six different online communities.
no code implementations • 22 Aug 2022 • Lukas Pfahler, Katharina Morik
This is mainly due to retrieval with keyword queries: technical terms differ across disciplines and change over time.
1 code implementation • 13 Sep 2021 • Lukas Pfahler, Katharina Morik
We propose a novel explanation method that explains the decisions of a deep neural network by investigating how the intermediate representations at each layer of the deep network were refined during the training process.
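The abstract gives only the high-level idea; as a minimal numpy sketch (toy network and hypothetical training snapshots, not the authors' actual method), one could measure how a layer's intermediate representation of a fixed input drifts between two checkpoints:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    """Return the intermediate representation after each layer."""
    reps, h = [], x
    for W in weights:
        h = np.tanh(h @ W)  # toy nonlinearity
        reps.append(h)
    return reps

# Hypothetical snapshots of the same 2-layer net at two training steps.
early = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]
late = [W + 0.1 * rng.standard_normal(W.shape) for W in early]

x = rng.standard_normal(4)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Per-layer drift of the representation of x between the two snapshots.
drift = [1.0 - cosine(r0, r1) for r0, r1 in zip(forward(x, early), forward(x, late))]
print(drift)
```

Layers whose representation of an input barely moves late in training are plausible candidates for "refined early" explanations, which is the kind of signal the abstract alludes to.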
no code implementations • 30 Aug 2021 • Lukas Pfahler, Mirko Bunse, Katharina Morik
Gamma hadron classification, a central machine learning task in gamma ray astronomy, is conventionally tackled with supervised learning.
no code implementations • 2 Feb 2021 • Sebastian Buschjäger, Jian-Jia Chen, Kuan-Hsun Chen, Mario Günzel, Katharina Morik, Rodion Novkin, Lukas Pfahler, Mikail Yayla
In this study, our objective is to investigate the internal changes in the NNs that bit flip training causes, with a focus on binarized NNs (BNNs).
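Bit flip training is only named, not specified, in the snippet; a hedged sketch of the general idea, assuming weights binarized to {-1, +1} and independent per-weight flip probability `p` (an illustrative setup, not the paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(42)

def inject_bit_flips(w_bin, p):
    """Flip each binarized weight (+1/-1) independently with probability p,
    simulating memory bit errors injected during training."""
    flips = rng.random(w_bin.shape) < p
    return np.where(flips, -w_bin, w_bin)

w = np.sign(rng.standard_normal((8, 4)))  # binarized weights in {-1, +1}
w_noisy = inject_bit_flips(w, p=0.1)

flipped = int((w != w_noisy).sum())
print(f"{flipped} of {w.size} weights flipped")
```

Training the BNN under such injected flips encourages decisions that survive sign changes in individual weights, which is what motivates studying the resulting internal changes.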
no code implementations • 1 Jan 2021 • Lukas Pfahler, Katharina Morik
Our experiments show that the features we can extract this way are significantly less predictive of the news outlet and thus offer the possibility to reduce the risk of manifestation of new filter bubbles.
2 code implementations • 5 Nov 2020 • Sebastian Buschjäger, Lukas Pfahler, Katharina Morik
Ensemble algorithms offer state-of-the-art performance in many machine learning applications.
1 code implementation • 20 Oct 2020 • Sebastian Buschjäger, Philipp-Jan Honysz, Lukas Pfahler, Katharina Morik
Data summarization has become a valuable tool in understanding even terabytes of data.
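The snippet does not name the method; a common formulation of data summarization is greedy maximization of a monotone submodular utility such as facility location, sketched here with hypothetical toy data (illustrative only, not the paper's streaming algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 2))  # data points to summarize

# Facility-location utility: how well a summary S "covers" all points,
# using negative Euclidean distance as similarity.
sims = -np.linalg.norm(X[:, None] - X[None, :], axis=-1)

def utility(S):
    return float(sims[:, S].max(axis=1).sum())

# Classic greedy selection: repeatedly add the point with the largest gain.
k, summary = 5, []
for _ in range(k):
    best = max((j for j in range(len(X)) if j not in summary),
               key=lambda j: utility(summary + [j]))
    summary.append(best)

print(summary)  # indices of the k selected exemplars
```

The greedy rule gives the standard (1 - 1/e) approximation guarantee for monotone submodular functions, which is why it is the usual baseline for summarization at scale.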
no code implementations • 3 Feb 2020 • Sebastian Buschjäger, Jian-Jia Chen, Kuan-Hsun Chen, Mario Günzel, Christian Hakert, Katharina Morik, Rodion Novkin, Lukas Pfahler, Mikail Yayla
Finally, we explore the influence of a novel regularizer that optimizes with respect to this metric, with the aim of providing a configurable trade-off in accuracy and BET.
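The regularizer itself is not spelled out in the snippet; one plausible hedged illustration (hypothetical, not necessarily the paper's formulation) penalizes pre-activations with small absolute value, since those neurons change sign, and thus the output, under the smallest bit errors:

```python
import numpy as np

def margin_penalty(pre_acts, beta=1.0):
    """Hypothetical BET-style regularizer: hinge penalty on pre-activations
    whose margin from zero is below 1, scaled by strength beta."""
    return float(beta * np.maximum(0.0, 1.0 - np.abs(pre_acts)).mean())

z = np.array([0.05, -2.0, 0.5, 3.0])
print(margin_penalty(z))  # larger when margins are small
```

Adding such a term to the training loss trades a little accuracy for larger margins, giving the configurable accuracy/BET trade-off the abstract mentions.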
no code implementations • 28 May 2019 • Lukas Pfahler, Katharina Morik
The linear transformations in converged deep networks show fast eigenvalue decay.
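Fast eigenvalue (singular value) decay means a trained layer is close to low rank. A small numpy sketch of how one would measure this, using an explicitly near-low-rank matrix as a stand-in for a trained weight matrix (illustrative, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a converged layer: rank-4 structure plus small noise,
# mimicking the fast spectral decay the abstract describes.
U = rng.standard_normal((64, 4))
V = rng.standard_normal((4, 64))
W = U @ V + 0.01 * rng.standard_normal((64, 64))

s = np.linalg.svd(W, compute_uv=False)  # singular values, descending

# Fraction of spectral energy captured by the top-4 singular values.
energy = float((s[:4] ** 2).sum() / (s ** 2).sum())
print(f"top-4 energy: {energy:.4f}")
```

When most of the energy sits in a few leading singular values, the layer can be replaced by a low-rank factorization with little loss, which is the practical upshot of fast decay.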