1 code implementation • 1 Sep 2024 • Derian Boer, Fabian Koch, Stefan Kramer
This distinguishes it from related methods that are purely based on latent representations.
no code implementations • 4 Jul 2024 • Mattia Cerrato, Marius Köppel, Philipp Wolf, Stefan Kramer
In this paper, we look back at the first ten years of FRL by i) revisiting its theoretical standing in light of recent work in deep learning theory that shows the hardness of removing information from neural network representations and ii) presenting the results of a massive experimental study (225,000 model fits and 110,000 AutoML fits) conducted with the objective of improving on the common evaluation scenario for FRL.
no code implementations • 13 Feb 2024 • Cedric Derstroff, Jannis Brugger, Jannis Blüml, Mira Mezini, Stefan Kramer, Kristian Kersting
It strategically allocates computational resources to focus on promising segments of the search tree, making it a very attractive search algorithm in large search spaces.
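The allocation strategy described above is typically realized with an upper-confidence-bound rule (UCT) that trades off a node's observed mean value against how rarely it has been visited. A minimal sketch, not taken from the paper itself, with illustrative child statistics:

```python
import math

def uct_select(children, total_visits, c=1.4):
    """Pick the child maximizing the UCT score: exploitation (mean value)
    plus an exploration bonus that shrinks the more a child is visited."""
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # always try unvisited children first
        return ch["value"] / ch["visits"] + c * math.sqrt(
            math.log(total_visits) / ch["visits"])
    return max(children, key=score)

children = [
    {"name": "a", "visits": 10, "value": 6.0},  # mean 0.6, well explored
    {"name": "b", "visits": 2,  "value": 1.5},  # mean 0.75, barely explored
]
best = uct_select(children, total_visits=12)
```

The barely explored child wins here because its exploration bonus dominates, which is exactly how the search concentrates compute on promising but under-sampled branches.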
1 code implementation • 15 Dec 2023 • Cedric Derstroff, Mattia Cerrato, Jannis Brugger, Jan Peters, Stefan Kramer
Finally, we analyze the learning behavior of the peers and observe their ability to rank the agents' performance within the study group and to understand which agents give reliable advice.
no code implementations • 3 May 2023 • Stefan Kramer, Mattia Cerrato, Sašo Džeroski, Ross King
The paper surveys automated scientific discovery, from equation discovery and symbolic regression to autonomous discovery systems and agents.
no code implementations • 4 Nov 2022 • Lukas Pensel, Stefan Kramer
We demonstrate the increased predictive performance by comparing N-RELAGGS with RELAGGS and multiple other state-of-the-art algorithms.
1 code implementation • 31 Aug 2022 • Tony Hauptmann, Stefan Kramer
One important parameter is the depth of integration: the point at which the latent representations are computed or merged, which can be either early, intermediate, or late.
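The three integration depths can be illustrated schematically; the mean-pooling "encoder" below is a deliberately trivial stand-in for the learned encoders, and the function names are hypothetical, not the paper's API:

```python
import numpy as np

def integrate(view_a, view_b, depth):
    """Toy illustration of integration depth for two data views.
    early:        concatenate raw inputs, then encode once
    intermediate: encode each view, then merge the latent representations
    late:         encode and score per view, then merge the outputs"""
    encode = lambda x: x.mean(axis=-1, keepdims=True)  # stand-in encoder
    if depth == "early":
        return encode(np.concatenate([view_a, view_b], axis=-1))
    if depth == "intermediate":
        return np.concatenate([encode(view_a), encode(view_b)], axis=-1)
    if depth == "late":
        return (encode(view_a) + encode(view_b)) / 2.0
    raise ValueError(depth)

a, b = np.array([[1.0, 3.0]]), np.array([[5.0, 7.0]])
```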
no code implementations • 4 Aug 2022 • Mattia Cerrato, Marius Köppel, Roberto Esposito, Stefan Kramer
In this paper, we propose a methodology for direct computation of the mutual information between a neural layer and a sensitive attribute.
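For a discretized layer output, the mutual information with a discrete sensitive attribute can be estimated with the standard plug-in formula; this sketch is a generic estimator, not the paper's proposed method:

```python
import numpy as np

def mutual_information(z_binned, s):
    """Plug-in estimate of I(Z; S) in bits for a discretized layer output
    z_binned and a discrete sensitive attribute s (1-D integer arrays)."""
    z_binned, s = np.asarray(z_binned), np.asarray(s)
    mi = 0.0
    for zv in np.unique(z_binned):
        for sv in np.unique(s):
            p_joint = np.mean((z_binned == zv) & (s == sv))
            if p_joint == 0:
                continue
            p_z = np.mean(z_binned == zv)
            p_s = np.mean(s == sv)
            mi += p_joint * np.log2(p_joint / (p_z * p_s))
    return mi

s = np.array([0, 1, 0, 1])
leaky = mutual_information(s, s)                      # copies the attribute
fair = mutual_information(np.zeros(4, dtype=int), s)  # constant, leaks nothing
```

A representation that merely copies a balanced binary attribute carries one full bit about it; a constant representation carries zero, which is the target of fair representation learning.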
no code implementations • 7 Feb 2022 • Mattia Cerrato, Alesia Vallenas Coronel, Marius Köppel, Alexander Segner, Roberto Esposito, Stefan Kramer
Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information.
no code implementations • 17 Jan 2022 • Mattia Cerrato, Marius Köppel, Alexander Segner, Stefan Kramer
Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information.
no code implementations • 17 Jan 2022 • Mattia Cerrato, Marius Köppel, Alexander Segner, Stefan Kramer
In this context, one of the possible approaches is to employ fair representation learning algorithms which are able to remove biases from data, making groups statistically indistinguishable.
no code implementations • 15 Apr 2021 • Ernst Althaus, Mohammad Sadeq Dousti, Stefan Kramer, Nick Johannes Peter Rassau
A sum-product network (SPN) is a graphical model that allows several types of inferences to be drawn efficiently.
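The efficiency comes from evaluating the network bottom-up in a single pass: leaves score the evidence, product nodes multiply over disjoint scopes, and sum nodes form weighted mixtures. A minimal sketch with a hypothetical tuple encoding (not a real SPN library's API):

```python
def eval_spn(node, evidence):
    """Bottom-up evaluation of a tiny SPN given evidence {var: value}.
    Leaves hold categorical distributions; sums are weighted mixtures;
    products assume their children cover disjoint variable scopes."""
    kind = node[0]
    if kind == "leaf":
        _, var, probs = node          # probs[value] = P(var = value)
        return probs[evidence[var]]
    if kind == "sum":
        return sum(w * eval_spn(ch, evidence) for w, ch in node[1])
    if kind == "prod":
        out = 1.0
        for ch in node[1]:
            out *= eval_spn(ch, evidence)
        return out
    raise ValueError(kind)

# P(X=1, Y=0) under a two-component mixture of independent X, Y
spn = ("sum", [
    (0.3, ("prod", [("leaf", "X", [0.2, 0.8]), ("leaf", "Y", [0.9, 0.1])])),
    (0.7, ("prod", [("leaf", "X", [0.6, 0.4]), ("leaf", "Y", [0.5, 0.5])])),
])
p = eval_spn(spn, {"X": 1, "Y": 0})
```

One traversal of the graph yields the joint probability, which is why inference cost is linear in the network size.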
no code implementations • 16 Feb 2021 • Atif Raza, Stefan Kramer
The asymptotic time complexity of subsequence-based algorithms remains a higher-order polynomial, because these algorithms are based on exhaustive search for highly discriminative subsequences.
no code implementations • 3 Feb 2021 • Patrick Abels, Zahra Ahmadi, Sophie Burkhardt, Benjamin Schiller, Iryna Gurevych, Stefan Kramer
We use a topic model to extract topic- and sentence-specific evidence from the structured knowledge base Wikidata, building a graph based on the cosine similarity between the entity word vectors of Wikidata and the vector of the given sentence.
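The graph-construction step hinges on cosine similarity between entity vectors and the sentence vector. A minimal sketch; the thresholding rule and names here are illustrative assumptions, not the paper's exact construction:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similar_entities(entity_vecs, sentence_vec, threshold=0.5):
    """Keep the entities whose word vectors are cosine-similar to the
    sentence vector (hypothetical threshold for illustration)."""
    return [name for name, vec in entity_vecs.items()
            if cosine(vec, sentence_vec) >= threshold]

entity_vecs = {"climate": [1.0, 0.0], "tax": [0.0, 1.0]}
edges = similar_entities(entity_vecs, [1.0, 0.2])
```

Only entities sufficiently aligned with the sentence embedding would be linked into the evidence graph.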
no code implementations • 11 Jan 2021 • Antoine Garcon, Julian Vexler, Dmitry Budker, Stefan Kramer
We show that, because DNNs can find useful abstract feature representations, they can be used when prior knowledge about the signal-generating process exists but is not complete, as is particularly the case in "new-physics" searches.
no code implementations • 28 Dec 2020 • Julia Siekiera, Stefan Kramer
The exploration of selected single nucleotide polymorphisms (SNPs) to identify genetic diversity between different sequencing population pools (Pool-seq) is a fundamental task in genetic research.
no code implementations • 15 Dec 2020 • Sophie Burkhardt, Jannis Brugger, Nicolas Wagner, Zahra Ahmadi, Kristian Kersting, Stefan Kramer
Most deep neural networks are considered to be black boxes, meaning their output is hard to interpret.
no code implementations • 23 Oct 2020 • Julia Siekiera, Marius Köppel, Edwin Simpson, Kevin Stowe, Iryna Gurevych, Stefan Kramer
We therefore adapt the DirectRanker to provide a new deep model for ranking creative language with small data.
no code implementations • 2 Jun 2020 • Derian Boer, Stefan Kramer
Deep learning (DL) approaches are achieving extraordinary results in a wide range of domains, but often require a massive collection of private data.
no code implementations • 2 Mar 2020 • Hermann Kaindl, Stefan Kramer
As a result, the quantitatively determined bound on the probability of a classification error of an ML component in a safety-critical system contributes in a well-defined way to the latter's overall safety verification.
1 code implementation • 6 Sep 2019 • Marius Köppel, Alexander Segner, Martin Wagener, Lukas Pensel, Andreas Karwath, Stefan Kramer
We present a pairwise learning to rank approach based on a neural net, called DirectRanker, that generalizes the RankNet architecture.
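The defining idea of such pairwise rankers is that a single shared scoring network is applied to both documents and only the *difference* of scores passes through an antisymmetric output, so swapping the pair flips the sign. A toy sketch with a linear layer standing in for the neural scorer (illustrative, not the DirectRanker implementation):

```python
import numpy as np

def pairwise_rank(x1, x2, w):
    """Pairwise ranking output: score both inputs with the same (here
    linear) scoring function and squash the score difference through an
    odd activation, making the output antisymmetric in its arguments."""
    f = lambda x: float(np.dot(w, x))   # shared scorer, stand-in for a net
    return np.tanh(f(x1) - f(x2))

w = np.array([0.5, -0.2])
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
out = pairwise_rank(a, b, w)  # positive: a is ranked above b
```

Because the pairwise output depends only on the score difference, a consistent total order is recovered at inference time by sorting documents on the shared scorer alone.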
no code implementations • 4 Apr 2018 • Zahra Ahmadi, Stefan Kramer
Many modern applications deal with multi-label data, such as functional categorizations of genes, image labeling and text categorization.
1 code implementation • 22 Feb 2017 • Atif Raza, Stefan Kramer
Using random sampling reduces the number of evaluated candidates and consequently the required computational cost, while the classification accuracy of the resulting models is not significantly different from that of the exact algorithm.
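The sampling step can be sketched as drawing subsequence candidates uniformly at random instead of enumerating all of them; the function below is an illustrative reconstruction, not the paper's code:

```python
import random

def sample_candidates(series, length, k, seed=0):
    """Instead of exhaustively enumerating every length-`length`
    subsequence of every time series, draw k candidates uniformly
    at random (seeded here for reproducibility)."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(k):
        ts = rng.choice(series)
        start = rng.randrange(len(ts) - length + 1)
        candidates.append(tuple(ts[start:start + length]))
    return candidates

series = [[0, 1, 2, 3, 4], [5, 6, 7, 8]]
cands = sample_candidates(series, length=3, k=5)
```

Evaluation cost now scales with the fixed sample size k rather than with the total number of subsequences.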
no code implementations • 3 Jun 2016 • Junming Shao, Qinli Yang, Jinhu Liu, Stefan Kramer
We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) the density-connected tree provides a connectivity map of the vertices in a graph from a local density perspective.
Social and Information Networks • Physics and Society