Search Results for author: Stefan Kramer

Found 24 papers, 5 papers with code

10 Years of Fair Representations: Challenges and Opportunities

no code implementations • 4 Jul 2024 • Mattia Cerrato, Marius Köppel, Philipp Wolf, Stefan Kramer

In this paper, we look back at the first ten years of fair representation learning (FRL) by i) revisiting its theoretical standing in light of recent work in deep learning theory showing the hardness of removing information from neural network representations and ii) presenting the results of a massive experimental study (225,000 model fits and 110,000 AutoML fits) we conducted with the objective of improving on the common evaluation scenario for FRL.

AutoML, Learning Theory, +1

Amplifying Exploration in Monte-Carlo Tree Search by Focusing on the Unknown

no code implementations • 13 Feb 2024 • Cedric Derstroff, Jannis Brugger, Jannis Blüml, Mira Mezini, Stefan Kramer, Kristian Kersting

Monte-Carlo tree search strategically allocates computational resources to promising segments of the search tree, which makes it a very attractive search algorithm in large search spaces.
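As a rough illustration of how Monte-Carlo tree search trades off exploitation against exploration during node selection, here is a generic UCT selection rule; this is a textbook sketch under assumed node fields (`visits`, `value_sum`) and exploration constant `c`, not the amplification scheme proposed in the paper:

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing the UCT score: mean value plus an
    exploration bonus that grows for rarely visited children."""
    total_visits = sum(ch["visits"] for ch in children)

    def uct(ch):
        if ch["visits"] == 0:
            return float("inf")  # unvisited children are explored first
        exploit = ch["value_sum"] / ch["visits"]
        explore = c * math.sqrt(math.log(total_visits) / ch["visits"])
        return exploit + explore

    return max(children, key=uct)

# toy usage: three children with different statistics
children = [
    {"visits": 10, "value_sum": 6.0},
    {"visits": 2,  "value_sum": 1.5},
    {"visits": 0,  "value_sum": 0.0},
]
print(uct_select(children))  # the unvisited child is selected first
```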

Peer Learning: Learning Complex Policies in Groups from Scratch via Action Recommendations

1 code implementation • 15 Dec 2023 • Cedric Derstroff, Mattia Cerrato, Jannis Brugger, Jan Peters, Stefan Kramer

Finally, we analyze the learning behavior of the peers and observe their ability to rank the agents' performance within the study group and to identify which agents give reliable advice.

OpenAI Gym, reinforcement-learning, +1

Automated Scientific Discovery: From Equation Discovery to Autonomous Discovery Systems

no code implementations • 3 May 2023 • Stefan Kramer, Mattia Cerrato, Sašo Džeroski, Ross King

The paper surveys automated scientific discovery, from equation discovery and symbolic regression to autonomous discovery systems and agents.

Astronomy, Autonomous Driving, +2

Neural RELAGGS

no code implementations • 4 Nov 2022 • Lukas Pensel, Stefan Kramer

We demonstrate the increased predictive performance by comparing N-RELAGGS with RELAGGS and multiple other state-of-the-art algorithms.

A Fair Experimental Comparison of Neural Network Architectures for Latent Representations of Multi-Omics for Drug Response Prediction

1 code implementation • 31 Aug 2022 • Tony Hauptmann, Stefan Kramer

One important parameter is the depth of integration: the point at which the latent representations are computed or merged, which can be either early, intermediate, or late.

Drug Response Prediction, Triplet
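To make the notion of integration depth concrete, here is a minimal sketch contrasting early fusion (concatenating the omics inputs before encoding) with late fusion (encoding each omics block separately and merging the latent representations). Layer sizes, latent dimensions, and the simple regression head are illustrative assumptions, not the architectures benchmarked in the paper:

```python
import torch
import torch.nn as nn

class EarlyIntegration(nn.Module):
    """Concatenate all omics inputs first, then learn one joint representation."""
    def __init__(self, dims, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(sum(dims), 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.head = nn.Linear(latent, 1)  # drug response regression

    def forward(self, xs):
        z = self.encoder(torch.cat(xs, dim=1))
        return self.head(z)

class LateIntegration(nn.Module):
    """Encode each omics block separately, merge the latent representations late."""
    def __init__(self, dims, latent=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent))
            for d in dims)
        self.head = nn.Linear(latent * len(dims), 1)

    def forward(self, xs):
        zs = [enc(x) for enc, x in zip(self.encoders, xs)]
        return self.head(torch.cat(zs, dim=1))

# toy usage: two omics blocks (e.g. expression and mutation features)
xs = [torch.randn(8, 100), torch.randn(8, 40)]
print(EarlyIntegration([100, 40])(xs).shape, LateIntegration([100, 40])(xs).shape)
```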

Invariant Representations with Stochastically Quantized Neural Networks

no code implementations • 4 Aug 2022 • Mattia Cerrato, Marius Köppel, Roberto Esposito, Stefan Kramer

In this paper, we propose a methodology for direct computation of the mutual information between a neural layer and a sensitive attribute.

Attribute, Representation Learning
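For intuition about what "mutual information between a neural layer and a sensitive attribute" means once the layer output is discrete, here is a generic plug-in estimate computed from empirical joint counts; this is a standard estimator under assumed quantized codes, not the stochastic quantization procedure proposed in the paper:

```python
import numpy as np

def mutual_information(codes, attr):
    """Plug-in MI estimate I(Z; S) from empirical joint counts, where `codes`
    are discrete layer outputs (e.g. quantization bin indices) and `attr`
    is a discrete sensitive attribute. Returned in bits."""
    codes, attr = np.asarray(codes), np.asarray(attr)
    joint = np.zeros((codes.max() + 1, attr.max() + 1))
    for z, s in zip(codes, attr):
        joint[z, s] += 1
    joint /= joint.sum()
    pz = joint.sum(axis=1, keepdims=True)
    ps = joint.sum(axis=0, keepdims=True)
    nz = joint > 0  # avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (pz @ ps)[nz])).sum())

# toy usage: quantized activations that mostly reveal the attribute
rng = np.random.default_rng(0)
attr = rng.integers(0, 2, size=1000)
noise = rng.integers(0, 2, size=1000)
codes = np.where(rng.random(1000) < 0.8, attr, noise)
print(round(mutual_information(codes, attr), 3))  # well above zero: leakage
```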

Fair Interpretable Representation Learning with Correction Vectors

no code implementations • 7 Feb 2022 • Mattia Cerrato, Alesia Vallenas Coronel, Marius Köppel, Alexander Segner, Roberto Esposito, Stefan Kramer

Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation of a given vector that is independent of sensitive information.

Representation Learning

Fair Interpretable Learning via Correction Vectors

no code implementations • 17 Jan 2022 • Mattia Cerrato, Marius Köppel, Alexander Segner, Stefan Kramer

Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation of a given vector that is independent of sensitive information.

Representation Learning

Fair Group-Shared Representations with Normalizing Flows

no code implementations • 17 Jan 2022 • Mattia Cerrato, Marius Köppel, Alexander Segner, Stefan Kramer

In this context, one of the possible approaches is to employ fair representation learning algorithms which are able to remove biases from data, making groups statistically indistinguishable.

Attribute, Fairness, +1

Fast Private Parameter Learning and Inference for Sum-Product Networks

no code implementations • 15 Apr 2021 • Ernst Althaus, Mohammad Sadeq Dousti, Stefan Kramer, Nick Johannes Peter Rassau

A sum-product network (SPN) is a graphical model that allows several types of inferences to be drawn efficiently.
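For readers unfamiliar with SPNs, here is a tiny hand-built example of how a joint probability is evaluated in a single bottom-up pass over sum and product nodes with Bernoulli leaves; this is illustrative only and does not cover the private parameter learning and inference that the paper addresses:

```python
from math import prod

def leaf(var, p):
    """Bernoulli leaf: returns P(X_var = observed value)."""
    return lambda x: p if x[var] == 1 else 1.0 - p

def product_node(children):
    """Product node: children must have disjoint variable scopes."""
    return lambda x: prod(ch(x) for ch in children)

def sum_node(weights, children):
    """Sum node: weighted mixture of children over the same scope."""
    return lambda x: sum(w * ch(x) for w, ch in zip(weights, children))

# SPN over two binary variables X0 and X1
spn = sum_node([0.6, 0.4],
               [product_node([leaf(0, 0.9), leaf(1, 0.2)]),
                product_node([leaf(0, 0.1), leaf(1, 0.7)])])

# Exact joint probability in one bottom-up pass:
print(spn({0: 1, 1: 0}))  # 0.6*0.9*0.8 + 0.4*0.1*0.3 = 0.444
```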

Pattern Sampling for Shapelet-based Time Series Classification

no code implementations • 16 Feb 2021 • Atif Raza, Stefan Kramer

The asymptotic time complexity of subsequence-based algorithms remains a higher-order polynomial, because these algorithms are based on exhaustive search for highly discriminative subsequences.

Classification, General Classification, +3
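As a rough sketch of the sampling idea behind this line of work, the snippet below draws shapelet candidates at random instead of enumerating every subsequence and turns each series into a vector of minimum distances to the sampled candidates. Series shapes, candidate counts, and the downstream use of the distance features are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def min_distance(series, shapelet):
    """Smallest Euclidean distance between the shapelet and any window of the series."""
    m = len(shapelet)
    return min(np.linalg.norm(series[i:i + m] - shapelet)
               for i in range(len(series) - m + 1))

def sample_shapelets(X, n_candidates=50, length=20, rng=None):
    """Draw shapelet candidates uniformly at random instead of exhaustively
    enumerating every subsequence of every series."""
    rng = rng or np.random.default_rng(0)
    candidates = []
    for _ in range(n_candidates):
        series = X[rng.integers(len(X))]
        start = rng.integers(len(series) - length + 1)
        candidates.append(series[start:start + length])
    return candidates

# toy usage: each series becomes a vector of distances to the sampled shapelets,
# which can feed any standard classifier
X = [np.sin(np.linspace(0, 10, 150)) + 0.1 * np.random.randn(150) for _ in range(5)]
shapelets = sample_shapelets(X, n_candidates=10)
features = np.array([[min_distance(s, sh) for sh in shapelets] for s in X])
print(features.shape)  # (5, 10)
```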

Focusing Knowledge-based Graph Argument Mining via Topic Modeling

no code implementations • 3 Feb 2021 • Patrick Abels, Zahra Ahmadi, Sophie Burkhardt, Benjamin Schiller, Iryna Gurevych, Stefan Kramer

We use a topic model to extract topic- and sentence-specific evidence from the structured knowledge base Wikidata, building a graph based on the cosine similarity between the entity word vectors of Wikidata and the vector of the given sentence.

Argument Mining, Decision Making, +3
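A minimal sketch of the graph-building step described above: linking a sentence node to knowledge-base entities whose vectors are cosine-similar to the sentence vector. The random vectors, the similarity threshold, and the use of `networkx` are illustrative assumptions rather than the authors' pipeline:

```python
import numpy as np
import networkx as nx

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_evidence_graph(sentence_vec, entity_vecs, threshold=0.5):
    """Connect the sentence node to every knowledge-base entity whose word
    vector is sufficiently similar to the sentence vector."""
    g = nx.Graph()
    g.add_node("sentence")
    for name, vec in entity_vecs.items():
        sim = cosine(sentence_vec, vec)
        if sim >= threshold:
            g.add_edge("sentence", name, weight=sim)
    return g

# toy usage with random 50-dimensional vectors standing in for embeddings
rng = np.random.default_rng(1)
sentence_vec = rng.normal(size=50)
entity_vecs = {f"entity_{i}": rng.normal(size=50) for i in range(100)}
g = build_evidence_graph(sentence_vec, entity_vecs, threshold=0.2)
print(g.number_of_nodes(), g.number_of_edges())
```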

Deep Neural Networks to Recover Unknown Physical Parameters from Oscillating Time Series

no code implementations • 11 Jan 2021 • Antoine Garcon, Julian Vexler, Dmitry Budker, Stefan Kramer

We show that, because DNNs can find useful abstract feature representations, they can be used when prior knowledge about the signal-generating process exists but is incomplete, as is particularly the case in "new-physics" searches.

Denoising, regression, +2

Deep Unsupervised Identification of Selected SNPs between Adapted Populations on Pool-seq Data

no code implementations • 28 Dec 2020 • Julia Siekiera, Stefan Kramer

The exploration of selected single nucleotide polymorphisms (SNPs) to identify genetic diversity between different sequencing population pools (Pool-seq) is a fundamental task in genetic research.

Diversity, Explainable artificial intelligence

Ranking Creative Language Characteristics in Small Data Scenarios

no code implementations • 23 Oct 2020 • Julia Siekiera, Marius Köppel, Edwin Simpson, Kevin Stowe, Iryna Gurevych, Stefan Kramer

We therefore adapt the DirectRanker to provide a new deep model for ranking creative language with small data.

Secure Sum Outperforms Homomorphic Encryption in (Current) Collaborative Deep Learning

no code implementations • 2 Jun 2020 • Derian Boer, Stefan Kramer

Deep learning (DL) approaches are achieving extraordinary results in a wide range of domains, but often require a massive collection of private data.

Federated Learning, Privacy Preserving
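A minimal illustration of the secure-sum idea referenced above, using additive secret sharing among the parties: each party's value is split into random shares, and only the aggregate total is ever reconstructed. This is a generic textbook protocol sketch with plain integers standing in for locally computed updates, not the paper's exact implementation:

```python
import random

def share(secret, n_parties, modulus=2**32):
    """Split a secret into n additive shares that sum to it modulo `modulus`."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares

def secure_sum(secrets, modulus=2**32):
    """Each party shares its secret with all parties; each party sums the
    shares it received; combining the partial sums reveals only the total."""
    n = len(secrets)
    all_shares = [share(s, n, modulus) for s in secrets]
    partial = [sum(all_shares[p][i] for p in range(n)) % modulus for i in range(n)]
    return sum(partial) % modulus

# toy usage: three parties, e.g. locally computed gradient components
secrets = [17, 42, 5]
print(secure_sum(secrets))  # 64, without any party revealing its own value
```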

Towards Probability-based Safety Verification of Systems with Components from Machine Learning

no code implementations • 2 Mar 2020 • Hermann Kaindl, Stefan Kramer

As a result, the quantitatively determined bound on the probability of a classification error of an ML component in a safety-critical system contributes in a well-defined way to the overall safety verification of that system.

BIG-bench Machine Learning
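One standard way to obtain such a quantitative bound (not necessarily the method used in the paper) is a Hoeffding-style confidence bound on the error probability of the ML component estimated from i.i.d. test samples; the numbers below are purely illustrative:

```python
import math

def error_upper_bound(n_errors, n_samples, delta=0.01):
    """With probability at least 1 - delta over the i.i.d. test set, the true
    error probability is at most the empirical error plus a Hoeffding term."""
    empirical = n_errors / n_samples
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n_samples))
    return min(1.0, empirical + slack)

# toy usage: 12 misclassifications observed on 10,000 test cases
bound = error_upper_bound(12, 10_000, delta=0.001)
print(f"error probability <= {bound:.4f} with 99.9% confidence")
```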

Online Multi-Label Classification: A Label Compression Method

no code implementations • 4 Apr 2018 • Zahra Ahmadi, Stefan Kramer

Many modern applications deal with multi-label data, such as functional categorizations of genes, image labeling and text categorization.

Classification, Dimensionality Reduction, +3

Ensembles of Randomized Time Series Shapelets Provide Improved Accuracy while Reducing Computational Costs

1 code implementation • 22 Feb 2017 • Atif Raza, Stefan Kramer

Using random sampling reduces the number of evaluated candidates and consequently the required computational cost, while the classification accuracy of the resulting models is not significantly different from that of the exact algorithm.

Classification, Diversity, +3

Graph Clustering with Density-Cut

no code implementations • 3 Jun 2016 • Junming Shao, Qinli Yang, Jinhu Liu, Stefan Kramer

We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) Built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) The density-connected tree provides a connectivity map of vertices in a graph from a local density perspective.

Social and Information Networks, Physics and Society
