Search Results for author: Jakob Heiss

Found 8 papers, 6 papers with code

Machine Learning-Powered Combinatorial Clock Auction

1 code implementation • 20 Aug 2023 • Ermis Soumalias, Jakob Weissteiner, Jakob Heiss, Sven Seuken

In this paper, we address this shortcoming by designing an ML-powered combinatorial clock auction that elicits information from the bidders only via demand queries (i.e., "At prices $p$, what is your most preferred bundle of items?").
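As a toy illustration of what a demand query returns, here is a minimal brute-force demand oracle in Python. The `values` function is a hypothetical stand-in for a bidder's true valuation and the additive example is invented for illustration; neither is part of the paper's mechanism, which elicits this information via ML rather than enumeration.

```python
import itertools

def demand_query(values, prices):
    """Brute-force demand oracle: given item prices, return the bundle
    that maximizes the bidder's utility values(bundle) - sum of prices."""
    m = len(prices)
    best_bundle, best_utility = frozenset(), 0.0  # empty bundle: utility 0
    for r in range(1, m + 1):
        for combo in itertools.combinations(range(m), r):
            bundle = frozenset(combo)
            utility = values(bundle) - sum(prices[i] for i in bundle)
            if utility > best_utility:
                best_bundle, best_utility = bundle, utility
    return best_bundle

# Example with a hypothetical additive valuation over 3 items:
item_values = [10.0, 6.0, 3.0]
values = lambda bundle: sum(item_values[i] for i in bundle)
print(demand_query(values, prices=[7.0, 5.0, 4.0]))  # -> frozenset({0, 1})
```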

Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent Observation Framework

1 code implementation • 24 Jul 2023 • William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann

The Path-Dependent Neural Jump Ordinary Differential Equation (PD-NJ-ODE) is a model for predicting continuous-time stochastic processes with irregular and incomplete observations.

Time Series

How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part II: the Multi-D Case of Two Layers with Random First Layer

no code implementations • 20 Mar 2023 • Jakob Heiss, Josef Teichmann, Hanna Wutte

Randomized neural networks (randomized NNs), where only the terminal layer's weights are optimized, constitute a powerful model class for reducing the computational cost of training neural networks.

regression
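A minimal sketch of this model class, assuming standard Gaussian random features and a ridge penalty (both illustrative choices, not taken from the paper): the first layer is sampled once and frozen, so optimizing only the terminal layer reduces training to a linear least-squares solve.

```python
import numpy as np

def fit_randomized_relu_nn(X, y, width=512, ridge=1e-3, seed=0):
    """Sample a random (frozen) first layer, then optimize only the
    terminal layer's weights via ridge-regularized least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], width))  # random first layer, never trained
    b = rng.standard_normal(width)
    H = np.maximum(X @ W + b, 0.0)                # ReLU feature map
    A = H.T @ H + ridge * np.eye(width)           # normal equations + L2 penalty
    v = np.linalg.solve(A, H.T @ y)               # closed-form terminal weights
    return W, b, v

def predict(X, W, b, v):
    return np.maximum(X @ W + b, 0.0) @ v
```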

Bayesian Optimization-based Combinatorial Assignment

1 code implementation • 31 Aug 2022 • Jakob Weissteiner, Jakob Heiss, Julien Siems, Sven Seuken

In this paper, we address this shortcoming by presenting a Bayesian optimization-based combinatorial assignment (BOCA) mechanism.

Bayesian Optimization

How Infinitely Wide Neural Networks Can Benefit from Multi-task Learning -- an Exact Macroscopic Characterization

1 code implementation • 31 Dec 2021 • Jakob Heiss, Josef Teichmann, Hanna Wutte

In practice, multi-task learning (through learning features shared among tasks) is an essential property of deep neural networks (NNs).

Gaussian Processes • L2 Regularization +2
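The paper studies this in the infinite-width limit; as a finite-width illustration only, here is the standard hard-parameter-sharing pattern in PyTorch, where a shared trunk learns features common to all tasks and each task keeps its own head. This generic sketch is not the paper's macroscopic characterization.

```python
import torch
import torch.nn as nn

class SharedFeatureMTL(nn.Module):
    """Hard parameter sharing: one trunk of features shared among tasks,
    plus a small task-specific linear head per task."""
    def __init__(self, in_dim: int, hidden: int, n_tasks: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.trunk(x)  # features shared among all tasks
        return torch.cat([head(z) for head in self.heads], dim=1)

model = SharedFeatureMTL(in_dim=4, hidden=256, n_tasks=3)
print(model(torch.randn(8, 4)).shape)  # torch.Size([8, 3])
```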

NOMU: Neural Optimization-based Model Uncertainty

1 code implementation • 26 Feb 2021 • Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann

To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data.

Bayesian Optimization • regression
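NOMU itself uses a dedicated two-network architecture and loss, which is not reproduced here. As a crude baseline sketch only, the deep-ensemble disagreement below (an illustrative substitute for NOMU, with an invented toy dataset) shows what model uncertainty means in a noiseless, scarce-data setting: every network fits the few training points, and their disagreement grows away from the data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Scarce, noiseless training data and a dense test grid (toy setup).
X_train = np.array([[-1.0], [-0.4], [0.3], [0.9]])
y_train = np.sin(3 * X_train).ravel()
X_test = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)

# Ensemble disagreement as a stand-in for model uncertainty.
preds = np.stack([
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=seed)
    .fit(X_train, y_train).predict(X_test)
    for seed in range(5)
])
mean, uncertainty = preds.mean(axis=0), preds.std(axis=0)  # std grows far from data
```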

Reducing the number of neurons of Deep ReLU Networks based on the current theory of Regularization

no code implementations • 1 Jan 2021 • Jakob Heiss, Alexis Stockinger, Josef Teichmann

We introduce a new Reduction Algorithm that exploits the properties of ReLU neurons to significantly reduce the number of neurons in a trained deep neural network.
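The paper's Reduction Algorithm is not reproduced here; the sketch below illustrates just one elementary ReLU property that such methods can exploit: a first-layer neuron that never activates on the data outputs a constant zero there, so deleting it leaves the network's predictions on that data unchanged.

```python
import numpy as np

def prune_inactive_neurons(W1, b1, W2, X):
    """Drop first-layer ReLU neurons with no positive pre-activation on X.
    W1: (d, h) input weights, b1: (h,) biases, W2: (h,) output weights."""
    pre = X @ W1 + b1                # pre-activations on the data
    active = (pre > 0).any(axis=0)   # keep neurons that fire at least once
    return W1[:, active], b1[active], W2[active]

# Toy check: predictions on X are unchanged after pruning.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
W1 = rng.standard_normal((3, 20))
b1 = rng.standard_normal(20) - 2.0   # negative shift so some neurons never fire
W2 = rng.standard_normal(20)
W1p, b1p, W2p = prune_inactive_neurons(W1, b1, W2, X)
before = np.maximum(X @ W1 + b1, 0.0) @ W2
after = np.maximum(X @ W1p + b1p, 0.0) @ W2p
assert np.allclose(before, after)
```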

How Implicit Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part I: the 1-D Case of Two Layers with Random First Layer

1 code implementation • 7 Nov 2019 • Jakob Heiss, Josef Teichmann, Hanna Wutte

In this paper, we consider one dimensional (shallow) ReLU neural networks in which weights are chosen randomly and only the terminal layer is trained.

regression
