1 code implementation • 20 Aug 2023 • Ermis Soumalias, Jakob Weissteiner, Jakob Heiss, Sven Seuken
In this paper, we address this shortcoming by designing an ML-powered combinatorial clock auction that elicits information from the bidders only via demand queries (i.e., "At prices $p$, what is your most preferred bundle of items?").
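A demand query can be answered by maximizing utility (value minus total price) over bundles. The sketch below illustrates this with a hypothetical toy valuation; the items, values, and complementarity bonus are illustrative assumptions, not the paper's bidder model.

```python
import itertools

ITEMS = ["A", "B", "C"]

def toy_value(bundle):
    # hypothetical superadditive valuation, for illustration only
    base = {"A": 4.0, "B": 3.0, "C": 2.0}
    v = sum(base[i] for i in bundle)
    if len(bundle) >= 2:            # complementarity bonus (assumed)
        v += 2.0
    return v

def demand_query(prices):
    """At prices `prices`, which bundle maximizes value minus cost?"""
    best, best_u = (), 0.0          # empty bundle has utility 0
    for r in range(1, len(ITEMS) + 1):
        for bundle in itertools.combinations(ITEMS, r):
            u = toy_value(bundle) - sum(prices[i] for i in bundle)
            if u > best_u:
                best, best_u = bundle, u
    return best

print(demand_query({"A": 1.0, "B": 1.0, "C": 5.0}))  # → ('A', 'B')
```

With item C priced above its marginal value, the bidder demands only the complementary pair {A, B}; brute-force enumeration like this is only feasible for small item sets.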
1 code implementation • 24 Jul 2023 • William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann
The Path-Dependent Neural Jump Ordinary Differential Equation (PD-NJ-ODE) is a model for predicting continuous-time stochastic processes with irregular and incomplete observations.
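The kind of input such a model consumes can be illustrated concretely: a continuous-time path seen only at irregular times, with some coordinates missing at each observation. The sketch below builds that data layout (times, values, observation mask) for a toy 2-D Brownian path; the format and the toy process are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 2, 5                                         # dimension, #observations
obs_times = np.sort(rng.uniform(0.0, 1.0, size=k))  # irregular time grid

# toy 2-D Brownian path sampled at the observation times
dt = np.diff(np.concatenate([[0.0], obs_times]))
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt)[:, None], size=(k, d)), axis=0)

# incomplete observations: 0/1 mask of which coordinates are seen
mask = rng.integers(0, 2, size=(k, d))
mask[mask.sum(axis=1) == 0, 0] = 1     # at least one coordinate per time

observed = np.where(mask == 1, paths, np.nan)   # nan = unobserved entry
print(obs_times.shape, observed.shape)
```

The model's task is then to predict the path between and beyond these observation times from the partially observed history.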
no code implementations • 20 Mar 2023 • Jakob Heiss, Josef Teichmann, Hanna Wutte
Randomized neural networks (randomized NNs), where only the terminal layer's weights are optimized, constitute a powerful model class for reducing the computational cost of training a neural network.
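The computational saving comes from the fact that, with the hidden weights frozen at random values, fitting the terminal layer is a linear problem solvable in closed form. A minimal sketch, assuming ReLU features and ridge regression for the terminal layer (the width, weight scales, and target function are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

n, width, lam = 200, 500, 1e-3
x = np.linspace(-2, 2, n)[:, None]
y = np.sin(3 * x[:, 0])                      # toy regression target

W = rng.normal(size=(1, width))              # random hidden weights, frozen
b = rng.uniform(-2, 2, size=width)           # random biases, frozen
H = np.maximum(x @ W + b, 0.0)               # random ReLU feature matrix

# terminal layer via ridge regression: (H^T H + lam I) a = H^T y
a = np.linalg.solve(H.T @ H + lam * np.eye(width), H.T @ y)
pred = H @ a
print(float(np.mean((pred - y) ** 2)))       # small training error
```

No gradient descent is needed: a single linear solve replaces the entire training loop, which is the source of the speed-up the entry refers to.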
1 code implementation • 31 Aug 2022 • Jakob Weissteiner, Jakob Heiss, Julien Siems, Sven Seuken
In this paper, we address this shortcoming by presenting a Bayesian optimization-based combinatorial assignment (BOCA) mechanism.
1 code implementation • 31 Dec 2021 • Jakob Heiss, Josef Teichmann, Hanna Wutte
In practice, multi-task learning (through learning features shared among tasks) is an essential property of deep neural networks (NNs).
1 code implementation • 26 Feb 2021 • Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann
To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data.
no code implementations • 1 Jan 2021 • Jakob Heiss, Alexis Stockinger, Josef Teichmann
We introduce a new Reduction Algorithm that exploits the properties of ReLU neurons to significantly reduce the number of neurons in a trained deep neural network.
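The entry does not specify the algorithm, but one well-known ReLU-specific reduction serves as a sketch of the idea: a hidden neuron whose pre-activation is non-positive on every input in the data contributes nothing through the ReLU, so dropping it leaves the network's outputs on that data unchanged. This is an assumed illustration, not necessarily the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

def prune_dead_relu(W1, b1, W2, X):
    """Drop hidden ReLU neurons that are inactive on every input in X."""
    pre = X @ W1 + b1                   # pre-activations, shape (n, hidden)
    alive = (pre > 0).any(axis=0)       # neuron fires on at least one input
    return W1[:, alive], b1[alive], W2[alive]

# toy stand-in for a trained network; the negative bias shift makes
# some neurons dead on the sampled data
W1 = rng.normal(size=(3, 16))
b1 = rng.normal(size=16) - 5.0
W2 = rng.normal(size=(16, 1))
X = rng.normal(size=(50, 3))

W1p, b1p, W2p = prune_dead_relu(W1, b1, W2, X)
out = np.maximum(X @ W1 + b1, 0.0) @ W2        # original network
out_p = np.maximum(X @ W1p + b1p, 0.0) @ W2p   # reduced network
print(W1.shape[1], "->", W1p.shape[1])         # neurons before -> after
```

The reduced network reproduces the original's outputs on the data exactly, while using fewer neurons.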
1 code implementation • 7 Nov 2019 • Jakob Heiss, Josef Teichmann, Hanna Wutte
In this paper, we consider one-dimensional (shallow) ReLU neural networks in which weights are chosen randomly and only the terminal layer is trained.