Search Results for author: Randolf Scholz

Found 7 papers, 2 papers with code

Probabilistic Forecasting of Irregular Time Series via Conditional Flows

no code implementations · 9 Feb 2024 · Vijaya Krishna Yalavarthi, Randolf Scholz, Stefan Born, Lars Schmidt-Thieme

Probabilistic forecasting of irregularly sampled multivariate time series with missing values is an important problem in many fields, including health care, astronomy, and climate.

Astronomy · Irregular Time Series · +1
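The paper above forecasts with conditional normalizing flows. As a hedged illustration of the general idea (not the paper's architecture), the sketch below uses a single conditional affine flow: a condition vector, standing in for an encoding of the irregularly observed history, is mapped by illustrative linear conditioners to a shift and log-scale, and the change-of-variables formula gives an exact log-density.

```python
import numpy as np

def conditional_affine_flow_logpdf(x, cond, w_mu, w_s):
    """Log-density of x under a one-step conditional affine flow.

    The condition vector `cond` (here a stand-in for an encoding of the
    observed, irregularly sampled history) is mapped to a shift mu and a
    log-scale s; the flow warps a standard normal base distribution.
    The linear conditioners w_mu, w_s are illustrative assumptions,
    not the paper's model.
    """
    mu = cond @ w_mu           # conditioner: shift
    s = cond @ w_s             # conditioner: log-scale
    z = (x - mu) * np.exp(-s)  # inverse transform to base space
    # change of variables: log N(z; 0, 1) + log |dz/dx|
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return log_base - s

rng = np.random.default_rng(0)
cond = rng.normal(size=4)
w_mu, w_s = rng.normal(size=4), 0.1 * rng.normal(size=4)
lp = conditional_affine_flow_logpdf(1.0, cond, w_mu, w_s)
```

Because the flow is invertible with a tractable Jacobian, the returned value is an exact log-likelihood, which is what makes flow-based forecasters probabilistic rather than point predictors.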

When Bioprocess Engineering Meets Machine Learning: A Survey from the Perspective of Automated Bioprocess Development

no code implementations · 2 Sep 2022 · Nghia Duong-Trung, Stefan Born, Jong Woo Kim, Marie-Therese Schermeyer, Katharina Paulick, Maxim Borisyak, Mariano Nicolas Cruz-Bournazou, Thorben Werner, Randolf Scholz, Lars Schmidt-Thieme, Peter Neubauer, Ernesto Martinez

ML can be seen as a set of tools that contribute to the automation of the whole experimental cycle, including model building and practical planning, thus allowing human experts to focus on the more demanding and overarching cognitive tasks.

Model Selection · Probabilistic Programming

Deep Metric Learning for Ground Images

no code implementations · 3 Sep 2021 · Raaghav Radhakrishnan, Jan Fabian Schmid, Randolf Scholz, Lars Schmidt-Thieme

Ground-texture-based localization methods are promising candidates for low-cost, high-accuracy self-localization of robots.

Image Retrieval · Metric Learning · +1
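Deep metric learning, as in the entry above, trains an embedding so that images of the same ground patch land close together and images of different patches land far apart. A generic sketch of the standard triplet margin loss on embedding vectors follows; it illustrates the objective family, not the paper's exact training setup.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss on embedding vectors.

    Pulls the anchor toward a positive (e.g. another image of the same
    ground patch) and pushes it away from a negative (a different patch)
    until the two distances differ by at least `margin`.  A generic
    metric-learning sketch, not the paper's exact objective.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

Once trained this way, localization reduces to nearest-neighbor retrieval in the embedding space, which is why the entry is tagged with Image Retrieval.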

Improving Sample Efficiency with Normalized RBF Kernels

1 code implementation · 30 Jul 2020 · Sebastian Pineda-Arango, David Obando-Paniagua, Alperen Dedeoglu, Philip Kurzendörfer, Friedemann Schestag, Randolf Scholz

Experiments on CIFAR-10 and CIFAR-100 show that networks using normalized kernels as the output layer achieve higher sample efficiency, greater compactness, and better class separability than networks with a SoftMax output layer.
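A normalized RBF output layer, as named in the title above, scores each class by an RBF kernel between the feature vector and a per-class center, then normalizes the responses to sum to one. The sketch below is a hypothetical minimal version of such a layer in numpy; the centers, `gamma`, and normalization are assumptions, not the paper's exact formulation.

```python
import numpy as np

def normalized_rbf_output(features, centers, gamma=1.0):
    """Class scores from RBF kernels to per-class centers, normalized
    across classes to sum to one.

    features: (n, d) batch of feature vectors from the network body.
    centers:  (c, d) one learnable center per class (fixed here).
    A hypothetical sketch of a normalized-RBF output layer.
    """
    # squared distances from each feature vector to each class center
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    k = np.exp(-gamma * d2)                  # RBF kernel responses
    return k / k.sum(axis=1, keepdims=True)  # normalize across classes
```

Unlike a SoftMax over unbounded logits, the responses here decay with distance from every center, which is one intuition for the compactness the abstract reports.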

Chameleon: Learning Model Initializations Across Tasks With Different Schemas

1 code implementation · 30 Sep 2019 · Lukas Brinkmeyer, Rafael Rego Drumond, Randolf Scholz, Josif Grabocka, Lars Schmidt-Thieme

Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization.

Meta-Learning
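The entry above meta-learns model initializations across tasks. As a hedged sketch of the shared-initialization idea only, the code below uses a Reptile-style loop on toy linear-regression tasks: adapt a copy of the initialization on each task, then move the shared initialization toward the adapted weights. Chameleon's schema-alignment component for tasks with different schemas is not reproduced here.

```python
import numpy as np

def reptile_init(tasks, inner_steps=5, inner_lr=0.1, meta_lr=0.5, epochs=50):
    """Meta-learn a weight initialization over linear-regression tasks.

    Reptile-style sketch: for each task (X, y), run a few gradient steps
    from the shared init w0, then nudge w0 toward the adapted weights.
    Illustrates learning an initialization across tasks; not the
    paper's Chameleon architecture.
    """
    dim = tasks[0][0].shape[1]
    w0 = np.zeros(dim)
    for _ in range(epochs):
        for X, y in tasks:
            w = w0.copy()
            for _ in range(inner_steps):
                grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
                w -= inner_lr * grad
            w0 += meta_lr * (w - w0)  # move init toward adapted weights
    return w0

# Toy demo: three noiseless tasks sharing one underlying weight vector.
rng = np.random.default_rng(1)
true_w = np.array([1.0, -1.0])
tasks = [(X, X @ true_w) for X in (rng.normal(size=(20, 2)) for _ in range(3))]
w0 = reptile_init(tasks)
```

With related tasks, the learned `w0` lands near the weights all tasks share, so a few gradient steps suffice on a new task, which is the point of meta-learning initializations.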

Learning Surrogate Losses

no code implementations · 24 May 2019 · Josif Grabocka, Randolf Scholz, Lars Schmidt-Thieme

Ultimately, the surrogate losses are learned jointly with the prediction model via bilevel optimization.

Bilevel Optimization · General Classification
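The bilevel structure mentioned above can be sketched as an inner problem (train the model under a parameterized surrogate loss) nested in an outer problem (choose the surrogate parameters to minimize the true target metric). The toy below uses an assumed one-parameter surrogate family mixing squared and absolute error, and a grid search as a stand-in for the paper's joint gradient-based scheme.

```python
import numpy as np

def inner_train(X, y, alpha, steps=200, lr=0.05):
    """Inner problem: fit a linear model under the surrogate loss
    L = alpha * err**2 + (1 - alpha) * |err|  (a toy surrogate family,
    an illustrative assumption rather than the paper's learned loss)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        err = X @ w - y
        grad = X.T @ (alpha * 2 * err + (1 - alpha) * np.sign(err)) / len(y)
        w -= lr * grad
    return w

def bilevel_search(X_tr, y_tr, X_val, y_val):
    """Outer problem: pick the surrogate parameter alpha minimizing the
    target metric (validation MAE) of the inner-trained model.  Grid
    search stands in for the paper's joint gradient-based optimization."""
    return min(
        (np.mean(np.abs(X_val @ inner_train(X_tr, y_tr, a) - y_val)), a)
        for a in np.linspace(0.0, 1.0, 11)
    )[1]

# Toy demo: training labels carry heavy outliers, validation is clean,
# so a more robust surrogate should win the outer search.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X_tr = rng.normal(size=(60, 2)); y_tr = X_tr @ true_w
y_tr[:5] += 50.0
X_val = rng.normal(size=(60, 2)); y_val = X_val @ true_w
alpha = bilevel_search(X_tr, y_tr, X_val, y_val)
```

The outer search evaluates each candidate surrogate only through the model it induces, which is exactly the nesting that makes the problem bilevel.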
