no code implementations • 9 Feb 2024 • Vijaya Krishna Yalavarthi, Randolf Scholz, Stefan Born, Lars Schmidt-Thieme
Probabilistic forecasting of irregularly sampled multivariate time series with missing values is an important problem in many fields, including health care, astronomy, and climate.
no code implementations • 2 Sep 2022 • Nghia Duong-Trung, Stefan Born, Jong Woo Kim, Marie-Therese Schermeyer, Katharina Paulick, Maxim Borisyak, Mariano Nicolas Cruz-Bournazou, Thorben Werner, Randolf Scholz, Lars Schmidt-Thieme, Peter Neubauer, Ernesto Martinez
ML can be seen as a set of tools that contribute to the automation of the whole experimental cycle, including model building and practical planning, thus allowing human experts to focus on the more demanding and overarching cognitive tasks.
no code implementations • 3 Sep 2021 • Raaghav Radhakrishnan, Jan Fabian Schmid, Randolf Scholz, Lars Schmidt-Thieme
Ground-texture-based localization methods are promising candidates for low-cost, high-accuracy robot self-localization.
no code implementations • 15 Jun 2021 • Mesay Samuel Gondere, Lars Schmidt-Thieme, Durga Prasad Sharma, Randolf Scholz
This study therefore investigates multi-script handwritten digit recognition using multi-task learning.
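The multi-task setup described above can be sketched as hard parameter sharing: a shared encoder feeds per-script classification heads. This is a minimal NumPy illustration; the layer sizes and the script names `latin`/`amharic` are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    # Shared feature layer used by all scripts (hard parameter sharing).
    return np.maximum(x @ W, 0.0)  # ReLU

def script_head(h, W_head):
    # Per-script classification head producing a distribution over 10 digits.
    logits = h @ W_head
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = rng.normal(size=(4, 28 * 28))                 # batch of flattened digit images
W_shared = rng.normal(size=(28 * 28, 64)) * 0.01  # shared parameters
heads = {s: rng.normal(size=(64, 10)) * 0.01 for s in ("latin", "amharic")}

h = shared_encoder(x, W_shared)
probs = {s: script_head(h, W) for s, W in heads.items()}
```

In training, gradients from every script's loss flow into `W_shared`, which is the mechanism by which multi-task learning lets scarce scripts benefit from better-resourced ones.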
1 code implementation • 30 Jul 2020 • Sebastian Pineda-Arango, David Obando-Paniagua, Alperen Dedeoglu, Philip Kurzendörfer, Friedemann Schestag, Randolf Scholz
Experiments on CIFAR-10 and CIFAR-100 show that networks using normalized kernels as the output layer achieve higher sample efficiency and more compact, better-separated class representations than networks with a SoftMax output layer.
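A normalized kernel as an output layer can be sketched as scoring each class by the cosine similarity between the (L2-normalized) feature vector and a normalized class weight vector, instead of an unbounded linear logit. The shapes below (128-dim embeddings, 10 classes as in CIFAR-10) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_kernel_output(features, prototypes):
    # L2-normalize both the backbone features and the per-class weight
    # vectors, then score each class by their dot product (cosine
    # similarity) -- a normalized linear kernel bounded in [-1, 1].
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return f @ p.T

feats = rng.normal(size=(8, 128))    # embeddings from a backbone network
protos = rng.normal(size=(10, 128))  # one weight vector per CIFAR-10 class
scores = normalized_kernel_output(feats, protos)
preds = scores.argmax(axis=1)
```

Because the scores are bounded similarities rather than free logits, training tends to pull same-class embeddings toward a common direction, which is one way to read the compactness and separability claims above.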
1 code implementation • 30 Sep 2019 • Lukas Brinkmeyer, Rafael Rego Drumond, Randolf Scholz, Josif Grabocka, Lars Schmidt-Thieme
Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization.
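As a baseline for the "starting point" role that initialization plays, a standard data-independent scheme such as Glorot/Xavier uniform can be sketched as follows; this is a generic illustration, not the paper's learned initialization method.

```python
import numpy as np

rng = np.random.default_rng(2)

def glorot_uniform(fan_in, fan_out):
    # Glorot/Xavier uniform initialization: draws weights from
    # U(-limit, limit) with limit chosen so activation variance stays
    # roughly constant across layers at the start of training.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(256, 128)  # weight matrix for a 256 -> 128 layer
```

Schemes like this are purely shape-dependent; the cited work's premise is that the starting point can instead be produced by a model that has seen related tasks.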
no code implementations • 24 May 2019 • Josif Grabocka, Randolf Scholz, Lars Schmidt-Thieme
Ultimately, the surrogate losses are learned jointly with the prediction model via bilevel optimization.
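The bilevel structure mentioned above can be sketched on a toy problem: an inner loop fits model weights under a smooth surrogate loss with a free parameter, and an outer loop picks that parameter so the fitted model scores well on a non-differentiable target metric (here 0-1 accuracy). Everything below (the logistic-margin surrogate, the scale parameter `alpha`, grid search as the outer solver) is an illustrative assumption, far simpler than the paper's jointly learned surrogates.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(32, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)  # linearly separable toy labels

def inner_fit(alpha, steps=200, lr=0.5):
    # Inner problem: gradient descent on a smooth surrogate loss
    # L(w) = mean(log(1 + exp(-alpha * t * (x @ w)))), with t = 2y - 1.
    w = np.zeros(5)
    t = 2 * y - 1
    for _ in range(steps):
        z = X @ w
        s = 1.0 / (1.0 + np.exp(alpha * t * z))  # sigmoid(-alpha * t * z)
        grad = -(alpha * t * s) @ X / len(y)
        w -= lr * grad
    return w

def target_metric(w):
    # Outer objective: 0-1 accuracy, the quantity the surrogate stands in for.
    return np.mean(((X @ w) > 0) == (y > 0.5))

# Outer problem: choose the surrogate's parameter by simple grid search.
best_alpha = max([0.5, 1.0, 2.0], key=lambda a: target_metric(inner_fit(a)))
```

The cited work replaces this crude grid search with gradient-based bilevel optimization, learning the surrogate's parameters jointly with the prediction model.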