no code implementations • 28 Dec 2023 • Michael Kohler, Adam Krzyzak
One of the most recent and fascinating breakthroughs in artificial intelligence is ChatGPT, a chatbot that can simulate human conversation.
no code implementations • 24 Nov 2023 • Selina Drews, Michael Kohler
Recent results show that estimates defined by over-parametrized deep neural networks, learned by applying gradient descent to a regularized empirical $L_2$ risk, are universally consistent and achieve good rates of convergence.
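A minimal sketch of such an estimate, under illustrative assumptions that are not the paper's construction: the inner weights of an over-parametrized shallow ReLU network are random and frozen, only the outer weights are trained, and the step size, penalty weight, and width are hand-picked for the toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: Y = m(X) + noise with m(x) = sin(pi x)
n = 200
x = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal((n, 1))

# Over-parametrized shallow ReLU network: K neurons >> n data points.
# Inner weights are random and frozen; only outer weights a are trained.
K = 2000
W = rng.standard_normal((1, K))
b = rng.standard_normal((1, K))
a = np.zeros((K, 1))

def features(x):
    # 1/sqrt(K) scaling keeps the gradient step size well behaved
    return np.maximum(x @ W + b, 0.0) / np.sqrt(K)

lam, eta = 1e-3, 0.2   # illustrative regularization weight and step size
for _ in range(3000):
    h = features(x)                          # n x K hidden features
    resid = h @ a - y                        # n x 1 residuals
    # gradient of (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||a||^2
    grad = (2.0 / n) * (h.T @ resid) + 2.0 * lam * a
    a -= eta * grad

print("empirical L2 risk:", float(np.mean((features(x) @ a - y) ** 2)))
```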
no code implementations • 11 May 2022 • Michael Kohler, Benjamin Walter
Convolutional neural network image classifiers are defined, and the rate at which the misclassification risk of the estimates converges to the optimal misclassification risk is analyzed.
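A minimal sketch of the objects in this line, assuming PyTorch is available; the architecture, the toy data, and the cross-entropy training loss are illustrative choices, not the construction analyzed in the paper. It fits a small CNN classifier and evaluates its empirical misclassification risk $\frac{1}{n} \sum_i \mathbf{1}\{f(x_i) \neq y_i\}$.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy image data: class 1 iff the mean pixel intensity exceeds 0
n = 512
x = torch.randn(n, 1, 16, 16)
y = (x.mean(dim=(1, 2, 3)) > 0).long()

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 2),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Empirical misclassification risk of the fitted classifier
with torch.no_grad():
    pred = model(x).argmax(dim=1)
    risk = (pred != y).float().mean().item()
print(f"empirical misclassification risk: {risk:.3f}")
```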
no code implementations • 29 Nov 2021 • Iryna Gurevych, Michael Kohler, Gözde Gül Sahin
Pattern recognition based on a high-dimensional predictor is considered.
no code implementations • 20 Jul 2021 • Michael Kohler, Sophie Langer, Ulrich Reif
Estimation of a regression function from independent and identically distributed data is considered.
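For concreteness, the standard setting behind this line: given independent and identically distributed copies $(X_1, Y_1), \dots, (X_n, Y_n)$ of a random pair $(X, Y)$, the regression function

\[
  m(x) = \mathbf{E}\{ Y \mid X = x \}
\]

minimizes the $L_2$ risk $\mathbf{E}\{ (f(X) - Y)^2 \}$ over all measurable $f$, and the quality of an estimate $m_n$ is measured by the $L_2$ error

\[
  \int | m_n(x) - m(x) |^2 \, \mathbf{P}_X(dx).
\]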
no code implementations • 17 Dec 2020 • Sebastian Kersting, Michael Kohler
Uncertainty quantification of complex technical systems is often based on a computer model of the system.
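A minimal sketch of the idea: propagate input uncertainty through a computer model of the system by Monte Carlo. The model function and the Gaussian input distribution below are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def computer_model(theta):
    # hypothetical cheap stand-in for an expensive simulation code
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2

# Uncertain inputs, here assumed independent standard Gaussians
m = 10_000
theta = rng.standard_normal((m, 2))
out = computer_model(theta)

# Uncertainty quantification of the system's output distribution
print("mean:", out.mean())
print("95% quantile:", np.quantile(out, 0.95))
```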
no code implementations • 31 Oct 2020 • Michael Kohler, Adam Krzyzak
A regression problem with dependent data is considered.
2 code implementations • LREC 2020 • Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, Gregor Weber
To our knowledge, this is the largest audio corpus in the public domain for speech recognition, both in terms of number of hours and number of languages.
Tasks: Automatic Speech Recognition (ASR) +3
no code implementations • 29 Aug 2019 • Michael Kohler, Sophie Langer
Recent results in nonparametric regression show that deep learning, i.e., neural network estimates with many hidden layers, can circumvent the so-called curse of dimensionality provided suitable restrictions on the structure of the regression function hold.
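One illustrative example of such a structural restriction (a sketch, not the paper's exact assumption): the regression function is a hierarchical composition of low-dimensional smooth functions, e.g.

\[
  m(x) = g\bigl( f_1(x^{(1)}, x^{(2)}), \, f_2(x^{(3)}, x^{(4)}) \bigr),
\]

where $g$, $f_1$, $f_2$ are smooth and each depends on only two arguments, however large the ambient input dimension $d$ may be. Under assumptions of this kind the convergence rate is governed by the maximal number of arguments appearing in the composition rather than by $d$.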
no code implementations • 29 Aug 2019 • Michael Kohler, Adam Krzyzak, Sophie Langer
Consequently, the rate of convergence of the estimate does not depend on the input dimension $d$ but only on the local dimension $d^*$, so the DNNs are able to circumvent the curse of dimensionality whenever $d^*$ is much smaller than $d$.
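Schematically, and up to logarithmic factors (a hedged illustration of the statement above, not the paper's exact theorem): for a $p$-smooth regression function the classical optimal $L_2$ rate is

\[
  n^{-2p/(2p+d)},
\]

which deteriorates quickly as $d$ grows, whereas a rate whose exponent involves only the local dimension takes the form

\[
  n^{-2p/(2p+d^*)},
\]

which is nearly the low-dimensional rate when $d^*$ is much smaller than $d$.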