no code implementations • 28 Mar 2024 • Johannes Müller, Semih Çaycı, Guido Montúfar
Kakade's natural policy gradient method has been studied extensively in recent years, with linear convergence established both with and without regularization.
1 code implementation • 25 Feb 2023 • Johannes Müller, Marius Zeinhofer
We propose energy natural gradient descent, a natural gradient method with respect to a Hessian-induced Riemannian metric as an optimization algorithm for physics-informed neural networks (PINNs) and the deep Ritz method.
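The paper's energy natural gradient uses a Hessian-induced metric specific to PINN and deep Ritz losses, which is not reproduced here. As a rough illustration of the general idea — preconditioning the gradient with a model-induced Gram matrix — here is a minimal sketch of a Gauss-Newton-style natural gradient step on a toy least-squares problem; all names and the damping constant are hypothetical, not from the paper:

```python
import numpy as np

def natural_gradient_step(theta, jacobian, residual, lr=1.0, damping=1e-8):
    """One natural-gradient step: precondition the loss gradient with the
    Gram (Gauss-Newton) matrix G = J^T J induced by the model Jacobian."""
    G = jacobian.T @ jacobian + damping * np.eye(len(theta))
    grad = jacobian.T @ residual  # gradient of 0.5 * ||residual||^2
    return theta - lr * np.linalg.solve(G, grad)

# Toy linear model: residual(theta) = A @ theta - b, so the Jacobian is A.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)
theta = np.zeros(3)
theta = natural_gradient_step(theta, A, A @ theta - b)
```

For a linear residual this single preconditioned step recovers the least-squares solution (up to the damping term), which is what makes such methods attractive compared to plain gradient descent.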
no code implementations • 19 Dec 2022 • Sona John, Johannes Müller
These time scales, which seem to form a universal structure in the interplay of weak selection and life-history traits, allow us to reduce the infinite dimensional model to a one-dimensional modified replicator equation.
no code implementations • 3 Nov 2022 • Johannes Müller, Guido Montúfar
We study the convergence of several natural policy gradient (NPG) methods in infinite-horizon discounted Markov decision processes with regular policy parametrizations.
1 code implementation • 8 Aug 2022 • Alexander Tsaregorodtsev, Johannes Müller, Jan Strohbeck, Martin Herrmann, Michael Buchholz, Vasileios Belagiannis
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle with high-precision localization to capture a point cloud of the camera environment.
no code implementations • 30 Jun 2022 • Jesse van Oostrum, Johannes Müller, Nihat Ay
The natural gradient field is a vector field that lives on a model equipped with a distinguished Riemannian metric, e.g. the Fisher-Rao metric, and represents the direction of steepest ascent of an objective function on the model with respect to this metric.
no code implementations • 15 Jun 2022 • Thomas Griebel, Johannes Müller, Paul Geisler, Charlotte Hermann, Martin Herrmann, Michael Buchholz, Klaus Dietmayer
Therefore, this work presents a novel method for self-assessment of single-object tracking in clutter based on Kalman filtering and subjective logic.
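The subjective-logic layer of the method is paper-specific, but the Kalman filtering it builds on is standard. A minimal scalar predict/update cycle is sketched below; the innovation `nu` and its covariance `S` are exactly the statistics that self-assessment schemes monitor for consistency (all parameter values are hypothetical):

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update
    S = H * P_pred * H + R   # innovation covariance
    K = P_pred * H / S       # Kalman gain
    nu = z - H * x_pred      # innovation (measurement residual)
    x_new = x_pred + K * nu
    P_new = (1 - K * H) * P_pred
    return x_new, P_new, nu, S

# Track a roughly constant signal near 1.0 from noisy measurements.
x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98, 1.02]:
    x, P, nu, S = kalman_step(x, P, z)
```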
1 code implementation • 27 May 2022 • Johannes Müller, Guido Montúfar
Reward optimization in fully observable Markov decision processes is equivalent to a linear program over the polytope of state-action frequencies.
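One consequence of this LP view is that the optimum is attained at a vertex of the state-action polytope, and vertices correspond to deterministic policies. The sketch below illustrates this on a tiny MDP by computing the discounted state-action frequencies of each deterministic policy and picking the best — a brute-force stand-in for the linear program, with all numbers hypothetical:

```python
import numpy as np
from itertools import product

def occupancy_measure(P, policy, rho, gamma):
    """Discounted state-action frequencies mu(s, a) of a deterministic
    policy, obtained by solving the linear flow equations."""
    S, A, _ = P.shape
    Ppi = np.array([P[s, policy[s]] for s in range(S)])  # (S, S) transitions
    # Discounted state visitation: d = (1 - gamma) (I - gamma Ppi^T)^{-1} rho
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * Ppi.T, rho)
    mu = np.zeros((S, A))
    mu[np.arange(S), policy] = d
    return mu

def solve_by_vertex_enumeration(P, r, rho, gamma):
    """The LP optimum over the state-action polytope sits at a vertex,
    i.e. a deterministic policy -- so enumerate them."""
    S, A, _ = P.shape
    best = None
    for policy in product(range(A), repeat=S):
        mu = occupancy_measure(P, policy, rho, gamma)
        val = float((r * mu).sum())
        if best is None or val > best[0]:
            best = (val, policy, mu)
    return best

# Tiny 2-state, 2-action MDP; P[s, a, s'] are transition probabilities.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])
rho = np.array([0.5, 0.5])
val, policy, mu = solve_by_vertex_enumeration(P, r, rho, gamma=0.9)
```

Enumeration is exponential in the number of states; the point of the LP formulation is that a solver finds the same vertex in polynomial time.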
no code implementations • 29 Mar 2022 • Marcel Schloz, Johannes Müller, Thomas C. Pekin, Wouter Van den Broek, Christoph T. Koch
We present a method that lowers the dose required for a ptychographic reconstruction by adaptively scanning the specimen, thereby providing the required spatial information redundancy in the regions of highest importance.
no code implementations • 19 Dec 2021 • Johannes Müller, Aurelien Tellier, Michael Kurschilgen
Vaccination hesitancy is a major obstacle to achieving and maintaining herd immunity.
no code implementations • 13 Nov 2021 • Johannes Müller, Aurélien Tellier
In this context, it is of fundamental interest to generalize the replicator equation, which is at the heart of most population genomics models.
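For reference, the classical replicator equation that such models generalize — in its standard form, not the paper's specific extension — reads:

```latex
% Replicator dynamics for the frequency x_i of type i with fitness f_i:
\dot{x}_i = x_i \bigl( f_i(x) - \bar{f}(x) \bigr),
\qquad \bar{f}(x) = \sum_j x_j f_j(x),
```

so each type grows or shrinks according to how its fitness compares with the population mean.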
2 code implementations • ICLR 2022 • Johannes Müller, Guido Montúfar
We then describe the optimization problem as a linear optimization problem in the space of feasible state-action frequencies subject to polynomial constraints that we characterize explicitly.
no code implementations • 1 Mar 2021 • Johannes Müller, Marius Zeinhofer
Our results apply to arbitrary sets of ansatz functions and estimate the error in terms of the optimization accuracy, the approximation capabilities of the ansatz class and -- in the case of Dirichlet boundary values -- the penalization strength $\lambda$.
no code implementations • 10 Oct 2020 • Johannes Müller, Volker Hösel
We investigate a novel model for super-spreader events, not based on a heterogeneous contact graph but on a random contact rate: Many individuals become infected synchronously in single contact events.
no code implementations • 1 Jul 2020 • Thomas Griebel, Johannes Müller, Michael Buchholz, Klaus Dietmayer
Thus, by embedding classical Kalman filtering into subjective logic, our method additionally features an explicit measure for statistical uncertainty in the self-assessment.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Johannes Müller, Marius Zeinhofer
In these notes we use the notion of $\Gamma$-convergence to show that ReLU networks of growing architecture that are trained with respect to suitably regularised Dirichlet energies converge to the true solution of the Poisson problem.
no code implementations • 5 Nov 2019 • Johannes Müller, Martin Herrmann, Jan Strohbeck, Vasileios Belagiannis, Michael Buchholz
While classical approaches are sensor-specific and often need calibration targets as well as a widely overlapping field of view (FOV), within this work, a cooperative intelligent vehicle is used as calibration target.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Johannes Müller
This structure can be seen as the Euler discretisation of an associated ordinary differential equation (ODE) which is called a neural ODE.
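The correspondence between residual layers and Euler steps is easy to make concrete. Below is a minimal sketch (not the paper's architecture; the layer function and parameters are hypothetical): a residual network computes $x_{k+1} = x_k + h\,f(x_k, \theta_k)$, which is the explicit Euler discretisation of $\dot{x} = f(x, \theta(t))$ with step size $h$:

```python
import numpy as np

def f(x, theta):
    """A hypothetical layer function; tanh keeps the dynamics smooth."""
    return np.tanh(theta @ x)

def resnet_forward(x, thetas, h=1.0):
    """Residual network as explicit Euler: x_{k+1} = x_k + h * f(x_k, theta_k)."""
    for theta in thetas:
        x = x + h * f(x, theta)
    return x

x0 = np.array([0.5, -0.25])
thetas = [np.eye(2) * 0.1] * 4   # four identical residual blocks
out = resnet_forward(x0, thetas, h=1.0)
```

Shrinking $h$ while adding proportionally more blocks drives the network toward the flow of the associated neural ODE.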
2 code implementations • 29 Oct 2018 • Feng Wang, Alberto Eljarrat, Johannes Müller, Trond Henninen, Erni Rolf, Christoph Koch
We propose a novel neural network architecture highlighting fast convergence as a generic solution addressing image(s)-to-image(s) inverse problems of different domains.
no code implementations • 6 Jan 2014 • Noreen Jamil, Johannes Müller, Christof Lutteroth, Gerald Weber
Constraints are a powerful tool for specifying adaptable GUI layouts: they are used to specify a layout in a general form, and a constraint solver is used to find a satisfying concrete layout, e.g. for a specific GUI size.
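The "general form plus solver" idea can be shown in a few lines. A hypothetical two-pane layout inside a 900 px window is specified as linear constraints and solved generically (a toy stand-in for a real constraint solver, not the paper's method):

```python
import numpy as np

# Constraints on pane widths w1, w2:
#   w1 + w2     = 900   (the panes fill the window)
#   w1 - 2*w2   = 0     (left pane twice as wide as the right)
A = np.array([[1.0, 1.0],
              [1.0, -2.0]])
b = np.array([900.0, 0.0])
w1, w2 = np.linalg.solve(A, b)
```

Resizing the GUI only changes the right-hand side `b`; the same specification yields a new concrete layout, which is exactly the adaptability the constraint formulation buys.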