no code implementations • 4 Jan 2023 • Matthias Mitterreiter, Marcel Koch, Joachim Giesen, Sören Laue
The capsule neural network (CapsNet), introduced by Sabour, Frosst, and Hinton, is the first concrete implementation of the conceptual idea of a capsule neural network.
no code implementations • 19 Oct 2022 • Julien Klaus, Niklas Merk, Konstantin Wiedom, Sören Laue, Joachim Giesen
The Hessian of a differentiable convex function is positive semidefinite.
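This property is easy to check numerically on a concrete convex function. A minimal sketch, using the smooth convex log-sum-exp function, whose Hessian has the known closed form diag(p) - pp^T with p = softmax(x):

```python
import numpy as np

# Log-sum-exp is smooth and convex; its Hessian at any point x is
# diag(p) - p p^T with p = softmax(x), which must be positive semidefinite.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

x = np.array([0.5, -1.0, 2.0, 0.0])
p = softmax(x)
H = np.diag(p) - np.outer(p, p)

# All eigenvalues of the Hessian are (numerically) nonnegative.
eigvals = np.linalg.eigvalsh(H)
print(eigvals.min() >= -1e-12)  # True
```

The tiny tolerance absorbs floating-point rounding; analytically the smallest eigenvalue is exactly zero, since H annihilates the all-ones direction scaled appropriately.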
1 code implementation • 12 May 2022 • Mark Blacher, Joachim Giesen, Peter Sanders, Jan Wassenberg
Recent work has shown that Quicksort implementations using vectorized CPU instructions can outperform the non-vectorized algorithms in widespread use.
1 code implementation • 30 Mar 2022 • Sören Laue, Mark Blacher, Joachim Giesen
Here, we extend the GENO framework to also solve constrained optimization problems on the GPU.
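The kind of constrained problem such a solver targets can be sketched with a toy first-order method. The following projected-gradient loop is only an illustrative stand-in, not GENO's actual algorithm, solving min ||x - b||^2 subject to x >= 0 (whose solution is max(b, 0) componentwise):

```python
import numpy as np

# Toy constrained problem: min_x ||x - b||^2  s.t.  x >= 0.
# Projected gradient descent: take a gradient step, then project
# onto the feasible set by clamping negative entries to zero.
b = np.array([1.0, -2.0, 0.5, -0.1])
x = np.zeros_like(b)
for _ in range(100):
    grad = 2.0 * (x - b)
    x = np.maximum(x - 0.25 * grad, 0.0)  # step, then project onto x >= 0

print(np.allclose(x, np.maximum(b, 0.0)))  # True
```

The projection step is what makes the method respect the constraint at every iterate; for more general constraints one would project onto (or otherwise enforce) the corresponding feasible set.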
no code implementations • 7 Oct 2020 • Sören Laue, Matthias Mitterreiter, Joachim Giesen
This leaves two options: either change the underlying tensor representation in these frameworks, or develop a new, provably correct algorithm based on Einstein notation.
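Einstein notation expresses tensor expressions and their derivatives uniformly as index contractions. A small hand-worked sketch (not the paper's algorithm): the quadratic form f(x) = x^T A x and its known gradient (A + A^T) x, both written with `np.einsum` and checked against finite differences:

```python
import numpy as np

# Quadratic form and its gradient, written in Einstein notation.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
x = rng.standard_normal(5)

f = lambda x: np.einsum('i,ij,j->', x, A, x)      # f(x) = x^T A x
grad = np.einsum('ij,j->i', A + A.T, x)           # known gradient (A + A^T) x

# Central finite differences as an independent check.
eps = 1e-6
fd = np.array([(f(x + eps * np.eye(5)[i]) - f(x - eps * np.eye(5)[i])) / (2 * eps)
               for i in range(5)])
print(np.allclose(grad, fd, atol=1e-5))  # True
```

Because every operand and result carries explicit index labels, the same notation extends naturally to higher-order tensors, which is exactly where flat matrix notation breaks down.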
no code implementations • NeurIPS 2019 • Sören Laue, Matthias Mitterreiter, Joachim Giesen
The framework is flexible enough to encompass most classical machine learning problems.
no code implementations • 28 Jan 2019 • Frank Nussbaum, Joachim Giesen
In the case of only a few latent conditional Gaussian variables, these spurious interactions contribute an additional low-rank component to the interaction parameters of the observed Ising model.
no code implementations • NeurIPS 2018 • Sören Laue, Matthias Mitterreiter, Joachim Giesen
Optimization is an integral part of most machine learning systems and most numerical optimization schemes rely on the computation of derivatives.
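How a system can compute exact derivatives mechanically, rather than by hand or by finite differences, can be sketched with forward-mode automatic differentiation via dual numbers. This is a generic textbook construction, not the paper's method, and all names here are illustrative:

```python
# Minimal forward-mode automatic differentiation with dual numbers:
# each value carries its derivative, and arithmetic propagates both.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):  # product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with derivative 1 and read off the propagated derivative.
    return f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(derivative(f, 2.0))             # 14.0
```

The derivative is exact (up to floating point), which is why numerical optimization schemes build on automatic differentiation rather than finite differences.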
no code implementations • 7 Oct 2016 • Joachim Giesen, Sören Laue
We address the problem of solving convex optimization problems with many convex constraints in a distributed setting.
no code implementations • NeurIPS 2012 • Joachim Giesen, Jens Mueller, Sören Laue, Sascha Swiercy
We consider an abstract class of optimization problems that are parameterized concavely in a single parameter, and show that the solution path along the parameter can always be approximated with accuracy $\varepsilon >0$ by a set of size $O(1/\sqrt{\varepsilon})$.