Search Results for author: Joachim Giesen

Found 10 papers, 2 papers with code

Why Capsule Neural Networks Do Not Scale: Challenging the Dynamic Parse-Tree Assumption

no code implementations 4 Jan 2023 Matthias Mitterreiter, Marcel Koch, Joachim Giesen, Sören Laue

The capsule neural network (CapsNet), by Sabour, Frosst, and Hinton, is the first actual implementation of the conceptual idea of capsule neural networks.
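
For readers unfamiliar with the routing step behind the dynamic parse-tree assumption, here is a minimal, hedged NumPy sketch of routing-by-agreement; the array shapes, the squash helper, and the iteration count are illustrative assumptions, not the paper's code.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Squashing non-linearity: short vectors shrink toward 0, long ones toward unit length.
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def routing_by_agreement(u_hat, num_iters=3):
    # u_hat: predictions of lower-level capsules for higher-level capsules,
    #        shape (num_lower, num_upper, dim_upper).
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))                      # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients (softmax over upper capsules)
        s = np.einsum('ij,ijk->jk', c, u_hat)                 # weighted sum per upper capsule
        v = squash(s)                                         # upper-capsule outputs
        b = b + np.einsum('ijk,jk->ij', u_hat, v)             # increase logits where prediction and output agree
    return v, c

# Toy example: 8 lower capsules routing into 3 upper capsules of dimension 4.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 3, 4))
v, c = routing_by_agreement(u_hat)
print(v.shape, c.sum(axis=1))   # (3, 4), and each row of c sums to 1
```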

Convexity Certificates from Hessians

no code implementations 19 Oct 2022 Julien Klaus, Niklas Merk, Konstantin Wiedom, Sören Laue, Joachim Giesen

The Hessian of a differentiable convex function is positive semidefinite.
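
A small toy check of this fact (not the authors' certificate tool): log-sum-exp is convex, its Hessian at any point is diag(p) - p p^T with p the softmax of that point, and that matrix is numerically positive semidefinite.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# log-sum-exp is convex; its Hessian is diag(p) - p p^T with p = softmax(x),
# which must therefore be positive semidefinite at every x.
x = np.array([0.3, -1.2, 2.0])
p = softmax(x)
H = np.diag(p) - np.outer(p, p)

print(np.linalg.eigvalsh(H).min() >= -1e-12)   # True: numerically PSD
```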

Vectorized and performance-portable Quicksort

1 code implementation 12 May 2022 Mark Blacher, Joachim Giesen, Peter Sanders, Jan Wassenberg

Recent work has shown that implementations of Quicksort using vector CPU instructions can outperform the non-vectorized algorithms in widespread use.
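
The paper works at the level of SIMD CPU instructions; the following NumPy sketch is only a rough, hedged illustration of the data-parallel, branch-free partitioning idea, not the authors' algorithm.

```python
import numpy as np

def vectorized_quicksort(a):
    # Recursive Quicksort whose partition step is expressed as whole-array
    # (data-parallel) operations instead of a scalar loop with branches.
    a = np.asarray(a)
    if a.size <= 1:
        return a
    pivot = a[a.size // 2]
    left = a[a < pivot]        # branch-free selection via boolean masks
    mid = a[a == pivot]
    right = a[a > pivot]
    return np.concatenate([vectorized_quicksort(left), mid, vectorized_quicksort(right)])

rng = np.random.default_rng(1)
data = rng.integers(0, 1000, size=2000)
assert np.array_equal(vectorized_quicksort(data), np.sort(data))
```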

Optimization for Classical Machine Learning Problems on the GPU

1 code implementation 30 Mar 2022 Sören Laue, Mark Blacher, Joachim Giesen

Here, we extend the GENO framework to also solve constrained optimization problems on the GPU.

BIG-bench Machine Learning
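
GENO generates solvers from a declarative problem description; the snippet below is only a generic projected-gradient sketch for one constrained problem (non-negative least squares), not GENO's API or its GPU implementation.

```python
import numpy as np

def nnls_projected_gradient(A, b, steps=500):
    # Non-negative least squares: minimize ||Ax - b||^2 subject to x >= 0,
    # solved by gradient steps followed by projection onto the feasible set.
    L = np.linalg.norm(A, 2) ** 2              # spectral norm squared bounds the curvature
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - b)
        x = np.maximum(x - grad / (2 * L), 0.0)  # projection: clip to the constraint x >= 0
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)
x = nnls_projected_gradient(A, b)
print(x.min() >= 0.0)   # constraint satisfied
```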

A Simple and Efficient Tensor Calculus for Machine Learning

no code implementations 7 Oct 2020 Sören Laue, Matthias Mitterreiter, Joachim Giesen

This leaves two options: either to change the underlying tensor representation in these frameworks, or to develop a new, provably correct algorithm based on Einstein notation.

BIG-bench Machine Learning
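
As a small, assumption-based illustration of derivatives in Einstein notation (not the paper's algorithm): the quadratic form f(x) = x_i A_ij x_j has gradient g_i = A_ij x_j + A_ji x_j, which np.einsum expresses directly.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
x = rng.normal(size=4)

# f(x) = x_i A_ij x_j in Einstein notation
f = np.einsum('i,ij,j->', x, A, x)

# Its gradient, also in Einstein notation: g_i = A_ij x_j + A_ji x_j
grad = np.einsum('ij,j->i', A, x) + np.einsum('ji,j->i', A, x)

# Check against central finite differences
h = 1e-6
fd = np.array([
    (np.einsum('i,ij,j->', x + h * e, A, x + h * e)
     - np.einsum('i,ij,j->', x - h * e, A, x - h * e)) / (2 * h)
    for e in np.eye(4)
])
print(np.allclose(grad, fd, atol=1e-4))   # True
```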

Ising Models with Latent Conditional Gaussian Variables

no code implementations 28 Jan 2019 Frank Nussbaum, Joachim Giesen

In the case of only a few latent conditional Gaussian variables, these spurious interactions contribute an additional low-rank component to the interaction parameters of the observed Ising model.
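
A hedged toy construction of that statement: with k latent variables, the induced spurious interactions can be written as a rank-k term added to the sparse matrix of direct interactions (illustrative only, not the paper's estimator).

```python
import numpy as np

rng = np.random.default_rng(4)
p, k = 20, 2                      # 20 observed variables, 2 latent ones

# Sparse "direct" interactions among the observed variables
S = np.zeros((p, p))
idx = rng.choice(p * p, size=15, replace=False)
S.flat[idx] = rng.normal(size=15)
S = (S + S.T) / 2

# Spurious interactions induced by marginalizing out k latent variables: a rank-k term
B = rng.normal(size=(p, k))       # couplings between observed and latent variables
L = B @ B.T

Theta = S - L                     # observed interactions = sparse part plus a low-rank component
print(np.linalg.matrix_rank(L))   # 2, i.e. at most the number of latent variables
```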

Computing Higher Order Derivatives of Matrix and Tensor Expressions

no code implementations NeurIPS 2018 Soeren Laue, Matthias Mitterreiter, Joachim Giesen

Optimization is an integral part of most machine learning systems and most numerical optimization schemes rely on the computation of derivatives.

BIG-bench Machine Learning
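
A hedged toy example of a matrix expression and its second derivative (the paper derives such expressions automatically; this is only a manual check): for f(x) = ||Ax - b||^2 the Hessian is the constant matrix 2 A^T A.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 3))
b = rng.normal(size=6)

f = lambda x: np.sum((A @ x - b) ** 2)

# Closed-form second derivative of the matrix expression
H_closed = 2 * A.T @ A

# Finite-difference Hessian for comparison
n, h = 3, 1e-5
H_fd = np.zeros((n, n))
x0 = rng.normal(size=n)
for i in range(n):
    for j in range(n):
        ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
        H_fd[i, j] = (f(x0 + ei + ej) - f(x0 + ei - ej)
                      - f(x0 - ei + ej) + f(x0 - ei - ej)) / (4 * h * h)

print(np.allclose(H_closed, H_fd, atol=1e-3))   # True
```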

Distributed Convex Optimization with Many Convex Constraints

no code implementations 7 Oct 2016 Joachim Giesen, Sören Laue

We address the problem of solving convex optimization problems with many convex constraints in a distributed setting.
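
As a hedged sketch of one way constraint handling can be split across workers (a simple quadratic-penalty scheme chosen for illustration, not the algorithm proposed in the paper): each worker owns a block of linear inequality constraints and contributes the penalty gradient for its block.

```python
import numpy as np

def worker_penalty_grad(x, A_block, b_block, rho):
    # Gradient of (rho/2) * sum_i max(0, a_i^T x - b_i)^2 over this worker's constraint block.
    viol = np.maximum(A_block @ x - b_block, 0.0)
    return rho * A_block.T @ viol

rng = np.random.default_rng(6)
n, m, num_workers = 5, 40, 4
A = rng.normal(size=(m, n))
b = rng.normal(size=m) + 1.0
c = rng.normal(size=n)
blocks = np.array_split(np.arange(m), num_workers)    # constraints partitioned across workers

# Minimize ||x - c||^2 subject to A x <= b via gradient descent on a quadratic-penalty objective.
rho = 10.0
lr = 1.0 / (2.0 + rho * np.linalg.norm(A, 2) ** 2)    # step size below the gradient's Lipschitz bound
x = np.zeros(n)
for _ in range(5000):
    grad = 2.0 * (x - c)
    for blk in blocks:                                # in a real system these gradients come from parallel workers
        grad = grad + worker_penalty_grad(x, A[blk], b[blk], rho)
    x = x - lr * grad

print(np.max(A @ x - b))   # near zero or negative: the constraints are approximately satisfied
```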

Approximating Concavely Parameterized Optimization Problems

no code implementations NeurIPS 2012 Joachim Giesen, Jens Mueller, Soeren Laue, Sascha Swiercy

We consider an abstract class of optimization problems that are parameterized concavely in a single parameter, and show that the solution path along the parameter can always be approximated with accuracy $\varepsilon >0$ by a set of size $O(1/\sqrt{\varepsilon})$.

Matrix Completion
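
A hedged illustration of a parameterized solution path (a fixed grid for ridge regression, whose optimal value is concave in the regularization parameter; this is not the paper's adaptive $O(1/\sqrt{\varepsilon})$ scheme).

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(30, 8))
b = rng.normal(size=30)

def ridge(lmbda):
    # Exact ridge solution for regularization parameter lmbda.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lmbda * np.eye(n), A.T @ b)

def objective(x, lmbda):
    return np.sum((A @ x - b) ** 2) + lmbda * np.sum(x ** 2)

# Precompute the path at a small set of parameter values ...
grid = np.geomspace(1e-2, 1e2, 12)
path = [ridge(l) for l in grid]

# ... and measure the worst suboptimality when reusing the nearest precomputed solution.
worst = 0.0
for l in np.geomspace(1e-2, 1e2, 400):
    j = np.argmin(np.abs(np.log(grid) - np.log(l)))   # nearest precomputed parameter (log scale)
    gap = objective(path[j], l) - objective(ridge(l), l)
    worst = max(worst, gap)
print(worst)   # small: a few path points already approximate the whole path well
```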
