no code implementations • 4 Apr 2024 • Michael Sucker, Jalal Fadili, Peter Ochs
We use the PAC-Bayesian theory for the setting of learning-to-optimize.
1 code implementation • 16 Nov 2023 • Severin Maier, Camille Castera, Peter Ochs
We introduce an autonomous system with closed-loop damping for first-order convex optimization.
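For orientation, the generic shape of such a damped system is (a sketch of the general form only; the specific closed-loop damping law is the contribution of the paper and is not reproduced here): $\ddot{x}(t) + \gamma\big(x(t),\dot{x}(t)\big)\,\dot{x}(t) + \nabla f\big(x(t)\big) = 0$, where $f$ is the convex objective and the damping coefficient $\gamma$ is fed back from the current state rather than prescribed as an explicit function of time, which is what makes the damping "closed-loop" and the system autonomous.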
no code implementations • 20 Oct 2022 • Michael Sucker, Peter Ochs
We apply the PAC-Bayes theory to the setting of learning-to-optimize.
no code implementations • 5 Aug 2022 • Sheheryar Mehmood, Peter Ochs
A large class of practical non-smooth optimization problems can be written as the minimization of a sum of smooth and partly smooth functions.
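A standard illustrative instance (my example, not taken from the abstract) is the LASSO objective $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$: the quadratic data term is smooth, while the $\ell_1$-norm is partly smooth relative to the manifold of vectors with a fixed support.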
no code implementations • 24 Dec 2020 • Mahesh Chandra Mukkamala, Jalal Fadili, Peter Ochs
We fix this issue by proposing the MAP property, which generalizes the $L$-smad property and is also valid for a large class of nonconvex nonsmooth composite problems.
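For context, a paraphrase of the standard definition (not quoted from this paper): a pair $(f,h)$ satisfies the $L$-smad (smooth adaptable) property when $Lh - f$ (and, for two-sided estimates, $Lh + f$) is convex, i.e. smoothness of $f$ is measured relative to a Bregman kernel $h$ instead of via a Lipschitz continuous gradient.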
no code implementations • 18 Aug 2020 • Amirhossein Kardoost, Kalun Ho, Peter Ochs, Margret Keuper
We evaluate our method on the well-known motion segmentation datasets FBMS59 and DAVIS16.
no code implementations • 8 Oct 2019 • Mahesh Chandra Mukkamala, Felix Westerkamp, Emanuel Laude, Daniel Cremers, Peter Ochs
This initiated the development of the Bregman proximal gradient (BPG) algorithm and an inertial (momentum-based) variant, CoCaIn BPG, both of which, however, rely on problem-dependent Bregman distances.
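As a reminder of the standard update (generic form, not specific to this paper), one BPG step with step size $\tau$ and kernel $h$ reads $x^{k+1} \in \arg\min_x \; g(x) + \langle \nabla f(x^k), x - x^k \rangle + \tfrac{1}{\tau} D_h(x, x^k)$, where $D_h(x,y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle$ is the Bregman distance generated by $h$; the choice $h = \tfrac{1}{2}\|\cdot\|^2$ recovers the classical proximal gradient step.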
2 code implementations • NeurIPS 2019 • Mahesh Chandra Mukkamala, Peter Ochs
Matrix Factorization is a popular non-convex optimization problem, for which alternating minimization schemes are most commonly used.
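For reference, a minimal sketch of the alternating minimization baseline mentioned here, applied to $\min_{U,V}\|A - UV^\top\|_F^2$ (an illustration of the standard scheme, not the algorithm proposed in the paper; all names are my own):

```python
import numpy as np

def alternating_minimization(A, rank, iters=100, ridge=1e-8):
    """Alternate exact least-squares updates for U and V in A ~ U @ V.T."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = ridge * np.eye(rank)                      # small ridge keeps the solves well-posed
    for _ in range(iters):
        U = A @ V @ np.linalg.inv(V.T @ V + I)    # exact minimizer over U with V fixed
        V = A.T @ U @ np.linalg.inv(U.T @ U + I)  # exact minimizer over V with U fixed
    return U, V
```

Each half-step is a convex least-squares problem, but the joint problem remains non-convex.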
2 code implementations • 6 Apr 2019 • Mahesh Chandra Mukkamala, Peter Ochs, Thomas Pock, Shoham Sabach
Backtracking line-search is an old yet powerful strategy for finding better step sizes to be used in proximal gradient algorithms.
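A minimal sketch of the classical backtracking scheme referred to here (illustrative only; not the new line search proposed in the paper, and all function names are my own):

```python
import numpy as np

def prox_grad_step(x, f, grad_f, prox_g, t=1.0, beta=0.5, max_backtracks=50):
    """One proximal-gradient step with Armijo-style backtracking on the step size t."""
    fx, gx = f(x), grad_f(x)
    for _ in range(max_backtracks):
        x_new = prox_g(x - t * gx, t)          # proximal step on the non-smooth part g
        d = x_new - x
        # accept t once the quadratic upper bound on the smooth part f holds
        if f(x_new) <= fx + gx @ d + (d @ d) / (2 * t):
            break
        t *= beta                              # otherwise shrink the step and retry
    return x_new, t

# Hypothetical usage on a small LASSO problem: f smooth quadratic, g = lam * ||.||_1.
A, b, lam = np.random.randn(20, 5), np.random.randn(20), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
x, t = prox_grad_step(np.zeros(5), f, grad_f, prox_g)
```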
no code implementations • 23 Jan 2019 • Yura Malitsky, Peter Ochs
The Conditional Gradient Method is generalized to a class of non-smooth non-convex optimization problems with many applications in machine learning.
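For reference, the classical conditional gradient (Frank-Wolfe) iteration being generalized, in its standard smooth, constrained form (a sketch of the classical method, not of the paper's non-smooth non-convex extension):

```python
import numpy as np

def conditional_gradient(grad_f, lmo, x0, iters=100):
    """Classical Frank-Wolfe: linear minimization oracle + convex combination step.

    lmo(g) must return argmin_{s in C} <g, s> over the feasible set C.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        s = lmo(grad_f(x))               # linear minimization oracle over C
        gamma = 2.0 / (k + 2.0)          # standard open-loop step size
        x = (1 - gamma) * x + gamma * s  # stay feasible by convex combination
    return x

# Hypothetical usage: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.3, -0.2, 0.9])
grad_f = lambda x: 2 * (x - c)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]   # simplex vertex with the smallest gradient entry
x_star = conditional_gradient(grad_f, lmo, np.ones(3) / 3)
```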
1 code implementation • ECCV 2018 • Peter Ochs, Tim Meinhardt, Laura Leal-Taixe, Michael Moeller
A lifting layer increases the dimensionality of the input, naturally yields a linear spline when combined with a fully connected layer, and therefore closes the gap between low- and high-dimensional approximation problems.
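A minimal 1-D sketch of that idea (my illustration under the stated reading of the abstract, not the paper's implementation): the scalar input is re-expressed by barycentric coordinates with respect to a grid of labels, so that a following linear layer evaluates a continuous piecewise-linear function with knots at those labels.

```python
import numpy as np

def lift(x, labels):
    """Lift scalar inputs onto barycentric coordinates w.r.t. neighboring labels."""
    x = np.clip(x, labels[0], labels[-1])
    i = np.searchsorted(labels, x, side='right') - 1
    i = np.clip(i, 0, len(labels) - 2)
    w = (x - labels[i]) / (labels[i + 1] - labels[i])   # position between the two knots
    lifted = np.zeros((np.size(x), len(labels)))
    rows = np.arange(np.size(x))
    lifted[rows, i] = 1 - w
    lifted[rows, i + 1] = w
    return lifted

labels = np.linspace(-1, 1, 5)          # knots of the resulting spline
theta = np.random.randn(len(labels))    # weights of the following linear layer
x = np.array([-0.7, 0.1, 0.9])
y = lift(x, labels) @ theta             # continuous piecewise-linear function of x
```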
no code implementations • 18 Apr 2014 • Peter Ochs, Yunjin Chen, Thomas Brox, Thomas Pock
A rigorous analysis of the algorithm for the proposed class of problems yields global convergence of the function values and the arguments.
no code implementations • CVPR 2013 • Peter Ochs, Alexey Dosovitskiy, Thomas Brox, Thomas Pock
Here we extend the problem class to linearly constrained optimization of a Lipschitz continuous function that is the sum of a convex function and a function which is concave and increasing on the non-negative orthant (possibly non-convex and non-concave on the whole space).