no code implementations • ICML 2020 • Sebastian Pokutta, Marc Pfetsch

Recently non-convex optimization approaches for solving machine learning problems have gained significant attention.

1 code implementation • 3 Jun 2024 • Jan Pauls, Max Zimmer, Una M. Kelly, Martin Schwartz, Sassan Saatchi, Philippe Ciais, Sebastian Pokutta, Martin Brandt, Fabian Gieseke

We propose a framework for global-scale canopy height estimation based on satellite data.

no code implementations • 19 Mar 2024 • Konrad Mundinger, Max Zimmer, Sebastian Pokutta

We introduce Neural Parameter Regression (NPR), a novel framework specifically developed for learning solution operators in Partial Differential Equations (PDEs).

1 code implementation • 19 Feb 2024 • Christophe Roux, Max Zimmer, Sebastian Pokutta

In this work, we study the performance of such approaches in the byzantine setting, where a subset of the clients act in an adversarial manner aiming to disrupt the learning process.

no code implementations • 23 Dec 2023 • Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta

Neural Networks can be efficiently compressed through pruning, significantly reducing storage and computational demands while maintaining predictive performance.
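
The pruning idea can be sketched in a few lines. This is an illustrative toy, not the authors' method: one-shot magnitude pruning zeroes the fraction of weights with the smallest absolute value until a target sparsity is reached.

```python
# Toy one-shot magnitude pruning (illustrative sketch, not the paper's
# algorithm): zero out the fraction `sparsity` of the weights with the
# smallest absolute value. Ties at the threshold are all zeroed.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest |w| set to 0."""
    k = int(len(weights) * sparsity)          # number of weights to drop
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(magnitude_prune(layer, 0.5))  # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In practice the threshold is chosen per layer or globally across the network, and accuracy is then recovered by retraining or lightweight correction.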

1 code implementation • 29 Nov 2023 • Shpresim Sadiku, Moritz Wagner, Sebastian Pokutta

However, crafting such attacks poses an optimization challenge, as it involves computing norms for groups of pixels within a non-convex objective.

1 code implementation • 29 Jun 2023 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta

Model soups (Wortsman et al., 2022) enhance generalization and out-of-distribution (OOD) performance by averaging the parameters of multiple models into a single one, without increasing inference time.
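
The core soup operation is just an element-wise average of model parameters. A minimal sketch, using plain lists in place of framework tensors (names and data are ours):

```python
# "Uniform soup" sketch: average the parameters of several fine-tuned
# models element-wise into a single parameter vector. Real soups
# average full checkpoints of models fine-tuned from a shared init.

def uniform_soup(param_sets):
    """Average a list of equal-length parameter vectors."""
    n = len(param_sets)
    return [sum(ps[i] for ps in param_sets) / n
            for i in range(len(param_sets[0]))]

models = [[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]]
print(uniform_soup(models))  # → [2.0, 2.0]
```

A "greedy" soup variant adds models one at a time only if held-out accuracy improves; the uniform average above is the simplest case.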

no code implementations • 25 May 2023 • David Martínez-Rubio, Christophe Roux, Christopher Criscitiello, Sebastian Pokutta

In this work, we study optimization problems of the form $\min_x \max_y f(x, y)$, where $f(x, y)$ is defined on a product Riemannian manifold $\mathcal{M} \times \mathcal{N}$ and is $\mu_x$-strongly geodesically convex (g-convex) in $x$ and $\mu_y$-strongly g-concave in $y$, for $\mu_x, \mu_y \geq 0$.

no code implementations • 4 Apr 2023 • Antonia Chmiela, Ambros Gleixner, Pawel Lichocki, Sebastian Pokutta

In this work, we propose an online learning approach that adapts the application of heuristics towards the single instance at hand.

no code implementations • 26 Nov 2022 • David Martínez-Rubio, Sebastian Pokutta

For smooth functions, we show that the prox step can be implemented inexactly with first-order methods in Riemannian balls of a diameter that suffices for global accelerated optimization.

1 code implementation • 23 Aug 2022 • Deborah Hendrych, Hannah Troppens, Mathieu Besançon, Sebastian Pokutta

These relaxations are solved with a Frank-Wolfe algorithm over the convex hull of the mixed-integer feasible points rather than over the continuous relaxation, using calls to a mixed-integer linear solver as the linear minimization oracle.

1 code implementation • 4 Jul 2022 • Elias Wirth, Hiroshi Kera, Sebastian Pokutta

The vanishing ideal of a set of points $X = \{\mathbf{x}_1, \ldots, \mathbf{x}_m\}\subseteq \mathbb{R}^n$ is the set of polynomials that evaluate to $0$ over all points $\mathbf{x} \in X$ and admits an efficient representation by a finite subset of generators.
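
A concrete toy instance of the definition (our example, not from the paper): for points sampled from the unit circle, the polynomial $g(x, y) = x^2 + y^2 - 1$ evaluates to $0$ on every point, so it lies in the vanishing ideal of that point set.

```python
# Check that g(x, y) = x^2 + y^2 - 1 vanishes on points of the unit
# circle, i.e., that g belongs to the vanishing ideal of those points.
import math

points = [(math.cos(t), math.sin(t)) for t in (0.0, 1.0, 2.5)]

def g(x, y):
    return x * x + y * y - 1.0

vals = [g(px, py) for px, py in points]
print(all(abs(v) < 1e-9 for v in vals))  # → True
```

Algorithms like the one in this entry search for such generators automatically from the data, rather than being handed them.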

1 code implementation • 1 Jun 2022 • Stephan Wäldchen, Kartikey Sharma, Berkant Turan, Max Zimmer, Sebastian Pokutta

We propose an interactive multi-agent classifier that provides provable interpretability guarantees even for complex agents such as neural networks.

1 code implementation • 24 May 2022 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta

Many existing Neural Network pruning approaches rely on either retraining or inducing a strong bias in order to converge to a sparse solution throughout training.

2 code implementations • 4 Mar 2022 • Maxime Gasse, Quentin Cappart, Jonas Charfreitag, Laurent Charlin, Didier Chételat, Antonia Chmiela, Justin Dumouchelle, Ambros Gleixner, Aleksandr M. Kazachkov, Elias Khalil, Pawel Lichocki, Andrea Lodi, Miles Lubin, Chris J. Maddison, Christopher Morris, Dimitri J. Papageorgiou, Augustin Parjadis, Sebastian Pokutta, Antoine Prouvost, Lara Scavuzzo, Giulia Zarpellon, Linxin Yang, Sha Lai, Akang Wang, Xiaodong Luo, Xiang Zhou, Haohan Huang, Shengcheng Shao, Yuanming Zhu, Dong Zhang, Tao Quan, Zixuan Cao, Yang Xu, Zhewei Huang, Shuchang Zhou, Chen Binbin, He Minggui, Hao Hao, Zhang Zhiyu, An Zhiwu, Mao Kun

Combinatorial optimization is a well-established area in operations research and computer science.

no code implementations • 23 Feb 2022 • Stephan Wäldchen, Felix Huber, Sebastian Pokutta

Given only a standard classifier function, it is unclear how partial input should be realised.

1 code implementation • 7 Feb 2022 • Elias Wirth, Sebastian Pokutta

To accommodate the noise in the data set, we introduce the Conditional Gradients Approximately Vanishing Ideal algorithm (CGAVI) for the construction of the set of generators of the approximately vanishing ideal.

1 code implementation • 1 Nov 2021 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta

Many Neural Network Pruning approaches consist of several iterative training and pruning steps, seemingly losing a significant amount of their performance after pruning and then recovering it in the subsequent retraining phase.

1 code implementation • 15 Oct 2021 • Jan Macdonald, Mathieu Besançon, Sebastian Pokutta

We study the effects of constrained optimization formulations and Frank-Wolfe algorithms for obtaining interpretable neural network predictions.

1 code implementation • NeurIPS 2021 • Alejandro Carderera, Mathieu Besançon, Sebastian Pokutta

Generalized self-concordance is a key property present in the objective function of many important learning problems.

no code implementations • 28 May 2021 • Christophe Roux, Elias Wirth, Sebastian Pokutta, Thomas Kerdreux

Several learning problems involve solving min-max problems, e.g., empirical distributional robust learning or learning with non-standard aggregated losses.

1 code implementation • NeurIPS 2021 • Antonia Chmiela, Elias Boutros Khalil, Ambros Gleixner, Andrea Lodi, Sebastian Pokutta

Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on two classes of challenging instances.

1 code implementation • NeurIPS 2021 • Antonia Chmiela, Elias B. Khalil, Ambros Gleixner, Andrea Lodi, Sebastian Pokutta

In this work, we propose the first data-driven framework for scheduling heuristics in an exact MIP solver.

no code implementations • 10 Mar 2021 • Thomas Kerdreux, Christophe Roux, Alexandre d'Aspremont, Sebastian Pokutta

Linear bandit algorithms yield $\tilde{\mathcal{O}}(n\sqrt{T})$ pseudo-regret bounds on compact convex action sets $\mathcal{K}\subset\mathbb{R}^n$, and two types of structural assumptions lead to better pseudo-regret bounds.

no code implementations • 12 Feb 2021 • Alejandro Carderera, Jelena Diakonikolas, Cheuk Yin Lin, Sebastian Pokutta

Projection-free conditional gradient (CG) methods are the algorithms of choice for constrained optimization setups in which projections are often computationally prohibitive but linear optimization over the constraint set remains computationally feasible.

no code implementations • 9 Feb 2021 • Thomas Kerdreux, Alexandre d'Aspremont, Sebastian Pokutta

We review various characterizations of uniform convexity and smoothness on norm balls in finite-dimensional spaces and connect results stemming from the geometry of Banach spaces with \textit{scaling inequalities} used in analysing the convergence of optimization methods.

no code implementations • 27 Jan 2021 • Sebastian Pokutta, Huan Xu

We revisit the concept of "adversary" in online learning, motivated by solving robust optimization and adversarial training using online learning methods.

1 code implementation • 25 Jan 2021 • Cyrille W. Combettes, Sebastian Pokutta

The Frank-Wolfe algorithm is a method for constrained optimization that relies on linear minimizations, as opposed to projections.
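
A minimal sketch of the method over the probability simplex, where the linear minimization oracle reduces to a coordinate argmin and no projection is ever computed (toy objective and data are ours, not from the paper):

```python
# Frank-Wolfe over the probability simplex for f(x) = 0.5 * ||x - b||^2.
# The linear minimization oracle returns the vertex e_s with the
# smallest gradient coordinate, so each step is projection-free.

def frank_wolfe_simplex(b, steps=200):
    n = len(b)
    x = [1.0 / n] * n                             # start at the barycenter
    for t in range(steps):
        grad = [x[i] - b[i] for i in range(n)]    # gradient of f at x
        s = min(range(n), key=lambda i: grad[i])  # LMO: argmin <grad, e_i>
        gamma = 2.0 / (t + 2)                     # standard FW step size
        x = [(1.0 - gamma) * xi for xi in x]
        x[s] += gamma                             # move toward vertex e_s
    return x

b = [0.2, 0.5, 0.3]                               # target lies in the simplex
x = frank_wolfe_simplex(b)
print([round(v, 3) for v in x])
```

Each iterate is a convex combination of simplex vertices by construction, so feasibility is maintained for free.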

1 code implementation • 7 Jan 2021 • Alejandro Carderera, Sebastian Pokutta, Christof Schütte, Martin Weiser

Governing equations are essential to the study of nonlinear dynamics, often enabling the prediction of previously unseen behaviors as well as the inclusion into control strategies.

no code implementations • 6 Jan 2021 • Gábor Braun, Sebastian Pokutta

In this note we observe that for constrained convex minimization problems $\min_{x \in P}f(x)$ over a polytope $P$, the dual prices of the linear program $\min_{z \in P} \nabla f(x) z$ obtained by linearization at an approximately optimal solution $x$ admit the same rate-of-change interpretation of the optimal value as in linear programming, yielding a convex form of sensitivity analysis.

1 code implementation • 14 Oct 2020 • Sebastian Pokutta, Christoph Spiegel, Max Zimmer

In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants.

1 code implementation • 29 Sep 2020 • Cyrille W. Combettes, Christoph Spiegel, Sebastian Pokutta

The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set.

1 code implementation • NeurIPS 2020 • Hassan Mortagy, Swati Gupta, Sebastian Pokutta

We combine these insights into a novel Shadow-CG method that uses FW and shadow steps, while enjoying linear convergence, with a rate that depends on the number of breakpoints in its projection curve, rather than the pyramidal width.

1 code implementation • ICML 2020 • Cyrille W. Combettes, Sebastian Pokutta

The Frank-Wolfe algorithm has become a popular first-order optimization algorithm because it is simple and projection-free, and it has been successfully applied to a variety of real-world problems.

1 code implementation • 20 Feb 2020 • Alejandro Carderera, Sebastian Pokutta

Constrained second-order convex optimization algorithms are the method of choice when a high accuracy solution to a problem is needed, due to their local quadratic convergence.

1 code implementation • 11 Nov 2019 • Cyrille W. Combettes, Sebastian Pokutta

The approximate Carathéodory theorem states that given a compact convex set $\mathcal{C}\subset\mathbb{R}^n$ and $p\in\left[2,+\infty\right[$, each point $x^*\in\mathcal{C}$ can be approximated to $\epsilon$-accuracy in the $\ell_p$-norm as the convex combination of $\mathcal{O}(pD_p^2/\epsilon^2)$ vertices of $\mathcal{C}$, where $D_p$ is the diameter of $\mathcal{C}$ in the $\ell_p$-norm.
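
The Frank-Wolfe connection behind this result can be sketched directly: running FW on $f(x) = \frac{1}{2}\|x - x^*\|^2$ over $\mathcal{C}$ picks up at most one new vertex per iteration, so $t$ steps yield a convex combination of at most $t+1$ vertices approximating $x^*$. The square and target below are our toy example.

```python
# Approximate Caratheodory via Frank-Wolfe (illustrative sketch): each
# iteration adds at most one vertex of conv(V), so the iterate is a
# sparse convex combination approximating the target point.

def approx_caratheodory(vertices, target, steps):
    n = len(target)
    x = list(vertices[0])                          # start at one vertex
    used = {0}                                     # indices of vertices used
    for t in range(steps):
        grad = [x[i] - target[i] for i in range(n)]
        s = min(range(len(vertices)),              # LMO: argmin <grad, v>
                key=lambda j: sum(grad[i] * vertices[j][i] for i in range(n)))
        used.add(s)
        gamma = 2.0 / (t + 2)
        x = [(1.0 - gamma) * x[i] + gamma * vertices[s][i] for i in range(n)]
    return x, used

square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
x, used = approx_caratheodory(square, (0.25, 0.75), steps=200)
print([round(c, 2) for c in x], sorted(used))
```

The paper's contribution is sharper, norm-dependent cardinality bounds; this sketch only shows the mechanism.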

no code implementations • 25 Sep 2019 • Christopher Mutschler, Sebastian Pokutta

This generates pairs of state encodings, i.e., a new representation from the environment and a (biased) old representation from the forward model, that allow us to bootstrap a neural network model for state translation.

no code implementations • 19 Jun 2019 • Jelena Diakonikolas, Alejandro Carderera, Sebastian Pokutta

As such, they are frequently used in solving smooth convex optimization problems over polytopes, for which the computational cost of orthogonal projections would be prohibitive.

no code implementations • NeurIPS 2019 • Cyrille W. Combettes, Sebastian Pokutta

Matching pursuit algorithms are an important class of algorithms in signal processing and machine learning.

no code implementations • 30 Oct 2018 • Andreas Bärmann, Alexander Martin, Sebastian Pokutta, Oskar Schneider

We also introduce several generalizations, such as the approximate learning of non-linear objective functions, dynamically changing as well as parameterized objectives and the case of suboptimal observed decisions.

no code implementations • ICLR 2019 • Daniel Bienstock, Gonzalo Muñoz, Sebastian Pokutta

Our results provide a new perspective on training problems through the lens of polyhedral theory and reveal a strong structure arising from these problems.

no code implementations • 25 Jul 2018 • Sebastian Pokutta, Mohit Singh, Alfredo Torrico

In this work, we consider robust submodular maximization with matroid constraints.

2 code implementations • 18 May 2018 • Gábor Braun, Sebastian Pokutta, Dan Tu, Stephen Wright

We present a blended conditional gradient approach for minimizing a smooth convex function over a polytope P, combining the Frank--Wolfe algorithm (also called conditional gradient) with gradient-based steps, different from away steps and pairwise steps, but still achieving linear convergence for strongly convex functions, along with good practical performance.

no code implementations • ICML 2017 • Andreas Bärmann, Sebastian Pokutta, Oskar Schneider

In this paper, we demonstrate how to learn the objective function of a decision maker while only observing the problem input data and the decision maker’s corresponding decisions over multiple rounds.

no code implementations • NeurIPS 2017 • Aurko Roy, Huan Xu, Sebastian Pokutta

We study reinforcement learning under model misspecification, where we do not have access to the true environment but only to a reasonably close approximation to it.

no code implementations • ICML 2017 • Guanghui Lan, Sebastian Pokutta, Yi Zhou, Daniel Zink

In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with optimal number of calls to a stochastic first-order oracle and convergence rate $O\left(\frac{1}{\varepsilon^2}\right)$ improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of Hazan and Kale [2012] with convergence rate $O\left(\frac{1}{\varepsilon^4}\right)$.

no code implementations • NeurIPS 2016 • Aurko Roy, Sebastian Pokutta

We also prove that our algorithm returns an $O(\log{n})$-approximate hierarchical clustering for a generalization of this cost function also studied in [arXiv:1510.05043].

no code implementations • ICML 2017 • Gábor Braun, Sebastian Pokutta, Daniel Zink

Conditional gradient algorithms (also often called Frank-Wolfe algorithms) are popular due to their simplicity, requiring only a linear optimization oracle, and more recently they have also gained significant traction in online learning.

no code implementations • 6 Oct 2016 • Gábor Braun, Sebastian Pokutta

For the linear bandit problem, we extend the analysis of algorithm CombEXP from [R. Combes, M. S. Talebi Mazraeh Shahi, A. Proutiere, and M. Lelarge.

no code implementations • 1 Sep 2015 • Ruiyang Song, Yao Xie, Sebastian Pokutta

We study the value of information in sequential compressed sensing by characterizing the performance of sequential information guided sensing in practical scenarios when information is inaccurate.

no code implementations • 26 Jan 2015 • Ruiyang Song, Yao Xie, Sebastian Pokutta

We characterize the performance of sequential information guided sensing, Info-Greedy Sensing, when there is a mismatch between the true signal model and the assumed model, which may be a sample estimate.

no code implementations • 2 Jul 2014 • Gábor Braun, Sebastian Pokutta, Yao Xie

We present an information-theoretic framework for sequential adaptive compressed sensing, Info-Greedy Sensing, where measurements are chosen to maximize the extracted information conditioned on the previous measurements.
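
For Gaussian signals, an info-greedy rule can be sketched as follows: measure along the leading eigenvector of the current posterior covariance, i.e., the direction of maximal remaining uncertainty. The 2-D covariance, noise level, and power-iteration helper below are our toy setup, not code from the paper.

```python
# Toy info-greedy sensing step for a 2-D Gaussian signal: pick the
# unit-norm measurement direction as the leading eigenvector of the
# posterior covariance, then apply the rank-one covariance update.

def leading_eigvec(S, iters=100):
    """Power iteration for a symmetric positive 2x2 matrix S."""
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [S[0][0] * v[0] + S[0][1] * v[1],
             S[1][0] * v[0] + S[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

def measure_update(S, a, noise_var):
    """Posterior covariance after observing y = <a, x> + noise."""
    Sa = [S[0][0] * a[0] + S[0][1] * a[1],
          S[1][0] * a[0] + S[1][1] * a[1]]
    denom = a[0] * Sa[0] + a[1] * Sa[1] + noise_var
    return [[S[i][j] - Sa[i] * Sa[j] / denom for j in range(2)]
            for i in range(2)]

S = [[4.0, 0.0], [0.0, 1.0]]       # prior: most uncertainty along x
a = leading_eigvec(S)              # greedy direction, here ~ (1, 0)
S = measure_update(S, a, noise_var=0.1)
print([round(c, 2) for c in a], round(S[0][0], 3))
```

After one measurement the variance along the chosen direction collapses while the orthogonal direction is untouched, which is exactly the greedy information-gain behavior.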
