An Online-Learning Approach to Inverse Optimization

30 Oct 2018  ·  Andreas Bärmann, Alexander Martin, Sebastian Pokutta, Oskar Schneider

In this paper, we demonstrate how to learn the objective function of a decision-maker while observing only the problem input data and the decision-maker's corresponding decisions over multiple rounds. We present exact algorithms for this online version of inverse optimization which converge at a rate of $ \mathcal{O}(1/\sqrt{T}) $ in the number of observations~$T$, and we compare their further properties. In particular, all of them allow making decisions that are essentially as good as those of the observed decision-maker after relatively few iterations, while each is best suited to a different setting. Our approach is based on online learning and works for linear objectives over arbitrary feasible sets for which we have a linear optimization oracle. As such, it generalizes previous approaches based on KKT-system decomposition and dualization. We also introduce several generalizations, such as the approximate learning of non-linear objective functions, dynamically changing as well as parameterized objectives, and the case of suboptimal observed decisions. When applied to the stochastic offline case, our algorithms give guarantees on the quality of the learned objectives in expectation. Finally, we demonstrate the effectiveness and possible applications of our methods in indicative computational experiments.
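The online loop sketched in the abstract — query a linear optimization oracle with the current objective estimate, observe the decision-maker's choice, and take an online-learning update step — can be illustrated as follows. This is a minimal sketch, not the paper's exact algorithm: the finite feasible sets, the exponentiated-gradient update, and the step size $\eta_t = 1/\sqrt{t}$ are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                          # dimension of the decision vectors
c_true = rng.random(d)         # hidden objective of the observed decision-maker
c_true /= c_true.sum()

def oracle(c, X):
    """Linear optimization oracle: argmin over the rows of X of c . x
    (a finite feasible set stands in for an arbitrary one here)."""
    return X[np.argmin(X @ c)]

c_hat = np.ones(d) / d         # learner's estimate, kept on the simplex
T = 2000
regret = 0.0
for t in range(1, T + 1):
    X = rng.random((20, d))            # this round's feasible decisions (assumed setup)
    x_expert = oracle(c_true, X)       # observed optimal decision
    x_learner = oracle(c_hat, X)       # learner's decision under its estimate
    regret += c_true @ (x_learner - x_expert)
    # Exponentiated-gradient step on the loss l_t(c) = c.x_expert - c.x_learner,
    # whose subgradient in c is (x_expert - x_learner); the multiplicative
    # update plus normalization keeps c_hat on the probability simplex.
    eta = 1.0 / np.sqrt(t)
    c_hat = c_hat * np.exp(-eta * (x_expert - x_learner))
    c_hat /= c_hat.sum()

avg_regret = regret / T   # average suboptimality; shrinks roughly like 1/sqrt(T)
print(avg_regret)
```

After enough rounds, the learner's decisions are nearly as good as the expert's under the true objective, even if `c_hat` itself never matches `c_true` exactly — only behavior on the observed feasible sets is identified.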
