1 code implementation • 3 Mar 2023 • Amine Bennouna, Ryan Lucas, Bart Van Parys
We demonstrate both theoretically and empirically that the loss enjoys a certified level of robustness against two common types of corruption--data evasion and poisoning attacks--while ensuring guaranteed generalization.
no code implementations • 19 Jul 2022 • Amine Bennouna, Bart Van Parys
However, we show that, in the context of classification and regression problems, several popular regularized and robust formulations reduce to particular cases of our proposed novel formulation.
1 code implementation • 18 Feb 2019 • Dimitris Bertsimas, Jean Pauphilet, Bart Van Parys
A cogent feature selection method is expected to exhibit a two-fold convergence: the accuracy and false detection rate should converge to $1$ and $0$, respectively, as the sample size increases.
Methodology
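The two metrics in the abstract above can be made concrete: with true support $S^*$ and selected support $\hat{S}$, accuracy is the fraction of truly relevant features recovered, $|\hat{S} \cap S^*|/|S^*|$, and the false detection rate is the fraction of selected features that are spurious, $|\hat{S} \setminus S^*|/|\hat{S}|$. A minimal sketch (these are the standard definitions; the function names are ours, not the paper's):

```python
def accuracy(selected, true_support):
    """Fraction of truly relevant features that were selected."""
    selected, true_support = set(selected), set(true_support)
    return len(selected & true_support) / len(true_support)

def false_detection_rate(selected, true_support):
    """Fraction of selected features that are not truly relevant."""
    selected, true_support = set(selected), set(true_support)
    if not selected:
        return 0.0
    return len(selected - true_support) / len(selected)

# A convergent method should drive accuracy -> 1 and FDR -> 0
# as the sample size grows.
print(accuracy([0, 2, 5], [0, 2, 7]))              # 2 of 3 true features recovered
print(false_detection_rate([0, 2, 5], [0, 2, 7]))  # 1 of 3 selections is spurious
```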
1 code implementation • 27 Nov 2017 • Dimitris Bertsimas, Bart Van Parys
Furthermore, the associated robust prescriptive methods reduce to tractable convex optimization problems in the context of local learning methods such as nearest neighbors and Nadaraya-Watson learning.
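For reference on the local learning methods mentioned, the Nadaraya-Watson estimator predicts at a query point by averaging observed responses with kernel weights that decay with distance. A minimal NumPy sketch with a Gaussian kernel (the bandwidth `h` is an illustrative choice, not a value from the paper):

```python
import numpy as np

def nadaraya_watson(X, y, x_query, h=0.5):
    """Kernel-weighted response average:
    sum_i K_h(||x_query - x_i||) y_i / sum_i K_h(||x_query - x_i||)."""
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)  # Gaussian kernel weights
    return w @ y / w.sum()

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
# Weighted average at x = 1: dominated by the middle point (y = 1),
# pulled upward by the neighbor with y = 4.
print(nadaraya_watson(X, y, np.array([1.0])))
```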
1 code implementation • 3 Oct 2017 • Dimitris Bertsimas, Jean Pauphilet, Bart Van Parys
In this paper, we formulate the sparse classification problem of $n$ samples with $p$ features as a binary convex optimization problem and propose a cutting-plane algorithm to solve it exactly.
Optimization and Control
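The binary convex viewpoint described above treats each feature as an on/off indicator $s \in \{0,1\}^p$ with $\sum_j s_j = k$, and for each fixed support fits a regularized classifier on the active features only. The paper solves the resulting binary problem exactly with a cutting-plane algorithm; the sketch below only illustrates the decomposition by brute-force enumeration on a tiny instance (the enumeration, the gradient-descent inner solver, and all names here are ours, not the paper's method):

```python
import itertools
import numpy as np

def inner_loss(X_s, y, gamma=1.0, steps=500, lr=0.1):
    """Ridge-regularized logistic loss minimized over w for one fixed support
    (plain gradient descent; an exact convex solver would do the same job)."""
    n, k = X_s.shape
    w = np.zeros(k)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X_s @ w))
        w -= lr * (X_s.T @ (p - y) / n + w / gamma)
    p = np.clip(1.0 / (1.0 + np.exp(-X_s @ w)), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)) + w @ w / (2 * gamma)

def best_subset_classification(X, y, k):
    """Enumerate all supports of size k; keep the one with smallest inner loss."""
    p = X.shape[1]
    return min(itertools.combinations(range(p), k),
               key=lambda s: inner_loss(X[:, list(s)], y))

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
y = (X[:, 1] - X[:, 3] > 0).astype(float)  # only features 1 and 3 carry signal
print(best_subset_classification(X, y, k=2))
```

Enumeration is exponential in $p$; the point of the paper's cutting-plane algorithm is precisely to avoid it while retaining exactness.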
no code implementations • 28 Sep 2017 • Dimitris Bertsimas, Bart Van Parys
We present a novel binary convex reformulation of the sparse regression problem, based on a new duality perspective.
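One standard building block behind binary reformulations of this kind (shown here as an illustration of the duality idea, not a reproduction of the paper's exact derivation) is that for a fixed support $s$, the ridge-regularized inner problem $\min_w \tfrac{1}{2}\|y - X_s w\|^2 + \tfrac{1}{2\gamma}\|w\|^2$ has the closed-form optimal value $\tfrac{1}{2} y^\top (I_n + \gamma X_s X_s^\top)^{-1} y$, which depends on the support only through a kernel matrix. A NumPy check of the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, gamma = 30, 3, 2.0
Xs = rng.standard_normal((n, k))  # columns of X on the chosen support s
y = rng.standard_normal(n)

# Primal: solve the ridge problem directly on the support.
w = np.linalg.solve(Xs.T @ Xs + np.eye(k) / gamma, Xs.T @ y)
primal = 0.5 * np.sum((y - Xs @ w) ** 2) + 0.5 / gamma * np.sum(w ** 2)

# Closed form: the optimal value via the kernel matrix Xs @ Xs.T
# (a consequence of the Sherman-Morrison-Woodbury identity).
closed = 0.5 * y @ np.linalg.solve(np.eye(n) + gamma * (Xs @ Xs.T), y)

print(np.isclose(primal, closed))  # True: the two values agree
```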
no code implementations • 28 Sep 2017 • Dimitris Bertsimas, Bart Van Parys
The ability of our method to identify all $k$ relevant inputs and all $\ell$ monomial terms is shown empirically to experience a phase transition.