no code implementations • 2 Feb 2024 • Dimitris Bertsimas, Arthur Delarue, Jean Pauphilet
When training predictive models on data with missing entries, the most widely used and versatile approach is a two-step pipeline: first impute the missing entries, then compute predictions on the completed data.
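A minimal numpy-only sketch of such an impute-then-predict pipeline, using mean imputation followed by ordinary least squares (the function name and choice of imputer are illustrative, not the paper's specific method):

```python
import numpy as np

def impute_then_predict(X_train, y_train, X_test):
    """Two-step pipeline: mean-impute missing entries using training-set
    column means, then fit ordinary least squares on the completed data."""
    col_means = np.nanmean(X_train, axis=0)
    X_tr = np.where(np.isnan(X_train), col_means, X_train)
    X_te = np.where(np.isnan(X_test), col_means, X_test)
    # Prepend an intercept column before solving the least-squares problem.
    A_tr = np.column_stack([np.ones(len(X_tr)), X_tr])
    A_te = np.column_stack([np.ones(len(X_te)), X_te])
    beta, *_ = np.linalg.lstsq(A_tr, y_train, rcond=None)
    return A_te @ beta
```

Note that the test-set imputation reuses the training-set means, so the pipeline never leaks test information into the imputer.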
no code implementations • 25 May 2023 • Liangyuan Na, Kimberly Villalobos Carballo, Jean Pauphilet, Ali Haddad-Sisakht, Daniel Kombert, Melissa Boisjoli-Langlois, Andrew Castiglione, Maram Khalifa, Pooja Hebbal, Barry Stein, Dimitris Bertsimas
Problem definition: Access to accurate predictions of patients' outcomes can enhance medical staff's decision-making, which ultimately benefits all stakeholders in the hospitals.
2 code implementations • 20 May 2023 • Dimitris Bertsimas, Ryan Cory-Wright, Sean Lo, Jean Pauphilet
Low-rank matrix completion consists of computing a matrix of minimal complexity that recovers a given set of observations as accurately as possible.
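A simple iterative "hard-impute" sketch of this problem: repeatedly fill the missing entries with a rank-constrained SVD reconstruction of the current estimate. This heuristic illustrates the objective only; the paper's methods compute certifiably optimal solutions by other means.

```python
import numpy as np

def complete_low_rank(M, mask, rank, n_iter=500):
    """Hard-impute heuristic for low-rank matrix completion.
    M: data matrix (values outside `mask` are ignored).
    mask: boolean array, True where an entry is observed.
    Alternates between a rank-`rank` truncated SVD and re-imposing
    the observed entries."""
    X = np.where(mask, M, 0.0)
    low = X
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, M, low)                   # keep observed entries fixed
    return low
```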
1 code implementation • 29 Sep 2022 • Ryan Cory-Wright, Jean Pauphilet
We exploit these relaxations and bounds to propose exact methods and rounding mechanisms that, together, obtain solutions with a bound gap on the order of 0-15% for real-world datasets with $p$ in the hundreds or thousands of features and $r \in \{2, 3\}$ components.
no code implementations • 5 Sep 2022 • Julien Grand-Clément, Jean Pauphilet
Many high-stakes decisions follow an expert-in-the-loop structure: a human operator receives recommendations from an algorithm but remains the ultimate decision maker.
no code implementations • 21 Jun 2021 • Jean Pauphilet
We integrate an adversarial imputation step to allow for robust estimation even in the presence of partially observed treatment assignments.
1 code implementation • 12 May 2021 • Dimitris Bertsimas, Ryan Cory-Wright, Jean Pauphilet
We invoke the matrix perspective function (the matrix analog of the perspective function) and explicitly characterize the convex hull of epigraphs of simple matrix convex functions under low-rank constraints.
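For context, the scalar perspective function referenced here is standard: for a convex $f$,

```latex
% Scalar perspective of a convex function f (standard definition):
g_f(x, t) = t \, f\!\left(\tfrac{x}{t}\right), \qquad t > 0,
```

which is jointly convex in $(x, t)$ whenever $f$ is convex. The matrix analog studied in the paper replaces the scalar $t$ by a positive semidefinite matrix; the precise definition is given in the paper itself.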
no code implementations • 7 Apr 2021 • Dimitris Bertsimas, Arthur Delarue, Jean Pauphilet
Missing data is a common issue in real-world datasets.
1 code implementation • 22 Sep 2020 • Dimitris Bertsimas, Ryan Cory-Wright, Jean Pauphilet
We propose a framework for modeling and solving low-rank optimization problems to certifiable optimality.
no code implementations • 30 Jun 2020 • Dimitris Bertsimas, Léonard Boussioux, Ryan Cory Wright, Arthur Delarue, Vassilis Digalakis Jr., Alexandre Jacquillat, Driss Lahlou Kitane, Galit Lukin, Michael Lingzhi Li, Luca Mingardi, Omid Nohadani, Agni Orfanoudaki, Theodore Papalexopoulos, Ivan Paskov, Jean Pauphilet, Omar Skali Lami, Bartolomeo Stellato, Hamza Tazi Bouardi, Kimberly Villalobos Carballo, Holly Wiberg, Cynthia Zeng
Specifically, we propose a comprehensive data-driven approach to understand the clinical characteristics of COVID-19, predict its mortality, forecast its evolution, and ultimately alleviate its impact.
1 code implementation • 11 May 2020 • Dimitris Bertsimas, Ryan Cory-Wright, Jean Pauphilet
Sparse principal component analysis (PCA) is a popular dimensionality reduction technique for obtaining principal components which are linear combinations of a small subset of the original features.
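A quick heuristic sketch of the sparse-PCA objective, in numpy: take the leading eigenvector of the covariance matrix, keep only its $k$ largest-magnitude coordinates, and renormalize. This simple truncation is an illustration of what "sparse principal component" means, not the certifiably optimal method of the paper.

```python
import numpy as np

def sparse_pc(Sigma, k):
    """Heuristic sparse first principal component: truncate the leading
    eigenvector of covariance matrix Sigma to its k largest-magnitude
    coordinates and renormalize to unit length."""
    vals, vecs = np.linalg.eigh(Sigma)      # eigenvalues in ascending order
    v = vecs[:, -1]                         # leading eigenvector
    idx = np.argsort(np.abs(v))[-k:]        # indices of the k largest |entries|
    w = np.zeros_like(v)
    w[idx] = v[idx]
    return w / np.linalg.norm(w)
```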
no code implementations • 3 Jul 2019 • Dimitris Bertsimas, Ryan Cory-Wright, Jean Pauphilet
We propose a unified framework to address a family of classical mixed-integer optimization problems with logically constrained decision variables, including network design, facility location, unit commitment, sparse portfolio selection, binary quadratic optimization, sparse principal component analysis and sparse learning problems.
no code implementations • 25 Jun 2019 • Dimitris Bertsimas, Jourdain Lamperski, Jean Pauphilet
We consider the maximum likelihood estimation of sparse inverse covariance matrices.
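The objective being maximized here is the Gaussian log-likelihood of a candidate precision (inverse covariance) matrix. A minimal sketch of that objective, written as a negative log-likelihood to be minimized (the sparsity penalty of the paper is omitted):

```python
import numpy as np

def neg_log_likelihood(Theta, S):
    """Gaussian negative log-likelihood (up to additive constants) of a
    candidate precision matrix Theta given sample covariance S:
        -log det(Theta) + trace(S @ Theta).
    Theta must be positive definite."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.trace(S @ Theta)
```

The sparse MLE adds a constraint or penalty on the number of nonzero off-diagonal entries of `Theta`, which is what makes the problem combinatorial.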
1 code implementation • 18 Feb 2019 • Dimitris Bertsimas, Jean Pauphilet, Bart Van Parys
A cogent feature selection method is expected to exhibit two-fold convergence: the accuracy and the false detection rate should converge to $1$ and $0$, respectively, as the sample size increases.
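One plausible reading of these two metrics, sketched in numpy-free Python (the exact definitions used in the paper may differ; "accuracy" is taken here as the fraction of true features recovered):

```python
def selection_metrics(selected, true_support):
    """Accuracy = fraction of true features that were selected;
    false detection rate = fraction of selected features that are spurious.
    A good method drives accuracy -> 1 and FDR -> 0 as n grows."""
    sel, true = set(selected), set(true_support)
    accuracy = len(sel & true) / len(true)
    false_detection = len(sel - true) / max(len(sel), 1)
    return accuracy, false_detection
```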
Methodology
1 code implementation • 3 Oct 2017 • Dimitris Bertsimas, Jean Pauphilet, Bart Van Parys
In this paper, we formulate the sparse classification problem of $n$ samples with $p$ features as a binary convex optimization problem and propose a cutting-plane algorithm to solve it exactly.
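The combinatorial core of this problem is selecting the best subset of features. A brute-force sketch for tiny $p$ makes that structure concrete; it uses least squares instead of a classification loss for simplicity, and enumeration in place of the paper's cutting-plane algorithm, which solves the same kind of problem exactly at much larger scale:

```python
import itertools
import numpy as np

def best_subset_ls(X, y, k):
    """Enumerate all k-subsets of the p features and return the subset
    with the smallest least-squares error. Exponential in p, so only
    feasible for illustration on small problems."""
    n, p = X.shape
    best_rss, best_S = np.inf, None
    for S in itertools.combinations(range(p), k):
        Xs = X[:, list(S)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        if rss < best_rss:
            best_rss, best_S = rss, S
    return best_S
```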
Optimization and Control