Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm.
Keywords: combinatorial optimization, imitation learning, variable selection.
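To make the branch-and-bound paradigm concrete, here is a minimal sketch on a 0-1 knapsack instance. The density ordering, fractional-relaxation bound, and toy data are illustrative assumptions, not details from the work above.

```python
# Minimal branch-and-bound sketch on a 0-1 knapsack instance (illustrative).
# Branch on one item at a time; prune any node whose fractional-relaxation
# upper bound cannot beat the incumbent.
def branch_and_bound_knapsack(values, weights, capacity):
    # Sort items by value density so the fractional bound is tight.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(k, value, room):
        # Upper bound from the LP relaxation: fill remaining room fractionally.
        for v, w in items[k:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def dfs(k, value, room):
        nonlocal best
        best = max(best, value)               # update incumbent
        if k == len(items) or bound(k, value, room) <= best:
            return                            # prune: subtree cannot improve
        v, w = items[k]
        if w <= room:
            dfs(k + 1, value + v, room - w)   # branch: take item k
        dfs(k + 1, value, room)               # branch: skip item k

    dfs(0, 0, capacity)
    return best

print(branch_and_bound_knapsack([10, 13, 8], [3, 4, 2], capacity=5))  # -> 18
```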
We develop a Bayesian "sum-of-trees" model where each tree is constrained by a regularization prior to be a weak learner, and fitting and inference are accomplished via an iterative Bayesian backfitting MCMC algorithm that generates samples from a posterior.
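A toy sketch of the backfitting idea follows, under strong simplifying assumptions: each "tree" is a fixed decision stump, leaf means have a conjugate normal prior, and the noise variance is known. Full BART additionally resamples tree structures via Metropolis-Hastings proposals, which this sketch omits.

```python
import numpy as np

# Toy Bayesian backfitting over a sum of stumps (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

splits = np.linspace(-0.8, 0.8, 10)        # one fixed stump per split point
leaf = np.zeros((len(splits), 2))          # (left mean, right mean) per stump
tau2, sigma2 = 0.05, 0.01                  # leaf prior and (known) noise variance

def stump_pred(j):
    side = (X[:, 0] > splits[j]).astype(int)
    return leaf[j, side]

fit = sum(stump_pred(j) for j in range(len(splits)))
for sweep in range(200):                   # backfitting MCMC sweeps
    for j in range(len(splits)):
        fit -= stump_pred(j)               # remove tree j from the fit
        r = y - fit                        # partial residual for tree j
        for side in (0, 1):
            mask = (X[:, 0] > splits[j]) == bool(side)
            n = mask.sum()
            var = 1.0 / (n / sigma2 + 1.0 / tau2)   # conjugate normal update
            mean = var * r[mask].sum() / sigma2
            leaf[j, side] = rng.normal(mean, np.sqrt(var))  # Gibbs draw
        fit += stump_pred(j)

print("in-sample RMSE of final draw:", np.sqrt(np.mean((fit - y) ** 2)))
```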
We consider a discrete optimization-based approach for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features.
In spite of the usefulness of $L_0$-based estimators and generic MIO solvers, there is a steep computational price to pay when compared to popular sparse learning algorithms (e.g., based on $L_1$ regularization).
Keywords: combinatorial optimization, feature selection, sparse learning, variable selection.
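For contrast with a full MIO solve, one cheap first-order heuristic for the $L_0$-constrained least-squares problem is iterative hard thresholding (projected gradient descent onto the $L_0$ ball). The sketch below is illustrative; the step size $1/L$ and the toy instance are assumptions.

```python
import numpy as np

# Iterative hard thresholding for  min_b 0.5*||y - X b||^2  s.t.  ||b||_0 <= k.
def iht(X, y, k, iters=500):
    L = np.linalg.eigvalsh(X.T @ X).max()    # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ b - y)                # gradient of 0.5*||y - X b||^2
        b = b - g / L                        # gradient step
        keep = np.argsort(np.abs(b))[-k:]    # project onto the L0 ball:
        mask = np.zeros_like(b, dtype=bool)  # keep only the k largest entries
        mask[keep] = True
        b[~mask] = 0.0
    return b

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
b_true = np.zeros(20); b_true[:3] = [2.0, -1.5, 1.0]
y = X @ b_true + 0.05 * rng.standard_normal(100)
print(np.nonzero(iht(X, y, k=3))[0])         # typically recovers features 0, 1, 2
```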
Boosted decision trees (Friedman, 2001) are a good approach for exploratory regression analysis on real data sets because they can detect predictors with nonlinear and interaction effects while also accounting for missing data.
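A minimal sketch of the boosting loop with squared-error loss and depth-1 trees (stumps) follows; it omits the missing-data handling mentioned above, and the learning rate, number of rounds, and toy data are assumptions.

```python
import numpy as np

# Gradient boosting with regression stumps, squared-error loss (illustrative).
def fit_stump(x, r):
    """Best single-feature threshold split minimizing squared error on r."""
    best = (np.inf, None)
    for j in range(x.shape[1]):
        for t in np.unique(x[:, j])[:-1]:    # skip max so both sides are nonempty
            left = x[:, j] <= t
            lm, rm = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
            if sse < best[0]:
                best = (sse, (j, t, lm, rm))
    return best[1]

def boost(x, y, rounds=100, lr=0.1):
    pred = np.full(len(y), y.mean())         # start from the constant model
    for _ in range(rounds):
        j, t, lm, rm = fit_stump(x, y - pred)        # fit stump to residuals
        pred += lr * np.where(x[:, j] <= t, lm, rm)  # shrunken additive update
    return pred

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
print("train RMSE:", np.sqrt(np.mean((boost(x, y) - y) ** 2)))
```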
On this combinatorial graph, we propose an ARD diffusion kernel with which the GP can model high-order interactions between variables, leading to improved performance.
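For intuition, here is a sketch of an ARD diffusion kernel on a product of per-variable graphs. It assumes each categorical variable lives on a complete graph and that the product-graph kernel factorizes across variables; kernel normalization and other details of the actual method are omitted.

```python
import numpy as np

# ARD diffusion kernel on a Cartesian product of complete graphs (sketch).
def diffusion_kernel(n, beta):
    """exp(-beta * L) for the complete graph on n vertices, via eigh."""
    L = n * np.eye(n) - np.ones((n, n))      # Laplacian of the complete graph
    lam, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(-beta * lam)) @ V.T

def ard_kernel(x, xp, sizes, beta):
    """Kernel between two categorical configurations x and xp.
    ARD: one diffusion scale beta[i] per variable; the kernel on the
    product graph is the product of per-variable diffusion kernels."""
    ks = [diffusion_kernel(n, b)[xi, xpi]
          for n, b, xi, xpi in zip(sizes, beta, x, xp)]
    return float(np.prod(ks))

sizes = [3, 4, 2]                # three variables with 3, 4, 2 categories
beta = [0.5, 0.1, 1.0]           # ARD diffusion scales (assumed values)
print(ard_kernel([0, 2, 1], [1, 2, 0], sizes, beta))
```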
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
This paper introduces a machine for sampling approximate model-X knockoffs for arbitrary and unspecified data distributions using deep generative models.
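For intuition, here is the classical second-order Gaussian knockoff sampler that such a machine generalizes to arbitrary distributions. The equicorrelated choice of $s$ and the toy covariance are assumptions for the demo, not the paper's method.

```python
import numpy as np

# Second-order Gaussian model-X knockoffs (illustrative baseline).
# Assumes X ~ N(0, Sigma) with Sigma a correlation matrix and uses the
# equicorrelated choice s_j = min(2 * lambda_min(Sigma), 1).
def gaussian_knockoffs(X, Sigma, rng):
    p = Sigma.shape[0]
    s = np.full(p, min(2 * np.linalg.eigvalsh(Sigma).min(), 1.0))
    S = np.diag(s)
    Sigma_inv_S = np.linalg.solve(Sigma, S)        # Sigma^{-1} S
    mu = X - X @ Sigma_inv_S                       # conditional mean
    V = 2 * S - S @ Sigma_inv_S                    # conditional covariance
    Lch = np.linalg.cholesky(V + 1e-8 * np.eye(p)) # jitter for PSD V
    return mu + rng.standard_normal(X.shape) @ Lch.T

rng = np.random.default_rng(3)
p = 5
Sigma = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)    # equicorrelated toy example
X = rng.standard_normal((1000, p)) @ np.linalg.cholesky(Sigma).T
Xk = gaussian_knockoffs(X, Sigma, rng)
print(np.round(np.corrcoef(X.T, Xk.T)[:p, p:], 2)) # cross-correlations X vs knockoffs
```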
We present an approach to learning SAT solver heuristics from scratch through deep reinforcement learning with a curriculum.
Recently, nonconvex regularization-based sparse and low-rank recovery has attracted considerable interest, and it is in fact a main driver of recent progress in nonconvex and nonsmooth optimization.
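As one concrete instance, the proximal operator of the nonconvex MCP penalty can be contrasted with $L_1$ soft-thresholding: the nonconvex prox leaves large coefficients unshrunk, removing the bias of $L_1$ shrinkage. The parameters $\lambda$ and $\gamma$ below are assumed values for illustration.

```python
import numpy as np

# Soft-thresholding (L1 prox) vs. the prox of the nonconvex MCP penalty
# (firm thresholding), assuming gamma > 1.
def prox_l1(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_mcp(z, lam, gamma=3.0):
    return np.where(np.abs(z) <= gamma * lam,
                    gamma / (gamma - 1.0) * prox_l1(z, lam),
                    z)                     # large entries pass through unshrunk

z = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
print(prox_l1(z, 1.0))    # [-2. -0.  0.  0.  3.]  (every entry shrunk by lam)
print(prox_mcp(z, 1.0))   # [-3. -0.  0.  0.  4.]  (large entries unbiased)
```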