no code implementations • 25 Jun 2024 • Jessica Schrouff, Alexis Bellot, Amal Rannen-Triki, Alan Malek, Isabela Albuquerque, Arthur Gretton, Alexander D'Amour, Silvia Chiappa

Failures of fairness or robustness in machine learning predictive settings can be due to undesired dependencies between covariates, outcomes and auxiliary factors of variation.

no code implementations • 7 Jun 2024 • Virginia Aglietti, Ira Ktena, Jessica Schrouff, Eleni Sgouritsa, Francisco J. R. Ruiz, Alan Malek, Alexis Bellot, Silvia Chiappa

The sample efficiency of Bayesian optimization algorithms depends on carefully crafted acquisition functions (AFs) guiding the sequential collection of function evaluations.
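This paper is about discovering acquisition functions rather than hand-crafting them, but a sketch of a classic hand-crafted AF helps fix the concept. Below is expected improvement for a Gaussian posterior; the function names and the `xi` exploration parameter are illustrative, not taken from the paper.

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best, xi=0.0):
    """Expected improvement (for maximization) of a Gaussian posterior
    N(mu, sigma^2) over the incumbent value `best`."""
    if sigma <= 0.0:
        return max(mu - best - xi, 0.0)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * normal_cdf(z) + sigma * normal_pdf(z)
```

The next evaluation point is chosen by maximizing this quantity over the candidate inputs, trading off posterior mean against posterior uncertainty.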

1 code implementation • 13 Jun 2023 • Alan Malek, Virginia Aglietti, Silvia Chiappa

We explore algorithms to select actions in the causal bandit setting where the learner can choose to intervene on a set of random variables related by a causal graph, and the learner sequentially chooses interventions and observes a sample from the interventional distribution.
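A minimal baseline for the intervention-selection loop is plain UCB1 over a finite set of interventions, treating each as an arm. This sketch deliberately ignores the causal graph (the paper's algorithms exploit it to share information across interventions); the `samplers` interface is an assumption for illustration.

```python
import math
import random

def ucb_interventions(samplers, horizon, seed=0):
    """UCB1 over a finite set of interventions. Each sampler draws one
    observation of the target variable under that intervention."""
    rng = random.Random(seed)
    k = len(samplers)
    counts = [0] * k
    sums = [0.0] * k
    # pull each intervention once to initialize the estimates
    for a in range(k):
        sums[a] += samplers[a](rng)
        counts[a] += 1
    for t in range(k, horizon):
        # optimism: empirical mean plus a confidence bonus
        ucb = [sums[a] / counts[a] + math.sqrt(2.0 * math.log(t + 1) / counts[a])
               for a in range(k)]
        a = max(range(k), key=lambda i: ucb[i])
        sums[a] += samplers[a](rng)
        counts[a] += 1
    return counts
```

With a clear gap between interventions, the pull counts concentrate on the best one.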

1 code implementation • 31 May 2023 • Virginia Aglietti, Alan Malek, Ira Ktena, Silvia Chiappa

We propose constrained causal Bayesian optimization (cCBO), an approach for finding interventions in a known causal graph that optimize a target variable under some constraints.

1 code implementation • 28 Jan 2023 • Limor Gultchin, Siyuan Guo, Alan Malek, Silvia Chiappa, Ricardo Silva

We introduce a causal framework for designing optimal policies that satisfy fairness constraints.

no code implementations • NeurIPS 2021 • Alan Malek, Silvia Chiappa

This paper considers the problem of selecting a formula for identifying a causal quantity of interest among a set of available formulas.

no code implementations • 2 Jul 2021 • Jorg Bornschein, Silvia Chiappa, Alan Malek, Rosemary Nan Ke

Learning the structure of Bayesian networks and causal relationships from observations is a common goal in several areas of science and technology.

no code implementations • 6 Jan 2019 • Yasin Abbasi-Yadkori, Peter L. Bartlett, Xi Chen, Alan Malek

We propose an efficient algorithm, scaling with the size of the subspace but not the state space, that is able to find a policy with low excess loss relative to the best policy in this class.

no code implementations • NeurIPS 2018 • Alan Malek, Peter L. Bartlett

We consider online linear regression: at each round, an adversary reveals a covariate vector, the learner predicts a real value, the adversary reveals a label, and the learner suffers the squared prediction error.
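The protocol described above can be sketched with a simple follow-the-regularized-leader baseline: before each label arrives, predict with the ridge solution fit to all previously revealed pairs. This is an assumption-laden stand-in, not the paper's minimax strategy.

```python
import numpy as np

class OnlineRidge:
    """Online ridge regression: predict with the regularized least-squares
    solution over all past (x, y) pairs, then update on the revealed label."""
    def __init__(self, dim, lam=1.0):
        self.A = lam * np.eye(dim)   # regularized Gram matrix
        self.b = np.zeros(dim)

    def predict(self, x):
        w = np.linalg.solve(self.A, self.b)
        return float(w @ x)

    def update(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x
```

Each round of the game is then: the adversary reveals `x`, the learner calls `predict(x)`, the adversary reveals `y`, the learner suffers `(predict(x) - y)**2` and calls `update(x, y)`.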

no code implementations • 26 Feb 2018 • Jason Altschuler, Victor-Emmanuel Brunel, Alan Malek

Specifically, we propose a variant of the Best Arm Identification problem for \emph{contaminated bandits}, where each arm pull has probability $\varepsilon$ of generating a sample from an arbitrary contamination distribution instead of the true underlying distribution.
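The contamination model can be simulated directly: each pull is adversarial with probability $\varepsilon$. A median-based estimate (robust for symmetric arm distributions) illustrates why plain empirical means fail here; the specific contamination value and estimator below are illustrative choices, not the paper's estimators.

```python
import random
import statistics

def pull(rng, true_mean, eps):
    """One contaminated pull: with probability eps the sample comes from an
    arbitrary contamination distribution (here, a large constant outlier)
    instead of the arm's true Gaussian distribution."""
    if rng.random() < eps:
        return 100.0  # adversarial contamination
    return rng.gauss(true_mean, 1.0)

def robust_estimate(samples):
    # the median tolerates an eps-fraction of arbitrary contamination,
    # unlike the empirical mean, which the outliers drag upward
    return statistics.median(samples)
```

On 2000 pulls of an arm with true mean 0 and eps = 0.1, the empirical mean is pulled near 10 while the median stays close to 0.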

no code implementations • NeurIPS 2017 • Wojciech Kotlowski, Wouter M. Koolen, Alan Malek

We revisit isotonic regression on linear orders, the problem of fitting monotonic functions to best explain the data, in an online setting.
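The offline version of this fitting problem has a classical exact solution, the Pool Adjacent Violators algorithm; a sketch is below (the paper studies the harder online counterpart, which this does not address).

```python
def isotonic_fit(y):
    """Pool Adjacent Violators: least-squares fit of a non-decreasing
    sequence to y on a linear order."""
    # each block stores (sum, count); adjacent blocks whose means violate
    # monotonicity are merged, which averages them
    blocks = []
    for v in y:
        s, n = float(v), 1
        while blocks and blocks[-1][0] / blocks[-1][1] > s / n:
            ps, pn = blocks.pop()
            s, n = s + ps, n + pn
        blocks.append((s, n))
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)
    return fit
```

For example, the sequence [1, 3, 2] violates monotonicity at the last step, so the last two points are pooled to their average 2.5.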

no code implementations • 19 Oct 2016 • Yasin Abbasi-Yadkori, Peter L. Bartlett, Victor Gabillon, Alan Malek

We propose the Hit-and-Run algorithm for planning and sampling problems in non-convex spaces.
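The classical Hit-and-Run step alternates a uniformly random direction with a uniform move along the chord through the current point. The sketch below is the textbook convex case on the unit ball, where the chord is analytic; the paper's contribution concerns non-convex spaces, which this does not capture.

```python
import math
import random

def hit_and_run_ball(x, steps, rng):
    """Hit-and-Run on the unit ball in R^d: pick a uniformly random
    direction, intersect that line with the ball, then move to a
    uniform point on the resulting chord."""
    d = len(x)
    for _ in range(steps):
        # uniformly random unit direction via normalized Gaussians
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        # solve |x + t u|^2 = 1 for the chord endpoints t_lo, t_hi
        xu = sum(a * b for a, b in zip(x, u))
        xx = sum(a * a for a in x)
        disc = math.sqrt(max(xu * xu - (xx - 1.0), 0.0))
        t = rng.uniform(-xu - disc, -xu + disc)
        x = [a + t * b for a, b in zip(x, u)]
    return x
```

For a general convex body given only by a membership oracle, the chord endpoints would instead be located by bisection.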

no code implementations • 14 Mar 2016 • Wojciech Kotłowski, Wouter M. Koolen, Alan Malek

We then prove that the Exponential Weights algorithm played over a covering net of isotonic functions has a regret bounded by $O\big(T^{1/3} \log^{2/3}(T)\big)$ and present a matching $\Omega(T^{1/3})$ lower bound on regret.
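The generic Exponential Weights update behind that bound is short: each expert's weight is proportional to the exponentiated negative cumulative loss. The sketch below omits the covering-net construction over isotonic functions, which is where the paper's work lies.

```python
import math

def exponential_weights(losses, eta):
    """Exponential Weights over k experts. `losses` is a list of rounds,
    each a list of k per-expert losses; returns the weight vector played
    at each round (weights exp(-eta * cumulative loss), normalized)."""
    k = len(losses[0])
    cum = [0.0] * k
    plays = []
    for round_losses in losses:
        z = sum(math.exp(-eta * c) for c in cum)
        plays.append([math.exp(-eta * c) / z for c in cum])
        cum = [c + l for c, l in zip(cum, round_losses)]
    return plays
```

After a few rounds the weight mass concentrates on the experts with the smallest cumulative loss.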

no code implementations • NeurIPS 2015 • Wouter M. Koolen, Alan Malek, Peter L. Bartlett, Yasin Abbasi

We consider an adversarial formulation of the problem of predicting a time series with square loss.

no code implementations • NeurIPS 2014 • Wouter M. Koolen, Alan Malek, Peter L. Bartlett

We consider online prediction problems where the loss between the prediction and the outcome is measured by the squared Euclidean distance and its generalization, the squared Mahalanobis distance.
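For reference, the squared Mahalanobis distance that generalizes the squared Euclidean loss here is $(a-b)^\top W (a-b)$ for a positive-definite matrix $W$; a direct computation is below (the matrix `W` is an arbitrary illustrative choice).

```python
import numpy as np

def sq_mahalanobis(a, b, W):
    """Squared Mahalanobis distance (a - b)^T W (a - b).
    With W = I it reduces to the squared Euclidean distance."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(d @ W @ d)
```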

no code implementations • 27 Feb 2014 • Yasin Abbasi-Yadkori, Peter L. Bartlett, Alan Malek

We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost.

Papers With Code is a free resource with all data licensed under CC-BY-SA.