Search Results for author: Mahdi Milani Fard

Found 13 papers, 1 paper with code

Distribution Embedding Network for Meta-Learning with Variable-Length Input

no code implementations1 Jan 2021 Lang Liu, Mahdi Milani Fard, Sen Zhao

We propose Distribution Embedding Network (DEN) for meta-learning, which is designed for applications where both the distribution and the number of features could vary across tasks.
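One way to handle a variable number of features, in the spirit of the snippet above, is a permutation-invariant set embedding: embed each scalar feature with a shared map, then pool to a fixed-size vector. This is only a minimal sketch of that general idea (the weights `W`, `b` and the pooling choice are hypothetical, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared per-feature embedding: a tiny one-layer map applied to each
# scalar feature independently (hypothetical, untrained parameters).
W = rng.normal(size=(1, 8))
b = rng.normal(size=8)

def embed_example(x):
    """Map a variable-length feature vector to a fixed 8-d embedding by
    embedding each feature and mean-pooling (permutation-invariant)."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)  # (n_features, 1)
    h = np.tanh(x @ W + b)                         # (n_features, 8)
    return h.mean(axis=0)                          # (8,)

e3 = embed_example([0.2, -1.0, 0.5])             # task with 3 features
e5 = embed_example([0.1, 0.4, -0.3, 2.0, 1.1])   # task with 5 features
```

Both examples land in the same 8-dimensional space even though their input lengths differ, which is what lets a single downstream model serve tasks with different feature counts.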

Tasks: Binary Classification, Classification (+2)

Optimizing Black-box Metrics with Adaptive Surrogates

no code implementations ICML 2020 Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta

We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.
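The core idea above can be illustrated with finite differences: if the black-box metric is a monotonic function of a few surrogates, its local sensitivities to each surrogate give weights for an easy-to-optimize weighted surrogate objective. This is a hedged toy sketch, not the paper's algorithm; `blackbox_metric` is a made-up stand-in:

```python
import numpy as np

def blackbox_metric(s):
    # Hypothetical hard-to-optimize metric: a smooth monotone function
    # of two easy surrogates s = (s0, s1), treated as a black box.
    return np.log1p(s[0]) + 0.5 * s[1] ** 2

def local_weights(metric, s, eps=1e-6):
    """Estimate d(metric)/d(surrogate_k) by finite differences; the
    weights define a local linear objective over the surrogates."""
    s = np.asarray(s, dtype=float)
    w = np.zeros_like(s)
    for k in range(len(s)):
        sp = s.copy()
        sp[k] += eps
        w[k] = (metric(sp) - metric(s)) / eps
    return w

s = np.array([0.8, 1.2])      # current surrogate values
w = local_weights(blackbox_metric, s)
# A training step would then minimize sum_k w[k] * surrogate_k(model).
```

At `s = (0.8, 1.2)` the estimated weights approach the analytic gradient `(1/1.8, 1.2)`, so a gradient step on the weighted surrogates locally decreases the black-box metric.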

Constrained Interacting Submodular Groupings

no code implementations ICML 2018 Andrew Cotter, Mahdi Milani Fard, Seungil You, Maya Gupta, Jeff Bilmes

We introduce the problem of grouping a finite ground set into blocks, where each block is a subset of the ground set and where: (i) each block is individually highly valued by a submodular function (both robustly and in the average case) while satisfying block-specific matroid constraints; and (ii) the block scores interact, so that blocks are jointly scored highly, making the blocks mutually non-redundant.
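A simple way to build intuition for submodular grouping is a greedy partition: assign each element to the block where its marginal gain under a submodular function is largest, subject to a per-block size cap (a simple partition-matroid constraint). This is a toy stand-in for the paper's richer formulation, not its method; `coverage` and the cap are illustrative choices:

```python
# Greedy partition of a ground set into k blocks, each scored by a
# submodular coverage function, under per-block size caps.
def coverage(block, covers):
    """Submodular value: number of distinct items covered by a block."""
    seen = set()
    for e in block:
        seen |= covers[e]
    return len(seen)

def greedy_partition(ground, covers, k, cap):
    blocks = [[] for _ in range(k)]
    for e in ground:
        # Assign e to the feasible block with the largest marginal gain.
        gains = [
            (coverage(blocks[j] + [e], covers) - coverage(blocks[j], covers), j)
            for j in range(k)
            if len(blocks[j]) < cap
        ]
        blocks[max(gains)[1]].append(e)
    return blocks

covers = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}, 3: {"a", "c"}}
blocks = greedy_partition([0, 1, 2, 3], covers, k=2, cap=2)
```

Every element is placed exactly once and no block exceeds its cap, matching the partition-matroid feasibility in (i); the paper additionally scores interactions *between* blocks, which this greedy sketch ignores.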

Proxy Fairness

no code implementations28 Jun 2018 Maya Gupta, Andrew Cotter, Mahdi Milani Fard, Serena Wang

We consider the problem of improving fairness when one lacks access to a dataset labeled with protected groups, making it difficult to take advantage of strategies that can improve fairness but require protected group labels, either at training or runtime.

Tasks: Fairness

Metric-Optimized Example Weights

no code implementations ICLR 2019 Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta

Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.

Fast and Flexible Monotonic Functions with Ensembles of Lattices

no code implementations NeurIPS 2016 Mahdi Milani Fard, Kevin Canini, Andrew Cotter, Jan Pfeifer, Maya Gupta

For many machine learning problems, some inputs are known to be positively (or negatively) related to the output; in such cases, training the model to respect that monotonic relationship can provide regularization and make the model more interpretable.

Launch and Iterate: Reducing Prediction Churn

no code implementations NeurIPS 2016 Mahdi Milani Fard, Quentin Cormier, Kevin Canini, Maya Gupta

Practical applications of machine learning often involve successive training iterations with changes to features and training examples.

Non-Deterministic Policies in Markovian Decision Processes

no code implementations16 Jan 2014 Mahdi Milani Fard, Joelle Pineau

Although conventional methods in reinforcement learning have proved to be useful in problems concerning sequential decision-making, they cannot be applied in their current form to decision support systems, such as those in medical domains, as they suggest policies that are often highly prescriptive and leave little room for the user's input.

Tasks: Decision Making, Reinforcement Learning (+1)

PAC-Bayesian Policy Evaluation for Reinforcement Learning

no code implementations14 Feb 2012 Mahdi Milani Fard, Joelle Pineau, Csaba Szepesvari

PAC-Bayesian methods overcome the problem of relying on a correct prior by providing bounds that hold regardless of the correctness of the prior distribution.
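For context, the classical McAllester-style PAC-Bayes bound has the following generic form; the paper's RL-specific bound differs in its details, so this is background rather than a statement of their result:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% for every posterior Q over hypotheses and any fixed prior P:
R(Q) \;\le\; \widehat{R}(Q)
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

Here \(R(Q)\) and \(\widehat{R}(Q)\) are the expected true and empirical risks under \(Q\); the key point, matching the snippet above, is that the bound is valid for any prior \(P\), with a poorly chosen prior only inflating the \(\mathrm{KL}\) penalty rather than invalidating the guarantee.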

Tasks: Model Selection, Reinforcement Learning (+2)
