no code implementations • 4 Feb 2022 • Lang Liu, Mahdi Milani Fard, Sen Zhao
We propose Distribution Embedding Networks (DEN) for classification with small data.
1 code implementation • 18 Feb 2021 • Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Mahdi Milani Fard, Oluwasanmi Koyejo
We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix.
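This is query access only: the metric can be evaluated on a confusion matrix but not differentiated. As a minimal sketch of the setup (not the paper's algorithm), the snippet below grid-searches a decision threshold against a hypothetical `black_box_metric`; the function, threshold grid, and synthetic data are all illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def black_box_metric(cm):
    # Hypothetical stand-in for an unknown metric we can only query;
    # here it happens to compute F1 from the confusion matrix entries.
    tn, fp, fn, tp = cm.ravel()
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

def tune_threshold(scores, labels, grid=np.linspace(0, 1, 101)):
    """Pick the decision threshold that maximizes a black-box
    function of the confusion matrix, using only metric queries."""
    return max(grid, key=lambda t: black_box_metric(
        confusion_matrix(labels, (scores >= t).astype(int), labels=[0, 1])))

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
scores = np.clip(labels * 0.4 + rng.random(500) * 0.6, 0, 1)  # noisy scores
print("chosen threshold:", tune_threshold(scores, labels))
```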
no code implementations • 1 Jan 2021 • Lang Liu, Mahdi Milani Fard, Sen Zhao
We propose Distribution Embedding Network (DEN) for meta-learning, which is designed for applications where both the distribution and the number of features could vary across tasks.
no code implementations • ICML 2020 • Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta
We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.
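One way to read this: fit a monotone link from surrogate values to the metric, then optimize the induced weighted surrogate. A crude sketch under the assumption that the link is linear with nonnegative weights (the paper fits the monotone function adaptively during training; `S` and `metric` here are synthetic):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic setup: k easy-to-optimize surrogates evaluated on m candidate
# models, plus each model's black-box metric value (simulated here as a
# noisy monotone function of the surrogates).
m, k = 50, 3
S = rng.random((m, k))                        # surrogate values per model
metric = S @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.standard_normal(m)

# A nonnegative linear fit is one simple monotone link from surrogates
# to metric; nonnegative weights guarantee monotonicity in each surrogate.
w, residual = nnls(S, metric)
print("learned nonnegative weights:", w)

# Training would then minimize the weighted surrogate sum w @ s(model),
# which is easy to optimize even though the metric itself is not.
```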
no code implementations • ICML 2018 • Andrew Cotter, Mahdi Milani Fard, Seungil You, Maya Gupta, Jeff Bilmes
We introduce the problem of grouping a finite ground set into blocks, where each block is a subset of the ground set and where: (i) each block is individually highly valued by a submodular function (both robustly and in the average case) while satisfying block-specific matroid constraints; and (ii) the block scores interact, rewarding sets of blocks that score highly jointly, which drives the blocks to be mutually non-redundant.
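For a single block, valuing a subset with a monotone submodular function under the simplest matroid (a cardinality constraint) admits the classic greedy algorithm; a minimal sketch of that one ingredient (the coverage function and budget are illustrative, not the paper's full block-partitioning method):

```python
def greedy_submodular(ground, f, budget):
    """Greedy maximization of a monotone submodular f under a
    cardinality constraint (the simplest matroid constraint)."""
    selected = []
    for _ in range(budget):
        gain, best = max((f(selected + [e]) - f(selected), e)
                         for e in ground if e not in selected)
        if gain <= 0:
            break
        selected.append(best)
    return selected

# Coverage (monotone submodular): a block's value is the number of
# distinct items covered by its members' sets.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
def coverage(block):
    return len(set().union(*(sets[e] for e in block))) if block else 0

print(greedy_submodular(list(sets), coverage, budget=2))  # -> [2, 0], covering all six items
```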
no code implementations • 28 Jun 2018 • Maya Gupta, Andrew Cotter, Mahdi Milani Fard, Serena Wang
We consider the problem of improving fairness when one lacks access to a dataset labeled with protected groups, which makes it difficult to use strategies that improve fairness but require protected-group labels at training or inference time.
no code implementations • ICLR 2019 • Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta
Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.
no code implementations • NeurIPS 2016 • Mahdi Milani Fard, Kevin Canini, Andrew Cotter, Jan Pfeifer, Maya Gupta
For many machine learning problems, some inputs are known to be positively (or negatively) related to the output; in such cases, training the model to respect that monotonic relationship can provide regularization and make the model more interpretable.
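The paper does this at scale with ensembles of monotonic lattices; as a minimal one-feature illustration of a monotonicity constraint, scikit-learn's isotonic regression fits the best nondecreasing function of a single input:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# One input known to be positively related to the output, plus noise.
x = np.sort(rng.random(200))
y = 2 * x + 0.3 * rng.standard_normal(200)

# Fit the best *nondecreasing* function of x: the monotonicity
# constraint acts as a regularizer and the fitted curve is easy to
# interpret. (The paper handles many features with lattice ensembles;
# this is a one-feature illustration only.)
model = IsotonicRegression(increasing=True)
y_hat = model.fit_transform(x, y)
assert np.all(np.diff(y_hat) >= 0)  # predictions never decrease in x
```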
no code implementations • NeurIPS 2016 • Mahdi Milani Fard, Quentin Cormier, Kevin Canini, Maya Gupta
Practical applications of machine learning often involve successive training iterations with changes to features and training examples.
no code implementations • 16 Jan 2014 • Mahdi Milani Fard, Joelle Pineau
Although conventional reinforcement learning methods have proved useful for sequential decision-making, they cannot be applied in their current form to decision support systems, such as those in medical domains, because they suggest policies that are often highly prescriptive and leave little room for the user's input.
no code implementations • 1 Dec 2013 • William L. Hamilton, Mahdi Milani Fard, Joelle Pineau
Predictive state representations (PSRs) offer an expressive framework for modelling partially observable systems.
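For context, the standard PSR construction (textbook definitions, not specific to this paper): the state is a vector of predicted probabilities of a core set of tests given the history h, updated by Bayes' rule after each action-observation pair (a, o):

```latex
p(Q \mid h) = \bigl(p(q_1 \mid h), \dots, p(q_k \mid h)\bigr),
\qquad
p(q_i \mid h a o) = \frac{p(a o q_i \mid h)}{p(a o \mid h)} .
```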
no code implementations • NeurIPS 2013 • Mahdi Milani Fard, Yuri Grinberg, Amir-Massoud Farahmand, Joelle Pineau, Doina Precup
This paper addresses the problem of automatic generation of features for value function approximation in reinforcement learning.
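A minimal sketch of one reading of this line of work: generate features by randomly projecting a sparse, high-dimensional state representation, then estimate the value function with standard LSTD (the dimensions and synthetic transitions below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Project sparse, high-dimensional raw features down to a small random
# feature space, then fit a linear value function with standard LSTD.
d, k, n, gamma = 1000, 20, 500, 0.95
P = rng.standard_normal((d, k)) / np.sqrt(k)    # random projection matrix

X = (rng.random((n, d)) < 0.01).astype(float)   # sparse raw state features
X_next = (rng.random((n, d)) < 0.01).astype(float)
r = rng.random(n)                               # observed rewards

Phi, Phi_next = X @ P, X_next @ P               # projected features
# LSTD: solve Phi^T (Phi - gamma * Phi') w = Phi^T r for the weights w.
A = Phi.T @ (Phi - gamma * Phi_next)
b = Phi.T @ r
w = np.linalg.lstsq(A, b, rcond=None)[0]        # V(s) is approximated by phi(s) @ w
```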
no code implementations • 14 Feb 2012 • Mahdi Milani Fard, Joelle Pineau, Csaba Szepesvari
PAC-Bayesian methods sidestep the standard Bayesian dependence on a well-specified prior by providing bounds that hold regardless of the correctness of the prior distribution.
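For reference, the classical McAllester-style PAC-Bayes bound (a standard result, not the paper's RL-specific one): with probability at least 1 - delta over an i.i.d. sample of size n, simultaneously for all posteriors rho,

```latex
\mathbb{E}_{h \sim \rho}[L(h)]
\;\le\;
\mathbb{E}_{h \sim \rho}[\hat{L}_n(h)]
+ \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
```

where pi is the prior; the bound holds for every rho even when pi is misspecified, which is the precise sense in which the guarantee does not depend on the prior's correctness.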