no code implementations • NeurIPS 2020 • Forest Yang, Mouhamadou Cisse, Oluwasanmi O. Koyejo
In algorithmically fair prediction problems, a standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
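As an illustration of the goal stated above, the snippet below measures a demographic-parity-style fairness gap across overlapping groups. The data, group names, and metric choice are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Hypothetical sketch: groups may overlap, so one individual can belong
# to several groups at once. Data and group names are made up.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])

groups = {
    "group_a": np.array([1, 1, 0, 0, 1, 1, 0, 0], dtype=bool),
    "group_b": np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool),  # overlaps group_a
}

# Demographic-parity-style metric: positive prediction rate per group.
rates = {name: y_pred[mask].mean() for name, mask in groups.items()}
gap = max(rates.values()) - min(rates.values())  # 0 iff the rates are equal
```

The "equality" goal corresponds to driving `gap` to zero for every fairness metric of interest, across all (possibly overlapping) group masks simultaneously.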
no code implementations • NeurIPS 2019 • Gaurush Hiranandani, Shant Boodaghians, Ruta Mehta, Oluwasanmi O. Koyejo
Metric Elicitation is a principled framework for selecting the performance metric that best reflects implicit user preferences.
no code implementations • ICLR 2018 • Cong Xie, Oluwasanmi O. Koyejo, Indranil Gupta
Distributed training of deep learning is widely conducted with large neural networks and large datasets.
[Note: the sentence above is an abstract excerpt; a cleaner reading is "Distributed training of deep learning models is widely conducted with large neural networks and large datasets."]
no code implementations • NeurIPS 2016 • Been Kim, Rajiv Khanna, Oluwasanmi O. Koyejo
Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions.
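One common way to build example-based explanations is to select "prototype" points whose empirical distribution matches the data, for instance via a kernel two-sample statistic. The greedy selection below is a hedged sketch of that general idea, not the paper's exact algorithm:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_prototypes(X, k, gamma=1.0):
    """Greedily pick k rows of X whose distribution matches X (illustrative)."""
    K = rbf_kernel(X, X, gamma)
    n = len(X)
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(n):
            if j in chosen:
                continue
            S = chosen + [j]
            # Maximizing this score minimizes the squared MMD between the
            # data and the prototype set, up to a constant in S.
            score = 2 * K[S].mean() - K[np.ix_(S, S)].mean()
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen
```

Selected indices can then be shown to a user as representative examples of the distribution being explained.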
no code implementations • NeurIPS 2016 • Timothy Rubin, Oluwasanmi O. Koyejo, Michael N. Jones, Tal Yarkoni
This paper presents Generalized Correspondence-LDA (GC-LDA), a generalization of the Correspondence-LDA model that allows for variable spatial representations to be associated with topics, and increased flexibility in terms of the strength of the correspondence between data types induced by the model.
no code implementations • NeurIPS 2015 • Oluwasanmi O. Koyejo, Nagarajan Natarajan, Pradeep K. Ravikumar, Inderjit S. Dhillon
In particular, we show that for multilabel metrics constructed as instance-, micro- and macro-averages, the population optimal classifier can be decomposed into binary classifiers based on the marginal instance-conditional distribution of each label, with a weak association between labels via the threshold.
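The decomposition described above suggests a simple plug-in predictor: estimate each label's marginal instance-conditional probability, then apply a shared threshold. The function below is a minimal sketch of that structure, assuming probability estimates are already available; it is not the paper's exact algorithm:

```python
import numpy as np

def plugin_multilabel_predict(prob_matrix, threshold=0.5):
    """Threshold per-label marginal probability estimates.

    prob_matrix[i, l] ~ estimated P(y_l = 1 | x_i); the single shared
    threshold is the only coupling between labels ("weak association").
    """
    return (np.asarray(prob_matrix) >= threshold).astype(int)

probs = np.array([[0.9, 0.2, 0.6],
                  [0.1, 0.7, 0.4]])
preds = plugin_multilabel_predict(probs, threshold=0.5)
# preds: [[1, 0, 1], [0, 1, 0]]
```

For instance-, micro-, or macro-averaged metrics, only the threshold value would change; each label's classifier still depends solely on that label's marginal distribution.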
no code implementations • NeurIPS 2014 • Oluwasanmi O. Koyejo, Rajiv Khanna, Joydeep Ghosh, Russell Poldrack
In cases where this projection is intractable, we propose a family of parameterized approximations indexed by subsets of the domain.
no code implementations • NeurIPS 2014 • Anqi Wu, Mijung Park, Oluwasanmi O. Koyejo, Jonathan W. Pillow
Classical sparse regression methods, such as the lasso and automatic relevance determination (ARD), model parameters as independent a priori, and therefore do not exploit such dependencies.
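The "independent a priori" point above can be made concrete: the lasso penalty is a sum of per-coordinate terms (the negative log of a factorized Laplace prior), so it encodes no dependence between weights. A minimal numeric sketch, with made-up values:

```python
import numpy as np

# Illustrative weights and regularization strength (arbitrary values).
w = np.array([0.5, -0.3, 0.0, 1.2])
lam = 0.1

per_coordinate = lam * np.abs(w)      # each term involves only one weight
lasso_penalty = per_coordinate.sum()  # 0.1 * (0.5 + 0.3 + 0.0 + 1.2) = 0.2
```

A structured prior, by contrast, would couple the weights (e.g., through a covariance over coefficients), which is exactly the kind of dependency the excerpt says lasso and ARD cannot exploit.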
no code implementations • NeurIPS 2014 • Oluwasanmi O. Koyejo, Nagarajan Natarajan, Pradeep K. Ravikumar, Inderjit S. Dhillon
We consider a fairly large family of performance metrics given by ratios of linear combinations of the four fundamental population quantities.
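The family described above can be written down directly: a metric is a ratio of two affine combinations of the four fundamental quantities (TP, FP, TN, FN, here as population fractions). The helper below is a sketch of that parameterization; the coefficient encoding is an assumption for illustration:

```python
def linear_fractional_metric(tp, fp, tn, fn, num_coefs, den_coefs):
    """Metric = (a0 + a1*TP + a2*FP + a3*TN + a4*FN) / (b0 + b1*TP + ...)."""
    q = (1.0, tp, fp, tn, fn)
    num = sum(a * x for a, x in zip(num_coefs, q))
    den = sum(b * x for b, x in zip(den_coefs, q))
    return num / den

# F1 = 2*TP / (2*TP + FP + FN) is one member of this family.
f1 = linear_fractional_metric(0.4, 0.1, 0.4, 0.1,
                              num_coefs=(0, 2, 0, 0, 0),
                              den_coefs=(0, 2, 1, 0, 1))
# f1 = 0.8 / 1.0 = 0.8
```

Precision (TP / (TP + FP)), recall, Jaccard, and accuracy can all be recovered by choosing different numerator and denominator coefficients.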