no code implementations • 28 Jul 2023 • Pablo Robles-Granda, Katherine Tsai, Oluwasanmi Koyejo
Probabilistic generative models of graphs are important tools that enable the representation and sampling of graphs.
no code implementations • 2 Jun 2023 • Zachary Robertson, Oluwasanmi Koyejo
The mechanism's novelty lies in using pairwise comparisons to elicit information from the bidder, which is arguably easier for humans than assigning a numerical value.
no code implementations • 2 Jun 2023 • Zachary Robertson, Oluwasanmi Koyejo
Feedback Alignment (FA) methods are biologically inspired local learning rules for training neural networks with reduced communication between layers.
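As a rough illustration of the core idea, and not this paper's specific analysis, the sketch below trains a two-layer linear network on a toy regression task while propagating the error through a fixed random feedback matrix `B` instead of the transpose of the output weights; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data.
X = rng.normal(size=(256, 10))
y = X @ rng.normal(size=(10, 1)) + 0.1 * rng.normal(size=(256, 1))

# Two-layer linear network: h = X W1, pred = h W2.
W1 = rng.normal(scale=0.1, size=(10, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))

# Fixed random feedback matrix; it replaces W2.T in the backward pass.
B = rng.normal(scale=0.1, size=(1, 32))

lr = 1e-3
for step in range(500):
    h = X @ W1                    # forward pass (linear hidden layer)
    pred = h @ W2
    err = pred - y                # gradient of squared error w.r.t. pred (up to a constant)

    grad_W2 = h.T @ err           # output layer uses the usual gradient
    delta_h = err @ B             # feedback alignment: random B instead of W2.T
    grad_W1 = X.T @ delta_h

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final MSE:", float(np.mean((X @ W1 @ W2 - y) ** 2)))
```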
1 code implementation • 24 Mar 2023 • Rylan Schaeffer, Mikail Khona, Zachary Robertson, Akhilan Boopathy, Kateryna Pistunova, Jason W. Rocks, Ila Rani Fiete, Oluwasanmi Koyejo
Double descent is a surprising phenomenon in machine learning, in which, as the number of model parameters grows relative to the number of data points, test error drops as models grow ever larger into the highly overparameterized (data-undersampled) regime.
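A standard toy demonstration of this curve, not the paper's experiments, fits minimum-norm least squares on random ReLU features of increasing width; test error typically peaks near the interpolation threshold (here, 50 training points) and then falls again. All settings are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def test_error(num_features, n_train=50, n_test=500, dim=20, noise=0.5):
    """Min-norm least squares on random ReLU features; returns test MSE."""
    w_true = rng.normal(size=dim)
    X_tr = rng.normal(size=(n_train, dim))
    X_te = rng.normal(size=(n_test, dim))
    y_tr = X_tr @ w_true + noise * rng.normal(size=n_train)
    y_te = X_te @ w_true

    P = rng.normal(size=(dim, num_features))          # random feature map of varying width
    F_tr, F_te = np.maximum(X_tr @ P, 0), np.maximum(X_te @ P, 0)

    beta = np.linalg.pinv(F_tr) @ y_tr                # minimum-norm interpolating solution
    return float(np.mean((F_te @ beta - y_te) ** 2))

for p in [5, 20, 45, 50, 55, 100, 400]:
    errs = [test_error(p) for _ in range(20)]
    print(f"features={p:4d}  mean test MSE={np.mean(errs):.2f}")
```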
no code implementations • 21 Dec 2022 • Olawale Salaudeen, Oluwasanmi Koyejo
We propose a Target Conditioned Representation Independence (TCRI) objective for domain generalization.
no code implementations • 7 Dec 2022 • Safinah Ali, Sohini Upadhyay, Gaurush Hiranandani, Elena L. Glassman, Oluwasanmi Koyejo
Specifically, we create a web-based metric elicitation (ME) interface and conduct a user study that elicits users' preferred metrics in a binary classification setting.
no code implementations • 1 Nov 2022 • Maohao Shen, Bowen Jiang, Jacky Yibo Zhang, Oluwasanmi Koyejo
Active learning enables efficient model training by leveraging interactions between machine learning agents and human annotators.
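As context rather than a rendering of this paper's method, the sketch below runs generic pool-based uncertainty sampling: a logistic model is refit after each query, and the "annotator" is simply the hidden label array. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binary classification pool; hidden labels stand in for the annotator.
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=1000) > 0).astype(float)

def fit_logreg(X, y, steps=500, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

labeled = list(rng.choice(len(X), size=10, replace=False))   # small seed set
unlabeled = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    w = fit_logreg(X[labeled], y[labeled])
    probs = 1.0 / (1.0 + np.exp(-(X[unlabeled] @ w)))
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]   # most uncertain pool point
    labeled.append(query)                                    # "annotator" reveals y[query]
    unlabeled.remove(query)
    acc = np.mean(((X @ w) > 0) == y)
    print(f"round {round_}: labels used={len(labeled) - 1}, pool accuracy={acc:.3f}")
```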
no code implementations • 11 Oct 2022 • Peiye Zhuang, Liqian Ma, Oluwasanmi Koyejo, Alexander G. Schwing
Recent work on 3D-aware image synthesis has achieved compelling results using advances in neural rendering.
no code implementations • 11 Oct 2022 • Peiye Zhuang, Bliss Chapman, Ran Li, Oluwasanmi Koyejo
We propose synthetic power analyses, a framework for estimating statistical power at various sample sizes, and empirically explore its performance for sample size selection in cognitive neuroscience experiments.
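For orientation only, the sketch below runs a generic simulation-based power analysis with a two-sample t-test, estimating power at several sample sizes by counting rejections over simulated experiments; it does not use the paper's synthetic-data framework, and the effect size and alpha are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def simulated_power(n, effect=0.5, alpha=0.05, n_sims=2000):
    """Fraction of simulated experiments where a two-sample t-test rejects H0."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(loc=0.0, size=n)
        b = rng.normal(loc=effect, size=n)
        rejections += stats.ttest_ind(a, b).pvalue < alpha
    return rejections / n_sims

for n in [10, 20, 40, 80, 160]:
    print(f"n per group = {n:3d}  estimated power = {simulated_power(n):.2f}")
```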
1 code implementation • Algorithms 2022 • Cong Xie, Oluwasanmi Koyejo, Indranil Gupta
Distributed machine learning is primarily motivated by the promise of increased computational power for accelerating training and by the potential to mitigate privacy concerns.
1 code implementation • 31 May 2022 • Ibrahim Alabdulmohsin, Jessica Schrouff, Oluwasanmi Koyejo
We propose a novel reduction-to-binary (R2B) approach that enforces demographic parity for multiclass classification with non-binary sensitive attributes via a reduction to a sequence of binary debiasing tasks.
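The sketch below only illustrates the demographic parity criterion being enforced, i.e., the gap in positive-prediction rates across a non-binary sensitive attribute; it is not the R2B reduction itself, and the data and group labels are made up.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate across sensitive groups."""
    rates = {g: float(np.mean(y_pred[groups == g])) for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

rng = np.random.default_rng(4)
groups = rng.choice(["a", "b", "c"], size=1000)           # non-binary sensitive attribute
scores = rng.uniform(size=1000) + 0.1 * (groups == "a")   # group "a" is systematically favored
y_pred = (scores > 0.5).astype(int)

gap, rates = demographic_parity_gap(y_pred, groups)
print("per-group positive rates:", rates, " gap:", round(gap, 3))
```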
no code implementations • 3 Feb 2022 • Xiaojun Xu, Jacky Yibo Zhang, Evelyn Ma, Danny Son, Oluwasanmi Koyejo, Bo Li
We propose a general theoretical framework proving that factors involving regularization of the model's function class are sufficient conditions for relative domain transferability.
no code implementations • 2 Feb 2022 • Jessica Schrouff, Natalie Harris, Oluwasanmi Koyejo, Ibrahim Alabdulmohsin, Eva Schnider, Krista Opsahl-Ong, Alex Brown, Subhrajit Roy, Diana Mincu, Christina Chen, Awa Dieng, YuAn Liu, Vivek Natarajan, Alan Karthikesalingam, Katherine Heller, Silvia Chiappa, Alexander D'Amour
Diagnosing and mitigating changes in model fairness under distribution shift is an important component of the safe deployment of machine learning in healthcare settings.
1 code implementation • 19 Oct 2021 • Katherine Tsai, Oluwasanmi Koyejo, Mladen Kolar
Graphs from complex systems often share a partial underlying structure across domains while retaining individual features.
no code implementations • 6 Oct 2021 • Raj Kiriti Velicheti, Derek Xia, Oluwasanmi Koyejo
Designing federated learning systems that jointly preserve Byzantine robustness and privacy has remained an open problem.
1 code implementation • 18 Feb 2021 • Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Mahdi Milani Fard, Oluwasanmi Koyejo
We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix.
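As a much simpler stand-in for the paper's approach, the sketch below treats the metric as an opaque function of the confusion matrix and tunes a decision threshold purely by querying it (F1 plays the role of the unknown metric here); everything is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Validation labels and scores from some fixed probabilistic classifier.
y_true = rng.integers(0, 2, size=2000)
scores = np.clip(0.25 + 0.5 * y_true + 0.25 * rng.normal(size=2000), 0, 1)

def confusion(y_true, y_pred):
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    return tp, fp, fn, tn

def black_box_metric(tp, fp, fn, tn):
    # Stand-in for an unknown user-defined metric; F1 is used for concreteness.
    return 2 * tp / max(2 * tp + fp + fn, 1)

# Tune the decision threshold purely through queries to the black-box metric.
best_value, best_t = max(
    (black_box_metric(*confusion(y_true, (scores >= t).astype(int))), t)
    for t in np.linspace(0.05, 0.95, 19)
)
print(f"best metric value {best_value:.3f} at threshold {best_t:.2f}")
```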
2 code implementations • ICLR 2021 • Peiye Zhuang, Oluwasanmi Koyejo, Alexander G. Schwing
Controllable semantic image editing enables a user to change entire image attributes with a few clicks, e.g., gradually making a summer scene look like it was taken in winter.
no code implementations • 11 Nov 2020 • Katherine Tsai, Mladen Kolar, Oluwasanmi Koyejo
We prove a linear convergence rate up to a nontrivial statistical error for the proposed descent scheme and establish sample complexity guarantees for the estimator.
1 code implementation • 3 Nov 2020 • Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Oluwasanmi Koyejo
Metric elicitation is a recent framework for eliciting classification performance metrics that best reflect implicit user preferences based on the task and context.
no code implementations • NeurIPS 2020 • Cong Xie, Shuai Zheng, Oluwasanmi Koyejo, Indranil Gupta, Mu Li, Haibin Lin
The scalability of Distributed Stochastic Gradient Descent (SGD) is currently limited by communication bottlenecks.
1 code implementation • 1 Jul 2020 • Jacky Y. Zhang, Rajiv Khanna, Anastasios Kyrillidis, Oluwasanmi Koyejo
Bayesian coresets have emerged as a promising approach for implementing scalable Bayesian inference.
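As a minimal sketch of the coreset idea, and not this paper's construction, the code below approximates a full-data Gaussian log-likelihood with a small uniformly subsampled, reweighted subset; real Bayesian coresets choose points and weights by optimization rather than uniform sampling.

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)

def log_lik(theta, x, weights=None):
    """Weighted Gaussian log-likelihood of mean `theta` (unit variance, constants dropped)."""
    ll = -0.5 * (x[None, :] - theta[:, None]) ** 2
    w = np.ones_like(x) if weights is None else weights
    return ll @ w

# Crude coreset: uniform subsample with weights that preserve the total data count.
m = 100
idx = rng.choice(len(data), size=m, replace=False)
core, core_w = data[idx], np.full(m, len(data) / m)

thetas = np.linspace(1.5, 2.5, 5)
print("theta   full log-lik   coreset log-lik")
for t, full, approx in zip(thetas, log_lik(thetas, data), log_lik(thetas, core, core_w)):
    print(f"{t:.2f}  {full:14.1f}  {approx:15.1f}")
```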
2 code implementations • 25 Jun 2020 • Kaizhao Liang, Jacky Y. Zhang, Boxin Wang, Zhuolin Yang, Oluwasanmi Koyejo, Bo Li
Knowledge transferability, or transfer learning, has been widely adopted to allow a model pre-trained on a source domain to be effectively adapted to downstream tasks in a target domain.
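The sketch below shows only the generic transfer-learning recipe the sentence refers to: freeze a pre-trained feature extractor and train a new head on the target task. The "backbone" here is just a fixed random feature map, so this is purely illustrative and unrelated to the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a frozen pre-trained backbone: a fixed random ReLU feature map.
W_backbone = rng.normal(size=(20, 64))
def backbone(x):
    return np.maximum(x @ W_backbone, 0)      # frozen; never updated below

# Target-domain task with few labels.
X = rng.normal(size=(200, 20))
y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(float)

# Train only a new linear head on top of the frozen features.
feats = backbone(X)
w_head = np.zeros(feats.shape[1])
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))
    w_head -= 0.05 * feats.T @ (p - y) / len(y)

acc = np.mean(((feats @ w_head) > 0) == y)
print(f"target-task training accuracy with a frozen backbone: {acc:.2f}")
```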
no code implementations • NeurIPS 2020 • Gaurush Hiranandani, Harikrishna Narasimhan, Oluwasanmi Koyejo
What is a fair performance metric?
no code implementations • ICLR 2020 • Haroun Habeeb, Oluwasanmi Koyejo
We propose the Fixed Grouping Layer (FGL), a novel feedforward layer designed to incorporate the inductive bias of structured smoothness into a deep learning model.
no code implementations • 28 Jan 2020 • Amar Budhiraja, Gaurush Hiranandani, Darshak Chhatbar, Aditya Sinha, Navya Yarrabelly, Ayush Choure, Oluwasanmi Koyejo, Prateek Jain
In this paper, we study a recommendation problem in which the users and items to be recommended are rich data structures with multiple entity types and with multiple sources of side information in the form of graphs.
1 code implementation • 22 Jan 2020 • Zengjie Song, Oluwasanmi Koyejo, Jiangshe Zhang
By exploring the real-valued space of the soft target representation, we are able to synthesize novel images with the designated properties.
no code implementations • 25 Dec 2019 • Zengjie Song, Oluwasanmi Koyejo, Jiangshe Zhang
By exploiting the real-valued space of the soft target representations, we are able to synthesize novel images with the designated properties.
1 code implementation • 20 Nov 2019 • Cong Xie, Oluwasanmi Koyejo, Indranil Gupta, Haibin Lin
When scaling distributed training, the communication overhead is often the bottleneck.
no code implementations • NeurIPS 2019 • Jacky Y. Zhang, Rajiv Khanna, Anastasios Kyrillidis, Oluwasanmi Koyejo
Iterative hard thresholding (IHT) is a projected gradient descent algorithm known to achieve state-of-the-art performance for a wide range of structured estimation problems, such as sparse inference.
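A minimal sketch of the classic IHT update for sparse linear regression (not the particular variants analyzed in the paper): take a gradient step on the least-squares loss, then project back onto the set of k-sparse vectors by keeping the k largest-magnitude entries.

```python
import numpy as np

rng = np.random.default_rng(8)

n, d, k = 200, 400, 8
support = rng.choice(d, size=k, replace=False)
w_true = np.zeros(d)
w_true[support] = rng.choice([-1.0, 1.0], size=k) * (1 + rng.uniform(size=k))
X = rng.normal(size=(n, d)) / np.sqrt(n)        # columns roughly unit norm
y = X @ w_true + 0.01 * rng.normal(size=n)

def hard_threshold(w, k):
    """Projection onto k-sparse vectors: keep the k largest-magnitude entries."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out

step = 0.9 / np.linalg.norm(X, 2) ** 2          # conservative step size for the full gradient
w = np.zeros(d)
for _ in range(300):
    grad = X.T @ (X @ w - y)                    # gradient of 0.5 * ||y - Xw||^2
    w = hard_threshold(w - step * grad, k)      # gradient step, then project

print("support recovered:", set(np.nonzero(w)[0]) == set(support))
print("relative error: %.3f" % (np.linalg.norm(w - w_true) / np.linalg.norm(w_true)))
```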
1 code implementation • 12 Sep 2019 • Sen Na, Mladen Kolar, Oluwasanmi Koyejo
Differential graphical models are designed to represent the difference between the conditional dependence structures of two groups, thus are of particular interest for scientific investigation.
no code implementations • 24 Aug 2019 • Xiaoyan Wang, Ran Li, Bowei Yan, Oluwasanmi Koyejo
We propose a framework for constructing and analyzing multiclass and multioutput classification metrics, i.e., involving multiple, possibly correlated multiclass labels.
3 code implementations • 22 Jul 2019 • Shalmali Joshi, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, Joydeep Ghosh
We then provide a mechanism to generate the smallest set of changes that will improve an individual's outcome.
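The paper searches for realistic changes via a generative model; as a far simpler stand-in, the sketch below computes, for a fixed linear scoring model, the smallest single-feature change that flips an individual's outcome. The weights, bias, and feature vector are invented for illustration.

```python
import numpy as np

# A fixed linear scoring model: favourable outcome when w @ x + b > 0.
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -1.0
x = np.array([0.2, 0.6, 0.1, 0.3])           # an individual with an unfavourable outcome

def single_feature_recourse(x, w, b, margin=1e-3):
    """For each feature, the smallest change to that feature alone that flips
    the decision; returns the cheapest option as (|change|, feature index, change)."""
    score = w @ x + b
    options = []
    for j, wj in enumerate(w):
        if wj == 0:
            continue
        delta = (-score + margin) / wj       # solve w @ (x + delta * e_j) + b = margin
        options.append((abs(delta), j, delta))
    return min(options)

cost, feature, delta = single_feature_recourse(x, w, b)
print(f"change feature {feature} by {delta:+.2f} (|change| = {cost:.2f}) to flip the outcome")
```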
no code implementations • 8 Jun 2019 • Sinong Geng, Minhao Yan, Mladen Kolar, Oluwasanmi Koyejo
We propose a partially linear additive Gaussian graphical model (PLA-GGM) for the estimation of associations between random variables distorted by observed confounders.
no code implementations • 31 Oct 2018 • Gaurush Hiranandani, Raghav Somani, Oluwasanmi Koyejo, Sreangsu Acharyya
This non-linear transformation of the rating scale shatters the low-rank structure of the rating matrix, thereby resulting in a poor fit and, consequently, poor recommendations.
no code implementations • 23 Oct 2018 • Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo
Research in both machine learning and psychology suggests that salient examples can help humans to interpret learning models.
no code implementations • 16 Oct 2018 • Sinong Geng, Mladen Kolar, Oluwasanmi Koyejo
Empirical results are presented using simulated and real brain imaging data, which suggest that our approach improves precision matrix estimation, as compared to baselines, when confounding is present.
no code implementations • 22 Jun 2018 • Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh
This work proposes xGEMs, or manifold-guided exemplars, a framework to understand black-box classifier behavior by exploring the landscape of the underlying data manifold as data points cross decision boundaries.
no code implementations • 5 Jun 2018 • Gaurush Hiranandani, Shant Boodaghians, Ruta Mehta, Oluwasanmi Koyejo
Given a binary prediction problem, which performance metric should the classifier optimize?
no code implementations • ICML 2018 • Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar
Complex performance measures, beyond the popular measure of accuracy, are increasingly being used in the context of binary classification.
1 code implementation • 25 May 2018 • Cong Xie, Oluwasanmi Koyejo, Indranil Gupta
We present Zeno, a technique to make distributed machine learning, particularly Stochastic Gradient Descent (SGD), tolerant to an arbitrary number of faulty workers.
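Roughly in the spirit of Zeno's score-based filtering, though not the exact algorithm or its assumptions, the sketch below has the server rank candidate gradients by how much a small trial step with each would reduce a validation loss (minus a magnitude penalty), and average only the top-scoring ones. The model, loss, and all constants are toy choices.

```python
import numpy as np

rng = np.random.default_rng(9)

# A toy least-squares model the server can evaluate on its own validation batch.
X_val = rng.normal(size=(64, 10))
y_val = X_val @ np.ones(10)

def val_loss(w):
    return 0.5 * np.mean((X_val @ w - y_val) ** 2)

def honest_grad(w, n=32):
    Xb = rng.normal(size=(n, 10))
    yb = Xb @ np.ones(10)
    return Xb.T @ (Xb @ w - yb) / n

w = np.zeros(10)
lr, rho, num_workers, num_faulty = 0.1, 1e-3, 10, 4

for step in range(50):
    # Honest workers send stochastic gradients; faulty workers send garbage.
    grads = [honest_grad(w) for _ in range(num_workers - num_faulty)]
    grads += [rng.normal(scale=10.0, size=10) for _ in range(num_faulty)]

    # Score each candidate by the estimated descent it produces, penalized by its norm.
    scores = [val_loss(w) - val_loss(w - lr * g) - rho * np.sum(g ** 2) for g in grads]
    keep = np.argsort(scores)[-(num_workers - num_faulty):]   # keep the top-scoring gradients
    w -= lr * np.mean([grads[i] for i in keep], axis=0)

print("distance to optimum after filtering:", round(float(np.linalg.norm(w - np.ones(10))), 3))
```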
no code implementations • 23 May 2018 • Cong Xie, Oluwasanmi Koyejo, Indranil Gupta
We propose a novel robust aggregation rule for distributed synchronous Stochastic Gradient Descent (SGD) under a general Byzantine failure model.
no code implementations • 12 Mar 2018 • Cem Subakan, Oluwasanmi Koyejo, Paris Smaragdis
Popular generative model learning methods, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), constrain the latent representation to follow simple distributions such as an isotropic Gaussian.
no code implementations • 27 Feb 2018 • Cong Xie, Oluwasanmi Koyejo, Indranil Gupta
We propose three new robust aggregation rules for distributed synchronous Stochastic Gradient Descent (SGD) under a general Byzantine failure model.
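Coordinate-wise median is one representative rule in this family (not necessarily one of the three proposed here); the sketch below shows how it ignores a minority of arbitrarily bad gradients where a plain mean does not. Worker counts and scales are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)

def coordinate_wise_median(gradients):
    """Aggregate worker gradients by taking the median of each coordinate."""
    return np.median(np.stack(gradients), axis=0)

# 7 honest workers agree on a descent direction; 3 Byzantine workers send garbage.
true_grad = np.array([1.0, -2.0, 0.5])
honest = [true_grad + 0.1 * rng.normal(size=3) for _ in range(7)]
byzantine = [rng.normal(scale=100.0, size=3) for _ in range(3)]

print("mean  :", np.mean(np.stack(honest + byzantine), axis=0).round(2))
print("median:", coordinate_wise_median(honest + byzantine).round(2))
```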
no code implementations • ICLR 2018 • Peiye Zhuang, Alexander G. Schwing, Oluwasanmi Koyejo
Our classification results provide a quantitative evaluation of the quality of the generated images, and also serve as an additional contribution of this manuscript.
1 code implementation • 28 Nov 2017 • Anqi Wu, Oluwasanmi Koyejo, Jonathan W. Pillow
Our approach represents a hierarchical extension of the relevance determination framework, where we add a transformed Gaussian process to model the dependencies between the prior variances of regression weights.
no code implementations • ICML 2017 • Krzysztof Dembczyński, Wojciech Kotłowski, Oluwasanmi Koyejo, Nagarajan Natarajan
Statistical learning theory is at an inflection point enabled by recent advances in understanding and optimizing a wide range of metrics.
no code implementations • NeurIPS 2016 • Suriya Gunasekar, Oluwasanmi Koyejo, Joydeep Ghosh
We propose a novel and efficient algorithm for the collaborative preference completion problem, which involves jointly estimating individualized rankings for a set of entities over a shared set of items, based on a limited number of observed affinity values.
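As background for the setting rather than the paper's ranking-aware method, the sketch below completes a synthetic low-rank affinity matrix from roughly 30% observed entries with plain alternating least squares; the paper instead targets individualized rankings rather than squared-error recovery of the affinity values.

```python
import numpy as np

rng = np.random.default_rng(13)

# Low-rank ground-truth affinity matrix with most entries unobserved.
n_users, n_items, rank = 50, 40, 3
U_true = rng.normal(size=(n_users, rank))
V_true = rng.normal(size=(n_items, rank))
R = U_true @ V_true.T
mask = rng.uniform(size=R.shape) < 0.3          # ~30% of affinities observed

# Alternating least squares on the observed entries (a generic completion baseline).
lam = 0.1
U = rng.normal(scale=0.1, size=(n_users, rank))
V = rng.normal(scale=0.1, size=(n_items, rank))
for _ in range(20):
    for i in range(n_users):
        Vo = V[mask[i]]
        U[i] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(rank), Vo.T @ R[i, mask[i]])
    for j in range(n_items):
        Uo = U[mask[:, j]]
        V[j] = np.linalg.solve(Uo.T @ Uo + lam * np.eye(rank), Uo.T @ R[mask[:, j], j])

rmse = np.sqrt(np.mean((U @ V.T - R)[~mask] ** 2))
print("RMSE on held-out affinities: %.3f" % rmse)
```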
no code implementations • 23 Oct 2016 • Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar
The proposed framework is general, as it applies to both batch and online learning, and to both linear and non-linear models.
no code implementations • 12 Jul 2016 • Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, Oluwasanmi Koyejo
Approximate inference via information projection has recently been introduced as a general-purpose approach for efficient probabilistic inference given sparse variables.
no code implementations • 29 May 2016 • Megasthenis Asteris, Anastasios Kyrillidis, Oluwasanmi Koyejo, Russell Poldrack
Given two sets of variables, derived from a common set of samples, sparse Canonical Correlation Analysis (CCA) seeks linear combinations of a small number of variables in each set, such that the induced canonical variables are maximally correlated.
no code implementations • 14 May 2016 • Avradeep Bhowmik, Joydeep Ghosh, Oluwasanmi Koyejo
We consider a limiting case of generalized linear modeling when the target variables are only known up to permutation, and explore how this relates to permutation testing, a standard technique for assessing statistical dependency.
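To ground the reference, here is a minimal generic permutation test for dependence between two variables (the "standard technique" mentioned), unrelated to the paper's limiting-case analysis; the data and test statistic are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two paired variables with a weak linear relationship.
x = rng.normal(size=100)
y = 0.3 * x + rng.normal(size=100)

def permutation_test(x, y, n_perms=5000):
    """P-value for dependence between x and y, obtained by permuting y."""
    observed = abs(np.corrcoef(x, y)[0, 1])
    null = [abs(np.corrcoef(x, rng.permutation(y))[0, 1]) for _ in range(n_perms)]
    return float(np.mean(np.array(null) >= observed))

print("permutation p-value:", permutation_test(x, y))
```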
1 code implementation • 10 Nov 2015 • James M. Shine, Patrick G. Bissett, Peter T. Bell, Oluwasanmi Koyejo, Joshua H. Balsters, Krzysztof J. Gorgolewski, Craig A. Moodie, Russell A. Poldrack
Higher brain function relies upon the ability to flexibly integrate information across specialized communities of brain regions; however, it is unclear how this mechanism manifests over time.
Subjects: Neurons and Cognition
no code implementations • 7 May 2015 • Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep Ravikumar, Inderjit S. Dhillon
We provide a general theoretical analysis of expected out-of-sample utility, also referred to as decision-theoretic classification, for non-decomposable binary classification metrics such as F-measure and Jaccard coefficient.
1 code implementation • 22 Nov 2014 • Wesley Tansey, Oluwasanmi Koyejo, Russell A. Poldrack, James G. Scott
We also apply the method to a data set from an fMRI experiment on spatial working memory, where it detects patterns that are much more biologically plausible than those detected by standard FDR-controlling methods.
Subjects: Methodology, Applications, Computation
no code implementations • 27 Apr 2014 • Oluwasanmi Koyejo, Cheng Lee, Joydeep Ghosh
Transposable data capture interactions between two sets of entities and are typically represented as a matrix containing the known interaction values.
no code implementations • 26 Sep 2013 • Oluwasanmi Koyejo, Joydeep Ghosh
We present a novel approach for constrained Bayesian inference.