Search Results for author: Michael Jordan

Found 24 papers, 9 papers with code

Classifier Calibration with ROC-Regularized Isotonic Regression

no code implementations21 Nov 2023 Eugene Berta, Francis Bach, Michael Jordan

Isotonic regression (IR) acts as an adaptive binning procedure, which makes it possible to achieve a calibration error of zero but leaves open the question of its effect on performance.

Classifier Calibration · regression
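
A rough illustration of isotonic regression used as a post-hoc calibrator (a minimal sketch with scikit-learn, not the ROC-regularized variant this paper proposes): fit IR on held-out scores and inspect how it rebins predicted probabilities.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary classification problem with a held-out calibration split.
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_cal)[:, 1]

# Isotonic regression maps raw scores to calibrated probabilities; its
# piecewise-constant fit acts like an adaptive binning of the score axis.
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, y_cal)
calibrated = iso.predict(scores)
print(np.unique(calibrated))  # one value per adaptive bin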

Provably Personalized and Robust Federated Learning

1 code implementation14 Jun 2023 Mariel Werner, Lie He, Michael Jordan, Martin Jaggi, Sai Praneeth Karimireddy

Identifying clients with similar objectives and learning a model-per-cluster is an intuitive and interpretable approach to personalization in federated learning.

Clustering · Personalized Federated Learning +1
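
A minimal sketch of the model-per-cluster idea (a generic illustration only, not this paper's algorithm; the array of client updates is a made-up stand-in): group clients whose local updates look similar and keep one model per group.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical flattened model updates, one row per client.
rng = np.random.default_rng(0)
client_updates = rng.normal(size=(20, 100))

# Cluster clients by the similarity of their updates, then keep one
# aggregated model (here simply the mean update) per cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(client_updates)
cluster_models = {k: client_updates[labels == k].mean(axis=0) for k in np.unique(labels)}
print({k: v.shape for k, v in cluster_models.items()})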

Neural Dependencies Emerging from Learning Massive Categories

no code implementations CVPR 2023 Ruili Feng, Kecheng Zheng, Kai Zhu, Yujun Shen, Jian Zhao, Yukun Huang, Deli Zhao, Jingren Zhou, Michael Jordan, Zheng-Jun Zha

By investigating the properties of the problem's solution, we confirm that neural dependency is guaranteed by a redundant logit covariance matrix, a condition easily met given massive categories, and that neural dependency is highly sparse, implying that each category correlates with only a few others.

Image Classification
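
The covariance-based claim can be probed with a small experiment (a hedged sketch on synthetic logits; real ImageNet logits and the paper's exact criterion would be needed for a faithful check): measure the numerical rank of the logit covariance and see how sparsely one category's logit can be reconstructed from the others.

import numpy as np
from sklearn.linear_model import Lasso

# Synthetic stand-in for per-image logits (n_samples x n_categories),
# built low-rank so the covariance matrix is redundant.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5000, 50)) @ rng.normal(size=(50, 200))

cov = np.cov(logits, rowvar=False)
print("numerical rank of logit covariance:", np.linalg.matrix_rank(cov, tol=1e-6))

# Predict category 0's logit from the remaining categories with a sparse model.
target, rest = logits[:, 0], logits[:, 1:]
fit = Lasso(alpha=0.1).fit(rest, target)
print("categories used:", np.count_nonzero(fit.coef_))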

Rank Diminishing in Deep Neural Networks

no code implementations13 Jun 2022 Ruili Feng, Kecheng Zheng, Yukun Huang, Deli Zhao, Michael Jordan, Zheng-Jun Zha

By virtue of our numerical tools, we provide the first empirical analysis of the per-layer behavior of network rank in practical settings, i.e., ResNets, deep MLPs, and Transformers on ImageNet.
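
A rough way to reproduce this kind of per-layer measurement (a sketch assuming torchvision's pretrained ResNet-18 and random inputs; the paper's own numerical tools differ): hook intermediate feature maps for a batch and track the numerical rank of the flattened features layer by layer.

import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
feats = {}

def hook(name):
    def fn(module, inputs, output):
        feats[name] = output.flatten(start_dim=1)  # (batch, features)
    return fn

for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(hook(name))

x = torch.randn(64, 3, 224, 224)  # stand-in for an ImageNet batch
with torch.no_grad():
    model(x)

# Note: the measured rank is capped by the batch size (64) in this small sketch.
for name, f in feats.items():
    print(name, int(torch.linalg.matrix_rank(f, rtol=1e-3)))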

Can Reinforcement Learning Efficiently Find Stackelberg-Nash Equilibria in General-Sum Markov Games?

no code implementations29 Sep 2021 Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael Jordan

To the best of our knowledge, we establish the first provably efficient RL algorithms for solving SNE in general-sum Markov games with leader-controlled state transitions.

Reinforcement Learning (RL)

Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations

no code implementations NeurIPS 2020 Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael Jordan

Reinforcement learning (RL) algorithms combined with modern function approximators such as kernel functions and deep neural networks have achieved significant empirical successes in large-scale application problems with a massive number of states.

reinforcement-learning · Reinforcement Learning (RL)

Towards Accurate Model Selection in Deep Unsupervised Domain Adaptation

2 code implementations ICML 2019 Kaichao You, Ximei Wang, Mingsheng Long, Michael Jordan

Deep unsupervised domain adaptation (Deep UDA) methods successfully leverage rich labeled data in a source domain to boost the performance on related but unlabeled data in a target domain.

Model Selection · Unsupervised Domain Adaptation

SAFFRON: an adaptive algorithm for online control of the false discovery rate

1 code implementation ICML 2018 Aaditya Ramdas, Tijana Zrnic, Martin Wainwright, Michael Jordan

However, unlike older methods, SAFFRON's threshold sequence is based on a novel estimate of the alpha fraction that it allocates to true null hypotheses.
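
For context on the online testing setup (a simplified sketch of naive alpha-spending with a summable sequence, the baseline that adaptive methods such as SAFFRON improve upon, not the SAFFRON update itself): p-values arrive one at a time and each test receives a pre-committed slice of the error budget.

import numpy as np

def alpha_spending(p_values, alpha=0.05):
    """Naive online baseline: test j gets alpha * gamma_j with sum(gamma) <= 1.

    Shown only to fix the online setting; SAFFRON instead adapts its
    threshold sequence using an estimate of the true-null alpha fraction.
    """
    j = np.arange(1, len(p_values) + 1)
    gamma = 6 / (np.pi**2 * j**2)      # summable sequence, sums to ~1
    return p_values <= alpha * gamma   # boolean rejection decisions

rng = np.random.default_rng(0)
print(alpha_spending(rng.uniform(size=10)))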

A Berkeley View of Systems Challenges for AI

no code implementations15 Dec 2017 Ion Stoica, Dawn Song, Raluca Ada Popa, David Patterson, Michael W. Mahoney, Randy Katz, Anthony D. Joseph, Michael Jordan, Joseph M. Hellerstein, Joseph E. Gonzalez, Ken Goldberg, Ali Ghodsi, David Culler, Pieter Abbeel

With the increasing commoditization of computer vision, speech recognition and machine translation systems and the widespread deployment of learning-based back-end technologies such as digital advertising and intelligent infrastructures, AI (Artificial Intelligence) has moved from research labs to production.

Machine Translation · speech-recognition +1

A deep generative model for gene expression profiles from single-cell RNA sequencing

2 code implementations7 Sep 2017 Romain Lopez, Jeffrey Regier, Michael Cole, Michael Jordan, Nir Yosef

We also extend our framework to account for batch effects and other confounding factors, and propose a Bayesian hypothesis test for differential expression that outperforms DESeq2.

Stochastic Optimization · Variational Inference
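
A hedged sketch of how such a model is typically used through the scvi-tools package that grew out of this line of work (API names like setup_anndata and differential_expression reflect recent scvi-tools releases and may differ by version; the file name and AnnData fields are hypothetical).

import scanpy as sc
import scvi

adata = sc.read_h5ad("counts.h5ad")                       # hypothetical single-cell counts
scvi.model.SCVI.setup_anndata(adata, batch_key="batch")   # register batch as a confounder

model = scvi.model.SCVI(adata)
model.train()

latent = model.get_latent_representation()  # batch-corrected latent space
de = model.differential_expression(groupby="cell_type", group1="A", group2="B")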

Saturating Splines and Feature Selection

no code implementations21 Sep 2016 Nicholas Boyd, Trevor Hastie, Stephen Boyd, Benjamin Recht, Michael Jordan

We extend the adaptive regression spline model by incorporating saturation, the natural requirement that a function extend as a constant outside a certain range.

Additive models · feature selection
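
A tiny worked illustration of the saturation requirement itself (a sketch only, not the paper's fitting procedure): a fitted function is held constant at its boundary values outside a chosen range, so its slope is zero there.

import numpy as np

def saturating_eval(f, x, x_lo, x_hi):
    """Evaluate f but hold it constant at its boundary values outside [x_lo, x_hi]."""
    return f(np.clip(x, x_lo, x_hi))

def piecewise_linear(x):
    # Stand-in for a fitted regression spline.
    return 2.0 * x + 0.5

# The function flattens to constants outside [-1, 1].
x = np.linspace(-3, 3, 7)
print(saturating_eval(piecewise_linear, x, -1.0, 1.0))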

Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences

no code implementations NeurIPS 2016 Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael Jordan

Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians.

Open-Ended Question Answering
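
The phenomenon is easy to glimpse empirically (a minimal scikit-learn sketch on a synthetic well-separated spherical mixture; the paper's result concerns the population likelihood, so this is only suggestive): single-initialization EM runs can land on different log-likelihoods depending on the starting point.

import numpy as np
from sklearn.mixture import GaussianMixture

# Three well-separated spherical Gaussians with equal weights.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
X = np.vstack([rng.normal(m, 1.0, size=(500, 2)) for m in means])

# One EM run per seed, random initialization, no restarts: some seeds
# may get stuck at worse local maxima of the likelihood.
for seed in range(5):
    gm = GaussianMixture(n_components=3, covariance_type="spherical",
                         init_params="random", n_init=1, random_state=seed).fit(X)
    print(seed, round(gm.score(X), 3))  # mean log-likelihood at convergence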

High-Dimensional Continuous Control Using Generalized Advantage Estimation

17 code implementations8 Jun 2015 John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel

Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks.

Continuous Control · Policy Gradient Methods +1
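
The estimator itself is compact (a minimal NumPy sketch of generalized advantage estimation for a single trajectory using the standard TD-residual recursion; variable names are illustrative).

import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimates for one trajectory.

    values has length len(rewards) + 1 so the final bootstrap value is included.
    A_t = sum_l (gamma * lam)^l * delta_{t+l},  delta_t = r_t + gamma * V_{t+1} - V_t
    """
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):  # backward recursion over the trajectory
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

print(gae(np.ones(5), np.zeros(6)))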

Nonparametric Link Prediction in Dynamic Networks

no code implementations27 Jun 2012 Purnamrita Sarkar, Deepayan Chakrabarti, Michael Jordan

We propose a non-parametric link prediction algorithm for a sequence of graph snapshots over time.

Link Prediction

Variational Bayesian Inference with Stochastic Search

no code implementations27 Jun 2012 John Paisley, David Blei, Michael Jordan

This requires the ability to integrate a sum of terms in the log joint likelihood using the factorized variational distribution.

Bayesian Inference · Stochastic Optimization +1
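
A toy illustration of the expectation in question (a sketch only, not the paper's control-variate estimator): approximating E_q[log p(x, z)] by sampling from a factorized Gaussian q when the integral has no closed form.

import numpy as np

rng = np.random.default_rng(0)
x = 1.5  # observed datum in a toy model

def log_joint(x, z):
    # Toy non-conjugate model: z ~ N(0, 1), x | z ~ N(tanh(z), 1).
    return -0.5 * z**2 - 0.5 * (x - np.tanh(z))**2

# Factorized (here, one-dimensional) Gaussian variational distribution q(z).
mu, sigma = 0.3, 0.8
z_samples = rng.normal(mu, sigma, size=10_000)

# Monte Carlo estimate of E_q[log p(x, z)], the intractable term in the ELBO.
print(log_joint(x, z_samples).mean())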

Nonparametric Link Prediction in Large Scale Dynamic Networks

no code implementations6 Sep 2011 Purnamrita Sarkar, Deepayan Chakrabarti, Michael Jordan

We propose a nonparametric approach to link prediction in large-scale dynamic networks.

Link Prediction

Learning Semantic Correspondences with Less Supervision

1 code implementation1 Aug 2009 Percy Liang, Michael Jordan, Dan Klein

A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state.

Language Acquisition
