1 code implementation • 6 Oct 2024 • Yang Zhao, Yixin Wang, Mingzhang Yin
In this work, we propose a novel listwise approach named Ordinal Preference Optimization (OPO), which employs the Normalized Discounted Cumulative Gain (NDCG), a widely used ranking metric, to better exploit the relative proximity among multiple ordinal responses.
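For context, NDCG itself is straightforward to compute. A minimal sketch of the standard metric (not the OPO training objective, which makes it differentiable):

```python
import math

def dcg(relevances):
    # Discounted Cumulative Gain: relevance at rank i is discounted by log2(i + 2)
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (descending-relevance) ordering
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: three responses in model-ranked order, with ordinal quality labels 0-3
score = ndcg([3, 1, 2])
```

Swapping the two lower-quality responses costs only a small fraction of the score, which is the "relative proximity" that pairwise objectives discard.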
1 code implementation • 26 Sep 2024 • Ruijiang Gao, Mingzhang Yin, James McInerney, Nathan Kallus
Conformal Prediction methods have finite-sample distribution-free marginal coverage guarantees.
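The marginal coverage guarantee mentioned here comes from the quantile construction of split conformal prediction. A minimal sketch with absolute-residual nonconformity scores (standard split conformal, not this paper's method; the calibration values are hypothetical):

```python
import math

def split_conformal_interval(calib_residuals, alpha, point_prediction):
    # Split conformal prediction: held-out absolute residuals serve as
    # nonconformity scores; the (1 - alpha) empirical quantile with the
    # standard (n + 1) finite-sample correction yields an interval with
    # marginal coverage at least 1 - alpha under exchangeability.
    n = len(calib_residuals)
    scores = sorted(calib_residuals)
    rank = math.ceil((n + 1) * (1 - alpha))  # finite-sample corrected rank
    q = scores[min(rank, n) - 1]
    return (point_prediction - q, point_prediction + q)

# Toy usage: five calibration residuals from a fitted model (made-up values)
lo, hi = split_conformal_interval([0.1, 0.5, 0.3, 0.9, 0.2],
                                  alpha=0.1, point_prediction=2.0)
```

The guarantee is distribution-free because it relies only on exchangeability of the calibration and test scores, not on any model being correct.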
2 code implementations • 5 Apr 2024 • Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang
This achievement redefines the benchmarks for efficiency and effectiveness not only in diffusion distillation but also in the broader field of diffusion-based generation.
Ranked #2 on Image Generation on AFHQ-v2 64x64
no code implementations • 18 Oct 2023 • Mingzhang Yin, Ruijiang Gao, Weiran Lin, Steven M. Shugan
Cross-pollinating machine learning and experimental design, GBS scales to products with hundreds of attributes and can design personalized products for heterogeneous consumers.
no code implementations • 13 Oct 2023 • Ruijiang Gao, Mingzhang Yin
In addition, we propose a personalized deferral collaboration system to leverage the diverse expertise of different human decision-makers.
no code implementations • 12 Aug 2022 • Russell Z. Kunes, Mingzhang Yin, Max Land, Doron Haviv, Dana Pe'er, Simon Tavaré
Gradient estimation is often necessary for fitting generative models with discrete latent variables, in contexts such as reinforcement learning and variational autoencoder (VAE) training.
1 code implementation • 14 Jun 2022 • Zhendong Wang, Ruijiang Gao, Mingzhang Yin, Mingyuan Zhou, David M. Blei
This paper proposes probabilistic conformal prediction (PCP), a predictive inference algorithm that estimates a target variable by a discontinuous predictive set.
no code implementations • 22 Feb 2022 • Wenshuo Guo, Mingzhang Yin, Yixin Wang, Michael I. Jordan
Directly adjusting for these imperfect measurements of the covariates can lead to biased causal estimates.
1 code implementation • 24 Sep 2021 • Mingzhang Yin, Yixin Wang, David M. Blei
This paper presents a new optimization approach to causal estimation.
1 code implementation • 11 Jun 2020 • Mingzhang Yin, Nhat Ho, Bowei Yan, Xiaoning Qian, Mingyuan Zhou
This paper proposes a novel optimization method to solve the exact L0-regularized regression problem, which is also known as the best subset selection.
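The exact problem being solved is easy to state by brute force, even though it is combinatorially hard at scale. A minimal sketch of exhaustive best subset selection (the baseline the paper's optimization method is designed to avoid; toy data only):

```python
import itertools
import numpy as np

def best_subset(X, y, k):
    # Exact L0-constrained least squares: enumerate every size-k support
    # and keep the one with the smallest residual sum of squares.
    # Feasible only for small p, which is why scalable exact methods matter.
    best_rss, best_support, best_coef = np.inf, None, None
    for support in itertools.combinations(range(X.shape[1]), k):
        cols = list(support)
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = np.sum((y - X[:, cols] @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_support, best_coef = rss, support, coef
    return best_support, best_coef

# Toy data: y depends only on columns 0 and 2
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 2.0 * X[:, 0] - 3.0 * X[:, 2]
support, coef = best_subset(X, y, k=2)
```

The enumeration visits C(p, k) supports, so the cost explodes for large p; exact methods that sidestep this search are the point of the paper.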
no code implementations • 21 May 2020 • Siamak Zamani Dadaneh, Shahin Boluki, Mingzhang Yin, Mingyuan Zhou, Xiaoning Qian
Semantic hashing has become a crucial component of fast similarity search in many large-scale information retrieval systems, in particular, for text data.
1 code implementation • 10 Feb 2020 • Yuguang Yue, Yunhao Tang, Mingzhang Yin, Mingyuan Zhou
Reinforcement learning (RL) in discrete action spaces is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension, making it challenging to apply existing on-policy gradient-based deep RL algorithms efficiently.
1 code implementation • ICLR 2020 • Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn
If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes.
no code implementations • 29 May 2019 • Mingzhang Yin, Mingyuan Zhou
To combine explicit and implicit generative models, we introduce semi-implicit generator (SIG) as a flexible hierarchical model that can be trained in the maximum likelihood framework.
1 code implementation • 4 May 2019 • Mingzhang Yin, Yuguang Yue, Mingyuan Zhou
To address the challenge of backpropagating the gradient through categorical variables, we propose the augment-REINFORCE-swap-merge (ARSM) gradient estimator that is unbiased and has low variance.
no code implementations • 13 Mar 2019 • Yunhao Tang, Mingzhang Yin, Mingyuan Zhou
Due to the high variance of policy gradients, on-policy optimization algorithms are plagued with low sample efficiency.
1 code implementation • ICLR 2019 • Mingzhang Yin, Mingyuan Zhou
To backpropagate the gradients through stochastic binary layers, we propose the augment-REINFORCE-merge (ARM) estimator that is unbiased, exhibits low variance, and has low computational complexity.
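For a single Bernoulli variable with logit phi, the ARM identity has a compact Monte Carlo form. A minimal sketch checking it against the analytic gradient (the quadratic f is an arbitrary choice for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(f, phi, n_samples, rng):
    # ARM estimator for d/dphi E_{z ~ Bernoulli(sigmoid(phi))}[f(z)]:
    # each uniform draw u yields an antithetic pair of function evaluations,
    # giving an unbiased, low-variance single-sample gradient.
    u = rng.uniform(size=n_samples)
    f1 = f((u > sigmoid(-phi)).astype(float))
    f2 = f((u < sigmoid(phi)).astype(float))
    return np.mean((f1 - f2) * (u - 0.5))

# Check against the closed-form gradient sigma'(phi) * (f(1) - f(0))
phi = 0.5
f = lambda z: (z - 0.3) ** 2
analytic = sigmoid(phi) * (1 - sigmoid(phi)) * (f(1.0) - f(0.0))
estimate = arm_gradient(f, phi, n_samples=200_000, rng=np.random.default_rng(1))
```

The estimator needs no relaxation of the discrete variable and evaluates f only at the two hard values 0 and 1, which is what keeps it unbiased.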
1 code implementation • ICML 2018 • Mingzhang Yin, Mingyuan Zhou
Semi-implicit variational inference (SIVI) is introduced to expand the commonly used analytic variational distribution family, by mixing the variational parameter with a flexible distribution.
no code implementations • NeurIPS 2017 • Bowei Yan, Mingzhang Yin, Purnamrita Sarkar
In this paper, we study convergence properties of the gradient variant of the Expectation-Maximization algorithm (Lange, 1995) for Gaussian Mixture Models with an arbitrary number of clusters and mixing coefficients.
no code implementations • 23 May 2017 • Bowei Yan, Mingzhang Yin, Purnamrita Sarkar
In this paper, we study convergence properties of the gradient Expectation-Maximization algorithm (Lange, 1995) for Gaussian Mixture Models with a general number of clusters and mixing coefficients.
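Gradient EM replaces the exact M-step with a single gradient ascent step on the EM surrogate Q. A minimal sketch for the simplest case: a 1D two-component mixture with known unit variances and equal weights (a deliberately reduced setting, not the general model analyzed in the paper):

```python
import math
import random

def gradient_em_step(data, mus, lr):
    # One gradient-EM step: the E-step computes responsibilities under the
    # current means, then each mean takes a gradient ascent step on Q,
    # whose gradient w.r.t. mu_k is sum_i gamma_ik * (x_i - mu_k).
    grads = [0.0, 0.0]
    for x in data:
        w = [math.exp(-0.5 * (x - m) ** 2) for m in mus]
        s = w[0] + w[1]
        for k in range(2):
            grads[k] += (w[k] / s) * (x - mus[k])
    return [m + lr * g / len(data) for m, g in zip(mus, grads)]

# Toy data from an equal mixture of N(-2, 1) and N(2, 1)
random.seed(0)
data = ([random.gauss(-2, 1) for _ in range(100)]
        + [random.gauss(2, 1) for _ in range(100)])
mus = [-0.5, 0.5]
for _ in range(300):
    mus = gradient_em_step(data, mus, lr=1.0)
```

With well-separated components the iterates converge to the neighborhood of the true means, which is the local-convergence behavior the paper characterizes.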