
no code implementations • EMNLP 2020 • Zhengjue Wang, Zhibin Duan, Hao Zhang, Chaojie Wang, Long Tian, Bo Chen, Mingyuan Zhou

Abstractive document summarization is a comprehensive task comprising document understanding and summary generation, an area in which Transformer-based models have achieved state-of-the-art performance.

no code implementations • 11 Jan 2023 • Chengzhi Wu, Julius Pfrommer, Mingyuan Zhou, Jürgen Beyerer

We propose a combined generative and contrastive neural architecture for learning latent representations of 3D volumetric shapes.

1 code implementation • 16 Oct 2022 • Yishi Xu, Dongsheng Wang, Bo Chen, Ruiying Lu, Zhibin Duan, Mingyuan Zhou

With the tree-likeness property of hyperbolic space, the underlying semantic hierarchy among words and topics can be better exploited to mine more interpretable topics.

1 code implementation • 12 Oct 2022 • Shentao Yang, Shujian Zhang, Yihao Feng, Mingyuan Zhou

In offline model-based reinforcement learning (offline MBRL), we learn a dynamics model from historically collected data and subsequently use the learned model and the fixed dataset for policy learning, without further interaction with the environment.

no code implementations • 9 Oct 2022 • Dandan Guo, Long Tian, He Zhao, Mingyuan Zhou, Hongyuan Zha

A recent solution to this problem is to calibrate the distribution of these few-sample classes by transferring statistics from base classes with sufficient examples; the key question is how to decide the transfer weights from base classes to novel classes.

1 code implementation • 20 Sep 2022 • Dongsheng Wang, Yishi Xu, Miaoge Li, Zhibin Duan, Chaojie Wang, Bo Chen, Mingyuan Zhou

We propose a Bayesian generative model for incorporating prior domain knowledge into hierarchical topic modeling.

no code implementations • 12 Sep 2022 • Dongsheng Wang, Chaojie Wang, Bo Chen, Mingyuan Zhou

To build recommender systems that not only consider user-item interactions represented as ordinal variables, but also exploit the social network describing the relationships between the users, we develop a hierarchical Bayesian model termed ordinal graph factor analysis (OGFA), which jointly models user-item and user-user interactions.

1 code implementation • 12 Aug 2022 • Zhendong Wang, Jonathan J Hunt, Mingyuan Zhou

In our approach, we learn an action-value function and add an action-value-maximizing term to the training loss of the conditional diffusion model, yielding a loss that seeks optimal actions near the behavior policy.

no code implementations • 5 Aug 2022 • Dandan Guo, Zhuo Li, Meixi Zheng, He Zhao, Mingyuan Zhou, Hongyuan Zha

Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set.

1 code implementation • 15 Jun 2022 • Xizewen Han, Huangjie Zheng, Mingyuan Zhou

In this paper, we introduce classification and regression diffusion (CARD) models, which combine a denoising diffusion-based conditional generative model and a pre-trained conditional mean estimator, to accurately predict the distribution of $\boldsymbol y$ given $\boldsymbol x$.

1 code implementation • 14 Jun 2022 • Zhendong Wang, Ruijiang Gao, Mingzhang Yin, Mingyuan Zhou, David M. Blei

This paper proposes probabilistic conformal prediction (PCP), a predictive inference algorithm that estimates a target variable by a discontinuous predictive set.

1 code implementation • 14 Jun 2022 • Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou

Offline reinforcement learning (RL) extends the paradigm of classical RL algorithms to learning purely from static datasets, without interacting with the underlying environment during the learning process.

2 code implementations • 5 Jun 2022 • Zhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou

Both the observed and generated data are diffused by the same adaptive diffusion process.

Ranked #1 on Image Generation on AFHQ Wild (FID metric)

no code implementations • Findings (NAACL) 2022 • Shujian Zhang, Chengyue Gong, Xingchao Liu, Pengcheng He, Weizhu Chen, Mingyuan Zhou

Active learning, which effectively collects informative unlabeled data for annotation, reduces the demand for labeled data.

2 code implementations • ICLR 2022 • Dongsheng Wang, Dandan Guo, He Zhao, Huangjie Zheng, Korawat Tanwisuth, Bo Chen, Mingyuan Zhou

This paper introduces a new topic-modeling framework where each document is viewed as a set of word embedding vectors and each topic is modeled as an embedding vector in the same embedding space.

1 code implementation • 19 Feb 2022 • Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou

Employing a forward diffusion chain to gradually map the data to a noise distribution, diffusion-based generative models learn how to generate the data by inferring a reverse diffusion chain.
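The forward chain described here can be illustrated with the standard DDPM closed-form forward marginal. A minimal sketch, assuming a linear variance schedule and toy data shapes (illustrative choices, not this paper's exact setup):

```python
import numpy as np

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Variance schedule beta_1..beta_T for the forward diffusion chain."""
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, alpha_bar, rng):
    """Closed-form forward marginal: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

T = 1000
betas = linear_beta_schedule(T)
alpha_bar = np.cumprod(1.0 - betas)   # abar_t = prod_{s<=t} (1 - beta_s)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 8))      # toy "data" batch
x_noisy, eps = q_sample(x0, T - 1, alpha_bar, rng)
# At the end of the chain nearly all signal is destroyed: abar_T is tiny,
# so x_T is approximately pure Gaussian noise -- the reverse chain is then
# trained to undo this corruption step by step.
```

The reverse chain (the generative model) is what the paper's training objective targets; the snippet only shows the fixed forward corruption it inverts.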

no code implementations • 19 Feb 2022 • Shentao Yang, Zhendong Wang, Huangjie Zheng, Yihao Feng, Mingyuan Zhou

For training more effective agents, we propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.

2 code implementations • 14 Feb 2022 • Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou

In this paper, to exploit both global and local dependencies without self-attention, we present Mix-Shift-MLP (MS-MLP) which makes the size of the local receptive field used for mixing increase with respect to the amount of spatial shifting.

1 code implementation • 7 Feb 2022 • Yilin He, Chaojie Wang, Hao Zhang, Bo Chen, Mingyuan Zhou

This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities, each of which contributes to the edges via a logical OR mechanism.

no code implementations • 15 Dec 2021 • Arman Hasanzadeh, Mohammadreza Armandpour, Ehsan Hajiramezanali, Mingyuan Zhou, Nick Duffield, Krishna Narayanan

By learning distributional representations, we provide uncertainty estimates in downstream graph analytics tasks and increase the expressive power of the predictive model.

no code implementations • NeurIPS 2021 • Mohammadreza Armandpour, Ali Sadeghian, Mingyuan Zhou

The splitting function at each node of CPT is based on the logical disjunction of a community of differently weighted probabilistic linear decision-makers, which also geometrically corresponds to a convex polytope in the covariate space.

1 code implementation • NeurIPS 2021 • Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang, Mingyuan Zhou

Existing deep hierarchical topic models are able to extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organize them into a topic hierarchy.

1 code implementation • NeurIPS 2021 • Alek Dimitriev, Mingyuan Zhou

Accurately backpropagating the gradient through categorical variables is a challenging task that arises in various domains, such as training discrete latent variable models.

1 code implementation • NeurIPS 2021 • Shujian Zhang, Xinjie Fan, Huangjie Zheng, Korawat Tanwisuth, Mingyuan Zhou

The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains.

1 code implementation • NeurIPS 2021 • Korawat Tanwisuth, Xinjie Fan, Huangjie Zheng, Shujian Zhang, Hao Zhang, Bo Chen, Mingyuan Zhou

Existing methods for unsupervised domain adaptation often rely on minimizing some statistical distance between the source and target samples in the latent space.

no code implementations • ICLR 2022 • Dandan Guo, Long Tian, Minghe Zhang, Mingyuan Zhou, Hongyuan Zha

Since our plug-and-play framework can be applied to many meta-learning problems, we further instantiate it to the cases of few-shot classification and implicit meta generative modeling.

no code implementations • 29 Sep 2021 • Shujian Zhang, Zhibin Duan, Huangjie Zheng, Pengcheng He, Bo Chen, Weizhu Chen, Mingyuan Zhou

Crossformer with states sharing not only provides the desired cross-layer guidance and regularization but also reduces the memory requirement.

no code implementations • 29 Sep 2021 • Yilin He, Chaojie Wang, Hao Zhang, Bo Chen, Mingyuan Zhou

In this paper, we introduce a relational graph generative process to model how the observed edges are generated by aggregating the node interactions over multiple overlapping node communities, each of which represents a particular type of relation that contributes to the edges via a logical OR mechanism.

no code implementations • 29 Sep 2021 • Shentao Yang, Zhendong Wang, Huangjie Zheng, Mingyuan Zhou

For training more effective agents, we propose a framework that supports learning a flexible and well-regularized policy, which consists of a fully implicit policy and a regularization through the state-action visitation frequency induced by the current policy and that induced by the data-collecting behavior policy.

1 code implementation • ACL 2021 • Zhibin Duan, Hao Zhang, Chaojie Wang, Zhengjue Wang, Bo Chen, Mingyuan Zhou

As a result, the backbone learns the shared knowledge among all clusters while modulated weights extract the cluster-specific features.

1 code implementation • 30 Jun 2021 • Zhibin Duan, Dongsheng Wang, Bo Chen, Chaojie Wang, Wenchao Chen, Yewen Li, Jie Ren, Mingyuan Zhou

However, they often assume in the prior that the topics at each layer are independently drawn from the Dirichlet distribution, ignoring the dependencies between the topics both at the same layer and across different layers.

1 code implementation • NeurIPS 2021 • Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.

no code implementations • 9 Jun 2021 • Shujian Zhang, Xinjie Fan, Bo Chen, Mingyuan Zhou

Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.

no code implementations • CVPR 2021 • Xinjie Fan, Qifei Wang, Junjie Ke, Feng Yang, Boqing Gong, Mingyuan Zhou

As a generic tool, the improvement introduced by ASR-Norm is agnostic to the choice of ADA methods.

1 code implementation • 28 May 2021 • Alek Dimitriev, Mingyuan Zhou

ARMS uses a copula to generate any number of mutually antithetic samples.
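A minimal sketch of drawing K mutually antithetic Uniform(0,1) samples through a Gaussian copula with exchangeable correlation -1/(K-1) (the most negative possible); the centering construction below is one convenient way to realize that copula, not necessarily the paper's exact procedure:

```python
import numpy as np
from math import erf

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

def antithetic_uniforms(K, n, rng):
    """Draw n sets of K mutually antithetic Uniform(0,1) samples.

    Centering iid Gaussians and rescaling yields marginal N(0,1) variables
    with pairwise correlation -1/(K-1); mapping them through the normal CDF
    (a Gaussian copula) gives negatively dependent uniform marginals.
    """
    g = rng.standard_normal((n, K))
    z = (g - g.mean(axis=1, keepdims=True)) * np.sqrt(K / (K - 1))
    return np.vectorize(std_normal_cdf)(z)

rng = np.random.default_rng(0)
u = antithetic_uniforms(K=4, n=50000, rng=rng)
corr = np.corrcoef(u.T)  # off-diagonal entries are strongly negative
```

With K=2 this reduces to the classical antithetic pair (u, 1-u); the copula view is what lets ARMS scale the idea to any number of samples.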

1 code implementation • 10 May 2021 • Dandan Guo, Ruiying Lu, Bo Chen, Zequn Zeng, Mingyuan Zhou

Inspired by recent successes in integrating semantic topics into this task, this paper develops a plug-and-play hierarchical-topic-guided image paragraph generation framework, which couples a visual extractor with a deep topic model to guide the learning of a language model.

no code implementations • 8 May 2021 • Huangjie Zheng, Xu Chen, Jiangchao Yao, Hongxia Yang, Chunyuan Li, Ya Zhang, Hao Zhang, Ivor Tsang, Jingren Zhou, Mingyuan Zhou

We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query not only exert a greater force to attract more distant positive samples but also exert a greater force to repel closer negative samples.

1 code implementation • CVPR 2021 • Mohammadreza Armandpour, Ali Sadeghian, Chunyuan Li, Mingyuan Zhou

We formulate two desired criteria for the space partitioner that aid the training of our mixture of generators: 1) produce connected partitions and 2) provide a proxy for the distance between partitions and data samples, along with a direction for reducing that distance.

Ranked #5 on Image Generation on ImageNet 64x64

1 code implementation • ICLR 2021 • Xinjie Fan, Shujian Zhang, Korawat Tanwisuth, Xiaoning Qian, Mingyuan Zhou

However, the quality of uncertainty estimation is highly dependent on the dropout probabilities.

1 code implementation • ICLR 2022 • Haoang Chi, Feng Liu, Bo Han, Wenjing Yang, Long Lan, Tongliang Liu, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

In this paper, we demystify assumptions behind NCD and find that high-level semantic features should be shared among the seen and unseen classes.

no code implementations • 1 Jan 2021 • Ruiying Lu, Bo Chen, Dandan Guo, Dongsheng Wang, Mingyuan Zhou

Moving beyond conventional Transformers, which ignore longer-range word dependencies and contextualize word representations only at the segment level, the proposed method not only captures the global semantic coherence of all segments and global word co-occurrence patterns, but also enriches the representation of each token by adapting it to its local context, which is not limited to the segment it resides in and can be flexibly defined according to the task.

no code implementations • ICCV 2021 • Yuqi Ding, Yu Ji, Mingyuan Zhou, Sing Bing Kang, Jinwei Ye

Helmholtz stereopsis (HS) exploits the reciprocity principle of light propagation (i.e., the Helmholtz reciprocity) for 3D reconstruction of surfaces with arbitrary reflectance.

1 code implementation • NeurIPS 2021 • Huangjie Zheng, Mingyuan Zhou

The forward CT is the expected cost of moving a source data point to a target one, with their joint distribution defined by the product of the source probability density function (PDF) and a source-dependent conditional distribution, which is related to the target PDF via Bayes' theorem.

no code implementations • 25 Dec 2020 • Chunyuan Li, Xiujun Li, Lei Zhang, Baolin Peng, Mingyuan Zhou, Jianfeng Gao

Self-supervised pre-training (SSP) employs random image transformations to generate training data for visual representation learning.

Ranked #46 on Self-Supervised Image Classification on ImageNet

no code implementations • NeurIPS 2020 • Chaojie Wang, Hao Zhang, Bo Chen, Dongsheng Wang, Zhengjue Wang, Mingyuan Zhou

To analyze a collection of interconnected documents, relational topic models (RTMs) have been developed to describe both the link structure and document content, exploring their underlying relationships via a single-layer latent representation with limited expressive capability.

1 code implementation • NeurIPS 2020 • Wenchao Chen, Chaojie Wang, Bo Chen, Yicheng Liu, Hao Zhang, Mingyuan Zhou

Incorporating the natural document-sentence-word structure into hierarchical Bayesian modeling, we propose convolutional Poisson gamma dynamical systems (PGDS) that introduce not only word-level probabilistic convolutions, but also sentence-level stochastic temporal transitions.

1 code implementation • 31 Oct 2020 • Ali Lotfi Rezaabad, Rahi Kalantari, Sriram Vishwanath, Mingyuan Zhou, Jonathan Tamir

We show that the existing semi-implicit variational inference objective provably reduces information in the observed graph.

1 code implementation • 21 Oct 2020 • Mohammadreza Armandpour, Mingyuan Zhou

The splitting function at each node of CPT is based on the logical disjunction of a community of differently weighted probabilistic linear decision-makers, which also geometrically corresponds to a convex polytope in the covariate space.

1 code implementation • NeurIPS 2020 • Xinjie Fan, Shujian Zhang, Bo Chen, Mingyuan Zhou

Attention modules, as simple and effective tools, have not only enabled deep neural networks to achieve state-of-the-art results in many domains, but also enhanced their interpretability.

no code implementations • 2 Oct 2020 • Quan Zhang, Huangjie Zheng, Mingyuan Zhou

Leveraging well-established MCMC strategies, we propose MCMC-interactive variational inference (MIVI) to not only estimate the posterior in a time constrained manner, but also facilitate the design of MCMC transitions.

no code implementations • 28 Sep 2020 • Huangjie Zheng, Mingyuan Zhou

We propose conditional transport (CT) as a new divergence to measure the difference between two probability distributions.
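The forward CT cost can be sketched on empirical samples. In this toy version the source-dependent conditional over targets is a softmax over negative squared distances, standing in for the learned "navigator" of the actual method (which also combines a backward CT term); everything here is an illustrative assumption:

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def forward_ct(x, y, temperature=1.0):
    """Forward conditional transport between two empirical samples.

    For each source point x_i, a conditional distribution over target points
    y_j is built from a softmax over negative squared distances (a stand-in
    for the learned navigator); the forward CT is the expected cost of
    moving x_i to a target under that conditional, averaged over sources.
    """
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    pi = softmax(-cost / temperature, axis=1)               # conditional pi(y_j | x_i)
    return (pi * cost).sum(axis=1).mean()

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 2))
ct_same = forward_ct(x, x)        # small: conditional mass concentrates on nearby points
ct_far = forward_ct(x, x + 5.0)   # larger: every target is far from every source
```

Unlike optimal transport, no global coupling is solved: each source point routes its mass independently through its conditional, which is what makes CT amenable to mini-batch training.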

no code implementations • 28 Sep 2020 • Dandan Guo, Bo Chen, Wenchao Chen, Chaojie Wang, Hongwei Liu, Mingyuan Zhou

We develop a recurrent gamma belief network (rGBN) for radar automatic target recognition (RATR) based on high-resolution range profile (HRRP), which characterizes the temporal dependence across the range cells of HRRP.

1 code implementation • 25 Jul 2020 • Rahi Kalantari, Mingyuan Zhou

We use the generated random graph, whose number of nonzero-degree nodes is finite, to define both the sparsity pattern and dimension of the latent state transition matrix of a (generalized) linear dynamical system.

3 code implementations • NeurIPS 2020 • Yuguang Yue, Zhendong Wang, Mingyuan Zhou

To improve the sample efficiency of policy-gradient based reinforcement learning algorithms, we propose implicit distributional actor-critic (IDAC) that consists of a distributional critic, built on two deep generator networks (DGNs), and a semi-implicit actor (SIA), powered by a flexible policy distribution.

no code implementations • 15 Jun 2020 • Hao Zhang, Bo Chen, Yulai Cong, Dandan Guo, Hongwei Liu, Mingyuan Zhou

Given a posterior sample of the global parameters, in order to efficiently infer the local latent representations of a document under DATM across all stochastic layers, we propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a Weibull distribution based stochastic downward generative model.

1 code implementation • 11 Jun 2020 • Mingzhang Yin, Nhat Ho, Bowei Yan, Xiaoning Qian, Mingyuan Zhou

This paper proposes a novel optimization method to solve the exact L0-regularized regression problem, which is also known as the best subset selection.
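For small feature counts, the exact L0-constrained problem can be solved by exhaustive enumeration, which makes a useful reference point. A brute-force baseline sketch (not the paper's optimization method, which is designed to scale beyond enumeration):

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, k):
    """Exhaustive best-subset selection: the exact minimizer of least squares
    subject to at most k nonzero coefficients, found by enumerating supports.
    Feasible only for small p; shown here as a ground-truth baseline."""
    n, p = X.shape
    best = (np.inf, None, None)
    for support in combinations(range(p), k):
        Xs = X[:, support]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        if rss < best[0]:
            best = (rss, support, beta)
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
true_support = (1, 4, 6)
y = X[:, true_support] @ np.array([2.0, -3.0, 1.5]) + 0.1 * rng.standard_normal(200)
rss, support, beta = best_subset(X, y, k=3)   # recovers the planted support
```

The combinatorial cost (p choose k supports, each with its own least-squares solve) is exactly what motivates methods that solve the problem without enumeration.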

Methodology

1 code implementation • ICML 2020 • Arman Hasanzadeh, Ehsan Hajiramezanali, Shahin Boluki, Mingyuan Zhou, Nick Duffield, Krishna Narayanan, Xiaoning Qian

We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs) that generalizes existing stochastic regularization methods for training GNNs.

no code implementations • 21 May 2020 • Siamak Zamani Dadaneh, Shahin Boluki, Mingzhang Yin, Mingyuan Zhou, Xiaoning Qian

Semantic hashing has become a crucial component of fast similarity search in many large-scale information retrieval systems, in particular, for text data.

1 code implementation • ICLR 2020 • Liangjian Wen, Yiji Zhou, Lirong He, Mingyuan Zhou, Zenglin Xu

To this end, we propose the Mutual Information Gradient Estimator (MIGE) for representation learning based on the score estimation of implicit distributions.

no code implementations • 12 Feb 2020 • Shahin Boluki, Randy Ardywibowo, Siamak Zamani Dadaneh, Mingyuan Zhou, Xiaoning Qian

In this work, we propose learnable Bernoulli dropout (LBD), a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.

1 code implementation • 10 Feb 2020 • Yuguang Yue, Yunhao Tang, Mingzhang Yin, Mingyuan Zhou

Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension, making it challenging to apply existing on-policy gradient based deep RL algorithms efficiently.

1 code implementation • ICLR 2020 • Xinjie Fan, Yizhe Zhang, Zhendong Wang, Mingyuan Zhou

To stabilize this method, we adapt a policy gradient estimator to the contextual generation of categorical sequences; the estimator evaluates a set of correlated Monte Carlo (MC) rollouts for variance control.

1 code implementation • ICML 2020 • Dandan Guo, Bo Chen, Ruiying Lu, Mingyuan Zhou

To simultaneously capture syntax and global semantics from a text corpus, we propose a new larger-context recurrent neural network (RNN) based language model, which extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation.

1 code implementation • ICLR 2020 • Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn

If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes.

no code implementations • 2 Nov 2019 • Quan Zhang, Qiang Gao, Mingfeng Lin, Mingyuan Zhou

Specifically, we study time to death of three types of lymphoma and show the potential of WDR in modeling nonlinear covariate effects and discovering new diseases.

Survival Analysis • Methodology

no code implementations • 1 Nov 2019 • Siamak Zamani Dadaneh, Shahin Boluki, Mingyuan Zhou, Xiaoning Qian

Learning-to-rank methods can generally be categorized into pointwise, pairwise, and listwise approaches.

1 code implementation • ICML 2020 • Zhendong Wang, Mingyuan Zhou

Variational inference is used to approximate the posterior of the local variable, and semi-implicit structure is further introduced to enhance its expressiveness.

1 code implementation • NeurIPS 2019 • Aaron Schein, Scott W. Linderman, Mingyuan Zhou, David M. Blei, Hanna Wallach

This paper presents the Poisson-randomized gamma dynamical system (PRGDS), a model for sequentially observed count tensors that encodes a strong inductive bias toward sparsity and burstiness.

no code implementations • 28 Oct 2019 • Ehsan Hajiramezanali, Arman Hasanzadeh, Nick Duffield, Krishna Narayanan, Mingyuan Zhou, Xiaoning Qian

Stochastic recurrent neural networks with latent random variables of complex dependency structures have been shown to be more successful in modeling sequential data than deterministic deep models.

no code implementations • 18 Oct 2019 • Wenyuan Li, Zichen Wang, Yuguang Yue, Jiayun Li, William Speier, Mingyuan Zhou, Corey W. Arnold

In this work, we investigate semi-supervised learning (SSL) for image classification using adversarial training.

no code implementations • 25 Sep 2019 • Dandan Guo, Bo Chen, Ruiying Lu, Mingyuan Zhou

To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation.

2 code implementations • NeurIPS 2019 • Ehsan Hajiramezanali, Arman Hasanzadeh, Nick Duffield, Krishna R. Narayanan, Mingyuan Zhou, Xiaoning Qian

Representation learning over graph structured data has been mostly studied in static graph settings while efforts for modeling dynamic graphs are still scant.

Ranked #2 on Dynamic Link Prediction on DBLP Temporal

1 code implementation • NeurIPS 2019 • Arman Hasanzadeh, Ehsan Hajiramezanali, Nick Duffield, Krishna R. Narayanan, Mingyuan Zhou, Xiaoning Qian

Compared to VGAE, the graph latent representations derived by SIG-VAE are more interpretable, due to a more expressive generative model and more faithful inference enabled by the flexible semi-implicit construction.

no code implementations • 29 May 2019 • Mingzhang Yin, Mingyuan Zhou

To combine explicit and implicit generative models, we introduce semi-implicit generator (SIG) as a flexible hierarchical model that can be trained in the maximum likelihood framework.

1 code implementation • ICLR 2020 • Hao Zhang, Bo Chen, Long Tian, Zhengjue Wang, Mingyuan Zhou

For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework.

1 code implementation • 14 May 2019 • Chaojie Wang, Bo Chen, Sucheng Xiao, Mingyuan Zhou

For text analysis, one often resorts to a lossy representation that either completely ignores word order or embeds each word as a low-dimensional dense feature vector.

1 code implementation • 4 May 2019 • Mingzhang Yin, Yuguang Yue, Mingyuan Zhou

To address the challenge of backpropagating the gradient through categorical variables, we propose the augment-REINFORCE-swap-merge (ARSM) gradient estimator that is unbiased and has low variance.

1 code implementation • 2 May 2019 • He Zhao, Piyush Rai, Lan Du, Wray Buntine, Mingyuan Zhou

Many applications, such as text modelling, high-throughput sequencing, and recommender systems, require analysing sparse, high-dimensional, and overdispersed discrete (count-valued or binary) data.

no code implementations • ICLR 2019 • Hao Zhang, Bo Chen, Long Tian, Zhengjue Wang, Mingyuan Zhou

To extract and relate visual and linguistic concepts from images and textual descriptions for text-based zero-shot learning (ZSL), we develop a variational hetero-encoder (VHE) that decodes text via a deep probabilistic topic model, the variational posterior of whose local latent variables is encoded from an image via a Weibull distribution based inference network.

no code implementations • 13 Apr 2019 • Rajat Panda, Ankit Pensia, Nikhil Mehta, Mingyuan Zhou, Piyush Rai

We present a probabilistic framework for multi-label learning based on a deep generative model for the binary label vector associated with each observation.

no code implementations • 9 Apr 2019 • Mingyuan Zhou, Yu Ji, Yuqi Ding, Jinwei Ye, S. Susan Young, Jingyi Yu

In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that is able to recover the shape and reflectance of surfaces with arbitrary material in one shot.

no code implementations • 4 Apr 2019 • Zhang Chen, Yu Ji, Mingyuan Zhou, Sing Bing Kang, Jingyi Yu

We avoid the need for spatial constancy of albedo; instead, we use a new measure for albedo similarity that is based on the albedo norm profile.

no code implementations • 13 Mar 2019 • Yunhao Tang, Mingzhang Yin, Mingyuan Zhou

Due to the high variance of policy gradients, on-policy optimization algorithms are plagued with low sample efficiency.

2 code implementations • NeurIPS 2018 • He Zhao, Lan Du, Wray Buntine, Mingyuan Zhou

Recently, considerable research effort has been devoted to developing deep architectures for topic models to learn topic structures.

no code implementations • NeurIPS 2018 • Dandan Guo, Bo Chen, Hao Zhang, Mingyuan Zhou

We develop deep Poisson-gamma dynamical systems (DPGDS) to model sequentially observed multivariate count data, improving previously proposed models by not only mining deep hierarchical latent structure from the data, but also capturing both first-order and long-range temporal dependencies.

no code implementations • NeurIPS 2018 • Ehsan Hajiramezanali, Siamak Zamani Dadaneh, Alireza Karbalayghareh, Mingyuan Zhou, Xiaoning Qian

Second, compared to the number of involved molecules and system complexity, the number of available samples for studying complex diseases, such as cancer, is often limited, especially considering disease heterogeneity.

1 code implementation • NeurIPS 2018 • Quan Zhang, Mingyuan Zhou

We propose Lomax delegate racing (LDR) to explicitly model the mechanism of survival under competing risks and to interpret how the covariates accelerate or decelerate the time to event.

1 code implementation • ICLR 2019 • Mingzhang Yin, Mingyuan Zhou

To backpropagate the gradients through stochastic binary layers, we propose the augment-REINFORCE-merge (ARM) estimator that is unbiased, exhibits low variance, and has low computational complexity.
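For a single stochastic binary unit, the ARM estimator has a closed form that is easy to check empirically against the analytic gradient. A minimal single-variable sketch (the papers apply it to whole stochastic layers):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(f, phi, u):
    """Single-sample ARM estimate of d/dphi E_{z ~ Bernoulli(sigmoid(phi))}[f(z)].

    g = (f(1[u > sigmoid(-phi)]) - f(1[u < sigmoid(phi)])) * (u - 1/2),
    with u ~ Uniform(0,1); the two correlated evaluations act as antithetic
    samples, which is where the variance reduction comes from.
    """
    z1 = (u > sigmoid(-phi)).astype(float)
    z2 = (u < sigmoid(phi)).astype(float)
    return (f(z1) - f(z2)) * (u - 0.5)

f = lambda z: (z - 0.49) ** 2
phi = 0.3
rng = np.random.default_rng(0)
u = rng.uniform(size=200000)
est = arm_gradient(f, phi, u).mean()

# Analytic gradient for comparison:
# d/dphi [p f(1) + (1-p) f(0)] = p (1-p) (f(1) - f(0)), with p = sigmoid(phi).
p = sigmoid(phi)
true_grad = p * (1 - p) * (f(1.0) - f(0.0))
```

Because the estimator is unbiased, the Monte Carlo average converges to the analytic gradient, and no relaxation or reparameterization of the discrete variable is needed.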

1 code implementation • ICML 2018 • He Zhao, Lan Du, Wray Buntine, Mingyuan Zhou

One important task of topic modeling for text analysis is interpretability.

1 code implementation • ICML 2018 • Mingzhang Yin, Mingyuan Zhou

Semi-implicit variational inference (SIVI) is introduced to expand the commonly used analytic variational distribution family, by mixing the variational parameter with a flexible distribution.
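Sampling from a semi-implicit distribution is straightforward even when its density is intractable: an implicit mixing draw (noise pushed through a nonlinear map) sets the parameter of an explicit conditional. A toy sketch with an arbitrary made-up transform:

```python
import numpy as np

def sample_semi_implicit(n, rng):
    """Draw n samples from a toy semi-implicit distribution.

    The mixing distribution over the Gaussian mean is implicit: it is defined
    only through a nonlinear transform of noise (easy to sample from, no
    tractable density). The conditional given the mean stays explicit
    (Gaussian), which is what makes the SIVI objective workable.
    """
    eps = rng.standard_normal(n)
    psi = np.sin(3.0 * eps) + 0.5 * eps      # implicit mixing: psi = T(eps)
    z = psi + 0.3 * rng.standard_normal(n)   # explicit conditional: z | psi ~ N(psi, 0.3^2)
    return z

rng = np.random.default_rng(0)
z = sample_semi_implicit(100000, rng)
# The marginal of z can be multimodal and skewed even though each
# conditional is Gaussian -- the expanded variational family SIVI targets.
```

The design choice is the split of labor: the implicit layer buys expressiveness, while keeping the conditional explicit preserves enough tractability for a surrogate evidence lower bound.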

2 code implementations • NeurIPS 2018 • Mingyuan Zhou

Combining Bayesian nonparametrics and a forward model selection strategy, we construct parsimonious Bayesian deep networks (PBDNs) that infer capacity-regularized network architectures from the data and require neither cross-validation nor fine-tuning when training the model.

2 code implementations • NeurIPS 2018 • Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, Masashi Sugiyama

It is important to learn various types of classifiers given training data with noisy labels.

Ranked #39 on Image Classification on Clothing1M (using extra training data)

1 code implementation • 22 Mar 2018 • Aaron Schein, Zhiwei Steven Wu, Alexandra Schofield, Mingyuan Zhou, Hanna Wallach

We present a general method for privacy-preserving Bayesian inference in Poisson factorization, a broad class of models that includes some of the most widely used models in the social sciences.

no code implementations • 7 Mar 2018 • Ehsan Hajiramezanali, Siamak Zamani Dadaneh, Paul de Figueiredo, Sing-Hoi Sze, Mingyuan Zhou, Xiaoning Qian

Next-generation sequencing (NGS) to profile temporal changes in living systems is gaining more attention for deriving better insights into the underlying biological mechanisms compared to traditional static sequencing experiments.

1 code implementation • ICLR 2018 • Hao Zhang, Bo Chen, Dandan Guo, Mingyuan Zhou

To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet allocation, which infers posterior samples via a hybrid of stochastic-gradient MCMC and autoencoding variational Bayes.

no code implementations • 21 Feb 2018 • Rahi Kalantari, Joydeep Ghosh, Mingyuan Zhou

A nonparametric Bayesian sparse graph linear dynamical system (SGLDS) is proposed to model sequentially observed multivariate data.

no code implementations • ICML 2017 • Yulai Cong, Bo Chen, Hongwei Liu, Mingyuan Zhou

It is challenging to develop stochastic gradient based scalable inference for deep discrete latent variable models (LVMs), due to the difficulties in not only computing the gradients, but also adapting the step sizes to different latent factors and hidden layers.

1 code implementation • 19 Jan 2017 • Aaron Schein, Mingyuan Zhou, Hanna Wallach

We introduce a new dynamical system for sequentially observed multivariate count data.

no code implementations • 30 Dec 2016 • Quan Zhang, Mingyuan Zhou

To model categorical response variables given their covariates, we propose a permuted and augmented stick-breaking (paSB) construction that one-to-one maps the observed categories to randomly permuted latent sticks.

1 code implementation • NeurIPS 2016 • Aaron Schein, Hanna Wallach, Mingyuan Zhou

This paper presents a dynamical system based on the Poisson-Gamma construction for sequentially observed multivariate count data.

no code implementations • 23 Aug 2016 • Mingyuan Zhou

To construct flexible nonlinear predictive distributions, the paper introduces a family of softplus function based regression models that convolve, stack, or combine both operations by convolving countably infinite stacked gamma distributions, whose scales depend on the covariates.

1 code implementation • 6 Jun 2016 • Aaron Schein, Mingyuan Zhou, David M. Blei, Hanna Wallach

We introduce Bayesian Poisson Tucker decomposition (BPTD) for modeling country–country interaction event data.

no code implementations • CVPR 2016 • Nianyi Li, Haiting Lin, Bilin Sun, Mingyuan Zhou, Jingyi Yu

In this paper, we present a novel LF sampling scheme by exploiting a special non-centric camera called the crossed-slit or XSlit camera.

no code implementations • 25 Apr 2016 • Mingyuan Zhou

A common approach to analyze a covariate-sample count matrix, an element of which represents how many times a covariate appears in a sample, is to factorize it under the Poisson likelihood.
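Factorizing a count matrix under the Poisson likelihood can be sketched with the classical multiplicative updates for generalized KL, which equals the Poisson negative log-likelihood up to an additive constant. A maximum-likelihood sketch; the Bayesian models in these papers place priors on the factors instead:

```python
import numpy as np

def poisson_nmf(V, K, iters=200, seed=0):
    """Factorize a count matrix V ~ Poisson(W @ H) by multiplicative updates.

    The updates monotonically decrease the generalized KL divergence
    D(V || WH), i.e. the Poisson negative log-likelihood up to a constant,
    while keeping W and H nonnegative.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.5, 1.5, (n, K))
    H = rng.uniform(0.5, 1.5, (K, m))
    for _ in range(iters):
        WH = W @ H + 1e-10
        W *= (V / WH) @ H.T / H.sum(axis=1)            # update factor loadings
        WH = W @ H + 1e-10
        H *= W.T @ (V / WH) / W.sum(axis=0)[:, None]   # update factor scores
    return W, H

rng = np.random.default_rng(1)
rate = rng.uniform(0.5, 2.0, (30, 3)) @ rng.uniform(0.5, 2.0, (3, 40))
V = rng.poisson(rate)            # synthetic covariate-sample count matrix
W, H = poisson_nmf(V, K=3)       # fits V far better than a constant-rate model
```

The Poisson likelihood is what makes this appropriate for counts: the implied cost penalizes misfit relative to the rate rather than in absolute (Gaussian) terms.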

no code implementations • 30 Dec 2015 • Ayan Acharya, Joydeep Ghosh, Mingyuan Zhou

A gamma process dynamic Poisson factor analysis model is proposed to factorize a dynamic count matrix, whose columns are sequentially observed count vectors.

no code implementations • 9 Dec 2015 • Mingyuan Zhou, Yulai Cong, Bo Chen

To infer multilayer deep representations of high-dimensional discrete and nonnegative real vectors, we propose an augmentable gamma belief network (GBN) that factorizes each of its hidden layers into the product of a sparse connection weight matrix and the nonnegative real hidden units of the next layer.

no code implementations • NeurIPS 2015 • Mingyuan Zhou, Yulai Cong, Bo Chen

Example results on text analysis illustrate interesting relationships between the width of the first layer and the inferred network structure, and demonstrate that the PGBN, whose hidden units are imposed with correlated gamma priors, can add more layers to increase its performance gains over Poisson factor analysis, given the same limit on the width of the first layer.

no code implementations • 25 Jan 2015 • Mingyuan Zhou

A hierarchical gamma process infinite edge partition model is proposed to factorize the binary adjacency matrix of an unweighted undirected relational network under a Bernoulli-Poisson link.
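The Bernoulli-Poisson link named here is simple to simulate: a latent Poisson count per node pair is thresholded at zero to produce a binary edge. A generative sketch with gamma-distributed community memberships (the hierarchical gamma process prior itself is omitted; the sizes and parameters are illustrative):

```python
import numpy as np

def sample_network(Phi, r, rng):
    """Sample an unweighted undirected network via the Bernoulli-Poisson link.

    Each node pair (i, j) gets a latent Poisson count with rate
    lambda_ij = sum_k r_k * Phi[i,k] * Phi[j,k]; the observed edge is
    b_ij = 1[count > 0], so P(edge) = 1 - exp(-lambda_ij), and communities
    combine through a logical-OR-like aggregation of their contributions.
    """
    lam = (Phi * r) @ Phi.T                  # pairwise rates from K communities
    prob = 1.0 - np.exp(-lam)                # Bernoulli-Poisson link
    upper = rng.uniform(size=prob.shape) < prob
    adj = np.triu(upper, k=1)                # keep strict upper triangle
    return (adj | adj.T).astype(int)         # symmetric adjacency, zero diagonal

rng = np.random.default_rng(0)
K, n = 2, 60
Phi = rng.gamma(shape=0.5, scale=1.0, size=(n, K))   # nonnegative memberships
r = np.array([1.0, 1.0])                             # per-community rates
A = sample_network(Phi, r, rng)
```

Thresholding a count rather than modeling the edge directly is the key trick: inference can exploit the tractable Poisson augmentation while the data stay binary.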

no code implementations • NeurIPS 2014 • Mingyuan Zhou

The beta-negative binomial process (BNBP), an integer-valued stochastic process, is employed to partition a count vector into a latent random count matrix.

no code implementations • 28 Oct 2014 • Mingyuan Zhou

The beta-negative binomial process (BNBP), an integer-valued stochastic process, is employed to partition a count vector into a latent random count matrix.

no code implementations • 12 Apr 2014 • Mingyuan Zhou, Oscar Hernan Madrid Padilla, James G. Scott

We define a family of probability distributions for random count matrices with a potentially unbounded number of rows and columns.

no code implementations • 7 Oct 2013 • Mingyuan Zhou

The paper introduces the concept of a cluster structure to define a joint distribution of the sample size and its exchangeable random partitions.

no code implementations • NeurIPS 2012 • Mingyuan Zhou, Lawrence Carin

By developing data augmentation methods unique to the negative binomial (NB) distribution, we unite seemingly disjoint count and mixture models under the NB process framework.

1 code implementation • 15 Sep 2012 • Mingyuan Zhou, Lawrence Carin

A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling.

no code implementations • NeurIPS 2009 • Mingyuan Zhou, Haojun Chen, Lu Ren, Guillermo Sapiro, Lawrence Carin, John W. Paisley

The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size.

Papers With Code is a free resource with all data licensed under CC-BY-SA.