Search Results for author: Huangjie Zheng

Found 27 papers, 17 papers with code

Beta Diffusion

no code implementations 14 Sep 2023 Mingyuan Zhou, Tianqi Chen, Zhendong Wang, Huangjie Zheng

We introduce beta diffusion, a novel generative modeling method that integrates demasking and denoising to generate data within bounded ranges.


Class-Balancing Diffusion Models

1 code implementation CVPR 2023 Yiming Qin, Huangjie Zheng, Jiangchao Yao, Mingyuan Zhou, Ya Zhang

To tackle this problem, we start from the hypothesis that the data distribution is not class-balanced, and propose Class-Balancing Diffusion Models (CBDM), which are trained with a distribution adjustment regularizer as a solution.

POUF: Prompt-oriented unsupervised fine-tuning for large pre-trained models

1 code implementation 29 Apr 2023 Korawat Tanwisuth, Shujian Zhang, Huangjie Zheng, Pengcheng He, Mingyuan Zhou

Through prompting, large-scale pre-trained models have become more expressive and powerful, gaining significant attention in recent years.

Image Classification Natural Language Inference +1

Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models

1 code implementation 25 Apr 2023 Zhendong Wang, Yifan Jiang, Huangjie Zheng, Peihao Wang, Pengcheng He, Zhangyang Wang, Weizhu Chen, Mingyuan Zhou

Patch Diffusion meanwhile improves the performance of diffusion models trained on relatively small datasets, e.g., with as few as 5,000 images to train from scratch.

Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond

1 code implementation 11 Apr 2023 Mohammadreza Armandpour, Ali Sadeghian, Huangjie Zheng, Amir Sadeghian, Mingyuan Zhou

Although text-to-image diffusion models have made significant strides in generating images from text, they are sometimes more inclined to generate images like the data on which the model was trained rather than the provided text.

Text to 3D

CARD: Classification and Regression Diffusion Models

1 code implementation 15 Jun 2022 Xizewen Han, Huangjie Zheng, Mingyuan Zhou

In this paper, we introduce classification and regression diffusion (CARD) models, which combine a denoising diffusion-based conditional generative model and a pre-trained conditional mean estimator, to accurately predict the distribution of $\boldsymbol y$ given $\boldsymbol x$.

Classification Denoising +1

Representing Mixtures of Word Embeddings with Mixtures of Topic Embeddings

2 code implementations ICLR 2022 Dongsheng Wang, Dandan Guo, He Zhao, Huangjie Zheng, Korawat Tanwisuth, Bo Chen, Mingyuan Zhou

This paper introduces a new topic-modeling framework where each document is viewed as a set of word embedding vectors and each topic is modeled as an embedding vector in the same embedding space.

Word Embeddings
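The topic-over-vocabulary construction described above can be sketched in a few lines: each topic embedding scores every word embedding by an inner product, and a softmax turns those scores into a per-topic distribution over the vocabulary. This is a minimal ETM-style sketch under assumed sizes and random embeddings; the paper's optimal-transport training objective is more involved.

```python
import numpy as np

# Illustrative sizes: vocabulary, embedding dimension, number of topics.
rng = np.random.default_rng(0)
V, D, K = 100, 16, 5
word_emb = rng.standard_normal((V, D))    # one vector per vocabulary word
topic_emb = rng.standard_normal((K, D))   # one vector per topic, same space

# Topic-word affinities via inner products, then a row-wise softmax
# so that each row of beta is p(word | topic).
logits = topic_emb @ word_emb.T           # (K, V)
logits -= logits.max(axis=1, keepdims=True)  # numerical stability
beta = np.exp(logits)
beta /= beta.sum(axis=1, keepdims=True)
```

In a trained model the embeddings would be learned rather than sampled, but the shared embedding space is what lets documents (sets of word vectors) and topics (single vectors) be compared directly.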

A Behavior Regularized Implicit Policy for Offline Reinforcement Learning

no code implementations 19 Feb 2022 Shentao Yang, Zhendong Wang, Huangjie Zheng, Yihao Feng, Mingyuan Zhou

For training more effective agents, we propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.

D4RL reinforcement-learning +1

Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders

1 code implementation 19 Feb 2022 Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou

Employing a forward diffusion chain to gradually map the data to a noise distribution, diffusion-based generative models learn how to generate the data by inferring a reverse diffusion chain.
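The forward chain described here has a well-known closed form for standard Gaussian diffusion, sketched below as a generic DDPM-style forward process with a linear beta schedule; this is an illustration of the general mechanism, not the paper's truncated variant.

```python
import numpy as np

def forward_diffuse(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) in closed form for Gaussian diffusion."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]     # cumulative product up to step t
    noise = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)     # linear noise schedule
x0 = np.ones(4)
xT = forward_diffuse(x0, 999, betas)      # close to pure noise at the final step
```

Truncating this chain at an earlier step (the paper's key move) leaves a non-Gaussian terminal distribution, which is why the reverse model is paired with an adversarial auto-encoder rather than started from pure noise.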

Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs

2 code implementations 14 Feb 2022 Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou

In this paper, to exploit both global and local dependencies without self-attention, we present Mix-Shift-MLP (MS-MLP) which makes the size of the local receptive field used for mixing increase with respect to the amount of spatial shifting.
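The mix-shift idea can be roughly sketched as follows: channel groups are spatially shifted by growing offsets before channel mixing, so a plain channel MLP sees an enlarged local receptive field. The group count, offsets, and single-layer mixer below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def mix_shift(x, num_groups=4):
    """x: (H, W, C) feature map with C divisible by num_groups."""
    H, W, C = x.shape
    groups = np.split(x, num_groups, axis=-1)
    shifted = []
    for i, g in enumerate(groups):
        # Shift group i by i pixels along the width axis: later groups
        # carry information from farther away, enlarging the receptive field.
        shifted.append(np.roll(g, shift=i, axis=1))
    y = np.concatenate(shifted, axis=-1)
    # Channel-mixing "MLP": a single linear layer over the channel axis,
    # which now mixes features gathered from several spatial offsets.
    w = np.random.randn(C, C) / np.sqrt(C)
    return y @ w

out = mix_shift(np.random.randn(8, 8, 16))
```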

Alignment Attention by Matching Key and Query Distributions

1 code implementation NeurIPS 2021 Shujian Zhang, Xinjie Fan, Huangjie Zheng, Korawat Tanwisuth, Mingyuan Zhou

The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains.

Graph Attention Question Answering +1

A Prototype-Oriented Framework for Unsupervised Domain Adaptation

1 code implementation NeurIPS 2021 Korawat Tanwisuth, Xinjie Fan, Huangjie Zheng, Shujian Zhang, Hao Zhang, Bo Chen, Mingyuan Zhou

Existing methods for unsupervised domain adaptation often rely on minimizing some statistical distance between the source and target samples in the latent space.

Unsupervised Domain Adaptation

State-Action Joint Regularized Implicit Policy for Offline Reinforcement Learning

no code implementations 29 Sep 2021 Shentao Yang, Zhendong Wang, Huangjie Zheng, Mingyuan Zhou

For training more effective agents, we propose a framework that supports learning a flexible and well-regularized policy, which consists of a fully implicit policy and a regularization through the state-action visitation frequency induced by the current policy and that induced by the data-collecting behavior policy.

D4RL reinforcement-learning +1

Crossformer: Transformer with Alternated Cross-Layer Guidance

no code implementations 29 Sep 2021 Shujian Zhang, Zhibin Duan, Huangjie Zheng, Pengcheng He, Bo Chen, Weizhu Chen, Mingyuan Zhou

Crossformer with states sharing not only provides the desired cross-layer guidance and regularization but also reduces the memory requirement.

Inductive Bias Machine Translation +3

Contrastive Attraction and Contrastive Repulsion for Representation Learning

1 code implementation 8 May 2021 Huangjie Zheng, Xu Chen, Jiangchao Yao, Hongxia Yang, Chunyuan Li, Ya Zhang, Hao Zhang, Ivor Tsang, Jingren Zhou, Mingyuan Zhou

We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query not only exert a greater force to attract more distant positive samples but also do so to repel closer negative samples.

Contrastive Learning Representation Learning
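The asymmetric attract/repel weighting can be sketched as a reweighted contrastive loss: positives get softmax weights that grow as their similarity to the query drops (pull distant positives harder), and negatives get weights that grow with similarity (push nearby negatives harder). The specific weight forms below are illustrative stand-ins, not the paper's exact conditional-distribution estimators.

```python
import numpy as np

def cacr_loss(query, positives, negatives, tau=0.5):
    """Contrastive attraction-repulsion sketch on unit-normalized features."""
    q = query / np.linalg.norm(query)
    pos = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    neg = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    s_pos = pos @ q                      # cosine similarities to positives
    s_neg = neg @ q                      # cosine similarities to negatives
    # Attraction weights: emphasize *distant* (low-similarity) positives.
    w_pos = np.exp(-s_pos / tau)
    w_pos /= w_pos.sum()
    # Repulsion weights: emphasize *close* (high-similarity) negatives.
    w_neg = np.exp(s_neg / tau)
    w_neg /= w_neg.sum()
    return -(w_pos * s_pos).sum() + (w_neg * s_neg).sum()

rng = np.random.default_rng(1)
q = rng.standard_normal(8)
negs = rng.standard_normal((16, 8))
loss = cacr_loss(q, q[None, :] + 0.1 * rng.standard_normal((3, 8)), negs)
```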

Exploiting Chain Rule and Bayes' Theorem to Compare Probability Distributions

1 code implementation NeurIPS 2021 Huangjie Zheng, Mingyuan Zhou

The forward CT is the expected cost of moving a source data point to a target one, with their joint distribution defined by the product of the source probability density function (PDF) and a source-dependent conditional distribution, which is related to the target PDF via Bayes' theorem.
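On empirical samples, the forward conditional transport (CT) cost sketched in the abstract can be illustrated as follows: pair each source point with targets through a source-dependent conditional distribution, then average the resulting expected cost. The softmax "navigator" over negative squared distances below is a simple stand-in for the Bayes-derived conditional in the paper.

```python
import numpy as np

def forward_ct_cost(source, target, temperature=1.0):
    """Expected cost of moving each source point to the target sample."""
    # Pairwise squared Euclidean costs c(x_i, y_j).
    cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    # Source-dependent conditional pi(y_j | x_i) as a softmax navigator.
    logits = -cost / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)
    # Average the conditional expected cost over a uniform source distribution.
    return float((pi * cost).sum(axis=1).mean())

src = np.array([[0.0], [1.0]])
near = forward_ct_cost(src, src)        # matching supports: small cost
far = forward_ct_cost(src, src + 5.0)   # shifted target: larger cost
```

As expected of a divergence-like quantity, the cost grows as the two samples move apart, which is what makes it usable as a training signal for generative models.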

Deep Unsupervised Image Anomaly Detection: An Information Theoretic Framework

no code implementations 9 Dec 2020 Fei Ye, Huangjie Zheng, Chaoqin Huang, Ya Zhang

Based on this objective function, we introduce a novel information-theoretic framework for unsupervised image anomaly detection.

Anomaly Detection

Learning on Attribute-Missing Graphs

3 code implementations 3 Nov 2020 Xu Chen, Siheng Chen, Jiangchao Yao, Huangjie Zheng, Ya Zhang, Ivor W Tsang

Designing a new GNN for such graphs is therefore a pressing issue for the graph learning community.

Graph Learning Link Prediction

MCMC-Interactive Variational Inference

no code implementations 2 Oct 2020 Quan Zhang, Huangjie Zheng, Mingyuan Zhou

Leveraging well-established MCMC strategies, we propose MCMC-interactive variational inference (MIVI) to not only estimate the posterior in a time constrained manner, but also facilitate the design of MCMC transitions.

Variational Inference

ACT: Asymptotic Conditional Transport

no code implementations 28 Sep 2020 Huangjie Zheng, Mingyuan Zhou

We propose conditional transport (CT) as a new divergence to measure the difference between two probability distributions.

Node Attribute Generation on Graphs

3 code implementations 23 Jul 2019 Xu Chen, Siheng Chen, Huangjie Zheng, Jiangchao Yao, Kenan Cui, Ya Zhang, Ivor W. Tsang

NANG learns a unifying latent representation which is shared by both node attributes and graph structures and can be translated to different modalities.

Data Augmentation General Classification +2

Elastic Boundary Projection for 3D Medical Image Segmentation

2 code implementations CVPR 2019 Tianwei Ni, Lingxi Xie, Huangjie Zheng, Elliot K. Fishman, Alan L. Yuille

The key observation is that, although the object is a 3D volume, what we really need in segmentation is to find its boundary which is a 2D surface.

3D Medical Imaging Segmentation Image Segmentation +2

Phase Collaborative Network for Two-Phase Medical Image Segmentation

no code implementations 28 Nov 2018 Huangjie Zheng, Lingxi Xie, Tianwei Ni, Ya Zhang, Yan-Feng Wang, Qi Tian, Elliot K. Fishman, Alan L. Yuille

However, in medical image analysis, fusing predictions from two phases is often difficult, because (i) there is a domain gap between the two phases, and (ii) the semantic labels are not pixel-wise aligned, even for images scanned from the same patient.

Image Segmentation Medical Image Segmentation +3

Understanding VAEs in Fisher-Shannon Plane

no code implementations 10 Jul 2018 Huangjie Zheng, Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Jia Wang

In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables.

Representation Learning

Degeneration in VAE: in the Light of Fisher Information Loss

no code implementations 19 Feb 2018 Huangjie Zheng, Jiangchao Yao, Ya Zhang, Ivor W. Tsang

While enormous progress has been made on Variational Autoencoders (VAEs) in recent years, VAEs with deep networks, like other deep models, suffer from the problem of degeneration, which seriously weakens the correlation between the input and the corresponding latent codes, deviating from the goal of representation learning.

Representation Learning
