no code implementations • 10 Apr 2024 • Zhuo Li, He Zhao, Zhen Li, Tongliang Liu, Dandan Guo, Xiang Wan
To address the joint problem of long-tailed distributions and label noise, most previous works design a noise detector to distinguish noisy from clean samples.
no code implementations • 9 Oct 2022 • Dandan Guo, Long Tian, He Zhao, Mingyuan Zhou, Hongyuan Zha
A recent solution to this problem calibrates the distributions of these few-sample classes by transferring statistics from base classes that have sufficient examples; the key question is how to decide the transfer weights from base classes to novel classes.
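One simple way to set such transfer weights, purely for illustration: weight each base class by its feature-space proximity to the novel example. The softmax-over-distances weighting and the statistic-mixing rule below are placeholder choices, not necessarily the paper's method.

```python
import numpy as np

def calibrate_novel_class(novel_feat, base_means, base_covs, tau=1.0):
    # Illustrative transfer weights: softmax over negative distances from the
    # novel example to each base-class mean (a placeholder scheme).
    dists = np.linalg.norm(base_means - novel_feat, axis=1)
    w = np.exp(-dists / tau)
    w /= w.sum()
    # Calibrated statistics: mix base-class statistics with the novel sample.
    mean = 0.5 * novel_feat + 0.5 * (w @ base_means)
    cov = np.einsum('k,kij->ij', w, base_covs)
    return mean, cov

rng = np.random.default_rng(0)
base_means = rng.normal(size=(5, 8))      # 5 base classes, 8-dim features
base_covs = np.stack([np.eye(8)] * 5)     # identity covariances for the demo
novel = rng.normal(size=8)                # a single novel-class example
mean, cov = calibrate_novel_class(novel, base_means, base_covs)
extra = rng.multivariate_normal(mean, cov, size=10)  # augmented samples
```

With calibrated statistics in hand, extra features for the novel class can be sampled from the resulting Gaussian, as in the last line.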
no code implementations • 5 Aug 2022 • Dandan Guo, Zhuo Li, Meixi Zheng, He Zhao, Mingyuan Zhou, Hongyuan Zha
Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set.
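The transport step described above can be sketched with a standard entropic-regularized Sinkhorn solver. This is a generic illustration of OT between an imbalanced and a balanced histogram, not the paper's exact algorithm; the cost matrix and regularization strength are toy choices.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iters=300):
    # Entropic-regularized OT between histograms a and b with cost matrix C.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

# Imbalanced source over 3 classes (90/9/1) vs. a balanced target.
a = np.array([0.90, 0.09, 0.01])
b = np.full(3, 1.0 / 3.0)
C = 1.0 - np.eye(3)      # toy cost: 0 within a class, 1 across classes
P = sinkhorn(a, b, C)
# P's row sums recover the imbalanced marginal, its column sums the balanced one.
```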
2 code implementations • ICLR 2022 • Dongsheng Wang, Dandan Guo, He Zhao, Huangjie Zheng, Korawat Tanwisuth, Bo Chen, Mingyuan Zhou
This paper introduces a new topic-modeling framework where each document is viewed as a set of word embedding vectors and each topic is modeled as an embedding vector in the same embedding space.
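One common way to realize this idea, as in embedded topic models, is to define each topic's distribution over words as a softmax of inner products between the topic embedding and all word embeddings. The sketch below illustrates that construction; it is not necessarily the exact mechanism this paper uses.

```python
import numpy as np

rng = np.random.default_rng(1)
V, K, D = 1000, 20, 50                 # vocab size, topics, embedding dim
word_emb = rng.normal(size=(V, D))     # one embedding vector per word
topic_emb = rng.normal(size=(K, D))    # each topic is a vector in the same space

# Topic-word distributions from embedding similarity: softmax over the vocab.
logits = topic_emb @ word_emb.T                           # (K, V)
beta = np.exp(logits - logits.max(axis=1, keepdims=True))
beta /= beta.sum(axis=1, keepdims=True)                   # rows are distributions
```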
no code implementations • ICLR 2022 • Dandan Guo, Long Tian, Minghe Zhang, Mingyuan Zhou, Hongyuan Zha
Since our plug-and-play framework can be applied to many meta-learning problems, we further instantiate it for few-shot classification and implicit meta generative modeling.
1 code implementation • 10 May 2021 • Dandan Guo, Ruiying Lu, Bo Chen, Zequn Zeng, Mingyuan Zhou
Inspired by recent successes in integrating semantic topics into this task, this paper develops a plug-and-play hierarchical-topic-guided image paragraph generation framework, which couples a visual extractor with a deep topic model to guide the learning of a language model.
no code implementations • 28 Sep 2020 • Dandan Guo, Bo Chen, Wenchao Chen, Chaojie Wang, Hongwei Liu, Mingyuan Zhou
We develop a recurrent gamma belief network (rGBN) for radar automatic target recognition (RATR) based on high-resolution range profile (HRRP), which characterizes the temporal dependence across the range cells of HRRP.
no code implementations • 15 Jun 2020 • Hao Zhang, Bo Chen, Yulai Cong, Dandan Guo, Hongwei Liu, Mingyuan Zhou
Given a posterior sample of the global parameters, and in order to efficiently infer a document's local latent representations under DATM across all stochastic layers, we propose a Weibull upward-downward variational encoder: information is first propagated upward deterministically via a deep neural network, and then passed through a Weibull-distribution-based stochastic downward generative model.
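A minimal sketch of the upward-downward structure: the upward pass is deterministic, and the downward pass draws Weibull latents top-down, with each lower layer conditioning on the sample from the layer above. The weights, link functions, and offsets below are hypothetical placeholders; the real encoder's parameterization is learned and more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K1, K2 = 100, 16, 8                       # vocab size, layer widths (toy)
x = rng.poisson(1.0, size=V).astype(float)   # a bag-of-words document

# Hypothetical encoder weights; the real model learns these jointly.
W1 = rng.normal(scale=0.1, size=(K1, V))
W2 = rng.normal(scale=0.1, size=(K2, K1))
softplus = lambda a: np.log1p(np.exp(a))

# Upward pass: deterministic propagation through the neural network.
h1 = np.maximum(W1 @ x, 0.0)
h2 = np.maximum(W2 @ h1, 0.0)

def weibull_sample(k, lam):
    # Draw from Weibull(k, lam) via numpy's scale-1 sampler.
    return lam * rng.weibull(k, size=lam.shape)

# Downward pass: stochastic Weibull latents, top layer first; the lower
# layer's parameters combine its upward features with the sample from above.
z2 = weibull_sample(softplus(h2) + 0.1, softplus(h2) + 1e-3)
z1 = weibull_sample(softplus(h1) + 0.1, softplus(h1 + W2.T @ z2) + 1e-3)
```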
1 code implementation • ICML 2020 • Dandan Guo, Bo Chen, Ruiying Lu, Mingyuan Zhou
To simultaneously capture syntax and global semantics from a text corpus, we propose a new larger-context recurrent neural network (RNN) based language model, which extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation.
no code implementations • 25 Sep 2019 • Dandan Guo, Bo Chen, Ruiying Lu, Mingyuan Zhou
To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation.
no code implementations • NeurIPS 2018 • Dandan Guo, Bo Chen, Hao Zhang, Mingyuan Zhou
We develop deep Poisson-gamma dynamical systems (DPGDS) to model sequentially observed multivariate count data, improving previously proposed models by not only mining deep hierarchical latent structure from the data, but also capturing both first-order and long-range temporal dependencies.
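A single-layer simplification of this kind of generative process can be simulated as below: a gamma-distributed latent state whose shape parameter depends on the previous state (the first-order temporal dependence), with Poisson-distributed observed counts. The actual DPGDS stacks several such layers and couples them; the sizes and Dirichlet-drawn loadings here are toy choices.

```python
import numpy as np

rng = np.random.default_rng(3)
V, K, T = 30, 4, 50     # vocab size, latent factors, time steps (toy sizes)

# Factor loadings Phi and transition matrix Pi, columns on the simplex,
# drawn from Dirichlets for the demo.
Phi = rng.dirichlet(np.ones(V), size=K).T    # (V, K)
Pi = rng.dirichlet(np.ones(K), size=K).T     # (K, K)

theta = np.ones(K)
counts = np.empty((T, V), dtype=int)
for t in range(T):
    # Latent state: gamma draw whose shape depends on the previous state
    # through Pi -- the first-order temporal dependence.
    theta = rng.gamma(shape=Pi @ theta + 1e-6, scale=1.0)
    counts[t] = rng.poisson(Phi @ theta)     # observed multivariate counts
```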
1 code implementation • ICLR 2018 • Hao Zhang, Bo Chen, Dandan Guo, Mingyuan Zhou
To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet allocation, which infers posterior samples via a hybrid of stochastic-gradient MCMC and autoencoding variational Bayes.
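The ingredient that makes the autoencoding half of this hybrid trainable is that a Weibull draw can be reparameterized as a deterministic, differentiable transform of uniform noise. A minimal sketch of that reparameterization:

```python
import numpy as np

def weibull_rsample(k, lam, rng):
    # Reparameterized Weibull draw: z = lam * (-ln(1 - eps))**(1/k) with
    # eps ~ Uniform(0, 1); differentiable in k and lam, which lets the
    # inference network be trained by stochastic gradient descent.
    eps = rng.uniform(size=np.shape(lam))
    return lam * (-np.log1p(-eps)) ** (1.0 / k)

rng = np.random.default_rng(42)
z = weibull_rsample(2.0, np.ones(100_000), rng)
# Weibull(k, lam) has mean lam * Gamma(1 + 1/k); for k = 2, lam = 1 that is
# sqrt(pi) / 2 ~= 0.886, which the empirical mean should approximate.
```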