Search Results for author: Long Tian

Found 7 papers, 2 papers with code

Friendly Topic Assistant for Transformer Based Abstractive Summarization

no code implementations • EMNLP 2020 • Zhengjue Wang, Zhibin Duan, Hao Zhang, Chaojie Wang, Long Tian, Bo Chen, Mingyuan Zhou

Abstractive document summarization is a comprehensive task spanning document understanding and summary generation, an area in which Transformer-based models have achieved state-of-the-art performance.

Abstractive Text Summarization, Document Summarization +2

Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection

1 code implementation • NeurIPS 2023 • Ruiying Lu, Yujie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, Ruimin Hu

First, instead of learning continuous representations, we preserve the typical normal patterns as discrete iconic prototypes, and confirm the importance of Vector Quantization in preventing the model from falling into a shortcut solution.

Quantization, Unsupervised Anomaly Detection
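The snippet above describes snapping continuous features to discrete prototypes. A minimal NumPy sketch of that vector-quantization step, where a feature's distance to its nearest codebook prototype can serve as an anomaly-score proxy (shapes and names here are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 learned prototypes of "normal" patterns
features = rng.normal(size=(3, 4))   # 3 continuous feature vectors to quantize

# squared distance from every feature to every prototype (broadcasting)
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
nearest = dists.argmin(axis=1)       # index of the closest prototype
quantized = codebook[nearest]        # discrete (quantized) representation

# large quantization error = feature far from all normal prototypes
recon_error = ((features - quantized) ** 2).sum(axis=-1)
```

In a trained model the codebook would be learned from normal data only, so anomalous inputs land far from every prototype and incur a high `recon_error`.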

Prototypes-oriented Transductive Few-shot Learning with Conditional Transport

no code implementations • ICCV 2023 • Long Tian, Jingyi Feng, Wenchao Chen, Xiaoqiang Chai, Liming Wang, Xiyang Liu, Bo Chen

Transductive Few-Shot Learning (TFSL) has recently attracted increasing attention since it typically outperforms its inductive peer by leveraging statistics of query samples.

Few-Shot Learning

Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport

no code implementations • 9 Oct 2022 • Dandan Guo, Long Tian, He Zhao, Mingyuan Zhou, Hongyuan Zha

A recent solution to this problem calibrates the distribution of these few-sample classes by transferring statistics from base classes with sufficient examples; the key question is how to decide the transfer weights from base classes to novel classes.

Domain Generalization, Few-Shot Learning
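The snippet above frames calibration as transferring base-class statistics to a novel class via transfer weights. A toy sketch of that idea, using a simple softmax-over-distance weighting as an illustrative assumption (the paper instead derives the weights with hierarchical optimal transport):

```python
import numpy as np

rng = np.random.default_rng(1)
base_means = rng.normal(size=(5, 4))     # means of 5 base classes (many samples)
novel_samples = rng.normal(size=(3, 4))  # only 3 shots of a novel class
novel_mean = novel_samples.mean(axis=0)  # noisy estimate from few samples

# transfer weights: closer base classes contribute more (illustrative choice)
d = ((base_means - novel_mean) ** 2).sum(axis=-1)
w = np.exp(-d) / np.exp(-d).sum()

# calibrated mean blends the few-shot estimate with transferred statistics
calibrated_mean = 0.5 * novel_mean + 0.5 * (w @ base_means)
```

The same weighted transfer can be applied to covariances, yielding a calibrated distribution from which extra training examples can be sampled.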

Learning Prototype-oriented Set Representations for Meta-Learning

no code implementations • ICLR 2022 • Dandan Guo, Long Tian, Minghe Zhang, Mingyuan Zhou, Hongyuan Zha

Since our plug-and-play framework can be applied to many meta-learning problems, we further instantiate it to the cases of few-shot classification and implicit meta generative modeling.

Meta-Learning

Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling

1 code implementation • ICLR 2020 • Hao Zhang, Bo Chen, Long Tian, Zhengjue Wang, Mingyuan Zhou

For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework.

Generative Adversarial Network

VHEGAN: Variational Hetero-Encoder Randomized GAN for Zero-Shot Learning

no code implementations • ICLR 2019 • Hao Zhang, Bo Chen, Long Tian, Zhengjue Wang, Mingyuan Zhou

To extract and relate visual and linguistic concepts from images and textual descriptions for text-based zero-shot learning (ZSL), we develop a variational hetero-encoder (VHE) that decodes text via a deep probabilistic topic model, where the variational posterior of the local latent variables is encoded from an image via a Weibull-distribution-based inference network.

Image Generation, Retrieval +3
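The snippet above mentions a Weibull-based inference network for the variational posterior. The Weibull distribution admits a simple inverse-CDF reparameterization, which is what makes it convenient for such networks; a minimal sketch with toy shape/scale values (in the paper these would be predicted from the image by the inference network):

```python
import numpy as np

rng = np.random.default_rng(2)
k, lam = 2.0, 1.5                # Weibull shape and scale (toy values)
u = rng.uniform(size=1000)       # uniform noise

# inverse-CDF reparameterization: z = lam * (-ln(1 - u))^(1/k)
# gradients w.r.t. k and lam flow through this deterministic transform
z = lam * (-np.log(1.0 - u)) ** (1.0 / k)
```

Because `z` is a deterministic, differentiable function of `(k, lam)` given the noise `u`, the sampler supports low-variance reparameterized gradient estimates, and the latent variables are guaranteed positive, as a topic model requires.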
