Search Results for author: Wenxian Shi

Found 8 papers, 4 papers with code

Dispersed EM-VAEs for Interpretable Text Generation

no code implementations • ICML 2020 • Wenxian Shi, Hao Zhou, Ning Miao, Lei Li

Interpretability is important in text generation, since it allows generation to be guided by interpretable attributes.

Text Generation

Functional Geometry Guided Protein Sequence and Backbone Structure Co-Design

1 code implementation • 6 Oct 2023 • Zhenqiao Song, Yunlong Zhao, Wenxian Shi, Yang Yang, Lei Li

In this paper, we propose NAEPro, a model that jointly designs protein sequence and structure based on automatically detected functional sites.

Joint Design of Protein Sequence and Structure based on Motifs

no code implementations • 4 Oct 2023 • Zhenqiao Song, Yunlong Zhao, Yufei Song, Wenxian Shi, Yang Yang, Lei Li

Designing novel proteins with desired functions is crucial in biology and chemistry.

Follow Your Path: a Progressive Method for Knowledge Distillation

no code implementations • 20 Jul 2021 • Wenxian Shi, Yuxuan Song, Hao Zhou, Bohan Li, Lei Li

However, it has been observed that a fully converged, heavyweight teacher model overly constrains the learning of a compact student network and can trap the optimization in poor local optima.

Knowledge Distillation
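For orientation, the sketch below shows the standard distillation objective this entry critiques, in which a student matches a single fixed, converged teacher; as the title suggests, the paper's progressive method instead follows intermediate targets along the teacher's path. This is a minimal PyTorch sketch under common conventions, not the paper's code, and all names are illustrative.

```python
# Standard knowledge-distillation loss (Hinton et al., 2015):
# the "converged teacher" setup the snippet above refers to.
# Names and hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend a soft KL term against the teacher with hard-label cross-entropy."""
    # Soften both distributions with a shared temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), rescaled by T^2 so gradient magnitudes
    # stay comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```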

Learning from deep model via exploring local targets

no code implementations • 1 Jan 2021 • Wenxian Shi, Yuxuan Song, Hao Zhou, Bohan Li, Lei Li

However, it has been observed that a fully converged, heavyweight teacher model overly constrains the learning of a compact student network and can trap the optimization in poor local optima.

Knowledge Distillation

Kernelized Bayesian Softmax for Text Generation

1 code implementation • NeurIPS 2019 • Ning Miao, Hao Zhou, Chengqi Zhao, Wenxian Shi, Lei Li

Neural models for text generation require a softmax layer with proper token embeddings during the decoding phase.

Sentence • Text Generation
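The snippet above refers to the conventional decoding layer in which next-token logits are inner products between the decoder hidden state and per-token embeddings, which the paper's kernelized Bayesian softmax replaces. Below is a minimal sketch of that conventional baseline, assuming PyTorch; it is for orientation only and does not implement the paper's method, and all names are illustrative.

```python
# Conventional softmax decoding layer: logits are inner products
# between the decoder state and token embeddings. Illustrative only.
import torch
import torch.nn as nn

class SoftmaxDecoder(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        # One embedding vector per vocabulary token, used as output weights.
        self.token_embeddings = nn.Parameter(
            torch.randn(vocab_size, hidden_size) * 0.02)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, hidden_size) -> log-probs: (batch, vocab_size)
        logits = hidden @ self.token_embeddings.t()
        return torch.log_softmax(logits, dim=-1)
```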

Dispersed Exponential Family Mixture VAEs for Interpretable Text Generation

1 code implementation • 16 Jun 2019 • Wenxian Shi, Hao Zhou, Ning Miao, Lei Li

To enhance controllability and interpretability, one can replace the Gaussian prior with a mixture of Gaussian distributions (GM-VAE), whose mixture components can be related to hidden semantic aspects of the data.

Language Modelling • Text Generation
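To make the prior swap described above concrete, here is a minimal sketch of a mixture-of-Gaussians prior for a VAE latent space, assuming PyTorch; the component count and dimensions are arbitrary, and the dispersion mechanism that distinguishes the paper's exponential-family mixture from a plain GM-VAE is not reproduced here.

```python
# Mixture-of-Gaussians prior over VAE latent codes; replaces the
# standard-normal log-density term in the ELBO. Illustrative sketch.
import torch
import torch.distributions as D

num_components, latent_dim = 10, 32  # e.g. one component per semantic aspect

# Learnable mixture weights and per-component Gaussian parameters.
mix_logits = torch.zeros(num_components, requires_grad=True)
means = torch.randn(num_components, latent_dim, requires_grad=True)
log_scales = torch.zeros(num_components, latent_dim, requires_grad=True)

prior = D.MixtureSameFamily(
    D.Categorical(logits=mix_logits),
    D.Independent(D.Normal(means, log_scales.exp()), 1),
)

# log p(z) for a batch of latent codes, used in place of the
# standard Gaussian prior density when computing the ELBO.
z = torch.randn(4, latent_dim)
log_prior = prior.log_prob(z)  # shape: (4,)
```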
