Search Results for author: Xinyuan Zhang

Found 16 papers, 4 papers with code

Cognitive Semantic Communication Systems Driven by Knowledge Graph

no code implementations 24 Feb 2022 Fuhui Zhou, Yihao Li, Xinyuan Zhang, Qihui Wu, Xianfu Lei, Rose Qingyang Hu

Semantic communication is envisioned as a promising technique to break through the Shannon limit.

Data Compression

GIU-GANs: Global Information Utilization for Generative Adversarial Networks

no code implementations 25 Jan 2022 Yongqi Tian, Xueyuan Gong, Jialin Tang, Binghua Su, Xiaoxiang Liu, Xinyuan Zhang

To overcome the aforementioned limitations, in this paper we propose a new GAN called Involution Generative Adversarial Networks (GIU-GANs).

Image Generation
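As background, the involution operator that GIU-GANs build on inverts convolution's design: kernels are shared across channels but generated per spatial location from the input itself. Below is a minimal single-group NumPy sketch, assuming a caller-supplied kernel-generation function; it illustrates the operator only, not the paper's architecture.

```python
import numpy as np

def involution(x, kernel_fn, k=3):
    """Minimal single-group involution: a k x k spatial kernel is
    generated per pixel from the feature vector at that pixel and
    shared across all channels (the opposite of convolution's
    channel-specific, spatially shared filters)."""
    c, h, w = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))  # zero-pad spatially
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            kern = kernel_fn(x[:, i, j])        # (k, k) kernel for this pixel
            patch = xp[:, i:i + k, j:j + k]     # (c, k, k) neighborhood
            out[:, i, j] = (patch * kern).sum(axis=(1, 2))
    return out
```

With a constant averaging kernel the operator reduces to a channel-shared box filter, which makes its behavior easy to check by hand.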

KDCTime: Knowledge Distillation with Calibration on InceptionTime for Time-series Classification

no code implementations 4 Dec 2021 Xueyuan Gong, Yain-Whar Si, Yongqi Tian, Cong Lin, Xinyuan Zhang, Xiaoxiang Liu

Time-series classification approaches based on deep neural networks tend to overfit on UCR datasets, because those datasets pose a few-shot problem.

Knowledge Distillation Time Series +1
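As background on the distillation component in the title: a common (Hinton-style) distillation objective blends hard-label cross-entropy with KL divergence to the teacher's temperature-softened distribution. The NumPy sketch below shows that generic loss; the paper's calibration step is not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax, numerically stabilized."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    """Generic KD objective: alpha * hard-label cross-entropy plus
    (1 - alpha) * T^2 * KL(teacher_soft || student_soft)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * ce + (1 - alpha) * (T ** 2) * kd))
```

When the student matches the teacher exactly, the KL term vanishes and only the cross-entropy remains.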

Deep-Learning-Enabled Inverse Engineering of Multi-Wavelength Invisibility-to-Superscattering Switching with Phase-Change Materials

no code implementations 25 Dec 2020 Jie Luo, Xun Li, Xinyuan Zhang, Jiajie Guo, Wei Liu, Yun Lai, Yaohui Zhan, Min Huang

Inverse design of nanoparticles for desired scattering spectra, and dynamic switching between the two opposite scattering anomalies, i.e., superscattering and invisibility, are important in realizing cloaking, sensing, and functional devices.


Semantic Matching for Sequence-to-Sequence Learning

no code implementations Findings of the Association for Computational Linguistics 2020 Ruiyi Zhang, Changyou Chen, Xinyuan Zhang, Ke Bai, Lawrence Carin

In sequence-to-sequence models, classical optimal transport (OT) can be applied to semantically match generated sentences with target sentences.
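Matching generated sentences to targets with classical OT is typically made tractable via entropic regularization. Below is a minimal Sinkhorn-Knopp sketch, assuming uniform marginals and a precomputed cost matrix (e.g., cosine distances between token or sentence embeddings); this is generic OT machinery, not the paper's exact formulation.

```python
import numpy as np

def sinkhorn(cost, eps=0.1, iters=200):
    """Entropy-regularized optimal transport between two uniform
    distributions via Sinkhorn-Knopp; returns the transport plan
    that softly matches rows (generated items) to columns (targets)."""
    n, m = cost.shape
    K = np.exp(-cost / eps)          # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m) / m
    for _ in range(iters):
        u = a / (K @ v)              # enforce row marginals
        v = b / (K.T @ u)            # enforce column marginals
    return u[:, None] * K * v[None, :]
```

The plan's entries sum to one, and low-cost pairs receive most of the mass; the OT distance is then `(plan * cost).sum()`.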

Unsupervised Abstractive Dialogue Summarization for Tete-a-Tetes

no code implementations 15 Sep 2020 Xinyuan Zhang, Ruiyi Zhang, Manzil Zaheer, Amr Ahmed

High-quality dialogue-summary paired data is expensive to produce and domain-sensitive, making abstractive dialogue summarization a challenging task.

Abstractive Dialogue Summarization dialogue summary +1

Learning Compressed Sentence Representations for On-Device Text Processing

1 code implementation ACL 2019 Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin

Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems.

Sentence Embeddings
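One simple baseline for compressing continuous sentence embeddings is hard-threshold binarization with Hamming-distance retrieval; the paper studies several strategies, including learned autoencoder-based codes, so treat this sketch as the naive baseline only.

```python
import numpy as np

def binarize(embeddings, thresholds=None):
    """Hard-threshold binarization of continuous embeddings into
    compact binary codes; defaults to per-dimension median thresholds
    so each bit is roughly balanced across the corpus."""
    if thresholds is None:
        thresholds = np.median(embeddings, axis=0)
    return (embeddings > thresholds).astype(np.uint8)

def hamming_sim(a, b):
    """Similarity in [0, 1]: fraction of matching bits."""
    return 1.0 - float(np.mean(a != b))
```

Binary codes cut storage by roughly 32x versus float32 vectors and allow fast on-device comparison via bitwise operations.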

Syntax-Infused Variational Autoencoder for Text Generation

no code implementations ACL 2019 Xinyuan Zhang, Yi Yang, Siyang Yuan, Dinghan Shen, Lawrence Carin

We present a syntax-infused variational autoencoder (SIVAE) that integrates sentences with their syntactic trees to improve the grammar of generated sentences.

Text Generation

Improved Semantic-Aware Network Embedding with Fine-Grained Word Alignment

no code implementations EMNLP 2018 Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin

Network embeddings, which learn low-dimensional representations for each vertex in a large-scale network, have received considerable attention in recent years.

Link Prediction Network Embedding +1

Diffusion Maps for Textual Network Embedding

no code implementations NeurIPS 2018 Xinyuan Zhang, Yitong Li, Dinghan Shen, Lawrence Carin

Textual network embedding leverages rich text information associated with the network to learn low-dimensional vectorial representations of vertices.

General Classification Link Prediction +2
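As background on the diffusion-map machinery the title refers to: given a symmetric affinity matrix over vertices, one row-normalizes it into a Markov transition matrix and embeds each vertex with the top non-trivial eigenvectors, scaled by their eigenvalues raised to the diffusion time. A generic NumPy sketch (not the paper's text-aware variant):

```python
import numpy as np

def diffusion_map(W, dim=2, t=1):
    """Classic diffusion-map embedding from a symmetric affinity
    matrix W with positive degrees: eigendecompose the symmetrized
    transition matrix, recover right eigenvectors of P = D^-1 W,
    and scale by eigenvalue^t."""
    d = W.sum(axis=1)
    # symmetric conjugate S = D^-1/2 W D^-1/2 shares P's eigenvalues
    S = W / (np.sqrt(d)[:, None] * np.sqrt(d)[None, :])
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1]            # descending eigenvalues
    vals, vecs = vals[idx], vecs[:, idx]
    psi = vecs / np.sqrt(d)[:, None]        # right eigenvectors of P
    return (vals[1:dim + 1] ** t) * psi[:, 1:dim + 1]
```

On a graph with two weakly linked clusters, the first diffusion coordinate separates the clusters by sign, which is the property link-prediction embeddings exploit.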

Joint Embedding of Words and Labels for Text Classification

2 code implementations ACL 2018 Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin

Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences.

Classification General Classification +2
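The joint word-label embedding idea can be illustrated with a small attention sketch: cosine compatibility between each word embedding and each label embedding yields per-word attention scores used to pool the text into a single vector. This follows the spirit of the paper's framework but simplifies it (e.g., no spatial smoothing window over compatibility scores).

```python
import numpy as np

def label_attentive_pool(word_emb, label_emb):
    """Pool word vectors into a text vector, attending more to words
    that are cosine-similar to some label embedding."""
    wn = word_emb / np.linalg.norm(word_emb, axis=1, keepdims=True)
    ln = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    G = wn @ ln.T                    # (words, labels) compatibility
    scores = G.max(axis=1)           # best-matching label per word
    att = np.exp(scores)
    att /= att.sum()                 # softmax over words
    return att @ word_emb, att       # pooled vector and attention
```

Words aligned with a label dominate the pooled representation, so the classifier sees label-relevant evidence up front.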

Multi-Label Learning from Medical Plain Text with Convolutional Residual Models

no code implementations 15 Jan 2018 Xinyuan Zhang, Ricardo Henao, Zhe Gan, Yitong Li, Lawrence Carin

Since diagnoses are typically correlated, a deep residual network is employed on top of the CNN encoder, to capture label (diagnosis) dependencies and incorporate information directly from the encoded sentence vector.

General Classification Multi-Label Classification +3
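The encoder-plus-residual design described above can be sketched generically: a residual block refines the encoded sentence vector without discarding it, and independent sigmoids score each diagnosis label. Weights and dimensions below are illustrative, not the paper's.

```python
import numpy as np

def residual_block(h, W1, W2):
    """Residual refinement h + f(h): the label head can model
    dependencies while still reading the encoded sentence directly."""
    return h + np.maximum(0, h @ W1) @ W2

def predict_diagnoses(h, W1, W2, W_out, threshold=0.5):
    """Multi-label prediction: residual refinement followed by one
    independent sigmoid per diagnosis label (labels are not mutually
    exclusive, so no softmax)."""
    z = residual_block(h, W1, W2) @ W_out
    probs = 1.0 / (1.0 + np.exp(-z))
    return probs, (probs > threshold).astype(int)
```

Using per-label sigmoids rather than a softmax is what lets the model assign several correlated diagnoses to one note.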
