2 code implementations • ACL 2018 • Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin
Word embeddings are effective intermediate representations for capturing semantic regularities between words when learning representations of text sequences.
Ranked #11 on Text Classification on DBpedia
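The entry above concerns simple pooling of word embeddings as a strong text-classification baseline. A minimal sketch of that idea, using toy random embeddings in place of the pretrained vectors the paper assumes (names like `swem_features` are illustrative, not from the paper's code):

```python
# Sketch of simple word-embedding pooling for sentence representation:
# average and max pooling over token vectors, concatenated.
# Toy random embeddings stand in for pretrained ones (e.g. GloVe).
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3}
emb = rng.normal(size=(len(vocab), 8))  # toy embedding matrix: 4 words x 8 dims

def swem_features(tokens):
    vecs = np.stack([emb[vocab[t]] for t in tokens])
    avg = vecs.mean(axis=0)           # average pooling over tokens
    mx = vecs.max(axis=0)             # element-wise max pooling over tokens
    return np.concatenate([avg, mx])  # concatenated pooled features

feats = swem_features(["the", "movie", "was", "great"])
print(feats.shape)  # (16,)
```

The pooled vector would then feed a small classifier; the point of the paper is that this parameter-free composition is a surprisingly competitive baseline.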
1 code implementation • ACL 2019 • Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin
Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems.
1 code implementation • 19 Oct 2023 • Barrett Martin Lattimer, Patrick Chen, Xinyuan Zhang, Yi Yang
We introduce SCALE (Source Chunking Approach for Large-scale inconsistency Evaluation), a task-agnostic model for detecting factual inconsistencies using a novel chunking strategy.
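The core of the chunking strategy can be sketched as: split the source into chunks, score the claim against each chunk, and aggregate with a max. The scorer below is a crude token-overlap stand-in for the paper's actual model-based scorer, and all function names here are hypothetical:

```python
# Hedged sketch of chunk-and-aggregate inconsistency scoring.
# `toy_score` is a placeholder proxy for a learned entailment scorer.
def chunk(tokens, size):
    """Split a token list into contiguous chunks of at most `size` tokens."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def toy_score(chunk_tokens, claim_tokens):
    # Hypothetical scorer: fraction of claim tokens found in the chunk.
    overlap = len(set(chunk_tokens) & set(claim_tokens))
    return overlap / max(len(set(claim_tokens)), 1)

def consistency(source, claim, size=4):
    # The claim is supported if any single chunk supports it: take the max.
    src, clm = source.split(), claim.split()
    return max(toy_score(c, clm) for c in chunk(src, size))

print(consistency("the cat sat on the mat near the door", "the cat sat"))  # 1.0
```

Chunk-level scoring keeps each model input short, which is what lets the approach scale to long source documents.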
1 code implementation • 5 Oct 2019 • Pengyu Cheng, Yitong Li, Xinyuan Zhang, Liqun Cheng, David Carlson, Lawrence Carin
The relative importance of global versus local structure for the embeddings is learned automatically.
1 code implementation • NeurIPS 2019 • Wenlin Wang, Chenyang Tao, Zhe Gan, Guoyin Wang, Liqun Chen, Xinyuan Zhang, Ruiyi Zhang, Qian Yang, Ricardo Henao, Lawrence Carin
This paper considers a novel variational formulation of network embeddings, with special focus on textual networks.
1 code implementation • 8 Apr 2024 • Jichang Yang, Hegan Chen, Jia Chen, Songqi Wang, Shaocong Wang, Yifei Yu, Xi Chen, Bo wang, Xinyuan Zhang, Binbin Cui, Ning Lin, Meng Xu, Yi Li, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Xumeng Zhang, Dashan Shang, Han Wang, Qi Liu, Kwang-Ting Cheng, Ming Liu
While demonstrating generative quality equivalent to the software baseline, our system achieved remarkable enhancements in generative speed for both unconditional and conditional generation tasks, by factors of 64.8 and 156.5, respectively.
no code implementations • NeurIPS 2018 • Xinyuan Zhang, Yitong Li, Dinghan Shen, Lawrence Carin
Textual network embedding leverages rich text information associated with the network to learn low-dimensional vectorial representations of vertices.
no code implementations • CVPR 2018 • Xinyuan Zhang, Xin Yuan, Lawrence Carin
Low-rank signal modeling has been widely leveraged to capture non-local correlation in image processing applications.
no code implementations • 15 Jan 2018 • Xinyuan Zhang, Ricardo Henao, Zhe Gan, Yitong Li, Lawrence Carin
Since diagnoses are typically correlated, a deep residual network is employed on top of the CNN encoder, to capture label (diagnosis) dependencies and incorporate information directly from the encoded sentence vector.
no code implementations • EMNLP 2018 • Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin
Network embeddings, which learn low-dimensional representations for each vertex in a large-scale network, have received considerable attention in recent years.
no code implementations • ACL 2019 • Liqun Chen, Guoyin Wang, Chenyang Tao, Dinghan Shen, Pengyu Cheng, Xinyuan Zhang, Wenlin Wang, Yizhe Zhang, Lawrence Carin
Constituting highly informative network embeddings is an important tool for network analysis.
no code implementations • ACL 2019 • Xinyuan Zhang, Yi Yang, Siyang Yuan, Dinghan Shen, Lawrence Carin
We present a syntax-infused variational autoencoder (SIVAE), which integrates sentences with their syntactic trees to improve the grammar of generated sentences.
no code implementations • 15 Sep 2020 • Xinyuan Zhang, Ruiyi Zhang, Manzil Zaheer, Amr Ahmed
High-quality dialogue-summary paired data is expensive to produce and domain-sensitive, making abstractive dialogue summarization a challenging task.
no code implementations • 25 Dec 2020 • Jie Luo, Xun Li, Xinyuan Zhang, Jiajie Guo, Wei Liu, Yun Lai, Yaohui Zhan, Min Huang
Inverse design of nanoparticles for desired scattering spectra and dynamic switching between the two opposite scattering anomalies, i.e., superscattering and invisibility, is important in realizing cloaking, sensing and functional devices.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Ruiyi Zhang, Changyou Chen, Xinyuan Zhang, Ke Bai, Lawrence Carin
In sequence-to-sequence models, classical optimal transport (OT) can be applied to semantically match generated sentences with target sentences.
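Entropic optimal transport between two sets of token embeddings can be computed with Sinkhorn iterations, the standard differentiable relaxation used for this kind of semantic matching. A minimal sketch with toy embeddings (not the paper's code):

```python
# Minimal Sinkhorn sketch: entropic OT distance between two token-embedding
# sets, the kind of soft matching applied to generated vs. target sentences.
import numpy as np

def sinkhorn(cost, reg=0.1, iters=200):
    """Entropic-regularized OT cost between uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / reg)          # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m  # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):           # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = np.diag(u) @ K @ np.diag(v)  # transport plan
    return (P * cost).sum()          # entropic approximation of OT cost

rng = np.random.default_rng(0)
x, y = rng.normal(size=(3, 5)), rng.normal(size=(4, 5))  # toy embedding sets
cost = np.linalg.norm(x[:, None] - y[None, :], axis=-1)  # pairwise L2 costs
print(sinkhorn(cost))
```

Because every step is differentiable, the resulting cost can serve directly as a training signal for the sequence-to-sequence model.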
no code implementations • 4 Dec 2021 • Xueyuan Gong, Yain-Whar Si, Yongqi Tian, Cong Lin, Xinyuan Zhang, Xiaoxiang Liu
Time-series classification approaches based on deep neural networks are prone to overfitting on UCR datasets, owing to the few-shot nature of those datasets.
no code implementations • 25 Jan 2022 • Yongqi Tian, Xueyuan Gong, Jialin Tang, Binghua Su, Xiaoxiang Liu, Xinyuan Zhang
To overcome the aforementioned limitations, in this paper we propose a new GAN called Involution Generative Adversarial Networks (GIU-GANs).
no code implementations • 24 Feb 2022 • Fuhui Zhou, Yihao Li, Xinyuan Zhang, Qihui Wu, Xianfu Lei, Rose Qingyang Hu
Semantic communication is envisioned as a promising technique to break through the Shannon limit.
1 code implementation • 25 Sep 2023 • Marialena Bevilacqua, Kezia Oketch, Ruiyang Qin, Will Stamey, Xinyuan Zhang, Yi Gan, Kai Yang, Ahmed Abbasi
Interestingly, we find that the transformer PLMs tend to score GPT-generated text 10-15% higher on average, relative to human-authored documents.
no code implementations • 8 Dec 2023 • Jiamu Xu, Xiaoxiang Liu, Xinyuan Zhang, Yain-Whar Si, Xiaofan Li, Zheng Shi, Ke Wang, Xueyuan Gong
Learning the discriminative features of different faces is an important task in face recognition.
no code implementations • 15 Apr 2024 • Yifei Yu, Shaocong Wang, Woyu Zhang, Xinyuan Zhang, Xiuzhe Wu, Yangu He, Jichang Yang, Yue Zhang, Ning Lin, Bo wang, Xi Chen, Songqi Wang, Xumeng Zhang, Xiaojuan Qi, Zhongrui Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
The GE harnesses the intrinsic stochasticity of resistive memory for efficient input encoding, while the PE achieves precise weight mapping through a Hardware-Aware Quantization (HAQ) circuit.
no code implementations • 17 Apr 2024 • Xueyuan Gong, Yain-Whar Si, Zheng Zhang, Xiaochen Yuan, Ke Wang, Xinyuan Zhang, Cong Lin, Xiaoxiang Liu
MHLR supports large-scale FR training on a single GPU and reduces training to 1/4 of the original training time without sacrificing more than 1% accuracy.