no code implementations • 4 Sep 2024 • Jialong Li, Zhicheng Zhang, Yunwei Chen, Qiqi Lu, Ye Wu, Xiaoming Liu, Qianjin Feng, Yanqiu Feng, Xinyuan Zhang
The former fits DW images from diverse acquisition settings into a diffusion tensor field, while the latter applies a deep-learning-based denoiser to regularize the diffusion tensor field rather than the DW images, which frees the method from the network's fixed-channel-assignment limitation.
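The tensor-fitting step can be sketched as a standard log-linear least-squares DTI fit (a generic sketch, not the paper's exact estimator; the deep denoiser that regularizes the resulting tensor field is omitted):

```python
import numpy as np

def fit_diffusion_tensor(signals, s0, bvals, bvecs):
    """Log-linear least-squares fit of a diffusion tensor for one voxel.

    signals: (N,) DW signals, s0: non-diffusion-weighted signal,
    bvals: (N,) b-values, bvecs: (N, 3) unit gradient directions.
    Returns the 6 unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
    """
    g = bvecs
    # Design matrix for the log-signal model: ln S = ln S0 - b * g^T D g
    B = np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ]) * bvals[:, None]
    y = -np.log(signals / s0)
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    return d
```

With at least six non-collinear gradient directions the fit is determined; diverse acquisition settings simply contribute more rows to the design matrix.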
1 code implementation • 12 Jun 2024 • Hegan Chen, Jichang Yang, Jia Chen, Songqi Wang, Shaocong Wang, Dingchen Wang, Xinyu Tian, Yifei Yu, Xi Chen, Yinan Lin, Yangu He, Xiaoshan Wu, Xinyuan Zhang, Ning Lin, Meng Xu, Yi Li, Xumeng Zhang, Zhongrui Wang, Han Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
We experimentally validate our approach by developing a digital twin of the HP memristor, which accurately extrapolates its nonlinear dynamics, achieving a 4.2-fold projected speedup and a 41.4-fold projected decrease in energy consumption compared to state-of-the-art digital hardware, while maintaining an acceptable error margin.
no code implementations • 12 May 2024 • Xinyuan Zhang, Jiang Liu, Zehui Xiong, Yudong Huang, Gaochang Xie, Ran Zhang
Specifically, accounting for the deployment of batching and model quantization on resource-limited edge devices, we formulate an inference model for transformer-decoder-based LLMs.
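As a toy illustration of why batching helps on memory-bound edge hardware, consider a latency model in which each decoding step pays a fixed weight-loading cost that is amortized across the batch (the coefficients and functional form here are illustrative assumptions, not the paper's formulation):

```python
def decoder_latency(batch_size, out_tokens, t_load=0.5, t_compute=0.002):
    """Toy latency model for batched autoregressive decoding on an edge device.

    Each decoding step loads the (quantized) weights once (t_load, memory-bound)
    and performs compute that scales with the batch size (t_compute per sequence),
    so batching amortizes the weight-loading cost across sequences.
    Returns (total seconds, seconds per generated token).
    """
    per_step = t_load + t_compute * batch_size
    total = out_tokens * per_step
    return total, total / (batch_size * out_tokens)
```

Quantization shrinks `t_load` (fewer bytes to stream per step), which is why it compounds with batching in such a model.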
1 code implementation • 17 Apr 2024 • Xueyuan Gong, Zhiquan Liu, Yain-Whar Si, Xiaochen Yuan, Ke Wang, Xiaoxiang Liu, Cong Lin, Xinyuan Zhang
Computing power has evolved into a foundational and indispensable resource in the area of deep learning, particularly in tasks such as Face Recognition (FR) model training on large-scale datasets, where multiple GPUs are often a necessity.
no code implementations • 15 Apr 2024 • Yifei Yu, Shaocong Wang, Woyu Zhang, Xinyuan Zhang, Xiuzhe Wu, Yangu He, Jichang Yang, Yue Zhang, Ning Lin, Bo Wang, Xi Chen, Songqi Wang, Xumeng Zhang, Xiaojuan Qi, Zhongrui Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
The GE harnesses the intrinsic stochasticity of resistive memory for efficient input encoding, while the PE achieves precise weight mapping through a Hardware-Aware Quantization (HAQ) circuit.
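The weight-mapping idea can be illustrated with a minimal nearest-level quantizer (a software stand-in only; the actual HAQ circuit operates in hardware, and its level set would come from characterizing the resistive-memory cells):

```python
import numpy as np

def haq_map(weights, levels):
    """Map each weight to the nearest representable conductance level.

    `levels` is a hypothetical set of achievable conductance states;
    the index of the nearest level is looked up per weight.
    """
    levels = np.asarray(levels)
    idx = np.argmin(np.abs(weights[..., None] - levels), axis=-1)
    return levels[idx]
```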
1 code implementation • 8 Apr 2024 • Jichang Yang, Hegan Chen, Jia Chen, Songqi Wang, Shaocong Wang, Yifei Yu, Xi Chen, Bo Wang, Xinyuan Zhang, Binbin Cui, Ning Lin, Meng Xu, Yi Li, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Xumeng Zhang, Dashan Shang, Han Wang, Qi Liu, Kwang-Ting Cheng, Ming Liu
Demonstrating equivalent generative quality to the software baseline, our system achieved remarkable enhancements in generative speed for both unconditional and conditional generation tasks, by factors of 64.8 and 156.5, respectively.
no code implementations • 8 Dec 2023 • Jiamu Xu, Xiaoxiang Liu, Xinyuan Zhang, Yain-Whar Si, Xiaofan Li, Zheng Shi, Ke Wang, Xueyuan Gong
Learning the discriminative features of different faces is an important task in face recognition.
1 code implementation • 19 Oct 2023 • Barrett Martin Lattimer, Patrick Chen, Xinyuan Zhang, Yi Yang
We introduce SCALE (Source Chunking Approach for Large-scale inconsistency Evaluation), a task-agnostic model for detecting factual inconsistencies using a novel chunking strategy.
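The chunking strategy can be sketched as: split the source into chunks, score the claim against each chunk with an entailment model, and keep the maximum. Here `entail_prob` is a placeholder callback, not SCALE's actual LLM-based scorer, and the sentence-level chunking is a simplification:

```python
def scale_score(source_sentences, claim, entail_prob, chunk_size=3):
    """Chunk the source and score a claim by its best-supported chunk.

    source_sentences: list of sentences from the source document.
    entail_prob(premise, hypothesis): any NLI-style scorer returning a
    probability that the premise supports the hypothesis.
    """
    chunks = [
        " ".join(source_sentences[i:i + chunk_size])
        for i in range(0, len(source_sentences), chunk_size)
    ]
    # A claim is consistent if at least one chunk supports it.
    return max(entail_prob(chunk, claim) for chunk in chunks)
```

Chunking keeps each premise short enough for the scorer while still covering arbitrarily long sources.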
1 code implementation • 25 Sep 2023 • Marialena Bevilacqua, Kezia Oketch, Ruiyang Qin, Will Stamey, Xinyuan Zhang, Yi Gan, Kai Yang, Ahmed Abbasi
Interestingly, we find that the transformer PLMs tend to score GPT-generated text 10-15% higher on average, relative to human-authored documents.
no code implementations • 24 Feb 2022 • Fuhui Zhou, Yihao Li, Xinyuan Zhang, Qihui Wu, Xianfu Lei, Rose Qingyang Hu
Semantic communication is envisioned as a promising technique to break through the Shannon limit.
no code implementations • 25 Jan 2022 • Yongqi Tian, Xueyuan Gong, Jialin Tang, Binghua Su, Xiaoxiang Liu, Xinyuan Zhang
To overcome the aforementioned limitations, in this paper we propose a new GAN termed Involution Generative Adversarial Networks (GIU-GANs).
no code implementations • 4 Dec 2021 • Xueyuan Gong, Yain-Whar Si, Yongqi Tian, Cong Lin, Xinyuan Zhang, Xiaoxiang Liu
Time-series classification approaches based on deep neural networks tend to overfit on UCR datasets, a consequence of the few-shot nature of those datasets.
no code implementations • 25 Dec 2020 • Jie Luo, Xun Li, Xinyuan Zhang, Jiajie Guo, Wei Liu, Yun Lai, Yaohui Zhan, Min Huang
Inverse design of nanoparticles for desired scattering spectra, and dynamic switching between the two opposite scattering anomalies, i.e., superscattering and invisibility, are important for realizing cloaking, sensing, and functional devices.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Ruiyi Zhang, Changyou Chen, Xinyuan Zhang, Ke Bai, Lawrence Carin
In sequence-to-sequence models, classical optimal transport (OT) can be applied to semantically match generated sentences with target sentences.
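A minimal version of OT-based sequence matching uses entropy-regularized Sinkhorn iterations over a cosine cost between token embeddings (a generic sketch; the paper's exact formulation and regularization may differ):

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropy-regularized OT plan between two uniform distributions."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m) / m
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def ot_sequence_loss(gen_emb, ref_emb):
    """OT distance between generated and reference token embeddings.

    gen_emb: (n, d) embeddings of generated tokens, ref_emb: (m, d)
    embeddings of reference tokens; cost is cosine distance.
    """
    gn = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    rn = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    cost = 1.0 - gn @ rn.T
    plan = sinkhorn(cost)
    return np.sum(plan * cost)
```

Unlike token-level cross-entropy, the transport plan matches semantically similar tokens across positions, which is what makes the loss useful as a sequence-level training signal.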
no code implementations • 15 Sep 2020 • Xinyuan Zhang, Ruiyi Zhang, Manzil Zaheer, Amr Ahmed
High-quality dialogue-summary paired data is expensive to produce and domain-sensitive, making abstractive dialogue summarization a challenging task.
1 code implementation • 5 Oct 2019 • Pengyu Cheng, Yitong Li, Xinyuan Zhang, Liqun Cheng, David Carlson, Lawrence Carin
The relative importance of global versus local structure for the embeddings is learned automatically.
1 code implementation • NeurIPS 2019 • Wenlin Wang, Chenyang Tao, Zhe Gan, Guoyin Wang, Liqun Chen, Xinyuan Zhang, Ruiyi Zhang, Qian Yang, Ricardo Henao, Lawrence Carin
This paper considers a novel variational formulation of network embeddings, with special focus on textual networks.
1 code implementation • ACL 2019 • Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin
Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems.
no code implementations • ACL 2019 • Xinyuan Zhang, Yi Yang, Siyang Yuan, Dinghan Shen, Lawrence Carin
We present a syntax-infused variational autoencoder (SIVAE) that integrates sentences with their syntactic trees to improve the grammar of generated sentences.
no code implementations • ACL 2019 • Liqun Chen, Guoyin Wang, Chenyang Tao, Dinghan Shen, Pengyu Cheng, Xinyuan Zhang, Wenlin Wang, Yizhe Zhang, Lawrence Carin
Constructing highly informative network embeddings is an important tool for network analysis.
no code implementations • EMNLP 2018 • Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin
Network embeddings, which learn low-dimensional representations for each vertex in a large-scale network, have received considerable attention in recent years.
no code implementations • NeurIPS 2018 • Xinyuan Zhang, Yitong Li, Dinghan Shen, Lawrence Carin
Textual network embedding leverages rich text information associated with the network to learn low-dimensional vectorial representations of vertices.
2 code implementations • ACL 2018 • Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin
Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences.
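The label-word joint embedding idea can be sketched as label-guided attention pooling over word embeddings (a simplified illustration; the paper's compatibility function and training objective differ in detail):

```python
import numpy as np

def label_attentive_pool(word_embs, label_embs):
    """Pool word embeddings with label-guided attention.

    word_embs: (T, d) word vectors of a sequence,
    label_embs: (C, d) one embedding per class label.
    Words that align strongly with some label get higher attention weight.
    """
    w = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    c = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    compat = w @ c.T                      # (T, C) cosine compatibility
    scores = compat.max(axis=1)           # strongest label match per word
    attn = np.exp(scores) / np.exp(scores).sum()
    return attn @ word_embs               # (d,) attention-weighted average
```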
Ranked #11 on Text Classification on DBpedia
no code implementations • CVPR 2018 • Xinyuan Zhang, Xin Yuan, Lawrence Carin
Low-rank signal modeling has been widely leveraged to capture non-local correlation in image processing applications.
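A standard building block of such low-rank modeling is singular value thresholding applied to a matrix of grouped, similar patches (a textbook sketch of the primitive, not the paper's specific algorithm):

```python
import numpy as np

def svt(matrix, tau):
    """Singular value thresholding: proximal operator of the nuclear norm.

    Shrinks every singular value by tau (clipping at zero), which suppresses
    the small singular values carrying noise while keeping the dominant
    structure shared by the grouped non-local patches.
    """
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt
```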
no code implementations • 15 Jan 2018 • Xinyuan Zhang, Ricardo Henao, Zhe Gan, Yitong Li, Lawrence Carin
Since diagnoses are typically correlated, a deep residual network is employed on top of the CNN encoder, to capture label (diagnosis) dependencies and incorporate information directly from the encoded sentence vector.
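Schematically, the architecture stacks residual blocks over the encoded sentence vector before a multi-label sigmoid output (a simplified NumPy sketch with hypothetical shapes, not the paper's implementation):

```python
import numpy as np

def residual_block(x, W1, b1, W2, b2):
    """One residual block: x + MLP(x), so the classifier can refine the
    encoded sentence vector while keeping a direct path to it."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU
    return x + h @ W2 + b2

def predict_diagnoses(sent_vec, blocks, W_out, b_out):
    """Residual blocks over the CNN encoder output, then a sigmoid layer
    for multi-label diagnosis prediction; the shared residual trunk lets
    the model capture dependencies between correlated labels."""
    h = sent_vec
    for params in blocks:
        h = residual_block(h, *params)
    logits = h @ W_out + b_out
    return 1.0 / (1.0 + np.exp(-logits))
```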