no code implementations • EMNLP (sdp) 2020 • Lei LI, Yang Xie, Wei Liu, Yinan Liu, Yafei Jiang, Siya Qi, Xingyuan Li
In the LongSumm shared task, we integrate both extractive and abstractive summarization approaches.
no code implementations • 21 Oct 2024 • Jingwei Huang, Kuroush Nezafati, Ismael Villanueva-Miranda, Zifan Gu, Ann Marie Navar, Tingyi Wanyan, Qin Zhou, Bo Yao, Ruichen Rong, Xiaowei Zhan, Guanghua Xiao, Eric D. Peterson, Donghan M. Yang, Yang Xie
To overcome this bottleneck, we developed an ensemble LLMs method and demonstrated its effectiveness in two real-world tasks: (1) labeling a large-scale unlabeled ECG dataset in MIMIC-IV; (2) identifying social determinants of health (SDOH) from the clinical notes of EHR.
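The abstract does not specify how the individual LLM outputs are combined; a minimal sketch of one common ensembling scheme, majority voting over per-record labels (the function name and the "flag ties for human review" policy are assumptions, not the paper's method), looks like this:

```python
from collections import Counter

def ensemble_label(model_outputs):
    """Combine labels proposed by several LLMs for one record by majority vote."""
    counts = Counter(model_outputs)
    label, votes = counts.most_common(1)[0]
    # Require a strict majority; otherwise return None to flag for human review.
    if votes > len(model_outputs) / 2:
        return label
    return None

# Three hypothetical model outputs for one clinical note.
print(ensemble_label(["housing_insecurity", "housing_insecurity", "none"]))
# → housing_insecurity
```

A strict-majority rule trades coverage for precision: records where the models disagree are abstained on rather than labeled noisily.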
no code implementations • 13 Aug 2024 • Yang Xie, Ziqi Xu, Debo Cheng, Jiuyong Li, Lin Liu, Yinghao Zhang, Zaiwen Feng
In this paper, we propose CPTiVAE, a novel method that jointly trains a Variational AutoEncoder (VAE) and an identifiable Variational AutoEncoder (iVAE) to learn representations of latent confounders and latent post-treatment variables from their proxy variables, achieving unbiased causal effect estimation from observational data.
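The paper's architecture is not reproduced here, but any VAE-style encoder, including the ones CPTiVAE builds on, rests on the reparameterization trick and a KL regularizer. A minimal NumPy sketch of those two pieces (illustrative only, not the paper's implementation):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(sigma^2)) via z = mu + sigma * eps, eps ~ N(0, I).
    Writing the sample this way keeps it differentiable w.r.t. encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), the regularizer in the VAE ELBO."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.array([0.5, -0.2]), np.zeros(2)
z = reparameterize(mu, log_var, rng)
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))  # → 0.0 for a standard normal
```

In the paper's setting, separate latent blocks of z would represent confounders and post-treatment variables, with the iVAE's auxiliary variables providing identifiability.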
no code implementations • 8 Dec 2023 • Debo Cheng, Yang Xie, Ziqi Xu, Jiuyong Li, Lin Liu, Jixue Liu, Yinghao Zhang, Zaiwen Feng
To address co-occurring M-bias and confounding bias, we propose DLRCE, a novel Disentangled Latent Representation learning framework that learns latent representations from proxy variables for unbiased Causal effect Estimation from observational data.
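Once a valid adjustment set has been recovered (in DLRCE's case, a learned latent representation), the causal effect itself is estimated by standard covariate adjustment. A toy discrete sketch of that final step (the function and data are illustrative, not from the paper):

```python
from collections import defaultdict

def adjusted_ate(records):
    """ATE via backdoor adjustment over a discrete covariate z:
    ATE = sum_z P(z) * ( E[Y | T=1, z] - E[Y | T=0, z] )."""
    outcomes = defaultdict(list)  # (z, t) -> observed outcomes
    z_counts = defaultdict(int)
    for z, t, y in records:
        outcomes[(z, t)].append(y)
        z_counts[z] += 1
    n = len(records)
    mean = lambda ys: sum(ys) / len(ys)
    return sum(
        (count / n) * (mean(outcomes[(z, 1)]) - mean(outcomes[(z, 0)]))
        for z, count in z_counts.items()
    )

# Toy data (z, t, y): within each stratum of z, treatment raises y by 1.
data = [(0, 0, 0), (0, 1, 1), (1, 0, 2), (1, 1, 3)]
print(adjusted_ate(data))  # → 1.0
```

The point of frameworks like DLRCE is to produce a representation that makes this adjustment valid even when the raw proxies would induce M-bias if conditioned on directly.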
no code implementations • 31 May 2023 • Wenting Ye, Chen Li, Yang Xie, Wen Zhang, Hong-Yu Zhang, Bowen Wang, Debo Cheng, Zaiwen Feng
Identifying and discovering drug-target interactions (DTIs) are vital steps in drug discovery and development.
no code implementations • 6 Apr 2021 • Olawale Onabola, Zhuang Ma, Yang Xie, Benjamin Akera, Abdulrahman Ibraheem, Jia Xue, Dianbo Liu, Yoshua Bengio
In this work, we present hBERT, where we modify certain layers of the pretrained BERT model with the new Hopfield Layer.
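The Hopfield layers used to replace BERT sublayers perform an attention-like associative retrieval. A minimal NumPy sketch of the modern-Hopfield update rule (stored patterns, query, and `beta` are toy values, not hBERT's configuration):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def hopfield_retrieve(patterns, query, beta=8.0, steps=3):
    """Iterate the modern Hopfield update xi <- X^T softmax(beta * X xi),
    where the rows of `patterns` (X) are the stored patterns. With large
    beta and well-separated patterns, xi converges to the nearest pattern."""
    xi = query.astype(float)
    for _ in range(steps):
        xi = patterns.T @ softmax(beta * patterns @ xi)
    return xi

# Two stored patterns; a noisy query is pulled to the nearer one.
X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
q = np.array([0.9, 0.1, 0.0])
print(np.round(hopfield_retrieve(X, q), 3))
```

This one-step update is mathematically the same operation as softmax attention, which is why such a layer can be dropped into a pretrained transformer.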
no code implementations • 7 Jan 2021 • Yuntao Liu, Michael Zuzak, Yang Xie, Abhishek Chakraborty, Ankur Srivastava
(3) Our experiments show that SAS and RSAS exhibit better SAT resilience than SFLL and have similar effectiveness.
Cryptography and Security • Hardware Architecture • Formal Languages and Automata Theory
no code implementations • 3 Oct 2017 • Yuntao Liu, Yang Xie, Ankur Srivastava
In many cases, it is more practical to use a neural network intellectual property (IP) that an IP vendor has already trained.
Cryptography and Security