Search Results for author: Yang Xie

Found 8 papers, 0 papers with code

Large language models enabled multiagent ensemble method for efficient EHR data labeling

no code implementations21 Oct 2024 Jingwei Huang, Kuroush Nezafati, Ismael Villanueva-Miranda, Zifan Gu, Ann Marie Navar, Tingyi Wanyan, Qin Zhou, Bo Yao, Ruichen Rong, Xiaowei Zhan, Guanghua Xiao, Eric D. Peterson, Donghan M. Yang, Yang Xie

To overcome this bottleneck, we developed an ensemble method based on multiple LLMs and demonstrated its effectiveness on two real-world tasks: (1) labeling a large-scale unlabeled ECG dataset in MIMIC-IV; (2) identifying social determinants of health (SDOH) from clinical notes in EHRs.

Hallucination
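The ensemble idea above can be sketched as majority voting over labels produced by several independent LLM labelers. This is a minimal illustration, not the paper's implementation; the labeler functions and label names below are hypothetical stand-ins, and the paper's actual aggregation rule may differ.

```python
from collections import Counter

def ensemble_label(note: str, labelers) -> str:
    """Label one clinical note by majority vote across several labelers.

    `labelers` is a list of callables mapping a note to a label string;
    in the paper's setting each would wrap a different LLM. Majority
    voting is one common ensemble rule (an assumption here).
    """
    votes = [labeler(note) for labeler in labelers]
    label, _count = Counter(votes).most_common(1)[0]
    return label

# Toy stand-ins for LLM labelers (hypothetical):
labelers = [
    lambda note: "housing_insecurity",
    lambda note: "housing_insecurity",
    lambda note: "none",
]
print(ensemble_label("pt reports unstable housing", labelers))
# → housing_insecurity
```

Aggregating over several models in this way tends to suppress the idiosyncratic errors (including hallucinations) of any single LLM, which matches the task tag above.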

Causal Effect Estimation using identifiable Variational AutoEncoder with Latent Confounders and Post-Treatment Variables

no code implementations13 Aug 2024 Yang Xie, Ziqi Xu, Debo Cheng, Jiuyong Li, Lin Liu, Yinghao Zhang, Zaiwen Feng

In this paper, we propose CPTiVAE, a novel method that combines a Variational AutoEncoder (VAE) with an identifiable Variational AutoEncoder (iVAE) to learn representations of latent confounders and latent post-treatment variables from their proxy variables, achieving unbiased causal effect estimation from observational data.

Disentangled Latent Representation Learning for Tackling the Confounding M-Bias Problem in Causal Inference

no code implementations8 Dec 2023 Debo Cheng, Yang Xie, Ziqi Xu, Jiuyong Li, Lin Liu, Jixue Liu, Yinghao Zhang, Zaiwen Feng

To address the problem of co-occurring M-bias and confounding bias, we propose DLRCE, a novel Disentangled Latent Representation learning framework that learns latent representations from proxy variables for unbiased Causal effect Estimation from observational data.

Causal Inference, Representation Learning

HBert + BiasCorp -- Fighting Racism on the Web

no code implementations6 Apr 2021 Olawale Onabola, Zhuang Ma, Yang Xie, Benjamin Akera, Abdulrahman Ibraheem, Jia Xue, Dianbo Liu, Yoshua Bengio

In this work, we present hBERT, where we modify certain layers of the pretrained BERT model with the new Hopfield Layer.
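The Hopfield Layer referenced above is built on the modern (continuous) Hopfield retrieval rule, where a query is updated toward a convex combination of stored patterns via a softmax over similarities. The NumPy sketch below shows one retrieval step of that rule as an illustration; it is not the paper's hBERT implementation, and the `beta` value and toy patterns are assumptions.

```python
import numpy as np

def hopfield_retrieve(patterns: np.ndarray, query: np.ndarray,
                      beta: float = 8.0) -> np.ndarray:
    """One update step of a modern continuous Hopfield network:
    xi_new = X^T softmax(beta * X @ xi), with stored patterns as rows of X.
    Minimal sketch of the retrieval rule underlying the Hopfield Layer.
    """
    scores = beta * patterns @ query          # similarity to each stored pattern
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return patterns.T @ weights               # convex combination of patterns

X = np.array([[1.0, 0.0],                    # two stored patterns (toy example)
              [0.0, 1.0]])
q = np.array([0.9, 0.1])                     # noisy query near the first pattern
print(hopfield_retrieve(X, q))               # pulled toward [1, 0]
```

With a large enough `beta`, one step already retrieves the nearest stored pattern almost exactly, which is the associative-memory behavior such a layer contributes inside a transformer block.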

Robust and Attack Resilient Logic Locking with a High Application-Level Impact

no code implementations7 Jan 2021 Yuntao Liu, Michael Zuzak, Yang Xie, Abhishek Chakraborty, Ankur Srivastava

(3) Our experiments show that SAS and RSAS exhibit better SAT resilience than SFLL and have similar effectiveness.

Cryptography and Security, Hardware Architecture, Formal Languages and Automata Theory

Neural Trojans

no code implementations3 Oct 2017 Yuntao Liu, Yang Xie, Ankur Srivastava

In many cases, it is more practical to use a neural network intellectual property (IP) that an IP vendor has already trained.

Cryptography and Security
