no code implementations • 21 Nov 2023 • Shyam Venkatasubramanian, Ahmed Aloui, Vahid Tarokh
Advancing loss function design is pivotal for optimizing neural network training and performance.
no code implementations • 7 Nov 2023 • Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh
To address this, we introduce a model-agnostic data augmentation method that imputes the counterfactual outcomes for a selected subset of individuals.
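The paper's exact augmentation procedure is not given here; as a hedged illustration of the general idea, the sketch below imputes a counterfactual (untreated) outcome for a chosen subset of treated individuals via nearest-neighbour matching in the control arm, then appends the imputed pairs as extra training samples. The function name, the random subset selection, and the matching rule are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def augment_with_imputed_counterfactuals(X, t, y, k_subset=10, seed=0):
    """Hypothetical sketch: for a subset of treated units, impute the
    counterfactual outcome from the nearest control unit in covariate
    space, and append the imputed (X, t=0, y_hat) rows to the data."""
    rng = np.random.default_rng(seed)
    treated = np.flatnonzero(t == 1)
    control = np.flatnonzero(t == 0)
    # Assumption: the selection criterion in the paper is unspecified
    # here, so we pick the subset uniformly at random.
    subset = rng.choice(treated, size=min(k_subset, len(treated)), replace=False)
    X_aug, t_aug, y_aug = [], [], []
    for i in subset:
        # Nearest control unit supplies the imputed untreated outcome.
        j = control[np.argmin(np.linalg.norm(X[control] - X[i], axis=1))]
        X_aug.append(X[i])
        t_aug.append(0)
        y_aug.append(y[j])
    return (np.vstack([X, np.array(X_aug)]),
            np.concatenate([t, t_aug]),
            np.concatenate([y, y_aug]))
```

Because the imputation is model-agnostic, any downstream treatment-effect estimator can be trained on the augmented dataset unchanged.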
no code implementations • 20 Jun 2023 • Ahmed Aloui, Ali Hasan, Yuting Ng, Miroslav Pajic, Vahid Tarokh
Understanding individual treatment effects in extreme regimes is important for characterizing risks associated with different interventions.
no code implementations • 13 Jun 2023 • Ziyang Jiang, Yiling Liu, Michael H. Klein, Ahmed Aloui, Yiman Ren, Keyu Li, Vahid Tarokh, David Carlson
This is important in many scientific applications to identify the underlying mechanisms of a treatment effect.
no code implementations • 19 May 2023 • Cat P. Le, Juncheng Dong, Ahmed Aloui, Vahid Tarokh
To this end, we introduce a new continual learning approach for conditional generative adversarial networks by leveraging a mode-affinity score specifically designed for generative modeling.
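The paper's mode-affinity score is designed specifically for generative modeling and is not reproduced here; as a loose, hypothetical stand-in, the sketch below scores the affinity between two generative modes as the cosine similarity of their mean feature embeddings, which could then be used to pick the closest existing mode when a new task arrives. The function name and the cosine proxy are assumptions for illustration only.

```python
import numpy as np

def mode_affinity(feats_a, feats_b):
    """Illustrative proxy for a mode-affinity score: cosine similarity
    between the mean feature embeddings of two modes. feats_* are
    (n_samples, dim) arrays of embedded samples from each mode."""
    mu_a = feats_a.mean(axis=0)
    mu_b = feats_b.mean(axis=0)
    denom = np.linalg.norm(mu_a) * np.linalg.norm(mu_b) + 1e-12
    return float(mu_a @ mu_b / denom)

def closest_mode(new_feats, existing_mode_feats):
    """Pick the index of the existing mode most affine to the new one,
    e.g. to initialize continual training from that mode."""
    scores = [mode_affinity(new_feats, f) for f in existing_mode_feats]
    return int(np.argmax(scores))
```

A higher score would suggest transferring from that mode's generator weights rather than training the new mode from scratch.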
no code implementations • 3 Feb 2023 • Yiling Liu, Juncheng Dong, Ziyang Jiang, Ahmed Aloui, Keyu Li, Hunter Klein, Vahid Tarokh, David Carlson
To address this limitation, we propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
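The precise weighting in the proposed bound is not stated in this excerpt; as a minimal sketch of the reweighting idea, the snippet below computes a weighted average of per-sample source classification losses, where each sample's weight reflects how well its source sub-domain aligns with the target. The weight values and the sub-domain partition are hypothetical inputs, not the paper's alignment procedure.

```python
import numpy as np

def reweighted_source_error(losses, subdomain_ids, weights):
    """Sketch: reweight per-sample source losses by sub-domain alignment
    weights (higher weight = sub-domain better aligned with the target).
    losses: (n,) per-sample classification losses.
    subdomain_ids: (n,) integer sub-domain label per sample.
    weights: dict mapping sub-domain id -> alignment weight."""
    w = np.asarray([weights[d] for d in subdomain_ids], dtype=float)
    # Normalized weighted average of the source classification error.
    return float((w * losses).sum() / w.sum())
```

Sub-domains judged closer to the target thus contribute more to the error term that the bound controls.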
no code implementations • 1 Oct 2022 • Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh
To this end, we theoretically assess the feasibility of transferring individual treatment effect (ITE) knowledge and present a practical framework for efficient transfer.