1 code implementation • 25 Jun 2023 • Xubo Yue, Raed Al Kontar, Albert S. Berahas, Yang Liu, Zhenghao Zai, Kevin Edgar, Blake N. Johnson
Empirically, through simulated datasets and a real-world collaborative material discovery experiment, we show that our framework can effectively accelerate and improve the optimal design process and benefit all participants.
no code implementations • 15 Jun 2022 • Xubo Yue, Raed Al Kontar, Ana María Estrada Gómez
In this work, we take a step back to develop an FDA treatment for one of the most fundamental statistical models: linear regression.
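A rough illustration of what a federated treatment of linear regression can look like (a generic sketch, not the estimator developed in the paper): each client shares only its local sufficient statistics, and the server aggregates them and solves the combined normal equations.

```python
# Generic federated linear regression sketch: clients send (X^T X, X^T y),
# the server aggregates and solves. Illustrative only, not the paper's method.
import numpy as np

def client_statistics(X, y):
    """Compute local sufficient statistics on one client's data."""
    return X.T @ X, X.T @ y

def server_aggregate(stats, ridge=1e-6):
    """Sum client statistics and solve the (lightly regularized) normal equations."""
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    d = XtX.shape[0]
    return np.linalg.solve(XtX + ridge * np.eye(d), Xty)

# Example with two simulated clients
rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    y = X @ beta_true + 0.1 * rng.normal(size=100)
    clients.append(client_statistics(X, y))

beta_hat = server_aggregate(clients)
print(beta_hat)  # close to beta_true
```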
no code implementations • 26 Apr 2022 • Raghav Gnanasambandam, Bo Shen, Jihoon Chung, Xubo Yue, Zhenyu Kong
To address this, a Self-scalable tanh (Stan) activation function is proposed for PINNs.
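A minimal sketch of such an activation in PyTorch, assuming the form Stan(x) = tanh(x) + β · x · tanh(x) with one trainable β per neuron; treat the exact parameterization and initialization as assumptions and consult the paper for details.

```python
# Sketch of a Self-scalable tanh (Stan) activation, assuming
# Stan(x) = tanh(x) + beta * x * tanh(x) with a trainable beta per neuron.
import torch
import torch.nn as nn

class Stan(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        # one learnable scale per neuron, initialized to 1
        self.beta = nn.Parameter(torch.ones(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = torch.tanh(x)
        return t + self.beta * x * t

# Example: a small PINN-style fully connected network using Stan
net = nn.Sequential(
    nn.Linear(2, 32), Stan(32),
    nn.Linear(32, 32), Stan(32),
    nn.Linear(32, 1),
)
```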
1 code implementation • 28 Nov 2021 • Xubo Yue, Raed Al Kontar
In this paper, we propose \texttt{FGPR}: a Federated Gaussian process ($\mathcal{GP}$) regression framework that uses an averaging strategy for model aggregation and stochastic gradient descent for local client computations.
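A hedged sketch of the mechanics this describes (not the paper's code): each client takes a few stochastic gradient steps on its local GP negative log marginal likelihood, and the server averages the resulting kernel hyperparameters.

```python
# FGPR-style round, sketched under assumptions: local SGD on a zero-mean RBF-GP
# negative log marginal likelihood, followed by server-side averaging of the
# hyperparameters. Illustrative only.
import torch

def gp_nll(params, X, y, jitter=1e-4):
    """Negative log marginal likelihood of a zero-mean GP with an RBF kernel."""
    log_ls, log_sf, log_sn = params  # log lengthscale, signal std, noise std
    d2 = torch.cdist(X, X) ** 2
    K = torch.exp(2 * log_sf) * torch.exp(-0.5 * d2 / torch.exp(2 * log_ls))
    K = K + (torch.exp(2 * log_sn) + jitter) * torch.eye(len(y))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(1), L)
    return 0.5 * y @ alpha.squeeze() + torch.log(torch.diag(L)).sum()

def local_update(global_params, X, y, steps=20, lr=0.05):
    """One client's SGD refinement of the shared hyperparameters."""
    params = global_params.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        gp_nll(params, X, y).backward()
        opt.step()
    return params.detach()

def fgpr_round(global_params, client_data):
    """Server-side averaging of client hyperparameter updates."""
    updates = [local_update(global_params, X, y) for X, y in client_data]
    return torch.stack(updates).mean(dim=0)
```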
no code implementations • 9 Nov 2021 • Raed Kontar, Naichen Shi, Xubo Yue, Seokhyun Chung, Eunshin Byon, Mosharaf Chowdhury, Judy Jin, Wissam Kontar, Neda Masoud, Maher Noueihed, Chinedum E. Okwudire, Garvesh Raskutti, Romesh Saigal, Karandeep Singh, Zhisheng Ye
The Internet of Things (IoT) is on the verge of a major paradigm shift.
no code implementations • 5 Aug 2021 • Xubo Yue, Maher Nouiehed, Raed Al Kontar
In this paper we propose \texttt{GIFAIR-FL}: a framework that imposes \textbf{G}roup and \textbf{I}ndividual \textbf{FAIR}ness to \textbf{F}ederated \textbf{L}earning settings.
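One way to encode the underlying idea, penalizing disparities across client (or group) losses in the global objective, is sketched below; the paper's exact objective, weighting scheme, and solver may differ, so treat this purely as an illustration of a fairness-regularized federated loss.

```python
# Hedged sketch of a fairness-regularized federated objective: a weighted
# average of client losses plus a penalty on their pairwise differences,
# discouraging models that serve some clients much worse than others.
import itertools
import torch

def fair_global_loss(client_losses, weights, lam=0.1):
    """client_losses: scalar loss tensors (same model, one per client)
    weights:       per-client weights (e.g., proportional to data size)
    lam:           strength of the fairness penalty"""
    losses = torch.stack(client_losses)
    w = torch.tensor(weights, dtype=losses.dtype)
    avg = (w * losses).sum() / w.sum()
    disparity = sum(torch.abs(li - lj)
                    for li, lj in itertools.combinations(losses, 2))
    return avg + lam * disparity
```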
no code implementations • 10 Nov 2020 • Xubo Yue, Maher Nouiehed, Raed Al Kontar
In an effort to improve generalization in deep learning and automate the process of learning rate scheduling, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers.
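A hedged sketch of the general idea: scale a base learning rate by a cheap sharpness proxy relative to its running average, so that sharper regions of the loss get larger steps and are easier to escape. The proxy (squared gradient norm) and the update rule here are illustrative assumptions, not the paper's exact formula.

```python
# Illustrative sharpness-aware learning-rate scheduler: the step size is the
# base rate scaled by a crude sharpness proxy (squared gradient norm) relative
# to its exponential moving average. Not the exact SALR rule.
import torch

class SharpnessScaledLR:
    def __init__(self, optimizer, base_lr=0.1, momentum=0.9):
        self.opt = optimizer
        self.base_lr = base_lr
        self.momentum = momentum
        self.avg_sharpness = None

    def step(self):
        # squared gradient norm as a cheap local-sharpness proxy
        sharp = sum((p.grad ** 2).sum()
                    for p in self.opt.param_groups[0]["params"]
                    if p.grad is not None).item()
        if self.avg_sharpness is None:
            self.avg_sharpness = sharp
        self.avg_sharpness = (self.momentum * self.avg_sharpness
                              + (1 - self.momentum) * sharp)
        scale = sharp / (self.avg_sharpness + 1e-12)
        for group in self.opt.param_groups:
            group["lr"] = self.base_lr * scale
```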
no code implementations • 28 Sep 2020 • Xubo Yue, Maher Nouiehed, Raed Al Kontar
In an effort to improve generalization in deep learning, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers.
no code implementations • 4 Nov 2019 • Xubo Yue, Raed Al Kontar
We then provide both theoretical and practical guidelines for deciding on the rolling horizon stagewise.
no code implementations • 15 Oct 2019 • Xubo Yue, Raed Kontar
We introduce an alternative closed-form lower bound on the Gaussian process ($\mathcal{GP}$) likelihood based on the Rényi $\alpha$-divergence.
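For context, the generic Rényi variational objective (Li & Turner, 2016) applied to a GP with latent function values $f$ and variational distribution $q(f)$ is shown below; for $\alpha > 0$ it lower-bounds the log marginal likelihood and recovers the ELBO as $\alpha \to 1$. Whether the paper's closed-form bound coincides with this exact expression is an assumption.

```latex
% Generic Renyi variational objective for a GP model with data y and latent f
% (the paper's closed-form bound may differ in detail).
\mathcal{L}_{\alpha}(q) \;=\; \frac{1}{1-\alpha}\,
  \log \mathbb{E}_{q(f)}\!\left[\left(\frac{p(y, f)}{q(f)}\right)^{1-\alpha}\right],
\qquad
\lim_{\alpha \to 1} \mathcal{L}_{\alpha}(q)
  \;=\; \mathbb{E}_{q(f)}\big[\log p(y, f) - \log q(f)\big] \quad (\text{the ELBO}).
```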
no code implementations • 9 Mar 2019 • Xubo Yue, Raed Kontar
We present a non-parametric prognostic framework for individualized event prediction based on joint modeling of both longitudinal and time-to-event data.