no code implementations • 30 May 2024 • Tanapol Kosolwattana, Huazheng Wang, Raed Al Kontar, Ying Lin
Online learning has demonstrated notable potential for dynamically allocating limited resources to monitor a large population of processes, effectively balancing the exploitation of processes that yield high rewards and the exploration of uncertain processes.
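The snippet above describes the trade-off only at a high level. The sketch below illustrates the generic mechanism with a UCB-style index for choosing which processes to monitor under a fixed budget; it is a minimal illustration of exploration versus exploitation, not the algorithm proposed in the paper, and the Bernoulli reward model and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_processes, budget, horizon = 20, 3, 500
true_means = rng.uniform(0.0, 1.0, n_processes)   # unknown reward of monitoring each process

counts = np.zeros(n_processes)
means = np.zeros(n_processes)

for t in range(1, horizon + 1):
    # UCB index: estimated mean + exploration bonus (generic rule, not the paper's)
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    ucb = np.where(counts == 0, np.inf, means + bonus)
    chosen = np.argsort(ucb)[-budget:]             # monitor the top-`budget` processes
    rewards = rng.binomial(1, true_means[chosen])  # observe noisy rewards
    counts[chosen] += 1
    means[chosen] += (rewards - means[chosen]) / counts[chosen]

print("most-monitored processes:", np.argsort(counts)[-budget:])
print("truly best processes:    ", np.argsort(true_means)[-budget:])
```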
no code implementations • 25 Mar 2024 • Seokhyun Chung, Raed Al Kontar
Numerical studies on both synthetic and real-world data in reliability engineering highlight the advantages of our model in real-time adaptation, enhanced signal prediction with uncertainty quantification, and joint prediction of labels and signals.
no code implementations • 21 Mar 2024 • Naichen Shi, Salar Fattahi, Raed Al Kontar
In this work, we study the problem of common and unique feature extraction from noisy data.
no code implementations • 12 Oct 2023 • Xiaoyang Song, Wenbo Sun, Maher Nouiehed, Raed Al Kontar, Judy Jin
Current techniques for Out-of-Distribution (OoD) detection predominantly rely on quantifying predictive uncertainty and incorporating model regularization during the training phase, using either real or synthetic OoD samples.
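For reference, the sketch below shows the most common uncertainty-based baseline for OoD detection, scoring inputs by one minus the maximum softmax probability. It is a standard point of comparison, not the method proposed in the paper; the toy model, threshold, and data are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Toy classifier standing in for a trained network (illustrative only).
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
model.eval()

@torch.no_grad()
def ood_score(x: torch.Tensor) -> torch.Tensor:
    """Higher score = more likely out-of-distribution (1 - max softmax probability)."""
    probs = F.softmax(model(x), dim=-1)
    return 1.0 - probs.max(dim=-1).values

x_in = torch.randn(8, 16)          # stand-ins for in-distribution inputs
x_ood = 5.0 * torch.randn(8, 16)   # stand-ins for out-of-distribution inputs
threshold = 0.5                    # illustrative; normally tuned on validation data
print("flagged as OoD:", (ood_score(x_ood) > threshold).tolist())
```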
no code implementations • 7 Sep 2023 • Jiuyun Hu, Naichen Shi, Raed Al Kontar, Hao Yan
We propose personalized Tucker decomposition (perTucker) to address the limitations of traditional tensor decomposition methods in capturing heterogeneity across different datasets.
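As a rough illustration of separating structure that is common across datasets from structure that is dataset-specific, the sketch below (using the tensorly library) fits a Tucker decomposition to pooled data and then to each dataset's residual. This naive global-plus-local split is not the perTucker algorithm; the ranks, shapes, and noise levels are arbitrary assumptions.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(5)

# Synthetic data: a shared low-rank tensor plus dataset-specific noise (illustrative only).
core, factors = tucker(tl.tensor(rng.standard_normal((20, 10, 10))), rank=[2, 2, 2])
shared_lowrank = tl.tucker_to_tensor((core, factors))
datasets = [shared_lowrank + 0.5 * rng.standard_normal((20, 10, 10)) for _ in range(2)]

# Step 1: Tucker decomposition of the pooled data approximates the common structure.
pooled = sum(datasets) / len(datasets)
g_core, g_factors = tucker(pooled, rank=[2, 2, 2])
common = tl.tucker_to_tensor((g_core, g_factors))

# Step 2: a per-dataset Tucker decomposition of the residual captures unique structure.
for i, X in enumerate(datasets):
    l_core, l_factors = tucker(X - common, rank=[1, 1, 1])
    unique = tl.tucker_to_tensor((l_core, l_factors))
    print(f"dataset {i}: residual norm after common + unique =",
          float(tl.norm(X - common - unique)))
```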
1 code implementation • 25 Jun 2023 • Xubo Yue, Raed Al Kontar, Albert S. Berahas, Yang Liu, Blake N. Johnson
Empirically, through simulated datasets and a real-world collaborative sensor design experiment, we show that our framework can effectively accelerate and improve the optimal design process and benefit all participants.
no code implementations • 24 Aug 2022 • Qiyuan Chen, Raed Al Kontar, Maher Nouiehed, Jessie Yang, Corey Lester
This necessitates rethinking cost-sensitive classification in DNNs.
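For context, cost-sensitive classification in a DNN is most often handled by weighting the loss per class, as in the PyTorch sketch below. This is the standard baseline rather than the rethinking the paper argues for, and the cost values are made up.

```python
import torch
import torch.nn as nn

# Hypothetical misclassification costs per class (e.g., missing a severe error costs more).
class_costs = torch.tensor([1.0, 5.0, 2.0])

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
criterion = nn.CrossEntropyLoss(weight=class_costs)   # standard cost-sensitive weighting
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 20)
y = torch.randint(0, 3, (32,))

loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("weighted loss:", loss.item())
```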
1 code implementation • 17 Jul 2022 • Naichen Shi, Raed Al Kontar
In this paper, we tackle a significant challenge in PCA: heterogeneity.
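The snippet states the problem only; the toy sketch below shows one naive way heterogeneity can be handled, extracting a shared principal subspace from pooled data and then dataset-specific components from each client's residual. It illustrates the setting, not the personalized PCA method developed in the paper; the synthetic data and ranks are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_components(X, k):
    """Top-k right singular vectors of a centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]

# Two clients sharing one global direction but each having a unique one.
d = 10
shared = rng.standard_normal(d)
clients = []
for _ in range(2):
    unique = rng.standard_normal(d)
    scores = rng.standard_normal((200, 2))
    clients.append(scores @ np.vstack([shared, unique]) + 0.05 * rng.standard_normal((200, d)))

# Naive split: global PCA on pooled data, then local PCA on each client's residual.
global_pc = top_components(np.vstack(clients), k=1)
for i, X in enumerate(clients):
    Xc = X - X.mean(axis=0)
    residual = Xc - Xc @ global_pc.T @ global_pc      # project out the shared direction
    local_pc = top_components(residual, k=1)
    corr = abs((local_pc @ shared).item()) / np.linalg.norm(shared)
    print(f"client {i}: alignment of local PC with shared direction = {corr:.3f}")
```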
no code implementations • 15 Jun 2022 • Xubo Yue, Raed Al Kontar, Ana María Estrada Gómez
In this work, we take a step back to develop a federated data analytics (FDA) treatment for one of the most fundamental statistical models: linear regression.
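A minimal sketch of a common federated baseline for linear models is given below: each client fits ordinary least squares locally and the server averages the coefficients with sample-size weights. This one-shot averaging is only a reference point, not the estimator developed in the paper; the data-generating setup is made up.

```python
import numpy as np

rng = np.random.default_rng(2)
true_beta = np.array([1.0, -2.0, 0.5])

def local_ols(n):
    """Each client fits ordinary least squares on its own data and shares only coefficients."""
    X = rng.standard_normal((n, 3))
    y = X @ true_beta + 0.1 * rng.standard_normal(n)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, n

client_fits = [local_ols(n) for n in (50, 120, 80)]

# Server aggregates with sample-size weights (FedAvg-style one-shot averaging).
weights = np.array([n for _, n in client_fits], dtype=float)
betas = np.stack([b for b, _ in client_fits])
beta_fed = (weights[:, None] * betas).sum(axis=0) / weights.sum()

print("federated estimate:", np.round(beta_fed, 3))
print("true coefficients :", true_beta)
```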
no code implementations • 16 Mar 2022 • Wenbo Sun, Raed Al Kontar, Judy Jin, Tzyy-Shuh Chang
Machine-vision-based defect classification techniques have been widely adopted for automatic quality inspection in manufacturing processes.
1 code implementation • 28 Nov 2021 • Xubo Yue, Raed Al Kontar
In this paper, we propose FGPR: a federated Gaussian process (GP) regression framework that uses an averaging strategy for model aggregation and stochastic gradient descent for local client computations.
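The sketch below mimics the described pipeline at toy scale with scikit-learn: each client fits GP kernel hyperparameters on its own data (scikit-learn's built-in optimizer stands in for the local SGD steps mentioned in the abstract), the server averages the hyperparameters, and a client then plugs the aggregated kernel into its own GP. The kernel choice, data, and aggregation details are assumptions for illustration, not the exact FGPR procedure.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
base_kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)

def client_data(n):
    X = rng.uniform(0, 5, (n, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
    return X, y

clients = [client_data(n) for n in (30, 40, 25)]

# Each client fits GP hyperparameters locally and shares only theta (log-hyperparameters).
thetas = []
for X, y in clients:
    gpr = GaussianProcessRegressor(kernel=base_kernel, normalize_y=True).fit(X, y)
    thetas.append(gpr.kernel_.theta)

# Server averages hyperparameters (the "averaging strategy for model aggregation").
global_kernel = base_kernel.clone_with_theta(np.mean(thetas, axis=0))

# A client uses the aggregated kernel with its own data (optimizer=None keeps it fixed).
X0, y0 = clients[0]
global_gp = GaussianProcessRegressor(kernel=global_kernel, optimizer=None,
                                     normalize_y=True).fit(X0, y0)
mean, std = global_gp.predict(np.array([[2.5]]), return_std=True)
print("prediction at x=2.5:", mean[0], "+/-", std[0])
```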
no code implementations • 19 Nov 2021 • Hao Chen, Lili Zheng, Raed Al Kontar, Garvesh Raskutti
Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for large-scale machine learning problems with independent samples due to their generalization performance and intrinsic computational advantage.
no code implementations • 5 Aug 2021 • Xubo Yue, Maher Nouiehed, Raed Al Kontar
In this paper we propose GIFAIR-FL: a framework that imposes Group and Individual FAIRness in Federated Learning settings.
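The abstract does not specify the mechanism here, so the sketch below only illustrates one generic way a fairness term can enter an FL objective, by penalizing the spread of per-client losses. It is not the GIFAIR-FL formulation; the penalty form and weight are assumptions.

```python
import numpy as np

# Illustrative only: penalize the pairwise spread of per-client losses on top of the
# average loss. This is a generic fairness-style regularizer, NOT the GIFAIR-FL objective.
def fair_objective(client_losses, lam=0.5):
    losses = np.asarray(client_losses, dtype=float)
    avg = losses.mean()
    spread = np.abs(losses[:, None] - losses[None, :]).sum() / 2.0
    return avg + lam * spread / len(losses) ** 2

print(fair_objective([0.20, 0.22, 0.21]))   # homogeneous clients -> small penalty
print(fair_objective([0.05, 0.40, 0.20]))   # uneven performance -> larger objective
```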
1 code implementation • 21 Jul 2021 • Naichen Shi, Fan Lai, Raed Al Kontar, Mosharaf Chowdhury
In this paper we propose Fed-ensemble: a simple approach that brings model ensembling to federated learning (FL).
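The sketch below conveys the general flavor of ensembling in FL: several models are rotated across clients over training rounds and their predictions are averaged at inference. The rotation scheme, models (scikit-learn SGD regressors), and data are illustrative stand-ins and do not reproduce the paper's assignment strategy.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(4)

def client_data(n):
    X = rng.standard_normal((n, 5))
    y = X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) + 0.1 * rng.standard_normal(n)
    return X, y

clients = [client_data(60) for _ in range(4)]
K = 2   # ensemble size (illustrative)
models = [SGDRegressor(learning_rate="constant", eta0=0.01, random_state=k) for k in range(K)]

# Each round, models are rotated across clients so every model eventually sees every client;
# at inference, predictions are averaged over the ensemble.
for _ in range(20):
    assignment = rng.permutation(len(clients)) % K
    for c, (X, y) in enumerate(clients):
        models[assignment[c]].partial_fit(X, y)

X_test, y_test = client_data(100)
pred = np.mean([m.predict(X_test) for m in models], axis=0)
print("ensemble RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```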
no code implementations • NeurIPS 2020 • Hao Chen, Lili Zheng, Raed Al Kontar, Garvesh Raskutti
Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for large-scale machine learning problems with independent samples due to their generalization performance and intrinsic computational advantage.
no code implementations • 10 Nov 2020 • Xubo Yue, Maher Nouiehed, Raed Al Kontar
In an effort to improve generalization in deep learning and automate the process of learning rate scheduling, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers.
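The snippet only names the idea, so the sketch below shows where a sharpness-dependent learning-rate update would slot into a plain SGD loop. The sharpness proxy used here (loss relative to squared gradient norm, clipped) is an arbitrary stand-in chosen for the example and is not the SALR update rule from the paper.

```python
import torch

# Illustrative only: tie the step size to a crude, assumed "sharpness" proxy.
model = torch.nn.Linear(10, 1)
base_lr = 0.05
x, y = torch.randn(64, 10), torch.randn(64, 1)

for step in range(100):
    loss = torch.nn.functional.mse_loss(model(x), y)
    model.zero_grad()
    loss.backward()
    grad_sq = sum((p.grad ** 2).sum() for p in model.parameters())
    proxy = (2.0 * loss / (grad_sq + 1e-12)).clamp(max=4.0)   # assumed proxy, not SALR's formula
    lr = base_lr * proxy.item()                               # step size adapts to the proxy
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad

print("final loss:", loss.item())
```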
no code implementations • 28 Sep 2020 • Xubo Yue, Maher Nouiehed, Raed Al Kontar
In an effort to improve generalization in deep learning, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers.
no code implementations • 19 Feb 2020 • Seokhyun Chung, Raed Al Kontar, Zhenke Wu
A fundamental assumption is that the output/group membership labels for all observations are known.
no code implementations • 4 Nov 2019 • Xubo Yue, Raed Al Kontar
We then provide both theoretical and practical guidelines for deciding on the rolling horizon at each stage.