Search Results for author: Raed Al Kontar

Found 19 papers, 4 papers with code

FCOM: A Federated Collaborative Online Monitoring Framework via Representation Learning

no code implementations • 30 May 2024 • Tanapol Kosolwattana, Huazheng Wang, Raed Al Kontar, Ying Lin

Online learning has demonstrated notable potential to dynamically allocate limited resources to monitor a large population of processes, effectively balancing the exploitation of processes yielding high rewards and the exploration of uncertain processes.

Representation Learning
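
To make the exploitation-exploration balance described in the abstract above concrete, here is a minimal upper-confidence-bound (UCB) allocation sketch. It is a generic bandit-style illustration with assumed reward and budget settings, not the FCOM framework itself.

```python
import numpy as np

# Generic illustration: only `budget` of `n_processes` can be monitored per round,
# so the allocator trades off exploiting high-reward processes and exploring
# uncertain ones. This is not the FCOM algorithm; all settings are assumptions.
rng = np.random.default_rng(0)

n_processes, budget, n_rounds = 50, 5, 200
true_reward = rng.uniform(0.0, 1.0, n_processes)   # unknown expected reward per process
counts = np.zeros(n_processes)                      # how often each process was monitored
means = np.zeros(n_processes)                       # running reward estimates

for t in range(1, n_rounds + 1):
    # UCB score: exploit high estimated rewards, explore rarely monitored processes.
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    scores = means + np.where(counts == 0, np.inf, bonus)
    chosen = np.argsort(scores)[-budget:]           # spend the limited monitoring budget

    rewards = rng.binomial(1, true_reward[chosen])  # noisy observations from monitoring
    counts[chosen] += 1
    means[chosen] += (rewards - means[chosen]) / counts[chosen]

print("most-monitored processes:", np.argsort(counts)[-budget:])
print("truly best processes:    ", np.argsort(true_reward)[-budget:])
```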

Real-time Adaptation for Condition Monitoring Signal Prediction using Label-aware Neural Processes

no code implementations • 25 Mar 2024 • Seokhyun Chung, Raed Al Kontar

Numerical studies on both synthetic and real-world data in reliability engineering highlight the advantageous features of our model in real-time adaptation, enhanced signal prediction with uncertainty quantification, and joint prediction for labels and signals.

Uncertainty Quantification

SEE-OoD: Supervised Exploration For Enhanced Out-of-Distribution Detection

no code implementations • 12 Oct 2023 • Xiaoyang Song, Wenbo Sun, Maher Nouiehed, Raed Al Kontar, Judy Jin

Current techniques for Out-of-Distribution (OoD) detection predominantly rely on quantifying predictive uncertainty and incorporating model regularization during the training phase, using either real or synthetic OoD samples.

Data Augmentation • Out-of-Distribution Detection +1
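
For context on the uncertainty-based baselines the abstract refers to, below is a minimal maximum-softmax-probability (MSP) scoring sketch. The classifier and threshold are placeholders; this illustrates the generic "quantify predictive uncertainty" approach, not the SEE-OoD method.

```python
import torch
import torch.nn.functional as F

# Generic MSP OoD scoring: low confidence over the in-distribution classes is
# treated as evidence that an input is out-of-distribution. Illustrative only.

def msp_ood_scores(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return an OoD score per input; higher means more likely out-of-distribution."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
    return 1.0 - probs.max(dim=-1).values  # 1 - max class probability

# Toy usage with a placeholder classifier over 10 in-distribution classes.
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
scores = msp_ood_scores(model, torch.randn(8, 32))
flagged = scores > 0.7  # threshold is a tuning choice, e.g. picked on validation data
print(scores, flagged)
```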

Personalized Tucker Decomposition: Modeling Commonality and Peculiarity on Tensor Data

no code implementations • 7 Sep 2023 • Jiuyun Hu, Naichen Shi, Raed Al Kontar, Hao Yan

We propose personalized Tucker decomposition (perTucker) to address the limitations of traditional tensor decomposition methods in capturing heterogeneity across different datasets.

Anomaly Detection • Classification +2

Collaborative and Distributed Bayesian Optimization via Consensus: Showcasing the Power of Collaboration for Optimal Design

1 code implementation • 25 Jun 2023 • Xubo Yue, Raed Al Kontar, Albert S. Berahas, Yang Liu, Blake N. Johnson

Empirically, through simulated datasets and a real-world collaborative sensor design experiment, we show that our framework can effectively accelerate and improve the optimal design process and benefit all participants.

Bayesian Optimization

Federated Data Analytics: A Study on Linear Models

no code implementations • 15 Jun 2022 • Xubo Yue, Raed Al Kontar, Ana María Estrada Gómez

In this work, we take a step back to develop an FDA treatment for one of the most fundamental statistical models: linear regression.

Uncertainty Quantification • Variable Selection
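
As a concrete picture of fitting a linear model across clients without pooling raw data, here is a common sufficient-statistics construction in which clients share only XᵀX and Xᵀy. It is a generic sketch under those assumptions, not necessarily the FDA treatment developed in the paper.

```python
import numpy as np

# Each client computes local sufficient statistics for least squares; the server
# aggregates them and solves the normal equations. Raw data never leaves a client.
# Generic federated linear-regression illustration, not the paper's method.

rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0, 0.5])

def client_stats(n):
    X = rng.normal(size=(n, 3))
    y = X @ beta_true + 0.1 * rng.normal(size=n)
    return X.T @ X, X.T @ y          # only these summaries are communicated

stats = [client_stats(n) for n in (40, 60, 25)]
XtX = sum(s[0] for s in stats)
Xty = sum(s[1] for s in stats)

beta_hat = np.linalg.solve(XtX + 1e-8 * np.eye(3), Xty)  # tiny ridge for stability
print("federated estimate:", beta_hat)
```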

A Continual Learning Framework for Adaptive Defect Classification and Inspection

no code implementations • 16 Mar 2022 • Wenbo Sun, Raed Al Kontar, Judy Jin, Tzyy-Shuh Chang

Machine-vision-based defect classification techniques have been widely adopted for automatic quality inspection in manufacturing processes.

Classification • Continual Learning +2

Federated Gaussian Process: Convergence, Automatic Personalization and Multi-fidelity Modeling

1 code implementation • 28 Nov 2021 • Xubo Yue, Raed Al Kontar

In this paper, we propose FGPR: a federated Gaussian process (GP) regression framework that uses an averaging strategy for model aggregation and stochastic gradient descent for local client computations.

Privacy Preserving
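
Below is a minimal sketch of the aggregation pattern the abstract describes, assuming an RBF kernel, a two-parameter log parameterization, and a fixed noise level. It mimics "local SGD on a GP marginal likelihood plus server-side averaging" but is not the FGPR algorithm itself.

```python
import torch

# Clients run SGD on a GP negative log marginal likelihood with their own data;
# the server averages the resulting hyperparameters. Kernel, parameterization,
# noise level, and schedules are illustrative assumptions.

def rbf_kernel(x, log_lengthscale, log_variance):
    sq_dist = (x[:, None] - x[None, :]) ** 2
    return torch.exp(log_variance) * torch.exp(-0.5 * sq_dist / torch.exp(log_lengthscale) ** 2)

def gp_negative_log_marginal(params, x, y, noise=1e-2):
    log_ls, log_var = params[0], params[1]
    K = rbf_kernel(x, log_ls, log_var) + noise * torch.eye(len(x))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y[:, None], L).squeeze(-1)
    # Negative log marginal likelihood, up to an additive constant.
    return 0.5 * (y @ alpha) + torch.log(torch.diagonal(L)).sum()

def local_sgd(global_params, x, y, steps=20, lr=0.05):
    params = global_params.clone().requires_grad_(True)
    opt = torch.optim.SGD([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        gp_negative_log_marginal(params, x, y).backward()
        opt.step()
    return params.detach()

# Three toy clients with heterogeneous 1-D signals.
torch.manual_seed(0)
clients = []
for _ in range(3):
    x = torch.rand(30)
    clients.append((x, torch.sin(6 * x) + 0.1 * torch.randn(30)))

global_params = torch.zeros(2)  # [log lengthscale, log variance]
for _ in range(10):  # communication rounds
    local_updates = [local_sgd(global_params, x, y) for x, y in clients]
    global_params = torch.stack(local_updates).mean(dim=0)  # server-side averaging

print("aggregated GP hyperparameters:", global_params)
```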

Gaussian Process Inference Using Mini-batch Stochastic Gradient Descent: Convergence Guarantees and Empirical Benefits

no code implementations • 19 Nov 2021 • Hao Chen, Lili Zheng, Raed Al Kontar, Garvesh Raskutti

Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for large-scale machine learning problems with independent samples due to their generalization performance and intrinsic computational advantage.

GIFAIR-FL: A Framework for Group and Individual Fairness in Federated Learning

no code implementations • 5 Aug 2021 • Xubo Yue, Maher Nouiehed, Raed Al Kontar

In this paper, we propose GIFAIR-FL: a framework that imposes Group and Individual FAIRness on Federated Learning settings.

Fairness • Federated Learning +1

Fed-ensemble: Improving Generalization through Model Ensembling in Federated Learning

1 code implementation • 21 Jul 2021 • Naichen Shi, Fan Lai, Raed Al Kontar, Mosharaf Chowdhury

In this paper, we propose Fed-ensemble: a simple approach that brings model ensembling to federated learning (FL).

Federated Learning
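
A rough sketch of how an ensemble of K models might be combined with federated averaging, in the spirit of the abstract above. The client-to-model assignment and local training details are illustrative assumptions rather than the exact Fed-ensemble protocol.

```python
import copy
import torch

# Maintain K models; each round, every model is updated FedAvg-style on a
# different subset of clients, and predictions average the K models' outputs.
# The round-robin assignment and training settings are assumptions.

def local_update(model, x, y, steps=10, lr=0.1):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model.state_dict()

def average_state_dicts(states):
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

torch.manual_seed(0)
clients = [(torch.randn(40, 5), torch.randn(40, 1)) for _ in range(6)]
K = 3
ensemble = [torch.nn.Linear(5, 1) for _ in range(K)]

for _ in range(5):  # communication rounds
    perm = torch.randperm(len(clients))
    for k, model in enumerate(ensemble):
        assigned = perm[k::K].tolist()          # rotating subset of clients for model k
        states = [local_update(model, *clients[i]) for i in assigned]
        model.load_state_dict(average_state_dicts(states))

# Ensemble prediction: average the K models' outputs.
x_test = torch.randn(4, 5)
with torch.no_grad():
    pred = torch.stack([m(x_test) for m in ensemble]).mean(dim=0)
print(pred.squeeze(-1))
```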

Stochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes

no code implementations • NeurIPS 2020 • Hao Chen, Lili Zheng, Raed Al Kontar, Garvesh Raskutti

Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for large-scale machine learning problems with independent samples due to their generalization performance and intrinsic computational advantage.

Gaussian Processes

SALR: Sharpness-aware Learning Rate Scheduler for Improved Generalization

no code implementations • 10 Nov 2020 • Xubo Yue, Maher Nouiehed, Raed Al Kontar

In an effort to improve generalization in deep learning and automate the process of learning rate scheduling, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers.

Scheduling
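
As a loose illustration of a sharpness-aware learning-rate update, the toy scheduler below scales a base learning rate by a perturbation-based sharpness proxy. Both the proxy and the scaling rule are assumptions for illustration and not the SALR rule from the paper.

```python
import torch

# Estimate a local sharpness proxy as the loss increase under a small perturbation
# along the gradient direction, then scale the base learning rate by it.
# Illustrative assumptions only; not the SALR update.

def sharpness_proxy(loss_fn, params, rho=0.05):
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(params, grads):      # step to a nearby point along the gradient
            p.add_(rho * g / norm)
        perturbed = loss_fn()
        for p, g in zip(params, grads):      # undo the perturbation
            p.sub_(rho * g / norm)
    return (perturbed - loss).clamp_min(0.0).item()

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss_fn = lambda: torch.nn.functional.mse_loss(model(x), y)

base_lr, params = 0.05, list(model.parameters())
for step in range(20):
    s = sharpness_proxy(loss_fn, params)
    lr = base_lr * (1.0 + s)                 # larger steps where the proxy says "sharp"
    opt = torch.optim.SGD(params, lr=lr)
    opt.zero_grad()
    loss_fn().backward()
    opt.step()
    if step % 5 == 0:
        print(f"step {step:2d}  sharpness proxy {s:.4f}  lr {lr:.4f}")
```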

SALR: Sharpness-aware Learning Rates for Improved Generalization

no code implementations • 28 Sep 2020 • Xubo Yue, Maher Nouiehed, Raed Al Kontar

In an effort to improve generalization in deep learning, we propose SALR: a sharpness-aware learning rate update technique designed to recover flat minimizers.

Why Non-myopic Bayesian Optimization is Promising and How Far Should We Look-ahead? A Study via Rollout

no code implementations • 4 Nov 2019 • Xubo Yue, Raed Al Kontar

We then provide both a theoretical and a practical guideline to decide on the rolling horizon stagewise.

Bayesian Optimization
