no code implementations • 28 Jan 2025 • Siddharth Swaroop, Mohammad Emtiyaz Khan, Finale Doshi-Velez
We provide new connections between two distinct federated learning approaches based on (i) ADMM and (ii) Variational Bayes (VB), and propose new variants by combining their complementary strengths.
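As a point of reference for the ADMM side of this connection, consensus ADMM is a standard template for federated optimisation: each client minimises its local loss plus a proximity term to a global variable, the server averages, and dual variables accumulate the disagreement. The sketch below is a generic illustration with toy quadratic client losses, not the paper's specific variant; all names and the closed-form local step are illustrative assumptions.

```python
import numpy as np

def consensus_admm(a, b, rho=1.0, iters=100):
    """Generic consensus-ADMM sketch (not the paper's algorithm).

    K clients each hold a local quadratic loss f_k(x) = 0.5*a_k*(x - b_k)**2
    and jointly fit one shared scalar parameter.
    """
    K = len(a)
    x = np.zeros(K)   # local (client) parameters
    u = np.zeros(K)   # scaled dual variables
    z = 0.0           # global consensus parameter
    for _ in range(iters):
        # Local step: argmin_x f_k(x) + (rho/2)*(x - z + u_k)^2, in closed form.
        x = (a * b + rho * (z - u)) / (a + rho)
        # Global step: average the dual-corrected local parameters.
        z = np.mean(x + u)
        # Dual step: accumulate each client's consensus residual.
        u = u + x - z
    return z

# The global minimiser of sum_k f_k is the a-weighted mean of the b_k.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 2.0])
print(consensus_admm(a, b))  # converges towards 8/6 ≈ 1.333
```

The VB counterpart replaces each client's point estimate with a local approximate posterior, which is what makes the two families comparable in the first place.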
no code implementations • 5 Oct 2024 • Zana Buçinca, Siddharth Swaroop, Amanda E. Paluch, Finale Doshi-Velez, Krzysztof Z. Gajos
People's decision-making abilities often fail to improve or may even erode when they rely on AI for decision-support, even when the AI provides informative explanations.
no code implementations • 9 Mar 2024 • Zana Buçinca, Siddharth Swaroop, Amanda E. Paluch, Susan A. Murphy, Krzysztof Z. Gajos
Across two experiments (N=316 and N=964), our results demonstrate that people interacting with policies optimized for accuracy achieve significantly better accuracy, and even human-AI complementarity, compared to those interacting with any other type of AI support.
no code implementations • 26 Jan 2024 • Eura Nofshin, Siddharth Swaroop, Weiwei Pan, Susan Murphy, Finale Doshi-Velez
Many important behavior changes are frictionful; they require individuals to expend effort over a long period with little immediate gratification.
no code implementations • 12 Jun 2023 • Siddharth Swaroop, Zana Buçinca, Krzysztof Z. Gajos, Finale Doshi-Velez
The precise benefit can depend on both the user and task.
no code implementations • 1 Dec 2022 • Eura Shin, Siddharth Swaroop, Weiwei Pan, Susan Murphy, Finale Doshi-Velez
Mobile health (mHealth) technologies empower patients to adopt/maintain healthy behaviors in their daily lives, by providing interventions (e.g. push notifications) tailored to the user's needs.
1 code implementation • 23 Sep 2022 • Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela
In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting while minimising the number of communication rounds and providing differential privacy guarantees for data subjects.
1 code implementation • 24 Feb 2022 • Matthew Ashman, Thang D. Bui, Cuong V. Nguyen, Stratis Markou, Adrian Weller, Siddharth Swaroop, Richard E. Turner
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models.
1 code implementation • NeurIPS 2021 • Marcin Tomczak, Siddharth Swaroop, Andrew Foong, Richard Turner
Recent efforts to learn large variational Bayesian Neural Networks (BNNs) have been partly hampered by poor predictive performance caused by underfitting, and their performance is known to be very sensitive to the prior over weights.
1 code implementation • NeurIPS 2021 • Mohammad Emtiyaz Khan, Siddharth Swaroop
Humans and animals have a natural ability to quickly adapt to their surroundings, but machine-learning models, when subjected to changes, often require complete retraining from scratch.
1 code implementation • NeurIPS 2020 • Marcin Tomczak, Siddharth Swaroop, Richard Turner
Bayesian neural networks are enjoying a renaissance driven in part by recent advances in variational inference (VI).
no code implementations • ICLR 2021 • Noel Loo, Siddharth Swaroop, Richard E. Turner
One strand of research has used probabilistic regularization for continual learning, with two of the main approaches in this vein being Online Elastic Weight Consolidation (Online EWC) and Variational Continual Learning (VCL).
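Both methods mentioned above regularise new-task training towards parameters that mattered for old tasks; in EWC the penalty is a quadratic weighted by a (diagonal) Fisher information estimate. The sketch below illustrates that penalty and Online EWC's running Fisher accumulation; the function names, values, and decay factor are illustrative assumptions, not code from either paper.

```python
import numpy as np

def online_ewc_penalty(theta, theta_prev, fisher, lam=1.0):
    """Quadratic EWC penalty: (lam/2) * sum_i F_i * (theta_i - theta_prev_i)^2.

    Parameters with high Fisher values (important for previous tasks) are
    penalised more heavily for moving away from the old optimum theta_prev.
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_prev) ** 2)

def update_fisher(fisher_old, fisher_new, gamma=1.0):
    """Online EWC keeps one running Fisher, decayed by gamma after each task."""
    return gamma * fisher_old + fisher_new

theta_prev = np.array([1.0, -0.5])   # optimum after previous tasks
fisher = np.array([2.0, 0.1])        # first parameter is "important", second is not
theta = np.array([1.2, 0.5])         # candidate parameters for the new task
print(online_ewc_penalty(theta, theta_prev, fisher))  # 0.5*(2.0*0.04 + 0.1*1.0) = 0.09
```

VCL plays a similar role but derives its regulariser from a variational posterior carried between tasks rather than a Laplace-style quadratic, which is the relationship this line of work examines.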
no code implementations • ICML Workshop LifelongML 2020 • Noel Loo, Siddharth Swaroop, Richard E Turner
The standard architecture for continual learning is a multi-headed neural network, which has shared body parameters and task-specific heads.
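The shared-body/task-specific-head split described above can be sketched in a few lines; this is a minimal illustration with made-up names and sizes, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiHeadNet:
    """Minimal multi-headed network sketch: one shared body, one head per task."""

    def __init__(self, in_dim, hidden_dim):
        # Body parameters are shared across all tasks.
        self.W_body = rng.standard_normal((in_dim, hidden_dim)) * 0.1
        self.heads = {}  # task id -> task-specific head weights

    def add_head(self, task_id, out_dim):
        # A new head is created when a new task arrives.
        self.heads[task_id] = rng.standard_normal((self.W_body.shape[1], out_dim)) * 0.1

    def forward(self, x, task_id):
        h = np.tanh(x @ self.W_body)    # shared body features
        return h @ self.heads[task_id]  # task-specific output

net = MultiHeadNet(in_dim=4, hidden_dim=8)
net.add_head("task_0", out_dim=2)
net.add_head("task_1", out_dim=3)
x = rng.standard_normal((5, 4))
print(net.forward(x, "task_0").shape)  # (5, 2)
print(net.forward(x, "task_1").shape)  # (5, 3)
```

The continual-learning difficulty lives almost entirely in the shared body: the heads never interfere with each other, but updating `W_body` for a new task can degrade the features older heads rely on.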
1 code implementation • NeurIPS 2020 • Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard E. Turner, Mohammad Emtiyaz Khan
Continually learning new skills is important for intelligent systems, yet standard deep learning methods suffer from catastrophic forgetting of the past.
1 code implementation • 24 Nov 2019 • Mrinank Sharma, Michael Hutchinson, Siddharth Swaroop, Antti Honkela, Richard E. Turner
This setting is known as federated learning, in which privacy is a key concern.
1 code implementation • NeurIPS 2019 • Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan
Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.
1 code implementation • 6 May 2019 • Siddharth Swaroop, Cuong V. Nguyen, Thang D. Bui, Richard E. Turner
In the continual learning setting, tasks are encountered sequentially.
no code implementations • 27 Nov 2018 • Thang D. Bui, Cuong V. Nguyen, Siddharth Swaroop, Richard E. Turner
Second, the granularity of the updates, e.g. whether the updates are local to each data point and employ message passing, or global.