Search Results for author: Siddharth Swaroop

Found 16 papers, 9 papers with code

Towards Optimizing Human-Centric Objectives in AI-Assisted Decision-Making With Offline Reinforcement Learning

no code implementations • 9 Mar 2024 • Zana Buçinca, Siddharth Swaroop, Amanda E. Paluch, Susan A. Murphy, Krzysztof Z. Gajos

Across two experiments (N=316 and N=964), our results demonstrated that people interacting with policies optimized for accuracy achieved significantly better accuracy, and even human-AI complementarity, compared to those interacting with any other type of AI support.

Decision Making • Offline RL +1

Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks

no code implementations • 26 Jan 2024 • Eura Nofshin, Siddharth Swaroop, Weiwei Pan, Susan Murphy, Finale Doshi-Velez

Many important behavior changes are frictionful; they require individuals to expend effort over a long period with little immediate gratification.

Attribute

Modeling Mobile Health Users as Reinforcement Learning Agents

no code implementations • 1 Dec 2022 • Eura Shin, Siddharth Swaroop, Weiwei Pan, Susan Murphy, Finale Doshi-Velez

Mobile health (mHealth) technologies empower patients to adopt/maintain healthy behaviors in their daily lives, by providing interventions (e.g., push notifications) tailored to the user's needs.

Decision Making • reinforcement-learning +1

Differentially private partitioned variational inference

1 code implementation • 23 Sep 2022 • Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela

In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting, while minimising the number of communication rounds and providing differential privacy guarantees for data subjects.

Federated Learning • Privacy Preserving +1

Collapsed Variational Bounds for Bayesian Neural Networks

1 code implementation • NeurIPS 2021 • Marcin Tomczak, Siddharth Swaroop, Andrew Foong, Richard Turner

Recent interest in learning large variational Bayesian Neural Networks (BNNs) has been partly hampered by poor predictive performance caused by underfitting, and their performance is known to be very sensitive to the prior over weights.

Variational Inference

Knowledge-Adaptation Priors

1 code implementation • NeurIPS 2021 • Mohammad Emtiyaz Khan, Siddharth Swaroop

Humans and animals have a natural ability to quickly adapt to their surroundings, but machine-learning models, when subjected to changes, often require a complete retraining from scratch.

Efficient Low Rank Gaussian Variational Inference for Neural Networks

1 code implementation • NeurIPS 2020 • Marcin Tomczak, Siddharth Swaroop, Richard Turner

Bayesian neural networks are enjoying a renaissance driven in part by recent advances in variational inference (VI).

Variational Inference
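The "low rank Gaussian" variational family referenced in the title above is commonly parameterised as a diagonal-plus-low-rank covariance, Σ = diag(d²) + BBᵀ, which makes sampling cheap in high dimensions. A minimal sketch under that standard parameterisation (the function name and sizes are illustrative, not the paper's code):

```python
import numpy as np

def sample_low_rank_gaussian(mu, B, d, rng):
    """Draw one sample from N(mu, diag(d**2) + B @ B.T).

    mu : (p,) mean; B : (p, k) low-rank factor; d : (p,) diagonal scales.
    Sampling costs O(p * k) instead of the O(p**2) a full covariance needs.
    """
    p, k = B.shape
    z = rng.standard_normal(k)    # noise through the low-rank factor
    eps = rng.standard_normal(p)  # independent per-dimension noise
    return mu + B @ z + d * eps

rng = np.random.default_rng(0)
p, k = 1000, 5
mu = np.zeros(p)
B = 0.1 * rng.standard_normal((p, k))
d = 0.05 * np.ones(p)
theta = sample_low_rank_gaussian(mu, B, d, rng)
print(theta.shape)  # (1000,)
```

Because z and eps are independent standard normals, the sample's covariance is exactly BBᵀ + diag(d²), interpolating between a mean-field (k = 0) and a full-covariance Gaussian.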

Generalized Variational Continual Learning

no code implementations • ICLR 2021 • Noel Loo, Siddharth Swaroop, Richard E. Turner

One strand of research has used probabilistic regularization for continual learning, with two of the main approaches in this vein being Online Elastic Weight Consolidation (Online EWC) and Variational Continual Learning (VCL).

Continual Learning • Variational Inference
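Online EWC, one of the two regularization approaches named in the abstract, penalises movement away from previous-task parameters with a quadratic term weighted by an accumulated (diagonal) Fisher information estimate: loss(θ) + (λ/2) Σᵢ Fᵢ (θᵢ − θ*ᵢ)². A minimal numpy sketch of that penalty (names are illustrative, not the paper's code):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC quadratic regulariser: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      : current parameters
    theta_star : parameters learned on previous tasks
    fisher     : running diagonal Fisher information estimate (importance weights)
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy usage: parameters with large Fisher values are penalised more for moving.
theta_star = np.array([1.0, -2.0, 0.5])
fisher = np.array([10.0, 0.1, 1.0])
theta = theta_star + np.array([0.1, 0.1, 0.1])
print(ewc_penalty(theta, theta_star, fisher, lam=2.0))
```

VCL arrives at a similar quadratic-in-the-limit behaviour from a variational-inference view, which is the connection the paper's generalized framework makes precise.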

Combining Variational Continual Learning with FiLM Layers

no code implementations • ICML Workshop LifelongML 2020 • Noel Loo, Siddharth Swaroop, Richard E. Turner

The standard architecture for continual learning is a multi-headed neural network, which has shared body parameters and task-specific heads.

Continual Learning
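The shared-body, task-specific-heads layout described in the abstract can be sketched in a few lines (a minimal numpy illustration under the usual multi-head assumptions; the class and sizes are invented for the example, not the paper's code):

```python
import numpy as np

class MultiHeadNet:
    """Multi-headed network: shared body parameters, one output head per task."""

    def __init__(self, in_dim, hidden_dim, out_dims, rng):
        # Body parameters are shared across all tasks.
        self.W_body = 0.1 * rng.standard_normal((in_dim, hidden_dim))
        # One small linear head per task, sized for that task's outputs.
        self.heads = [0.1 * rng.standard_normal((hidden_dim, d)) for d in out_dims]

    def forward(self, x, task_id):
        h = np.tanh(x @ self.W_body)      # shared representation
        return h @ self.heads[task_id]    # task-specific prediction

rng = np.random.default_rng(0)
net = MultiHeadNet(in_dim=4, hidden_dim=8, out_dims=[2, 3], rng=rng)
x = rng.standard_normal((5, 4))
print(net.forward(x, task_id=0).shape)  # (5, 2)
print(net.forward(x, task_id=1).shape)  # (5, 3)
```

In continual learning, only the body is reused across tasks, so the body's weights are where forgetting (and regularization such as VCL) matters; each head only ever sees its own task.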

Continual Deep Learning by Functional Regularisation of Memorable Past

1 code implementation • NeurIPS 2020 • Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard E. Turner, Mohammad Emtiyaz Khan

Continually learning new skills is important for intelligent systems, yet standard deep learning methods suffer from catastrophic forgetting of the past.

Practical Deep Learning with Bayesian Principles

1 code implementation • NeurIPS 2019 • Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan

Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.

Continual Learning • Data Augmentation +1
