Search Results for author: Henrik Marklund

Found 8 papers, 4 papers with code

Adaptive Crowdsourcing Via Self-Supervised Learning

no code implementations • 24 Jan 2024 • Anmol Kagrecha, Henrik Marklund, Benjamin Van Roy, Hong Jun Jeon, Richard Zeckhauser

Common crowdsourcing systems average estimates of a latent quantity of interest provided by many crowdworkers to produce a group estimate.

Self-Supervised Learning
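
The abstract describes the baseline these systems share: averaging many workers' estimates of a latent quantity. Below is a minimal Python sketch of that baseline; the optional reliability weighting is a hypothetical illustration of where a learned (e.g., self-supervised) adjustment could plug in, not the method proposed in the paper.

```python
import numpy as np

def group_estimate(worker_estimates, weights=None):
    """Combine crowdworker estimates of a latent quantity.

    With weights=None this is the plain averaging that common
    crowdsourcing systems use. The optional weighting is a
    hypothetical placeholder for a learned reliability adjustment,
    not the paper's method.
    """
    estimates = np.asarray(worker_estimates, dtype=float)
    if weights is None:
        return estimates.mean()
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * estimates) / np.sum(weights)

# Five workers estimate the same latent quantity.
print(group_estimate([4.2, 3.9, 5.1, 4.0, 4.4]))            # plain average
print(group_estimate([4.2, 3.9, 5.1, 4.0, 4.4],
                     weights=[2.0, 1.0, 0.5, 1.0, 1.0]))    # reliability-weighted
```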

Maintaining Plasticity in Continual Learning via Regenerative Regularization

no code implementations • 23 Aug 2023 • Saurabh Kumar, Henrik Marklund, Benjamin Van Roy

In this paper, we propose L2 Init, a simple approach for maintaining plasticity that adds to the loss function an L2 regularization term pulling parameters toward their initial values.

Continual Learning, L2 Regularization
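
As a hedged sketch of the idea the abstract describes, the snippet below adds an L2 penalty that pulls parameters toward a snapshot of their initialization; the function name and the `reg_strength` hyperparameter are illustrative, not taken from the paper.

```python
import torch

def l2_init_penalty(model, init_params, reg_strength=1e-3):
    """L2 regularization toward the *initial* parameters (not toward
    zero), as the L2 Init abstract describes. reg_strength is a
    hypothetical hyperparameter name."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + ((p - init_params[name]) ** 2).sum()
    return reg_strength * penalty

model = torch.nn.Linear(8, 2)
# Snapshot the initialization once, before any training.
init_params = {name: p.detach().clone()
               for name, p in model.named_parameters()}

x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss = loss + l2_init_penalty(model, init_params)
loss.backward()
```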

Continual Learning as Computationally Constrained Reinforcement Learning

no code implementations • 10 Jul 2023 • Saurabh Kumar, Henrik Marklund, Ashish Rao, Yifan Zhu, Hong Jun Jeon, Yueyang Liu, Benjamin Van Roy

The design of agents that accumulate knowledge and skills over long lifetimes, which remains a long-standing challenge of artificial intelligence, is addressed by the subject of continual learning.

Continual Learning, Reinforcement Learning

Extending the WILDS Benchmark for Unsupervised Adaptation

1 code implementation • ICLR 2022 • Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang

Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from distributions beyond the source distribution as well.
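
The extension's main practical change is that unlabeled splits ship alongside the labeled ones. Below is a sketch of loading both with the `wilds` package; the `unlabeled=True` flag and the `"extra_unlabeled"` split name follow the package's documented interface for iWildCam, but split names vary by dataset, so treat them as assumptions to verify against the repository.

```python
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# Labeled source data, as in the original WILDS benchmark.
labeled = get_dataset(dataset="iwildcam", download=True)
train_data = labeled.get_subset("train")

# The extended benchmark adds unlabeled splits; verify the exact
# split name ("extra_unlabeled" here) for your chosen dataset.
unlabeled = get_dataset(dataset="iwildcam", unlabeled=True, download=True)
unlabeled_data = unlabeled.get_subset("extra_unlabeled")

labeled_loader = get_train_loader("standard", train_data, batch_size=16)
unlabeled_loader = get_train_loader("standard", unlabeled_data, batch_size=16)
```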

Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift

no code implementations • 28 Sep 2020 • Marvin Mengxin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn

A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.

Image Classification, Meta-Learning

Adaptive Risk Minimization: Learning to Adapt to Domain Shift

3 code implementations • NeurIPS 2021 • Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn

A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.

BIG-bench Machine Learning, Domain Generalization +2
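
One concrete instantiation from the ARM family is ARM-BN, which adapts batch-normalization statistics to each unlabeled test batch. The sketch below illustrates only that test-time step; it is a simplified illustration, not the authors' released code, and it omits the meta-training that ARM performs.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 10)
)

def predict_with_batch_adaptation(model, x_batch):
    """ARM-BN-style test-time sketch: normalize with the statistics
    of the current test batch (assumed to come from one shifted
    distribution) instead of the running averages. Note that train
    mode also updates the running statistics as a side effect."""
    model.train()  # train mode makes BatchNorm use batch statistics
    with torch.no_grad():
        return model(x_batch)

test_batch = torch.randn(50, 32)  # a batch from a single shifted domain
preds = predict_with_batch_adaptation(model, test_batch)
```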
