Search Results for author: Shikai Qiu

Found 4 papers, 4 papers with code

Function-Space Regularization in Neural Networks: A Probabilistic Perspective

1 code implementation • 28 Dec 2023 • Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, Andrew Gordon Wilson

In this work, we approach regularization in neural networks from a probabilistic perspective and show that parameter-space regularization can be viewed as specifying an empirical prior distribution over the model parameters. From this view, we derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training.
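The parameter-space view mentioned above rests on a standard correspondence: an L2 weight penalty is, up to an additive constant, the negative log-density of a zero-mean Gaussian prior, so penalized loss minimization is MAP estimation under that prior. A minimal numerical check of this correspondence (an illustration of the general fact, not the paper's function-space method):

```python
import numpy as np

# An L2 penalty lam * ||theta||^2 equals the negative log-density of a
# Gaussian prior N(0, 1/(2*lam) I) up to a constant that does not depend
# on theta, so both induce the same MAP solution.

def l2_penalty(theta, lam):
    return lam * np.sum(theta ** 2)

def neg_log_gaussian_prior(theta, lam):
    sigma2 = 1.0 / (2.0 * lam)  # variance implied by the penalty strength
    d = theta.size
    const = 0.5 * d * np.log(2 * np.pi * sigma2)  # normalization constant
    return 0.5 * np.sum(theta ** 2) / sigma2 + const

lam = 0.1
theta_a = np.array([0.3, -1.2, 0.7])
theta_b = np.array([2.0, 0.5, -0.1])

# The gap between the two quantities is the same constant for any theta.
gap_a = neg_log_gaussian_prior(theta_a, lam) - l2_penalty(theta_a, lam)
gap_b = neg_log_gaussian_prior(theta_b, lam) - l2_penalty(theta_b, lam)
print(np.isclose(gap_a, gap_b))  # True
```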

Should We Learn Most Likely Functions or Parameters?

1 code implementation • NeurIPS 2023 • Shikai Qiu, Tim G. J. Rudner, Sanyam Kapoor, Andrew Gordon Wilson

Moreover, the most likely parameters under the parameter posterior do not generally correspond to the most likely function induced by the parameter posterior.

Large Language Models Are Zero-Shot Time Series Forecasters

1 code implementation • NeurIPS 2023 • Nate Gruver, Marc Finzi, Shikai Qiu, Andrew Gordon Wilson

By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text.

Imputation · Time Series · +1
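The encoding idea in the abstract above can be sketched concretely. The serialization details below (space-separated digits, comma-separated timesteps, dropped decimal point) are plausible assumptions for illustration, not the paper's exact tokenization:

```python
# Sketch: render a numeric time series as a digit string so that an LLM
# can treat forecasting as next-token prediction over the digits.

def encode_series(values, decimals=2):
    """Render each value as space-separated digits; separate steps with ','."""
    tokens = []
    for v in values:
        s = f"{abs(v):.{decimals}f}".replace(".", "")  # fixed precision, no point
        digits = " ".join(s)                           # one token per digit
        tokens.append(("-" if v < 0 else "") + digits)
    return " , ".join(tokens)

print(encode_series([0.64, 0.72, 0.81]))
# → "0 6 4 , 0 7 2 , 0 8 1"
```

Decoding simply reverses the mapping, so the model's generated digit tokens can be parsed back into forecast values at the same fixed precision.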

Simple and Fast Group Robustness by Automatic Feature Reweighting

1 code implementation • 19 Jun 2023 • Shikai Qiu, Andres Potapczynski, Pavel Izmailov, Andrew Gordon Wilson

A major challenge to out-of-distribution generalization is reliance on spurious features -- patterns that are predictive of the class label in the training data distribution, but not causally related to the target.

Out-of-Distribution Generalization
