1 code implementation • 6 Dec 2023 • Ke Alexander Wang, Emily B. Fox
Diabetes encompasses a complex landscape of glycemic control that varies widely among individuals.
1 code implementation • 2 May 2023 • Jiaxin Shi, Ke Alexander Wang, Emily B. Fox
Popular approaches in this space trade off among the memory burden of brute-force enumeration and comparison, as in transformers; the computational burden of complicated sequential dependencies, as in recurrent neural networks; and the parameter burden of convolutional networks with many or large filters.
no code implementations • 27 Apr 2023 • Ke Alexander Wang, Matthew E. Levine, Jiaxin Shi, Emily B. Fox
In this paper, we propose to learn the effects of macronutrition content from glucose-insulin data and meal covariates.
1 code implementation • ICLR 2022 • Ke Alexander Wang, Niladri S. Chatterji, Saminul Haque, Tatsunori Hashimoto
As a remedy, we show that polynomially-tailed losses restore the effects of importance reweighting in correcting distribution shift in overparameterized models.
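The idea of a polynomially-tailed loss can be illustrated with a small sketch. This is not the paper's exact loss: `poly_tailed_loss` is a hypothetical surrogate that matches the logistic loss on negative margins but decays polynomially (rather than exponentially) on positive margins, so importance weights on confidently classified points are not washed out.

```python
import numpy as np

def logistic_loss(margins):
    # Exponentially-tailed loss: decays like exp(-z) for large margins z.
    return np.log1p(np.exp(-margins))

def poly_tailed_loss(margins, alpha=1.0):
    # Illustrative polynomially-tailed surrogate (hypothetical form):
    # equals the logistic loss for z < 0, then decays like z**(-alpha)
    # for large positive margins z. Continuous at z = 0 (both log 2).
    left = np.log1p(np.exp(-margins))
    right = np.log(2.0) / (1.0 + margins) ** alpha
    return np.where(margins < 0, left, right)

def weighted_risk(loss_fn, margins, weights):
    # Importance-weighted empirical risk over per-example margins.
    return np.average(loss_fn(margins), weights=weights)
```

Because the polynomial tail is much heavier than the exponential one, high-margin examples still contribute non-negligible weighted loss, which is the mechanism the abstract alludes to.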
no code implementations • 18 Dec 2021 • Ke Alexander Wang, Danielle Maddix, Yuyang Wang
We consider the problem of probabilistic forecasting over categories with graph structure, where the dynamics at a vertex depend on its local connectivity structure.
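A minimal, hypothetical sketch of what "dynamics at a vertex depend on its local connectivity" can mean (this is not the paper's model): each vertex's next state mixes its own state with the mean of its neighbors' states, read off an adjacency matrix.

```python
import numpy as np

# Hypothetical 4-vertex cycle graph, encoded as an adjacency matrix.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)

def step(states, adj, self_weight=0.7):
    # states: (num_vertices,) vector of scalar vertex states.
    # Each vertex keeps self_weight of its own state and absorbs the
    # rest from the average of its graph neighbors.
    deg = adj.sum(axis=1)
    neighbor_mean = (adj @ states) / deg
    return self_weight * states + (1 - self_weight) * neighbor_mean

states = np.array([1.0, 2.0, 3.0, 4.0])
next_states = step(states, adj)
```

On this regular graph the update is a doubly-stochastic averaging, so the mean state is conserved; the point of the sketch is only that the transition at a vertex is a function of its local neighborhood.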
no code implementations • NeurIPS Workshop ICBINB 2021 • Ke Alexander Wang, Danielle C. Maddix, Bernie Wang
We consider the problem of probabilistic forecasting over categories with graph structure, where the dynamics at a vertex depend on its local connectivity structure.
1 code implementation • 12 Jun 2021 • Sanyam Kapoor, Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson
State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix-vector multiplies (MVMs) with the covariance kernel.
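The MVM-based approach can be sketched with conjugate gradients, which solves the GP linear system $(K + \sigma^2 I)\alpha = y$ while touching the kernel only through matrix-vector products. The dense kernel below is a stand-in; in scalable implementations a structured fast MVM routine replaces the `K @ v` product.

```python
import numpy as np

def conjugate_gradients(mvm, b, tol=1e-10, max_iters=500):
    # Solve A x = b for symmetric positive-definite A, using only
    # matrix-vector products with A (the closure `mvm`).
    x = np.zeros_like(b)
    r = b - mvm(x)          # residual
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(max_iters):
        Ap = mvm(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy GP regression system: RBF kernel on 1-D inputs plus noise term.
X = np.linspace(0, 1, 50)[:, None]
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1 ** 2)
noise = 1e-2
y = np.sin(6 * X[:, 0])

# Representer weights alpha = (K + noise * I)^{-1} y, via MVMs only.
alpha = conjugate_gradients(lambda v: K @ v + noise * v, y)
```

The solver never forms or factorizes the matrix, which is what makes fast MVMs the key primitive for scalability.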
1 code implementation • 19 Apr 2021 • Willie Neiswanger, Ke Alexander Wang, Stefano Ermon
Given such an $\mathcal{A}$, and a prior distribution over $f$, we refer to the problem of inferring the output of $\mathcal{A}$ using $T$ evaluations as Bayesian Algorithm Execution (BAX).
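A simplified sketch of the BAX setting (not the paper's acquisition strategy): put a GP prior on $f$ over a grid, condition on a few evaluations, and run the algorithm $\mathcal{A}$ on posterior samples of $f$ to get a Monte Carlo distribution over $\mathcal{A}$'s output. Here $\mathcal{A}$ is chosen, for illustration, to return the grid index of the minimizer of $f$.

```python
import numpy as np

rng = np.random.default_rng(0)

# GP prior on a grid of candidate inputs (RBF kernel, illustrative).
grid = np.linspace(0, 1, 30)
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.2 ** 2)

def gp_posterior(idx, y_obs, noise=1e-6):
    # Exact GP conditioning on observations at grid indices `idx`.
    K_oo = K[np.ix_(idx, idx)] + noise * np.eye(len(idx))
    K_xo = K[:, idx]
    mean = K_xo @ np.linalg.solve(K_oo, y_obs)
    cov = K - K_xo @ np.linalg.solve(K_oo, K_xo.T)
    return mean, cov

def algorithm_A(f_values):
    # The algorithm whose output we want to infer: argmin over the grid.
    return int(np.argmin(f_values))

# Hidden function and T = 4 evaluations of it.
f_true = np.sin(8 * grid)
idx = [2, 10, 17, 25]
mean, cov = gp_posterior(idx, f_true[idx])

# Run A on posterior samples: a Monte Carlo belief over A's output.
samples = rng.multivariate_normal(mean, cov + 1e-9 * np.eye(len(grid)), size=200)
outputs = [algorithm_A(s) for s in samples]
```

The full BAX framework goes further by choosing the $T$ query points to maximize information about $\mathcal{A}$'s output, rather than taking them as given.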
1 code implementation • NeurIPS 2020 • Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson
Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics.
no code implementations • 16 Nov 2019 • Ke Alexander Wang, Xinran Bian, Pan Liu, Donghui Yan
Analysis of $DC^2$ applied to spectral clustering shows that the loss in clustering accuracy due to data division and reduction is upper bounded by the data approximation error, which vanishes under recursive random projections.
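The data-division step alluded to above can be sketched as follows. This is a generic recursive random-projection split, not the exact $DC^2$ procedure: the data are repeatedly halved at the median of a random 1-D projection until each part is small enough to cluster cheaply.

```python
import numpy as np

rng = np.random.default_rng(1)

def recursive_random_split(X, max_leaf_size=50, depth=0, max_depth=10):
    # Recursively divide the data by thresholding a random 1-D
    # projection at its median, until each part is small enough.
    if len(X) <= max_leaf_size or depth >= max_depth:
        return [X]
    direction = rng.normal(size=X.shape[1])
    direction /= np.linalg.norm(direction)
    proj = X @ direction
    mask = proj <= np.median(proj)
    return (recursive_random_split(X[mask], max_leaf_size, depth + 1, max_depth)
            + recursive_random_split(X[~mask], max_leaf_size, depth + 1, max_depth))

X = rng.normal(size=(500, 5))
parts = recursive_random_split(X)
```

Each leaf can then be clustered independently, which is where the divide-and-conquer savings come from; the abstract's bound controls how much accuracy this division can cost.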
3 code implementations • NeurIPS 2019 • Ke Alexander Wang, Geoff Pleiss, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger, Andrew Gordon Wilson
Gaussian processes (GPs) are flexible non-parametric models, with a capacity that grows with the available data.