no code implementations • 11 Dec 2023 • Easton K. Huch, Jieru Shi, Madeline R. Abbott, Jessica R. Golbus, Alexander Moreno, Walter H. Dempsey
In addition, these approaches did not explicitly model the baseline reward, which limited the ability to precisely estimate the parameters in the differential reward model.
1 code implementation • 1 Nov 2023 • Maxwell A. Xu, Alexander Moreno, Hui Wei, Benjamin M. Marlin, James M. Rehg
The success of self-supervised contrastive learning hinges on identifying positive data pairs, such that when they are pushed together in embedding space, the space encodes useful information for subsequent downstream tasks.
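The positive-pair idea described above is commonly operationalized with an InfoNCE-style objective: each anchor should be more similar to its designated positive than to the other candidates in the batch. The sketch below is illustrative only (the function name, temperature value, and numpy formulation are assumptions, not the paper's code):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # Normalize embeddings onto the unit sphere.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Similarity matrix: entry (i, j) compares anchor i with candidate j;
    # the diagonal holds the designated positive pairs.
    logits = (a @ p.T) / temperature
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Loss is low when each anchor is most similar to its own positive.
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pushes positive pairs together and other pairs apart, which is the property the abstract says a good positive-pair selection strategy must support.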
no code implementations • 30 May 2023 • Jonathan Mei, Alexander Moreno, Luke Walters
Second order stochastic optimizers allow parameter update step size and direction to adapt to loss curvature, but have traditionally required too much memory and compute for deep learning.
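The benefit of curvature adaptation mentioned above can be seen on a toy quadratic: gradient descent must use a step size small enough for the steepest direction, while a second-order step rescales each direction by its curvature. This is a generic illustration under an assumed diagonal quadratic, not the optimizer proposed in the paper:

```python
import numpy as np

# Quadratic loss f(x) = 0.5 * x^T A x with very different curvature per axis.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x      # gradient of the quadratic
H = A                       # Hessian (constant for a quadratic)

x_gd = np.array([1.0, 1.0])
x_newton = np.array([1.0, 1.0])
lr = 1.0 / 100.0            # first-order step limited by the largest eigenvalue
for _ in range(10):
    x_gd = x_gd - lr * grad(x_gd)                          # slow in the flat direction
    x_newton = x_newton - np.linalg.solve(H, grad(x_newton))  # curvature-adapted step
```

After ten iterations the curvature-adapted update has reached the minimum while gradient descent has barely moved along the flat axis; the memory/compute cost of forming and solving against `H` is exactly what makes this hard at deep-learning scale.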
no code implementations • 15 May 2023 • Alexander Moreno, Jonathan Mei, Luke Walters
For the low rank component, we replace the RPE MLP with linear interpolation and use asymmetric Structured Kernel Interpolation (SKI) (Wilson et al.)
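The SKI idea referenced here approximates a kernel matrix through interpolation against a fixed grid of inducing points, K(x, x) ≈ W K(u, u) Wᵀ, where W holds sparse interpolation weights. A one-dimensional sketch with linear-interpolation weights follows; the function names and grid setup are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def ski_approx(x, grid):
    # Linear-interpolation weights: each x lies between two grid points.
    h = grid[1] - grid[0]
    idx = np.clip(((x - grid[0]) // h).astype(int), 0, len(grid) - 2)
    frac = (x - grid[idx]) / h
    W = np.zeros((len(x), len(grid)))
    W[np.arange(len(x)), idx] = 1.0 - frac
    W[np.arange(len(x)), idx + 1] = frac
    # SKI approximation: K(x, x) ~ W K(grid, grid) W^T
    return W @ rbf(grid, grid) @ W.T
```

Because W is sparse and the grid kernel is structured, matrix-vector products with the approximation are far cheaper than with the exact kernel.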
1 code implementation • 14 Dec 2022 • Maxwell A. Xu, Alexander Moreno, Supriya Nagesh, V. Burak Aydemir, David W. Wetter, Santosh Kumar, James M. Rehg
The promise of Mobile Health (mHealth) is the ability to use wearable sensors to monitor participant physiology at high frequencies during daily life to enable temporally precise health interventions.
no code implementations • 1 Nov 2021 • Alexander Moreno, Supriya Nagesh, Zhenke Wu, Walter Dempsey, James M. Rehg
Theoretically, we show new existence results for both kernel exponential and deformed exponential families, and that the deformed case has similar approximation capabilities to kernel exponential families.
no code implementations • 1 Nov 2021 • Supriya Nagesh, Alexander Moreno, Stephanie M. Carpenter, Jamie Yap, Soujanya Chatterjee, Steven Lloyd Lizotte, Neng Wan, Santosh Kumar, Cho Lam, David W. Wetter, Inbal Nahum-Shani, James M. Rehg
The transformer model achieves a non-response prediction AUC of 0.77 and is significantly better than classical ML and LSTM-based deep learning models.
no code implementations • 26 Oct 2021 • Yu-Ying Liu, Alexander Moreno, Maxwell A. Xu, Shuang Li, Jena C. McDaniel, Nancy C. Brady, Agata Rozga, Fuxin Li, Le Song, James M. Rehg
We solve the first challenge by reformulating the estimation problem as an equivalent discrete time-inhomogeneous hidden Markov model.
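Once the problem is cast as a discrete time-inhomogeneous HMM, the likelihood is computable with the standard forward algorithm, except that the transition matrix is allowed to change at each step. A minimal sketch (the interface below is an assumption for illustration, not the paper's code):

```python
import numpy as np

def forward_loglik(pi, trans, emit):
    """Forward algorithm for a time-inhomogeneous HMM.

    pi:    (S,) initial state distribution
    trans: list of T-1 transition matrices, one per time step
    emit:  (T, S) likelihood of each observation under each state
    """
    alpha = pi * emit[0]
    c = alpha.sum()
    alpha = alpha / c            # rescale to avoid underflow
    loglik = np.log(c)
    for t in range(1, len(emit)):
        alpha = (alpha @ trans[t - 1]) * emit[t]  # step-specific transitions
        c = alpha.sum()
        alpha = alpha / c
        loglik += np.log(c)
    return loglik
```

The only change from the homogeneous case is indexing `trans` by time step, which is what makes the discrete reformulation equivalent to the original inhomogeneous model.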
no code implementations • NeurIPS 2020 • Alexander Moreno, Zhenke Wu, Jamie Yap, David Wetter, Cho Lam, Inbal Nahum-Shani, Walter Dempsey, James M. Rehg
Panel count data describes aggregated counts of recurrent events observed at discrete time points.
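The setting can be made concrete with a small simulation: only cumulative counts at visit times are recorded, never the individual event times. The snippet below assumes a homogeneous Poisson process purely for illustration; the paper's model is richer than this:

```python
import numpy as np

# Panel count data: we observe cumulative event counts at discrete visit
# times, not the underlying event times. Under a Poisson process with rate
# lam, counts over disjoint intervals are independent Poisson draws, so the
# rate remains identifiable from the panel alone.
rng = np.random.default_rng(0)
lam = 2.0                                        # true event rate per unit time
visits = np.array([0.0, 1.0, 2.5, 4.0])          # discrete observation times
increments = rng.poisson(lam * np.diff(visits))  # events between visits
panel_counts = np.cumsum(increments)             # what the study records
lam_hat = panel_counts[-1] / (visits[-1] - visits[0])  # MLE of the rate
```

Averaging `lam_hat` over many simulated participants recovers the true rate, which is the basic identifiability fact that panel-count methods build on.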
no code implementations • ICML 2017 • Walter H. Dempsey, Alexander Moreno, Christy K. Scott, Michael L. Dennis, David H. Gustafson, Susan A. Murphy, James M. Rehg
We present a parameter learning method for GLM emissions and survival model fitting, and present promising results on both synthetic data and an mHealth drug use dataset.
no code implementations • 28 Jun 2016 • Alexander Moreno, Tameem Adel, Edward Meeds, James M. Rehg, Max Welling
Approximate Bayesian Computation (ABC) is a framework for performing likelihood-free posterior inference for simulation models.
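The simplest member of the ABC family is rejection sampling: draw parameters from the prior, run the simulator, and keep only draws whose simulated summary statistic lands close to the observed one. The sketch below shows this baseline under an assumed normal-mean toy model; it is context for the abstract, not the method the paper proposes:

```python
import numpy as np

def abc_rejection(obs_stat, simulate, sample_prior, eps, n_draws, rng):
    """Rejection ABC: accept a prior draw when its simulated summary
    statistic falls within eps of the observed one. No likelihood
    evaluation is ever required, only forward simulation."""
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior(rng)
        if abs(simulate(theta, rng) - obs_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal with known unit variance.
rng = np.random.default_rng(0)
post = abc_rejection(
    obs_stat=1.0,                                         # observed sample mean
    simulate=lambda th, r: r.normal(th, 1.0, 50).mean(),  # forward simulator
    sample_prior=lambda r: r.uniform(-5.0, 5.0),          # flat prior on the mean
    eps=0.1,
    n_draws=20000,
    rng=rng,
)
```

The accepted draws concentrate around the value that generated the observed statistic; the low acceptance rate of this baseline is exactly what more sophisticated ABC schemes aim to improve.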