5 code implementations • 17 Feb 2014 • Tianqi Chen, Emily B. Fox, Carlos Guestrin
Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals.
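A minimal sketch of the mechanism being described: simulate Hamiltonian dynamics with a leapfrog integrator to produce a distant proposal, then apply a Metropolis-Hastings accept/reject. The target, step size, and trajectory length below are illustrative choices, not taken from the paper.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=None):
    """One HMC transition: leapfrog dynamics + Metropolis-Hastings correction."""
    if rng is None:
        rng = np.random.default_rng()
    p = rng.standard_normal(x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(x_new)   # half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new                    # full step for position
        p_new += step_size * grad_log_prob(x_new)     # full step for momentum
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)   # final half step
    # Hamiltonian = potential (-log_prob) + kinetic energy
    h_old = -log_prob(x) + 0.5 * p @ p
    h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
    return x_new if rng.random() < np.exp(h_old - h_new) else x

# Illustrative target: standard 2-D Gaussian.
log_prob = lambda x: -0.5 * x @ x
grad_log_prob = lambda x: -x
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, log_prob, grad_log_prob, rng=rng)
    samples.append(x)
samples = np.array(samples[500:])
```

Because the momentum is fully resampled each transition and the trajectory covers real distance in state space, consecutive samples decorrelate far faster than under a random-walk proposal of comparable acceptance rate.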
1 code implementation • 2 May 2023 • Jiaxin Shi, Ke Alexander Wang, Emily B. Fox
Popular approaches in this space trade off among the memory burden of brute-force enumeration and comparison, as in transformers; the computational burden of complicated sequential dependencies, as in recurrent neural networks; and the parameter burden of convolutional networks with many or large filters.
1 code implementation • 2 Oct 2017 • Jack Baker, Paul Fearnhead, Emily B. Fox, Christopher Nemeth
To do this, the package uses the software library TensorFlow, which provides a variety of statistical distributions and mathematical operations as standard, so a wide class of models can be built using this framework.
1 code implementation • 22 Oct 2018 • Christopher Aicher, Yi-An Ma, Nicholas J. Foti, Emily B. Fox
However, inference in SSMs is often computationally prohibitive for long time series.
2 code implementations • 29 Jan 2019 • Christopher Aicher, Srshti Putcha, Christopher Nemeth, Paul Fearnhead, Emily B. Fox
We evaluate our proposed particle-buffered stochastic gradient using stochastic gradient MCMC for inference on both long synthetic sequences and minute-resolution financial returns data, demonstrating the importance of this class of methods.
1 code implementation • 17 May 2019 • Christopher Aicher, Nicholas J. Foti, Emily B. Fox
Truncated backpropagation through time (TBPTT) is a popular method for learning in recurrent neural networks (RNNs) that saves computation and memory at the cost of bias by truncating backpropagation after a fixed number of lags.
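The bias-for-compute trade described here can be made concrete with a toy scalar RNN. The sketch below is illustrative (not the paper's method or model): each loss term backpropagates through at most `K` lags, so `K` equal to the sequence length recovers the exact gradient, while small `K` saves computation at the cost of dropping long-range gradient terms.

```python
import numpy as np

def tbptt_grads(w, u, xs, ys, K):
    """Gradients of sum-of-squares loss for the scalar RNN
    h_t = tanh(w*h_{t-1} + u*x_t), with backpropagation truncated to K lags."""
    T = len(xs)
    hs = np.zeros(T + 1)                        # hs[t] is the state before step t
    for t in range(T):                          # forward pass
        hs[t + 1] = np.tanh(w * hs[t] + u * xs[t])
    gw = gu = 0.0
    for t in range(T):                          # loss term at time t
        delta = 2.0 * (hs[t + 1] - ys[t])       # dL_t / dh_{t+1}
        for s in range(t, max(t - K, -1), -1):  # backprop at most K lags
            dpre = delta * (1.0 - hs[s + 1] ** 2)   # tanh' via stored state
            gw += dpre * hs[s]
            gu += dpre * xs[s]
            delta = dpre * w                    # carry gradient one step back
    return gw, gu
```

Setting `K = len(xs)` should match a finite-difference check of the full loss, while smaller `K` yields a biased but cheaper gradient.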
1 code implementation • 6 Dec 2023 • Ke Alexander Wang, Emily B. Fox
Diabetes encompasses a complex landscape of glycemic control that varies widely among individuals.
1 code implementation • 22 Nov 2017 • Alex Tank, Ian Cover, Nicholas J. Foti, Ali Shojaie, Emily B. Fox
A sufficient condition for Granger non-causality in this setting is that all of the outgoing weights of the input data (the past lags of a series) to the first hidden layer are zero.
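This sufficient condition can be checked mechanically on a trained first-layer weight matrix. The sketch below assumes a particular input ordering (all lags of series 0, then all lags of series 1, and so on); that layout and the function name are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def is_granger_noncausal(W1, series, n_lags, tol=1e-6):
    """Sufficient condition from the text: `series` does not Granger-cause
    the output if every first-layer weight attached to its past lags is
    (numerically) zero. Assumes inputs ordered [series 0 lags, series 1
    lags, ...] -- a layout assumption for this sketch."""
    cols = slice(series * n_lags, (series + 1) * n_lags)
    return bool(np.all(np.abs(W1[:, cols]) < tol))

# Hypothetical first-layer weights: 4 hidden units, 2 series x 3 lags each.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 6))
W1[:, 3:6] = 0.0    # all weights on series 1's lags driven to zero
```

In practice such exact zeros are induced by a group penalty on the per-series column blocks during training, rather than checked after the fact.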
1 code implementation • 19 Mar 2010 • Emily B. Fox, Erik B. Sudderth, Michael I. Jordan, Alan S. Willsky
Many complex dynamical phenomena can be effectively modeled by a system that switches among a set of conditionally linear dynamical modes.
1 code implementation • 16 Jun 2017 • Jack Baker, Paul Fearnhead, Emily B. Fox, Christopher Nemeth
These methods use a noisy estimate of the gradient of the log posterior, which reduces the per iteration computational cost of the algorithm.
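The noisy-gradient idea can be sketched with stochastic gradient Langevin dynamics (SGLD), the simplest member of this family; the toy conjugate-Gaussian model, step size, and batch size below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def sgld_step(theta, data, grad_log_prior, grad_log_lik, step_size, batch_size, rng):
    """One SGLD update: a minibatch estimate of the log-posterior gradient,
    rescaled by N/batch_size, plus injected Gaussian noise."""
    N = len(data)
    idx = rng.choice(N, size=batch_size, replace=False)
    grad = grad_log_prior(theta)
    grad += (N / batch_size) * grad_log_lik(theta, data[idx]).sum()
    return theta + 0.5 * step_size * grad + np.sqrt(step_size) * rng.standard_normal()

# Toy model: data ~ N(theta, 1) with a N(0, 1) prior on theta.
rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=500)
grad_log_prior = lambda th: -th
grad_log_lik = lambda th, x: x - th          # per-observation gradient
theta, trace = 0.0, []
for _ in range(5000):
    theta = sgld_step(theta, data, grad_log_prior, grad_log_lik,
                      step_size=1e-4, batch_size=50, rng=rng)
    trace.append(theta)
post_mean = data.sum() / (len(data) + 1)     # exact conjugate posterior mean
```

Each iteration touches only 50 of the 500 observations, which is the per-iteration saving the sentence refers to; the price is extra gradient noise and a discretization bias that shrink with the step size.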
1 code implementation • NeurIPS 2018 • Jack Baker, Paul Fearnhead, Emily B. Fox, Christopher Nemeth
Unfortunately, many popular large-scale Bayesian models, such as network or topic models, require inference on sparse simplex spaces.
no code implementations • 22 Nov 2017 • Alex Tank, Emily B. Fox, Ali Shojaie
We present an efficient alternating direction method of multipliers (ADMM) algorithm for segmenting a multivariate non-stationary time series with structural breaks into stationary regions.
no code implementations • ICML 2017 • Yi-An Ma, Nicholas J. Foti, Emily B. Fox
Stochastic gradient MCMC (SG-MCMC) algorithms have proven useful in scaling Bayesian inference to large datasets under an assumption of i.i.d. data.
no code implementations • NeurIPS 2015 • Yi-An Ma, Tianqi Chen, Emily B. Fox
That is, any continuous Markov process that provides samples from the target distribution can be written in our framework.
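The framework referred to is this paper's "complete recipe" for SG-MCMC, which writes every such process as a stochastic differential equation of one general form. A sketch of that form, reproduced from memory of the standard presentation and worth checking against the paper itself:

```latex
\mathrm{d}z = \left[-\bigl(D(z)+Q(z)\bigr)\nabla H(z) + \Gamma(z)\right]\mathrm{d}t
  + \sqrt{2D(z)}\,\mathrm{d}W(t),
\qquad
\Gamma_i(z) = \sum_j \frac{\partial}{\partial z_j}\bigl(D_{ij}(z)+Q_{ij}(z)\bigr),
```

where $H$ is the negative log of the target density, $D(z)$ is positive semidefinite (diffusion), $Q(z)$ is skew-symmetric (curl), and $W(t)$ is Brownian motion. Particular choices of $D$ and $Q$ recover specific samplers, e.g. SGLD and stochastic gradient HMC.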
no code implementations • 5 May 2015 • You Ren, Emily B. Fox, Andrew Bruce
Understanding how housing values evolve over time is important to policy makers, consumers and real estate professionals.
no code implementations • 1 Dec 2014 • Alex Tank, Nicholas J. Foti, Emily B. Fox
In theory, Bayesian nonparametric (BNP) models are well suited to streaming data scenarios due to their ability to adapt model complexity with the observed data.
no code implementations • 6 Jan 2014 • François Caron, Emily B. Fox
We show that for certain choices of such exchangeable random measures underlying our graph construction, our network process is sparse with power-law degree distribution.
no code implementations • 22 Aug 2013 • Emily B. Fox, Michael C. Hughes, Erik B. Sudderth, Michael I. Jordan
We propose a Bayesian nonparametric approach to the problem of jointly modeling multiple related time series.
no code implementations • NeurIPS 2014 • Nicholas J. Foti, Jason Xu, Dillon Laird, Emily B. Fox
Variational inference algorithms have proven successful for Bayesian analysis in large data settings, with recent advances using stochastic variational inference (SVI).
no code implementations • 27 Feb 2014 • Drausin F. Wulsin, Emily B. Fox, Brian Litt
A goal of our work is to parse these complex epileptic events into distinct dynamic regimes.
no code implementations • 20 Feb 2014 • Raja Hafiz Affandi, Emily B. Fox, Ryan P. Adams, Ben Taskar
Determinantal point processes (DPPs) are well-suited for modeling repulsion and have proven useful in many applications where diversity is desired.
no code implementations • 12 Nov 2013 • Raja Hafiz Affandi, Emily B. Fox, Ben Taskar
Determinantal point processes (DPPs) are random point processes well-suited for modeling repulsion.
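The repulsion property follows directly from the determinant in the L-ensemble likelihood, P(Y = A) = det(L_A) / det(L + I): near-duplicate items make L_A nearly singular, driving the probability of selecting them together toward zero. A minimal sketch (the feature vectors and jitter term are illustrative assumptions):

```python
import numpy as np

def dpp_log_prob(L, subset):
    """Log-probability of observing exactly `subset` under an L-ensemble DPP:
    P(Y = A) = det(L_A) / det(L + I)."""
    A = np.asarray(subset)
    _, logdet_A = np.linalg.slogdet(L[np.ix_(A, A)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(len(L)))
    return logdet_A - logdet_Z

# Three items as feature vectors; items 0 and 1 are nearly identical,
# item 2 is orthogonal to both.
F = np.array([[1.0, 0.0],
              [0.98, 0.05],
              [0.0, 1.0]])
L = F @ F.T + 1e-6 * np.eye(3)   # similarity kernel (jitter for stability)
similar_pair = dpp_log_prob(L, [0, 1])
diverse_pair = dpp_log_prob(L, [0, 2])
```

Here the diverse pair {0, 2} is far more probable than the redundant pair {0, 1}, which is exactly the behavior that makes DPPs attractive when diversity is desired.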
no code implementations • 13 Sep 2013 • Emily B. Fox, Michael I. Jordan
Although much of the literature on mixed membership models considers the setting in which exchangeable collections of data are associated with each member of a set of entities, it is equally natural to consider problems in which an entire time series is viewed as an entity and the goal is to characterize the time series in terms of a set of underlying dynamic attributes or "dynamic regimes".
no code implementations • 24 Jun 2018 • Samuel K. Ainsworth, Nicholas J. Foti, Emily B. Fox
Many problems in machine learning and related application areas are fundamentally variants of conditional modeling and sampling across multi-aspect data, whether multi-view, multi-modal, or simply multi-group.
no code implementations • 19 Jul 2018 • Christopher Aicher, Emily B. Fox
We develop a framework for approximating collapsed Gibbs sampling in generative latent variable cluster models.
no code implementations • ICML 2018 • Samuel K. Ainsworth, Nicholas J. Foti, Adrian K. C. Lee, Emily B. Fox
Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data.
no code implementations • 15 May 2009 • Emily B. Fox, Erik B. Sudderth, Michael I. Jordan, Alan S. Willsky
To address this problem, we take a Bayesian nonparametric approach to speaker diarization that builds on the hierarchical Dirichlet process hidden Markov model (HDP-HMM) of Teh et al. [J. Amer.
no code implementations • 13 Nov 2019 • Jonas Rauber, Emily B. Fox, Leon A. Gatys
The ubiquity of smartphone usage in many people's lives makes it a rich source of information about a person's mental and cognitive state.
no code implementations • 30 Nov 2020 • Jeffrey Chan, Andrew C. Miller, Emily B. Fox
In this work, we develop a statistical model to simulate a structured noise process in ECGs derived from a wearable sensor, design a beat-to-beat representation that is conducive for analyzing variation, and devise a factor analysis-based method to denoise the ECG.
no code implementations • 25 Apr 2021 • Andrew C. Miller, Leon A. Gatys, Joseph Futoma, Emily B. Fox
We propose using an evaluation model, a model that describes the conditional distribution of the predictive model score, to form model-based metric (MBM) estimates.
no code implementations • 25 Apr 2021 • Andrew C. Miller, Nicholas J. Foti, Emily B. Fox
And while these categories represent extreme points in model space, modern computational and algorithmic tools enable us to interpolate between these points, producing flexible, interpretable, and scientifically-informed hybrids that can enjoy accurate and robust predictions, and resolve issues with data analysis that Breiman describes, such as the Rashomon effect and Occam's dilemma.
no code implementations • 5 May 2021 • Ali Shojaie, Emily B. Fox
Introduced more than a half century ago, Granger causality has become a popular tool for analyzing time series data in many application domains, from economics and finance to genomics and neuroscience.
no code implementations • 27 Apr 2023 • Ke Alexander Wang, Matthew E. Levine, Jiaxin Shi, Emily B. Fox
In this paper, we propose to learn the effects of macronutrition content from glucose-insulin data and meal covariates.
no code implementations • 27 Feb 2024 • Bob Junyi Zou, Matthew E. Levine, Dessi P. Zaharieva, Ramesh Johari, Emily B. Fox
We encode this information in a causal loss that we combine with the standard predictive loss to arrive at a hybrid loss that biases our learning towards causally valid hybrid models.
no code implementations • 27 Feb 2024 • Michael Y. Li, Emily B. Fox, Noah D. Goodman
We evaluate our method in three common settings in probabilistic modeling: searching within a restricted space of models, searching over an open-ended space, and improving classic models under natural language constraints (e.g., this model should be interpretable to an ecologist).