We propose a dynamical Wasserstein barycentric (DWB) model that estimates the system state over time as well as the data-generating distributions of pure states in an unsupervised manner.
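The barycentric idea above can be illustrated in one dimension, where the Wasserstein barycenter of two distributions is obtained by averaging their quantile functions. This is a generic sketch of that fact, not the paper's DWB model; the Gaussian "pure state" distributions and the weight are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Samples from two hypothetical "pure state" distributions
a = rng.normal(-3, 1, 5000)
b = rng.normal(3, 1, 5000)

def barycenter_samples(w, n=5000):
    # In 1-D, the Wasserstein barycenter with weights (w, 1 - w) has
    # quantile function equal to the weighted average of the two
    # component quantile functions.
    q = (np.arange(n) + 0.5) / n
    return w * np.quantile(a, q) + (1 - w) * np.quantile(b, q)

# The equal-weight barycenter of N(-3, 1) and N(3, 1) is close to N(0, 1)
mid = barycenter_samples(0.5)
print(mid.mean(), mid.std())
```

Unlike a simple mixture of the two distributions (which would be bimodal), the barycenter interpolates their shapes, which is what makes it useful for modeling gradual transitions between pure states.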
Semi-supervised image classification has shown substantial progress in learning from limited labeled data, but recent advances remain largely untested for clinical applications.
The pixelwise reconstruction error of deep autoencoders is often utilized for image novelty detection and localization under the assumption that pixels with high error indicate which parts of the input image are unfamiliar and therefore likely to be novel.
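The reconstruction-error heuristic described above can be sketched with a linear autoencoder (PCA) standing in for a deep one: pixels the model cannot reconstruct from the learned low-dimensional structure receive high error. The data, rank, and defect location below are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training images: rank-2 signals plus a little noise
basis = rng.normal(size=(2, 64))
train = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 64))

# Linear autoencoder via PCA: encode/decode with the top-2 components
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]

def reconstruct(x):
    # Project onto the learned subspace and map back to pixel space
    return mean + (x - mean) @ components.T @ components

# A familiar image corrupted by a novel "defect" at pixel 10
x = rng.normal(size=2) @ basis
x_novel = x.copy()
x_novel[10] += 5.0

# Pixelwise reconstruction error localizes the novelty
err = (x_novel - reconstruct(x_novel)) ** 2
print(int(np.argmax(err)))
```

The defect lies outside the learned subspace, so its error stays concentrated at the corrupted pixel, which is the assumption the abstract refers to.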
In this work, we propose a new model, Stochastic Iterative Graph MAtching (SIGMA), to address the graph matching problem.
We develop an Approximate Bayesian Computation approach that draws samples from the posterior distribution over the model's transition and duration parameters given aggregate counts from a specific location, thus adapting the model to a region or individual hospital site of interest.
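The core of rejection-style Approximate Bayesian Computation can be sketched on a toy problem: draw parameters from the prior, simulate aggregate counts, and keep draws whose simulation lands near the observed aggregate. The Poisson model, prior range, and tolerance below are hypothetical, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: infer a daily admission rate from one aggregate count
observed_total = 140   # e.g. total admissions over 14 days
n_days = 14

def simulate(lam):
    # Forward-simulate the aggregate count under rate lam
    return rng.poisson(lam, size=n_days).sum()

# Draw candidate rates from a broad uniform prior
prior = rng.uniform(0.0, 30.0, size=20000)

# Rejection ABC: accept draws whose simulation matches the data closely
tol = 5
posterior = np.array([lam for lam in prior
                      if abs(simulate(lam) - observed_total) <= tol])

print(len(posterior), posterior.mean())
```

The accepted draws approximate the posterior over the rate, which concentrates near 140 / 14 = 10; the same accept/reject logic applies when the simulator is a richer transition-and-duration model and the data are site-specific aggregate counts.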
We consider the problem of forecasting the daily number of hospitalized COVID-19 patients at a single hospital site, in order to help administrators with logistics and planning.
We develop a new framework for learning variational autoencoders and other deep generative models that balances generative and discriminative goals.
Non-parametric and distribution-free two-sample tests have been the foundation of many change point detection algorithms.
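A minimal sketch of how a distribution-free two-sample test drives change point detection: scan candidate split points and report the split that maximizes the two-sample Kolmogorov-Smirnov statistic. The data and the true change location are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Series with a distribution shift at index 100 (mean 0 -> mean 3)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])

def ks_stat(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: largest gap between
    # the two empirical CDFs, evaluated on the pooled sample
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

# Scan candidate splits; the change point maximizes the KS statistic
splits = list(range(20, 180))
stats = [ks_stat(x[:t], x[t:]) for t in splits]
change_point = splits[int(np.argmax(stats))]
print(change_point)
```

Because the KS statistic depends only on ranks of the pooled sample, this scan makes no parametric assumption about either segment's distribution, which is the property the abstract highlights.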
Some interactions are attributed to natural selection and involve the enzyme's natural substrates.
Many medical decision-making tasks can be framed as partially observed Markov decision processes (POMDPs).
Two common problems in time series analysis are the decomposition of the data stream into disjoint segments that are each in some sense "homogeneous" - a problem known as Change Point Detection (CPD) - and the grouping of similar nonadjacent segments, a problem that we call Time Series Segment Clustering (TSSC).
Moreover, for situations in which a single, global tree is a poor estimator, we introduce a regional tree regularizer that encourages the deep model to resemble a compact, axis-aligned decision tree in predefined, human-interpretable contexts.
When training clinical prediction models from electronic health records (EHRs), a key concern should be a model's ability to sustain performance over time when deployed, even as care practices, database systems, and population demographics evolve.
Robust machine learning relies on access to data that can be used with standardized frameworks for important tasks, and on the ability to develop models whose performance can be reasonably reproduced.
Machine learning for healthcare often trains models on de-identified datasets with randomly shifted calendar dates, ignoring the fact that data were generated under hospital operation practices that change over time.
Supervisory signals can help topic models discover low-dimensional data representations that are more interpretable for clinical tasks.
The lack of interpretability remains a key barrier to the adoption of deep models in many applications.
Our model is based on a novel, variational interpretation of the popular expected patch log-likelihood (EPLL) method as a model for randomly positioned grids of image patches.
Supervisory signals have the potential to make low-dimensional data representations, like those learned by mixture and topic models, more interpretable and useful.
Neural networks are among the most accurate supervised learning methods in use today, but their opacity makes them difficult to trust in critical applications, especially when conditions in training differ from those in test.
Supervised topic models can help clinical researchers find interpretable co-occurrence patterns in count data that are relevant for diagnostics.
Mixture models and topic models generate each observation from a single cluster, but standard variational posteriors for each observation assign positive probability to all possible clusters.
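The mismatch described above is easy to see in a small example: even when a data point is generated by exactly one cluster, the standard posterior over assignments is dense, giving every cluster strictly positive probability. The two-component Gaussian mixture below is invented for illustration.

```python
import numpy as np

# Two-component 1-D Gaussian mixture with known, equal weights
means = np.array([-2.0, 2.0])
weights = np.array([0.5, 0.5])

def responsibilities(x):
    # Standard posterior over cluster assignments (unit variances).
    # Every cluster gets strictly positive probability, even for x
    # far from its mean, because the softmax never produces zeros.
    log_p = np.log(weights) - 0.5 * (x - means) ** 2
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

# A point generated exactly at cluster 0's mean
r = responsibilities(-2.0)
print(r)
```

Cluster 0 dominates, but the second entry is small rather than zero; sparsifying these dense responsibilities is the gap such work aims to close.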
Bayesian nonparametric hidden Markov models are typically learned via fixed truncations of the infinite state space or local Monte Carlo proposals that make small changes to the state space.
Variational inference algorithms provide the most effective framework for large-scale training of Bayesian nonparametric models.
We propose a Bayesian nonparametric approach to the problem of jointly modeling multiple related time series.
Applications of Bayesian nonparametric methods require learning and inference algorithms which efficiently explore models of unbounded complexity.