In most machine learning tasks, unambiguous ground truth labels can easily be acquired.
We provide a theoretical argument for why the regularization is essential to our approach, in both the single-annotator and multiple-annotator settings.
We introduce a new method for interpreting computer vision models: visually perceptible, decision-boundary crossing transformations.
Professional-grade software applications are powerful but complicated: expert users can achieve impressive results, but novices often struggle to complete even basic tasks.
The successor map represents the expected future state occupancy from any given state, and the reward predictor maps states to scalar rewards.
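To make this decomposition concrete, here is a minimal tabular sketch; the function names, the toy transition matrix, and the fixed-policy setting are illustrative assumptions rather than the paper's formulation. Under a fixed policy with transition matrix P, the successor map is M = (I - gamma * P)^-1 and the value function is recovered as V = M r.

```python
import numpy as np

def successor_map(P, gamma):
    """Successor map M = (I - gamma * P)^-1: expected discounted future
    occupancy of every state, starting from each state, under a fixed policy."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

def value_from_successor(M, r):
    """Value function V = M @ r, where r is the (tabular) reward predictor
    mapping states to scalar rewards."""
    return M @ r

# Toy 3-state chain under a fixed policy (hypothetical numbers).
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
r = np.array([0.0, 0.0, 1.0])

M = successor_map(P, gamma=0.95)
V = value_from_successor(M, r)
```

The point of the factorization is that the environment dynamics (captured by M) and the reward structure (captured by r) are learned separately, so changing the reward only requires re-estimating r.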
Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms.
In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere.
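As a rough illustration of the density in question (the helper name and parameter values below are ours), the von Mises-Fisher log-density of a unit vector x given mean direction mu and concentration kappa is log C_p(kappa) + kappa * mu^T x, where the normalizer involves a modified Bessel function of the first kind:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def vmf_log_density(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit sphere
    in R^p: f(x; mu, kappa) = C_p(kappa) * exp(kappa * mu @ x).
    Note: iv() can overflow for very large kappa or dimension."""
    p = mu.shape[0]
    log_c = ((p / 2 - 1) * np.log(kappa)
             - (p / 2) * np.log(2 * np.pi)
             - np.log(iv(p / 2 - 1, kappa)))
    return log_c + kappa * np.dot(mu, x)

# Example: score a unit-normalized "word vector" against a mean direction.
rng = np.random.default_rng(0)
mu = rng.normal(size=50); mu /= np.linalg.norm(mu)
x = rng.normal(size=50); x /= np.linalg.norm(x)
print(vmf_log_density(x, mu, kappa=20.0))
```

Because the density depends on x only through the cosine similarity mu^T x, it is a natural fit for length-normalized word embeddings.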
We propose the segmented iHMM (siHMM), a hierarchical infinite hidden Markov model (iHMM) that supports a simple, efficient inference scheme.
Models of complex systems are often formalized as sequential software simulators: computationally intensive programs that iteratively build up probable system configurations given parameters and initial conditions.
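A deliberately toy sketch of what such a sequential simulator looks like; the function name and the linear-Gaussian dynamics are invented purely for illustration, standing in for a much heavier domain-specific program:

```python
import numpy as np

def simulate(params, init_state, num_steps, seed=0):
    """Iteratively build up a system trajectory from parameters and an
    initial condition; each step perturbs the previous configuration."""
    rng = np.random.default_rng(seed)
    drift, noise = params
    states = [np.asarray(init_state, dtype=float)]
    for _ in range(num_steps):
        prev = states[-1]
        states.append(prev + drift * prev + noise * rng.normal(size=prev.shape))
    return np.stack(states)

trajectory = simulate(params=(-0.05, 0.1), init_state=[1.0, 0.5], num_steps=100)
```

The key property the sentence points at is that configurations are produced only by running the program forward step by step, which is what makes these simulators computationally intensive to query.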
We derive the small-variance asymptotics for parametric and nonparametric Markov jump processes (MJPs), for both directly observed and hidden state models.
In this note, we provide detailed derivations of two versions of small-variance asymptotics for hierarchical Dirichlet process (HDP) mixture models and the HDP hidden Markov model (HDP-HMM, a.k.a. the infinite HMM).
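For orientation, the best-known special case of small-variance asymptotics is the non-hierarchical one (a Dirichlet process mixture of spherical Gaussians with shared variance sigma^2), not either of the HDP derivations above: scaling the concentration parameter as alpha proportional to exp(-lambda / (2 sigma^2)) and letting sigma^2 go to zero turns MAP inference into a k-means-like combinatorial objective.

```latex
% DP-means objective: the sigma^2 -> 0 limit of MAP inference in a
% Dirichlet process mixture of spherical Gaussians, with the concentration
% parameter scaled as alpha \propto exp(-lambda / (2 sigma^2)).
\[
  \min_{k,\ \{\ell_c\},\ \{\mu_c\}}
  \;\sum_{c=1}^{k} \sum_{x_i \in \ell_c} \lVert x_i - \mu_c \rVert^2
  \;+\; \lambda k
\]
% Squared-error clustering cost plus a penalty lambda on the number of
% clusters k; the exponential scaling of alpha is what keeps the
% cluster-creation penalty finite in the limit.
```

The HDP derivations follow the same recipe, with additional penalty terms arising from the hierarchical (and, for the HDP-HMM, sequential) structure.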