Learning latent variable structured prediction models with Gaussian perturbations

NeurIPS 2018 · Kevin Bello, Jean Honorio

Standard margin-based structured prediction commonly uses a maximum loss over all possible structured outputs. Including latent variables in the large-margin formulation not only makes the problem non-convex but also enlarges the search space by a factor of the size of the latent space. Recent work has proposed, with theoretical guarantees, taking the maximum loss over random structured outputs sampled independently from some proposal distribution. We extend this work by including latent variables. We study a new family of loss functions under Gaussian perturbations and analyze the effect of the latent space on the generalization bounds. We show that the non-convexity of learning with latent variables arises naturally, as it relates to a tight upper bound of the Gibbs decoder distortion with respect to the latent space. Finally, we provide a formulation using random samples that produces a tighter upper bound of the Gibbs decoder distortion up to a statistical accuracy, and that enables a faster evaluation of the objective function. We illustrate the method with synthetic experiments and a computer vision application.
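To make the sampled objective concrete, below is a minimal Python sketch of a randomized latent-variable hinge loss in the spirit of the abstract: the maximum over all output/latent pairs is replaced by a maximum over independently sampled candidates. The names `phi`, `distortion`, and `sample_pair` are illustrative placeholders, not the paper's notation or released code.

```python
import numpy as np

def sampled_latent_hinge(w, phi, distortion, x, y_true,
                         latent_space, sample_pair, rng, n_samples=100):
    """Randomized latent-variable hinge loss (illustrative sketch).

    phi(x, y, h)      -- joint feature map, returns a vector
    distortion(y, y2) -- task loss between two structured outputs
    sample_pair(rng)  -- draws one (y, h) candidate from a proposal
                         distribution; a stand-in for the paper's sampler
    """
    # Score of the ground truth, maximized over the latent space; this
    # inner maximization is one source of the objective's non-convexity.
    true_score = max(np.dot(w, phi(x, y_true, h)) for h in latent_space)

    # Replace the maximum over all (exponentially many) output/latent
    # pairs with a maximum over n independently sampled candidates,
    # which makes each evaluation of the objective fast.
    margins = (distortion(y_true, y) + np.dot(w, phi(x, y, h)) - true_score
               for y, h in (sample_pair(rng) for _ in range(n_samples)))
    return max(0.0, max(margins))
```

Exhaustively maximizing over every `(y, h)` pair would cost time proportional to the output space times the latent space; the sampled maximum above is what permits the faster evaluation the abstract mentions, at the price of holding only up to a statistical accuracy.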
