no code implementations • 20 Nov 2015 • Hristo S. Paskov, John C. Mitchell, Trevor J. Hastie
We also discuss how to derive features from the compressed documents and show that, while certain unregularized linear models are invariant to the structure of the compressed dictionary, this structure may be used to regularize learning.
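The idea of features derived from compression can be pictured with a toy stand-in: a minimal sketch, assuming an LZ78-style parse over a dictionary shared across the corpus, that turns each document into counts of the phrases the compressor emits. The function name `lz78_features` and the parsing scheme are illustrative assumptions, not necessarily the paper's construction.

```python
from collections import Counter

def lz78_features(text, dictionary):
    """Parse `text` with an LZ78-style scheme over a shared, growing
    dictionary and return counts of the emitted phrases as features.
    Illustrative stand-in, not the paper's construction."""
    counts = Counter()
    phrase = ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                               # greedily extend a known phrase
        else:
            dictionary[phrase + ch] = len(dictionary)  # learn a new phrase
            counts[phrase + ch] += 1                   # emit it as a feature
            phrase = ""
    if phrase:
        counts[phrase] += 1                            # flush the trailing phrase
    return counts

shared = {}                                            # dictionary shared by the corpus
docs = ["abab abab", "baba baba"]
features = [lz78_features(d, shared) for d in docs]
print(features)
```

One way to read the abstract's claim in this toy setting: an unregularized linear model fit on such phrase counts sees only the resulting count matrix, not how the dictionary organizes the phrases, whereas a regularizer can be designed to exploit that organization.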
no code implementations • 22 May 2012 • Jason D. Lee, Trevor J. Hastie
We present a new pairwise graphical model over both continuous and discrete variables that is amenable to structure learning.
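To make the model class concrete, here is a minimal sketch of the unnormalized log-density of a pairwise model over continuous and discrete variables; the parameter names (`B`, `alpha`, `rho`, `phi`) and their containers are assumptions for illustration, not the paper's code.

```python
import numpy as np

def mixed_log_potential(x, y, B, alpha, rho, phi):
    """Unnormalized log-density of a pairwise model over continuous x
    (length p) and discrete y (length q, integer-coded levels).
    B: (p, p) continuous-continuous interactions; alpha: (p,) linear terms;
    rho[s][j]: vector over levels of y[j] coupling x[s] and y[j];
    phi[r, j]: table over levels of (y[r], y[j]) for r < j."""
    val = -0.5 * x @ B @ x + alpha @ x             # Gaussian-like part
    for s in range(len(x)):                        # continuous-discrete edges
        for j in range(len(y)):
            val += rho[s][j][y[j]] * x[s]
    for r in range(len(y)):                        # discrete-discrete edges
        for j in range(r + 1, len(y)):
            val += phi[r, j][y[r], y[j]]
    return val

rng = np.random.default_rng(0)
p, q, L = 2, 2, 3                                  # sizes are arbitrary here
B, alpha = np.eye(p), np.zeros(p)
rho = [[rng.normal(size=L) for _ in range(q)] for _ in range(p)]
phi = {(r, j): rng.normal(size=(L, L)) for r in range(q) for j in range(r + 1, q)}
print(mixed_log_potential(np.array([0.5, -1.0]), [0, 2], B, alpha, rho, phi))
```

Structure learning then amounts, roughly, to a penalized fit that drives entire edge-parameter groups to zero (e.g., a whole `rho[s][j]` vector or `phi[r, j]` table), which is the sense in which the model is amenable to structure learning.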
no code implementations • NeurIPS 2008 • Ping Li, Kenneth W. Church, Trevor J. Hastie
Conditional Random Sampling (CRS) was originally proposed for efficiently computing pairwise ($l_2$, $l_1$) distances, in static, large-scale, and sparse data sets such as text and Web data.
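The core trick is easy to sketch: under one random permutation of the coordinates shared by all vectors, each vector keeps only its first $k$ nonzero entries; a pair of sketches is then complete up to the smaller of their largest retained indices, so a partial distance computed over that prefix can be scaled to the full dimension. The helper names below are hypothetical, and the code is a simplified reading of CRS, not the authors' implementation.

```python
import numpy as np

def crs_sketch(x, perm, k):
    """Keep the first k nonzero entries of x under a shared random
    permutation of the coordinates (a simplified CRS-style sample)."""
    xp = x[perm]
    nz = np.flatnonzero(xp)[:k]            # first k nonzeros in permuted order
    return {int(i): float(xp[i]) for i in nz}

def crs_l2_estimate(s1, s2, dim):
    """Estimate the squared $l_2$ distance from two sketches: both are
    complete up to the smaller of their largest retained indices, so the
    partial distance over that prefix is scaled up to the full dimension."""
    d_eff = min(max(s1), max(s2)) + 1      # effective sample size
    idx = {i for i in s1 if i < d_eff} | {i for i in s2 if i < d_eff}
    partial = sum((s1.get(i, 0.0) - s2.get(i, 0.0)) ** 2 for i in idx)
    return partial * dim / d_eff

rng = np.random.default_rng(0)
dim = 100_000
x = rng.random(dim) * (rng.random(dim) < 0.01)   # sparse vectors, ~1% nonzero
y = rng.random(dim) * (rng.random(dim) < 0.01)
perm = rng.permutation(dim)                      # one permutation for all vectors
sx, sy = crs_sketch(x, perm, 200), crs_sketch(y, perm, 200)
print(crs_l2_estimate(sx, sy, dim), float(np.sum((x - y) ** 2)))
```

For sparse data the retained nonzeros cover a long prefix of the permuted coordinates, which is where the efficiency on text and Web data comes from.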