Discovering Hidden Factors of Variation in Deep Networks

Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification. But there has been less exploration in learning the factors of variation apart from the classification signal. By augmenting autoencoders with simple regularization terms during training, we demonstrate that standard deep architectures can discover and explicitly represent factors of variation beyond those relevant for categorization. We introduce a cross-covariance penalty (XCov) as a method to disentangle factors like handwriting style for digits and subject identity in faces. We demonstrate this on the MNIST handwritten digit database, the Toronto Faces Database (TFD) and the Multi-PIE dataset by generating manipulated instances of the data. Furthermore, we demonstrate these deep networks can extrapolate ‘hidden’ variation in the supervised signal.
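As a rough illustration of the idea, the sketch below computes a batch cross-covariance penalty between label-unit activations and latent-unit activations, which is driven toward zero when the two groups of units are decorrelated over the batch. The exact formulation, scaling, and variable names here are assumptions based on the abstract rather than the paper's implementation.

```python
import numpy as np

def xcov_penalty(y, z):
    """Illustrative cross-covariance (XCov) penalty between two groups of activations.

    y: (N, C) array of label-unit activations (e.g. softmax outputs)
    z: (N, D) array of latent-unit activations
    Returns half the sum of squared entries of the empirical cross-covariance
    matrix, which is zero only when every y-unit is decorrelated from every
    z-unit over the batch.
    """
    n = y.shape[0]
    yc = y - y.mean(axis=0, keepdims=True)   # center label activations
    zc = z - z.mean(axis=0, keepdims=True)   # center latent activations
    c = yc.T @ zc / n                        # (C, D) cross-covariance matrix
    return 0.5 * np.sum(c ** 2)

# Toy usage: latent units correlated with the label units incur a larger
# penalty than independently drawn ones.
rng = np.random.default_rng(0)
z = rng.normal(size=(128, 10))
y_indep = rng.normal(size=(128, 5))
y_corr = z[:, :5] + 0.1 * rng.normal(size=(128, 5))
print(xcov_penalty(y_indep, z))  # near zero
print(xcov_penalty(y_corr, z))   # substantially larger
```

In training, a term like this would be added to the autoencoder's reconstruction and classification losses so that the non-label latent units are pushed to carry only the variation not explained by the class.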
