THIN: THrowable Information Networks and Application for Facial Expression Recognition In The Wild

15 Oct 2020 · Estephe Arnaud, Arnaud Dapogny, Kevin Bailly

For a number of machine learning problems, an exogenous variable can be identified that heavily influences the appearance of the different classes, and an ideal classifier should be invariant to this variable. An example of such an exogenous variable is identity in facial expression recognition (FER). In this paper, we propose a dual exogenous/endogenous representation. The former captures the exogenous variable, whereas the latter models the task at hand (e.g., facial expression). We design a prediction layer that uses a tree-gated deep ensemble conditioned on the exogenous representation. We also propose an exogenous dispelling loss to remove the exogenous information from the endogenous representation. Thus, the exogenous information is used twice in a throwable fashion: first as a conditioning variable for the target task, and second to create invariance within the endogenous representation. We call this method THIN, standing for THrowable Information Networks. We experimentally validate THIN in several contexts where exogenous information can be identified, such as digit recognition under large rotations and shape recognition at multiple scales. We also apply it to FER with identity as the exogenous variable. We demonstrate that THIN significantly outperforms state-of-the-art approaches on several challenging datasets.
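The abstract describes three ingredients: an exogenous encoder, an endogenous encoder, and a gated ensemble of predictors whose gating is driven by the exogenous code, plus a dispelling loss that removes exogenous information from the endogenous code. The PyTorch sketch below is one way these pieces could fit together; all module names, dimensions, and the exact form of the dispelling loss (here, a uniformity penalty on an auxiliary identity probe) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the ideas in the abstract (assumed, not the paper's code):
# dual exogenous/endogenous encoders, an exogenous-gated ensemble of
# predictors, and a dispelling loss on the endogenous representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class THINSketch(nn.Module):
    def __init__(self, feat_dim=512, exo_dim=64, endo_dim=128,
                 n_experts=8, n_classes=7, n_exo_classes=100):
        super().__init__()
        # Two heads on top of shared backbone features (backbone omitted).
        self.exo_encoder = nn.Linear(feat_dim, exo_dim)    # exogenous (e.g., identity) code
        self.endo_encoder = nn.Linear(feat_dim, endo_dim)  # endogenous (e.g., expression) code
        # Ensemble of task predictors, gated by the exogenous code.
        self.experts = nn.ModuleList(
            [nn.Linear(endo_dim, n_classes) for _ in range(n_experts)])
        self.gate = nn.Linear(exo_dim, n_experts)
        # Auxiliary probe used only to dispel exogenous info from the endogenous code.
        self.exo_probe = nn.Linear(endo_dim, n_exo_classes)

    def forward(self, feats):
        z_exo = self.exo_encoder(feats)
        z_endo = self.endo_encoder(feats)
        # The exogenous code decides how to weight the experts.
        weights = F.softmax(self.gate(z_exo), dim=-1)                          # (B, n_experts)
        expert_logits = torch.stack([e(z_endo) for e in self.experts], dim=1)  # (B, n_experts, n_classes)
        logits = (weights.unsqueeze(-1) * expert_logits).sum(dim=1)            # (B, n_classes)
        return logits, z_exo, z_endo

def dispelling_loss(model, z_endo):
    # One possible "dispelling" objective: push the exogenous probe, applied to
    # the endogenous code, toward a uniform distribution so that z_endo carries
    # as little exogenous (identity) information as possible.
    log_probs = F.log_softmax(model.exo_probe(z_endo), dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(-1))
    return F.kl_div(log_probs, uniform, reduction="batchmean")
```

In training, this dispelling term would be added to the usual classification loss on the task logits; how the exogenous encoder itself is supervised (e.g., with identity labels) is not specified in the abstract and is left out of the sketch.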
