EvoGAN: An Evolutionary Computation Assisted GAN

22 Oct 2021  ·  Feng Liu, HanYang Wang, Jiahao Zhang, Ziwang Fu, Aimin Zhou, Jiayin Qi, Zhibin Li

Image synthesis techniques are now well established and can generate facial images that are indistinguishable from real ones even to human observers. However, these approaches use gradients to condition the output, so the same input always produces the same image. Moreover, they can only generate images with basic expressions, or mimic a given expression, rather than generating compound expressions. In real life, however, human expressions are highly diverse and complex. In this paper, we propose an evolutionary algorithm (EA) assisted GAN, named EvoGAN, to generate diverse compound expressions for any accurately specified target compound expression. EvoGAN uses an EA to search for target results within the data distribution learned by a GAN. Specifically, we use the Facial Action Coding System (FACS) as the encoding of the EA, use a pre-trained GAN to generate human facial images, and then use a pre-trained classifier to recognize the expression composition of the synthesized images as the fitness function that guides the EA's search. Combined with a random search algorithm, diverse images with the target expression can be easily synthesized. Quantitative and qualitative results are presented on several compound expressions, and the experimental results demonstrate the feasibility and the potential of EvoGAN.
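The search loop described above (a FACS action-unit vector as the genotype, a pre-trained GAN as the decoder, and a classifier's agreement with the target expression as the fitness) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the GAN and classifier are toy stand-ins, and all names (`generate_image`, `classify`, `evolve`, `NUM_AUS`) are assumptions made for this example.

```python
import random

NUM_AUS = 17               # number of FACS Action Units in the encoding (assumed)
TARGET_AUS = [0.0] * NUM_AUS
TARGET_AUS[4] = 0.8        # a hypothetical compound expression activating two AUs
TARGET_AUS[11] = 0.6

def generate_image(aus):
    """Stand-in for the pre-trained GAN: maps an AU vector to an image."""
    return aus             # a real GAN would return pixels conditioned on `aus`

def classify(image):
    """Stand-in for the pre-trained expression classifier."""
    return image           # a real classifier would predict AU intensities

def fitness(aus):
    """Negative L1 distance between the classified and target expressions."""
    pred = classify(generate_image(aus))
    return -sum(abs(p - t) for p, t in zip(pred, TARGET_AUS))

def evolve(pop_size=30, generations=100, sigma=0.1, seed=0):
    """Simple elitist (mu + lambda) evolution over AU vectors."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(NUM_AUS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = [
            [min(1.0, max(0.0, g + rng.gauss(0, sigma))) for g in p]
            for p in parents                       # Gaussian mutation, clipped to [0, 1]
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the EA is stochastic, repeated runs with different seeds yield different AU vectors of comparable fitness, which is how the method obtains diverse images for one target expression.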
