Gradient-based deterministic inversion of geophysical data with Generative Adversarial Networks: is it feasible?

21 Dec 2018  ·  Eric Laloy, Niklas Linde, Cyprien Ruffino, Romain Hérault, Gilles Gasso, Diedrik Jacques

Global probabilistic inversion within the latent space learned by Generative Adversarial Networks (GANs) was recently demonstrated (Laloy et al., 2018). Compared to searching through the original model space, using the latent space of a trained GAN offers two main benefits: (1) the generated model proposals are geostatistically consistent with the prescribed prior training image (TI), and (2) the dimension of the parameter space is reduced by orders of magnitude relative to the original model space. Nevertheless, exploring the learned latent space with state-of-the-art Markov chain Monte Carlo (MCMC) methods may still require a large computational effort. Alternatively, this latent space could be combined with much less computationally expensive gradient-based deterministic inversions. This study shows that, owing to the highly nonlinear relationship between the latent space of a GAN and its associated output space, gradient-based deterministic inversion frequently fails even when the forward model is linear. For a binary TI of a channelized aquifer and a linear ground penetrating radar (GPR) tomography problem involving 576 measurements with low noise, we observe that only 0% to 5% of the gradient-based inversion trials locate a solution with the required data misfit, the performance depending on both the starting model and the deterministic inversion approach used. In contrast, global optimization by differential evolution always leads to an appropriate solution, though at a much larger computational cost.
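The setup described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the paper's actual networks or GPR forward solver: a fixed random nonlinear map stands in for a trained GAN generator G mapping a low-dimensional latent vector z to a model, a random matrix F stands in for the linear forward operator, and one gradient-based trial minimizes the data misfit phi(z) = 0.5 ||F G(z) - d_obs||^2 over the latent space.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions for illustration only):
# G: 5-D latent vector -> 100-D "model" through a fixed random nonlinear
# map, mimicking a trained generator; F: linear forward operator
# mapping the 100-D model to 20 synthetic data.
W1 = rng.normal(size=(50, 5))
W2 = rng.normal(size=(100, 50)) / np.sqrt(50)
F = rng.normal(size=(20, 100)) / np.sqrt(100)

def G(z):
    return W2 @ np.tanh(W1 @ z)

def misfit_and_grad(z):
    # phi(z) = 0.5 * ||F G(z) - d_obs||^2, with the analytic gradient
    # obtained by the chain rule through the toy generator.
    h = np.tanh(W1 @ z)
    r = F @ (W2 @ h) - d_obs          # data residual
    J = W2 @ (np.diag(1.0 - h**2) @ W1)  # Jacobian dG/dz
    return 0.5 * r @ r, J.T @ (F.T @ r)

# Synthetic "observed" data generated from a known latent truth.
z_true = rng.normal(size=5)
d_obs = F @ G(z_true)

# One gradient-based deterministic inversion trial from a random start.
z0 = rng.normal(size=5)
res = minimize(misfit_and_grad, z0, jac=True, method="L-BFGS-B")
print(f"final data misfit: {res.fun:.3e}")
```

With a real GAN generator the map G is far less smooth than this toy version, which is exactly why the paper finds that such trials, depending on the starting point z0, often stall in poor local minima while global optimizers such as differential evolution still succeed.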
