Deblending galaxies with Variational Autoencoders: a joint multi-band, multi-instrument approach

25 May 2020 · Bastien Arcelin, Cyrille Doux, Eric Aubourg, Cécile Roucelle

The apparent superposition of galaxies with other astrophysical objects along the line of sight, a problem known as blending, will be a major challenge for upcoming deep, ground-based photometric galaxy surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). Blending contributes to the systematic error budget of weak lensing studies by perturbing object detection and affecting photometric and shape measurements. Existing deblenders suffer from the lack of flexible yet accurate models of galaxy morphologies and therefore rely on assumptions (analytic profiles, symmetry, sparsity in a profile basis) to isolate galaxies within a blended scene. In this paper, we propose instead to use generative models based on deep neural networks, namely variational autoencoders (VAEs), to learn a probabilistic model directly from data. Specifically, we train a VAE on images of centred, isolated galaxies, which we reuse in a second VAE-like neural network responsible for deblending galaxies. We train our networks on simulated images created with the GalSim software, including all six LSST bandpass filters as well as the visible and near-infrared bands of the Euclid satellite, as our method naturally generalises to multiple bands and data from multiple instruments. We validate our model and quantify deblending performance by measuring reconstruction errors in galaxy shapes and magnitudes. We obtain median errors on ellipticities within $\pm 0.01$ and on $r$-band magnitude within $\pm 0.05$ in most cases, and shear multiplicative biases close to $10^{-2}$ in the optimal configuration. Finally, we discuss the challenges of training on real data and obtain encouraging results when applying transfer learning.
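
For readers who want a concrete picture of the first stage, the sketch below shows a minimal convolutional VAE in TensorFlow/Keras with the standard reparameterisation trick and an ELBO-style loss (reconstruction term plus KL divergence). This is not the authors' architecture: the stamp size, band count (assumed here to be six LSST plus four Euclid bands), layer widths and latent dimension are all illustrative placeholders.

```python
# Minimal VAE sketch, NOT the paper's architecture; sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

STAMP, BANDS, LATENT = 64, 10, 32  # assumed: 64x64 stamps, 6 LSST + 4 Euclid bands

def make_encoder():
    x_in = tf.keras.Input(shape=(STAMP, STAMP, BANDS))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x_in)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    # Two heads parameterise the Gaussian posterior q(z|x): mean and log-variance.
    return tf.keras.Model(x_in, [layers.Dense(LATENT)(x), layers.Dense(LATENT)(x)])

def make_decoder():
    z_in = tf.keras.Input(shape=(LATENT,))
    x = layers.Dense(16 * 16 * 64, activation="relu")(z_in)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    return tf.keras.Model(z_in, layers.Conv2D(BANDS, 3, padding="same",
                                              activation="sigmoid")(x))

class VAE(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder, self.decoder = make_encoder(), make_decoder()

    def call(self, x):
        z_mean, z_log_var = self.encoder(x)
        # Reparameterisation trick: z = mu + sigma * eps, with eps ~ N(0, 1).
        z = z_mean + tf.exp(0.5 * z_log_var) * tf.random.normal(tf.shape(z_mean))
        # KL divergence of q(z|x) from the unit-Gaussian prior; combined with
        # the reconstruction loss supplied at compile time, this forms the ELBO.
        self.add_loss(-0.5 * tf.reduce_mean(tf.reduce_sum(
            1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)))
        return self.decoder(z)

vae = VAE()
vae.compile(optimizer="adam", loss="mse")
# Train to reproduce stamps of isolated galaxies (pixel values scaled to [0, 1]):
# vae.fit(isolated_stamps, isolated_stamps, epochs=50, batch_size=64)
```

In the paper's scheme, the deblender is a second VAE-like network whose input is the blended scene; the sketch above covers only the first, isolated-galaxy stage.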
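The training images come from GalSim. The following single-band sketch shows one plausible way to simulate a blended scene with parametric profiles; the Sérsic parameters, Kolmogorov PSF, pixel scale, shift ranges and noise level are illustrative assumptions, not the paper's exact settings, which involve the full LSST and Euclid bandpasses.

```python
# Toy blended-scene simulation with GalSim; all parameters are assumptions.
import galsim
import numpy as np

rng = np.random.default_rng(0)
pixel_scale = 0.2                   # arcsec/pixel, LSST-like (assumed)
psf = galsim.Kolmogorov(fwhm=0.7)   # toy atmospheric PSF

def random_galaxy():
    gal = galsim.Sersic(n=rng.uniform(1, 4),
                        half_light_radius=rng.uniform(0.3, 1.0),
                        flux=rng.uniform(1e3, 1e4))
    return gal.shear(g1=rng.uniform(-0.3, 0.3), g2=rng.uniform(-0.3, 0.3))

# A centred galaxy plus one off-centre companion makes a two-object blend.
central = random_galaxy()
companion = random_galaxy().shift(rng.uniform(-1, 1), rng.uniform(-1, 1))
scene = galsim.Convolve(central + companion, psf)

stamp = scene.drawImage(nx=64, ny=64, scale=pixel_scale)
stamp.addNoise(galsim.GaussianNoise(sigma=5.0))  # toy noise level
# stamp.array is the (64, 64) pixel array for one band; repeating the draw
# per bandpass with band-dependent fluxes yields the multi-band network input.
```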
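Validation compares the shapes and magnitudes of reconstructed galaxies against the isolated truth. One standard way to measure ellipticities on postage stamps is GalSim's HSM adaptive-moments estimator, sketched below with toy Gaussian stamps standing in for a true galaxy and its reconstruction; whether the paper uses exactly this estimator is not stated in the abstract.

```python
# Per-galaxy ellipticity error via adaptive moments; toy stamps, assumed setup.
import galsim

def ellipticity(image):
    """Adaptive-moment ellipticity (g1, g2) from GalSim's HSM module."""
    mom = galsim.hsm.FindAdaptiveMom(image)
    return mom.observed_shape.g1, mom.observed_shape.g2

# Toy stand-ins: a "true" isolated galaxy and a slightly biased "reconstruction".
true_img = galsim.Gaussian(sigma=1.0).shear(g1=0.10, g2=0.0).drawImage(
    nx=64, ny=64, scale=0.2)
recon_img = galsim.Gaussian(sigma=1.0).shear(g1=0.11, g2=0.0).drawImage(
    nx=64, ny=64, scale=0.2)

g1_t, g2_t = ellipticity(true_img)
g1_r, g2_r = ellipticity(recon_img)
print(g1_r - g1_t, g2_r - g2_t)
# The paper quotes the median of such per-galaxy errors over a test set,
# which it finds to lie within about +/-0.01 in most configurations.
```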


Categories


Instrumentation and Methods for Astrophysics · Cosmology and Nongalactic Astrophysics