Optimization

Neural adjoint method

Introduced by Ren et al. in "Benchmarking deep inverse models over time, and the neural-adjoint method"

The NA method can be divided into two steps: (i) training a neural network approximation of f, and (ii) inference of x̂. Step (i) is conventional and involves training a generic neural network on a dataset of input/output pairs from the simulator, denoted D, resulting in f̂, an approximation of the forward model. This is illustrated in the left inset of Fig 1. In step (ii), our goal is to use ∂f̂/∂x to help us gradually adjust x so that we achieve a desired output of the forward model, y. This is similar to many classical inverse modeling approaches, such as the popular Adjoint method [8, 9]. For many practical inverse problems, however, obtaining ∂f/∂x requires significant expertise and/or effort, making these approaches challenging. Crucially, f̂ from step (i) provides us with a closed-form differentiable expression for the simulator, from which it is trivial to compute ∂f̂/∂x; furthermore, we can use modern deep learning software packages to efficiently estimate gradients, given a loss function L. More formally, let y be our target output and let x̂_i be our current estimate of the solution, where i indexes each solution we obtain in an iterative gradient-based estimation procedure. Then we compute x̂_{i+1} with a gradient descent step,

x̂_{i+1} = x̂_i − α (∂L/∂x)|_{x = x̂_i},

where α is the step size.
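Since the excerpt only sketches the procedure, a minimal PyTorch sketch of the two steps may help. Everything concrete here is an assumption for illustration: the toy simulator `simulate`, the network architecture, the learning rates and iteration counts, the use of Adam rather than the plain update above, and the small batch of random restarts for x̂ are not the paper's actual configuration.

```python
# Minimal sketch of the neural-adjoint (NA) method, assuming PyTorch.
# Simulator, architecture, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def simulate(x):
    """Toy stand-in for the true forward model f (assumption)."""
    return torch.sin(3 * x).sum(dim=1, keepdim=True)

# Step (i): train a surrogate f_hat on a dataset D of input/output
# pairs drawn from the simulator.
x_train = torch.rand(4096, 2) * 2 - 1          # inputs x in [-1, 1]^2
y_train = simulate(x_train)                    # outputs y = f(x)

f_hat = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(f_hat.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    F.mse_loss(f_hat(x_train), y_train).backward()
    opt.step()

# Step (ii): freeze f_hat and run gradient descent on x_hat itself,
# letting autograd supply dL/dx through the surrogate.
for p in f_hat.parameters():
    p.requires_grad_(False)

y_target = torch.full((8, 1), 0.5)                   # desired output y
x_hat = (torch.rand(8, 2) * 2 - 1).requires_grad_()  # 8 random restarts
opt_x = torch.optim.Adam([x_hat], lr=1e-2)
for _ in range(300):
    opt_x.zero_grad()
    loss = F.mse_loss(f_hat(x_hat), y_target)
    loss.backward()   # autograd computes dL/dx via f_hat
    opt_x.step()      # gradient step on x_hat; plain SGD would match
                      # x_hat_{i+1} = x_hat_i - alpha * dL/dx exactly

# Keep the candidate whose surrogate prediction is closest to y_target.
with torch.no_grad():
    err = (f_hat(x_hat) - y_target).squeeze(1).abs()
    best = x_hat[err.argmin()]
print(best, simulate(best.unsqueeze(0)))
```

The batch of restarts reflects that descending from a single x̂₀ can stall in a poor local minimum of the surrogate; the excerpt does not discuss initialization, so treat that detail as an implementation assumption. The key point matches the text: because f̂ is an ordinary differentiable network, ∂L/∂x comes directly from the deep learning framework, with no hand-derived adjoint equations.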

Source: Benchmarking deep inverse models over time, and the neural-adjoint method

Papers



Tasks


Task            Papers   Share
Robust Design   1        50.00%
Benchmarking    1        50.00%

Components


No components found.

Categories

Optimization