Engineered swift equilibration (ESE) is a class of driving protocols that enforce an equilibrium distribution with respect to the external control parameters at the beginning and end of rapid state transformations of open, classical non-equilibrium systems.
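Concretely, in notation assumed here for illustration rather than taken from the source, the defining boundary conditions can be written as follows:

```latex
% Illustration in assumed notation (not from the source): for a potential
% U(x,\lambda) at inverse temperature \beta with free energy F(\lambda),
% equilibrium with respect to the control parameter \lambda means the
% Boltzmann distribution
\rho_{\mathrm{eq}}(x;\lambda) = e^{-\beta\,[\,U(x,\lambda) - F(\lambda)\,]} ,
% and an ESE protocol \lambda(t) on t \in [0,\tau] imposes only the endpoint
% conditions
\rho(x,0) = \rho_{\mathrm{eq}}(x;\lambda(0)) ,
\qquad
\rho(x,\tau) = \rho_{\mathrm{eq}}(x;\lambda(\tau)) ,
% leaving the system free to be far from equilibrium at intermediate times.
```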
Fitting probabilistic models to data is often difficult, due to the general intractability of the partition function.
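As an illustration of where the difficulty arises (a generic energy-based model, assumed here for concreteness rather than taken from the source):

```latex
% Generic energy-based model (assumed example).
p_\theta(x) = \frac{e^{-E_\theta(x)}}{Z(\theta)} ,
\qquad
Z(\theta) = \sum_{x'} e^{-E_\theta(x')} .
% The normalizer Z(\theta) sums (or integrates) over every configuration x',
% a set that is typically exponentially large, so exact maximum-likelihood
% gradients, which depend on \nabla_\theta \log Z(\theta), are generally
% intractable to compute.
```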
Although the loss functions of deep neural networks are highly non-convex, gradient-based optimization algorithms converge to approximately the same performance from many random initial points.
To inform the design of scalable, fault-resistant optical neural networks (ONNs), we investigate how architectural choices affect ONNs' robustness to imprecise components.
Numerically locating the critical points of non-convex surfaces is a long-standing problem central to many fields.
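A minimal sketch of one common numerical approach: minimize the squared gradient norm, which vanishes exactly at critical points, so saddle points are found as readily as extrema. The surface `f` and the starting point are assumed examples, not taken from the source:

```python
# Minimal sketch (assumed example): locate a critical point of a
# non-convex surface by minimizing the squared gradient norm |grad f|^2.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Example non-convex surface: a two-dimensional "monkey saddle".
    return x[0]**3 - 3 * x[0] * x[1]**2

def grad_f(x):
    # Analytic gradient of f.
    return np.array([3 * x[0]**2 - 3 * x[1]**2, -6 * x[0] * x[1]])

def grad_norm_sq(x):
    # Objective: |grad f|^2, which is zero exactly at critical points.
    g = grad_f(x)
    return g @ g

# Descend on |grad f|^2 rather than f itself, so minima, maxima, and
# saddles are all attainable targets.
result = minimize(grad_norm_sq, x0=np.array([0.5, -0.3]))
print("candidate critical point:", result.x, "|grad f|^2 =", result.fun)
```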
In most sampling algorithms, including Hamiltonian Monte Carlo, transition rates between states correspond to probabilities of making a transition in a single time step, and are therefore constrained to be less than or equal to 1.
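A minimal sketch of this constraint in a Metropolis-style sampler, where the per-step transition probability is capped at 1 by the min(1, ratio) acceptance rule; the Gaussian target and step size are assumed examples, not the source's algorithm:

```python
# Minimal sketch (assumed example): Metropolis sampling of a 1D Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of a standard Gaussian target.
    return -0.5 * x**2

def metropolis_step(x, step_size=1.0):
    proposal = x + step_size * rng.normal()
    # Acceptance probability min(1, p(proposal)/p(x)): by construction,
    # the per-step transition probability never exceeds 1.
    accept_prob = min(1.0, np.exp(log_target(proposal) - log_target(x)))
    return proposal if rng.random() < accept_prob else x

x = 0.0
samples = []
for _ in range(10_000):
    x = metropolis_step(x)
    samples.append(x)
print("sample mean:", np.mean(samples), "sample std:", np.std(samples))
```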
A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate).
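For reference, the standard computational-mechanics definitions of these four measures, in notation assumed here (not taken from the source), where $X_{a:b}$ denotes the block $X_a X_{a+1} \cdots X_{b-1}$ and $\mathcal{S}$ the causal states:

```latex
% Standard definitions for a stationary process (assumed notation).
\begin{align}
  h_\mu &= \lim_{\ell \to \infty} H[X_{0:\ell}] / \ell
    && \text{entropy rate (information generation)} \\
  C_\mu &= H[\mathcal{S}]
    && \text{statistical complexity (stored information)} \\
  \mathbf{E} &= I[X_{-\infty:0} \,;\, X_{0:\infty}]
    && \text{excess entropy (predictable information)} \\
  b_\mu &= I[X_0 \,;\, X_{1:\infty} \mid X_{-\infty:0}]
    && \text{bound information rate (active information accumulation)}
\end{align}
```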