Numerical Integration
53 papers with code • 0 benchmarks • 0 datasets
Numerical integration is the task of computing the numerical value of a definite integral, or the numerical solution of a differential equation.
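The definite-integral case is classically handled by quadrature rules. As a minimal sketch (not tied to any paper listed below), here is the composite Simpson's rule applied to a function whose integral is known in closed form:

```python
import math

def simpson(f, a, b, n=100):
    """Approximate the integral of f over [a, b] with the composite
    Simpson's rule using n subintervals (n is forced to be even)."""
    if n % 2:
        n += 1  # Simpson's rule requires an even number of subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The integral of sin over [0, pi] is exactly 2.
approx = simpson(math.sin, 0.0, math.pi, n=100)
print(approx)  # close to 2.0
```

The error of the composite rule shrinks like h^4 in the subinterval width h, which is why even a modest n gives many correct digits for smooth integrands.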
Benchmarks
These leaderboards are used to track progress in Numerical Integration
Most implemented papers
Bayesian Probabilistic Numerical Integration with Tree-Based Models
The advantages and disadvantages of this new methodology are highlighted on a set of benchmark tests including the Genz functions, and on a Bayesian survey design problem.
Unifying supervised learning and VAEs -- coverage, systematics and goodness-of-fit in normalizing-flow based neural network models for astro-particle reconstructions
The unification motivates an extended supervised learning scheme which makes it possible to calculate a goodness-of-fit p-value for the neural network model.
Hierarchical Deep Learning of Multiscale Differential Equation Time-Steppers
Our multiscale hierarchical time-stepping scheme provides important advantages over current time-stepping algorithms, including (i) circumventing numerical stiffness due to disparate time-scales, (ii) improved accuracy in comparison with leading neural-network architectures, (iii) efficiency in long-time simulation/forecasting due to explicit training of slow time-scale dynamics, and (iv) a flexible framework that is parallelizable and may be integrated with standard numerical time-stepping algorithms.
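For context, the "standard numerical time-stepping algorithms" such schemes can be integrated with include classical single-rate methods like fourth-order Runge-Kutta. A minimal sketch of one RK4 step (illustrative only, not the paper's hierarchical scheme):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = y, y(0) = 1 up to t = 1; the exact answer is e.
f = lambda t, y: y
n = 100
h = 1.0 / n
y = 1.0
for i in range(n):
    y = rk4_step(f, i * h, y, h)
print(y)  # close to math.e
```

A fixed-step method like this must resolve the fastest time scale everywhere, which is exactly the cost that hierarchical multi-rate time-stepping aims to avoid.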
AutoInt: Automatic Integration for Fast Neural Volume Rendering
For training, we instantiate the computational graph corresponding to the derivative of the network.
Symplectic Adjoint Method for Exact Gradient of Neural ODE with Minimal Memory
The symplectic adjoint method obtains the exact gradient (up to rounding error) with memory proportional to the number of uses plus the network size.
BoXHED2.0: Scalable boosting of dynamic survival analysis
Modern applications of survival analysis increasingly involve time-dependent covariates.
Efficient time stepping for numerical integration using reinforcement learning
While the classical schemes apply very generally and are highly efficient on regular systems, they can behave sub-optimally when an inefficient step rejection mechanism is triggered by structurally complex systems such as chaotic systems.
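The step rejection mechanism referred to here can be illustrated with a toy adaptive controller: estimate the local error by step doubling, accept the step if the estimate is below a tolerance, and otherwise reject it and retry with a smaller step. A minimal sketch built on explicit Euler (illustrative only, not the paper's reinforcement-learning method):

```python
import math

def euler_step(f, t, y, h):
    """One explicit Euler step for y' = f(t, y)."""
    return y + h * f(t, y)

def adaptive_integrate(f, t0, y0, t_end, h0=0.1, tol=1e-6):
    """Integrate y' = f(t, y) from t0 to t_end with step-doubling
    error control: compare one full step against two half steps."""
    t, y, h = t0, y0, h0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        full = euler_step(f, t, y, h)
        mid = euler_step(f, t, y, h / 2)
        half = euler_step(f, t + h / 2, mid, h / 2)
        err = abs(half - full)  # crude local error estimate
        if err <= tol:
            t, y = t + h, half  # accept the more accurate result
            h *= 1.5            # grow the step after acceptance
        else:
            h *= 0.5            # reject: retry with a smaller step
    return y

# Integrate y' = y from 0 to 1; the exact answer is e.
print(adaptive_integrate(lambda t, y: y, 0.0, 1.0, 1.0))  # close to math.e
```

On a chaotic or stiff problem, this accept/reject loop can thrash (repeatedly shrinking and regrowing the step), which is the inefficiency the learned step-size controller targets.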
Learning Nonparametric Volterra Kernels with Gaussian Processes
When the input function to the operator is unobserved and has a GP prior, the NVKM constitutes a powerful method for both single and multiple output regression, and can be viewed as a nonlinear and nonparametric latent force model.
Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models
Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification.
Data-based stochastic modeling reveals sources of activity bursts in single-cell TGF-β signaling
The pathway shows strong heterogeneity at the single-cell level, but quantitative insights into mechanisms underlying fluctuations at various time scales are still missing, partly due to inefficiency in the calibration of stochastic models that mechanistically describe signaling processes.