34 papers with code • 0 benchmarks • 0 datasets
Learn an operator between infinite-dimensional Hilbert or Banach spaces.
These leaderboards are used to track progress in operator learning.
Most implemented papers
Convolutional Analysis Operator Learning: Acceleration and Convergence
This paper proposes a new convolutional analysis operator learning (CAOL) framework that learns an analysis sparsifying regularizer from a convolutional perspective, and develops a new convergent Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve the corresponding block multi-nonconvex problems.
Convolutional Analysis Operator Learning: Dependence on Training Data
Convolutional analysis operator learning (CAOL) enables the unsupervised training of (hierarchical) convolutional sparsifying operators or autoencoders from large datasets.
Physics-Informed Neural Operator for Learning Partial Differential Equations
This is because PINO learns the solution operator by optimizing PDE constraints over multiple instances, while a PINN optimizes the PDE constraints of a single PDE instance.
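The PINO-vs-PINN contrast in this snippet can be illustrated with a toy residual loss. The residual, the operator `G`, and the instances below are hypothetical stand-ins for illustration only, not the paper's actual setup:

```python
import numpy as np

def pde_residual(u, a):
    # Toy "PDE" u'(x) = a(x) on a uniform grid (hypothetical stand-in
    # for a real differential operator).
    return np.gradient(u) - a

def pinn_loss(u, a_fixed):
    # PINN-style objective: penalize the residual of ONE candidate
    # solution u for ONE fixed instance a_fixed.
    return np.mean(pde_residual(u, a_fixed) ** 2)

def pino_loss(G, instances):
    # PINO-style objective: penalize the residual of a single OPERATOR G
    # averaged across MANY instances a.
    return np.mean([np.mean(pde_residual(G(a), a) ** 2) for a in instances])

# Exercise the two losses with stand-in data.
u = np.linspace(0.0, 1.0, 16)
a_fixed = np.ones(16)
single = pinn_loss(u, a_fixed)
amortized = pino_loss(np.cumsum, [np.random.randn(16) for _ in range(4)])
```

Minimizing `pino_loss` over a parameterized `G` amortizes the solve across PDE instances, whereas `pinn_loss` must be re-minimized from scratch for each new instance.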
Importance Weight Estimation and Generalization in Domain Adaptation under Label Shift
We deploy these estimators and provide generalization bounds in the unlabeled target domain.
Learning Symbolic Operators for Task and Motion Planning
We then propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.
Choose a Transformer: Fourier or Galerkin
Without softmax, the approximation capacity of a linearized Transformer variant can be proven comparable, layer-wise, to a Petrov-Galerkin projection, and the estimate is independent of the sequence length.
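The softmax-free idea in this snippet amounts to re-associating the attention product. A minimal sketch, assuming single-head attention and omitting the layer normalizations the paper applies to keys and values:

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: materializes an (n, n) matrix, O(n^2) in sequence length n.
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[-1]))
    return (A / A.sum(-1, keepdims=True)) @ V

def galerkin_attention(Q, K, V):
    # Softmax-free, Galerkin-type attention (sketch): re-associate as
    # Q (K^T V) / n, which costs O(n d^2) -- linear in n.
    n = K.shape[0]
    return Q @ (K.T @ V) / n

n, d = 64, 8
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = galerkin_attention(Q, K, V)  # same (n, d) output shape as softmax attention
```

Because the (n, n) score matrix is never formed, the linearized variant scales to long discretized-function inputs where quadratic attention is prohibitive.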
Neural Operator: Learning Maps Between Function Spaces
The classical development of neural networks has primarily focused on learning mappings between finite dimensional Euclidean spaces or finite sets.
Multiwavelet-based Operator Learning for Differential Equations
The solution of a partial differential equation can be obtained by computing the inverse operator map between the input and the solution space.
Improved architectures and training algorithms for deep operator networks
In this work we analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory, and reveal a bias that favors the approximation of functions with larger magnitudes.
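The deep operator network architecture analyzed in this snippet pairs a branch net (encoding the input function) with a trunk net (encoding query locations). A minimal forward-pass sketch; the random linear "networks" below are hypothetical stand-ins just to exercise the shapes:

```python
import numpy as np

def deeponet_forward(branch, trunk, u_sensors, y_points):
    # DeepONet (sketch): G(u)(y) = sum_k b_k(u) * t_k(y), an inner product
    # of branch coefficients and trunk basis functions.
    b = branch(u_sensors)   # (p,) coefficients from the input function
    t = trunk(y_points)     # (num_points, p) basis evaluated at queries y
    return t @ b            # (num_points,) predicted values of G(u)(y)

p, m = 16, 32
Wb = np.random.randn(p, m)
Wt = np.random.randn(p, 1)
branch = lambda u: np.tanh(Wb @ u)      # stand-in branch net
trunk = lambda y: np.tanh(y @ Wt.T)     # stand-in trunk net

u = np.random.randn(m)               # input function sampled at m fixed sensors
y = np.linspace(0.0, 1.0, 10)[:, None]  # 10 query locations in [0, 1]
pred = deeponet_forward(branch, trunk, u, y)
```

The NTK analysis in the paper concerns how gradient descent trains such branch-trunk pairs; the magnitude bias it reveals motivates reweighting the per-sample losses during training.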
Fast PDE-constrained optimization via self-supervised operator learning
Design and optimal control problems are among the fundamental, ubiquitous tasks we face in science and engineering.