no code implementations • 12 Feb 2024 • Yuxiao Wen, Arthur Jacot
We describe the emergence of a Convolution Bottleneck (CBN) structure in CNNs, where the network uses its first few layers to transform the input representation into a representation that is supported only along a few frequencies and channels, before using the last few layers to map back to the outputs.
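The claim that intermediate representations are "supported only along a few frequencies" can be made concrete by counting how many Fourier modes carry most of a feature map's energy. The sketch below is a minimal illustration of that diagnostic, not the paper's methodology; `frequency_support` is a hypothetical helper, and the cosine input stands in for a band-limited intermediate representation.

```python
import numpy as np

def frequency_support(feature_map, energy_threshold=0.99):
    """Count how many 2D Fourier frequencies carry a given fraction of
    the total spectral energy of a (channels, H, W) feature map."""
    spectrum = np.fft.fft2(feature_map, axes=(-2, -1))
    energy = np.abs(spectrum) ** 2
    per_freq = energy.sum(axis=0).ravel()      # aggregate over channels
    order = np.argsort(per_freq)[::-1]         # strongest frequencies first
    cumulative = np.cumsum(per_freq[order])
    total = cumulative[-1]
    return int(np.searchsorted(cumulative, energy_threshold * total) + 1)

# A band-limited "representation": a single cosine along the width axis,
# constant along the height axis, in one channel.
x = np.cos(2 * np.pi * np.arange(16) / 16)
fmap = np.tile(x, (1, 16, 1))                  # shape (1, 16, 16)
print(frequency_support(fmap))                 # → 2 (one conjugate frequency pair)
```

A representation concentrated on few frequencies yields a small count, while white noise spreads energy over most of the 256 available modes.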
no code implementations • 12 Feb 2024 • Yuxiao Wen, Yanjun Han, Zhengyuan Zhou
Interestingly, $\beta_M(G)$ interpolates between $\alpha(G)$ (the independence number of the graph) and $\mathsf{m}(G)$ (the maximum acyclic subgraph (MAS) number of the graph) as the number of contexts $M$ varies.
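$\beta_M(G)$ itself is defined in the paper, but the two endpoints of the interpolation can be brute-forced for tiny graphs. The hedged sketch below computes $\alpha(G)$ and $\mathsf{m}(G)$ by exhaustive search on a toy directed graph, illustrating that $\alpha(G) \le \mathsf{m}(G)$ (an independent set induces no edges, hence no directed cycle); the helper names are illustrative, not from the paper.

```python
from itertools import combinations

def alpha(n, edges):
    """Independence number α(G): largest vertex set with no edge
    between its members (edge direction ignored)."""
    und = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            ss = set(s)
            if all(not e <= ss for e in und):
                return k
    return 0

def mas(n, edges):
    """MAS number m(G): largest vertex set whose induced directed
    subgraph contains no directed cycle (checked by DFS)."""
    def acyclic(s):
        seen, done = set(), set()
        def dfs(v):
            seen.add(v)
            for (u, w) in edges:
                if u != v or w not in s or w in done:
                    continue
                if w in seen or not dfs(w):
                    return False           # gray vertex: directed cycle
            done.add(v)
            return True
        return all(v in done or dfs(v) for v in s)
    for k in range(n, 0, -1):
        for s in combinations(range(n), k):
            if acyclic(set(s)):
                return k
    return 0

# A directed 3-cycle on {0, 1, 2} plus an isolated vertex 3
edges = [(0, 1), (1, 2), (2, 0)]
print(alpha(4, edges), mas(4, edges))      # → 2 3, so α(G) = 2 ≤ m(G) = 3
```

Here the best independent set pairs the isolated vertex with one cycle vertex, while the MAS can additionally keep two cycle vertices, since dropping one vertex breaks the directed cycle.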
no code implementations • 27 Jun 2023 • Yuxiao Wen, Eric Vanden-Eijnden, Benjamin Peherstorfer
Training nonlinear parametrizations such as deep neural networks to numerically approximate solutions of partial differential equations is often based on minimizing a loss that includes the residual, which is available analytically only in limited settings.
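One of those limited settings is a parametrization simple enough that the residual has a closed form. The sketch below is a hypothetical one-parameter example, not the paper's method: for the heat equation $u_t = u_{xx}$ with ansatz $u(t,x;\theta) = \sin(\pi x)e^{-\theta t}$, the residual is $(\pi^2 - \theta)\sin(\pi x)e^{-\theta t}$, and minimizing its mean square over random collocation points drives $\theta \to \pi^2$.

```python
import numpy as np

# Random collocation points in (t, x) ∈ [0, 1] × [0, 1]
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 200)
x = rng.uniform(0, 1, 200)

def residual(theta):
    # For u(t, x; θ) = sin(πx) e^{-θt}, the heat-equation residual
    # u_t - u_xx = (π² - θ) sin(πx) e^{-θt} is available in closed form.
    return (np.pi**2 - theta) * np.sin(np.pi * x) * np.exp(-theta * t)

def loss(theta):
    return np.mean(residual(theta) ** 2)

# Gradient descent on the residual loss (central finite-difference gradient)
theta, lr, h = 5.0, 0.5, 1e-6
for _ in range(3000):
    grad = (loss(theta + h) - loss(theta - h)) / (2 * h)
    theta -= lr * grad
print(round(theta, 4))                     # → 9.8696, i.e. θ ≈ π²
```

When the residual cannot be written down like this, the loss itself must be approximated, which is the regime the paper addresses.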
no code implementations • 20 Jul 2021 • Wayne Isaac Tan Uy, Yuepeng Wang, Yuxiao Wen, Benjamin Peherstorfer
Furthermore, the connection between operator inference and projection-based model reduction makes it possible to bound the mean-squared error of predictions made with the learned models relative to traditional reduced models.
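The core operator-inference idea can be sketched in a few lines: fit a system operator to snapshot data by least squares. The toy below recovers the operator of a linear ODE $\dot{x} = Ax$ from a simulated trajectory; it is an illustrative simplification (the paper's setting works with reduced states and comes with the error bounds mentioned above), and `A_true` and the Euler setup are assumptions for the demo.

```python
import numpy as np

# Toy operator inference: recover A in dx/dt = A x from snapshots.
rng = np.random.default_rng(1)
A_true = np.array([[-1.0, 2.0], [-2.0, -1.0]])

dt, steps = 1e-3, 500
X = np.empty((steps + 1, 2))
X[0] = rng.normal(size=2)
for k in range(steps):                     # forward-Euler trajectory
    X[k + 1] = X[k] + dt * (A_true @ X[k])

# Approximate time derivatives, then solve min_A ||X A^T - dX/dt||²
dXdt = (X[1:] - X[:-1]) / dt
A_fit, *_ = np.linalg.lstsq(X[:-1], dXdt, rcond=None)
print(np.round(A_fit.T, 2))                # → recovers A_true
```

Because the finite-difference derivatives are exactly consistent with the Euler steps here, the least-squares fit reproduces `A_true` to machine precision; with real trajectory data the residual is nonzero, which is where error bounds of the kind discussed above become relevant.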