no code implementations • 9 Aug 2024 • Chuwei Wang, Julius Berner, Zongyi Li, Di Zhou, Jiayun Wang, Jane Bae, Anima Anandkumar
We propose an alternative end-to-end learning approach using a physics-informed neural operator (PINO) that overcomes this limitation by requiring neither a closure model nor a coarse-grid solver.
no code implementations • 6 May 2024 • Chengxin Zhao, Hefei Ling, Sijing Xie, Nan Sun, Zongyi Li, Yuxuan Shi, Jiazhong Chen
In addition, we introduce an extra segmentation head to segment the mask of the embedding region during training.
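A minimal sketch of what such an auxiliary segmentation head could look like, assuming a PyTorch model; the backbone, channel counts, and head names here are illustrative placeholders rather than the authors' implementation. During training, a pixel-wise loss (e.g., binary cross-entropy against the known embedding-region mask) would be added to the main decoding loss.

```python
# Hedged sketch: a shared backbone with an auxiliary head that predicts the
# watermark embedding-region mask during training. All names and shapes are
# illustrative placeholders, not the paper's architecture.
import torch.nn as nn

class DecoderWithSegHead(nn.Module):
    def __init__(self, backbone, feat_ch, msg_bits):
        super().__init__()
        self.backbone = backbone                       # shared feature extractor
        self.msg_head = nn.Linear(feat_ch, msg_bits)   # main task: message decoding
        self.seg_head = nn.Conv2d(feat_ch, 1, 1)       # auxiliary: region-mask logits

    def forward(self, x):
        feats = self.backbone(x)                       # (B, C, H, W)
        msg = self.msg_head(feats.mean(dim=(2, 3)))    # pooled message logits
        mask = self.seg_head(feats)                    # per-pixel mask logits
        return msg, mask
```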
1 code implementation • 19 Mar 2024 • Md Ashiqur Rahman, Robert Joseph George, Mogab Elleithy, Daniel Leibovici, Zongyi Li, Boris Bonev, Colin White, Julius Berner, Raymond A. Yeh, Jean Kossaifi, Kamyar Azizzadenesheli, Anima Anandkumar
On complex downstream tasks with limited data, such as fluid flow simulations and fluid-structure interactions, we find that CoDA-NO outperforms existing methods on the few-shot learning task by over 36%.
no code implementations • 10 Nov 2023 • Vignesh Gopakumar, Stanislas Pamela, Lorenzo Zanisi, Zongyi Li, Ander Gray, Daniel Brennand, Nitesh Bhatia, Gregory Stathopoulos, Matt Kusner, Marc Peter Deisenroth, Anima Anandkumar, JOREK Team, MAST Team
Predicting plasma evolution within a Tokamak reactor is crucial to realizing the goal of sustainable fusion.
1 code implementation • 17 Oct 2023 • Zongyi Li, Hongbing Lyu, Jun Wang
One of the key design features of U-Net is the use of skip connections between the encoder and decoder, which helps recover detailed information after upsampling.
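For reference, here is a minimal, hedged sketch of a U-Net-style decoder block with a skip connection, assuming PyTorch; the channel sizes and layer choices are illustrative only.

```python
# Minimal sketch of a U-Net skip connection (illustrative, PyTorch): the
# encoder feature map is concatenated with the upsampled decoder feature so
# that detail lost during downsampling can be recovered.
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # upsample decoder feature
        x = torch.cat([x, skip], dim=1)  # skip connection: reuse encoder feature
        return self.conv(x)
```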
no code implementations • 27 Sep 2023 • Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, Anima Anandkumar
Scientific discovery and engineering design are currently limited by the time and cost of physical experiments, which are selected mostly through trial and error and intuition that require deep domain expertise.
no code implementations • 26 Apr 2023 • Samuel Lanthaler, Zongyi Li, Andrew M. Stuart
A popular variant of neural operators is the Fourier neural operator (FNO).
no code implementations • 19 Jan 2023 • Peter I Renn, Cong Wang, Sahin Lale, Zongyi Li, Anima Anandkumar, Morteza Gharib
The learned FNO solution operator can be evaluated in milliseconds, potentially enabling faster-than-real-time modeling for predictive flow control in physical systems.
no code implementations • 29 Nov 2022 • Haydn Maust, Zongyi Li, YiXuan Wang, Daniel Leibovici, Oscar Bruno, Thomas Hou, Anima Anandkumar
The physics-informed neural operator (PINO) is a machine learning architecture that has shown promising empirical results for learning partial differential equations.
no code implementations • 28 Nov 2022 • Yuanyuan Shi, Zongyi Li, Huan Yu, Drew Steeves, Anima Anandkumar, Miroslav Krstic
State estimation is important for a variety of tasks, from forecasting to substituting for unmeasured states in feedback controllers.
no code implementations • 28 Nov 2022 • Robert Joseph George, Jiawei Zhao, Jean Kossaifi, Zongyi Li, Anima Anandkumar
Fourier Neural Operators (FNOs) offer a principled approach to solving challenging partial differential equations (PDEs), such as those governing turbulent flows.
no code implementations • 31 Oct 2022 • Gege Wen, Zongyi Li, Qirui Long, Kamyar Azizzadenesheli, Anima Anandkumar, Sally M. Benson
Carbon capture and storage (CCS) plays an essential role in global decarbonization.
no code implementations • 28 Oct 2022 • Fengfan Zhou, Hefei Ling, Yuxuan Shi, Jiazhong Chen, Zongyi Li, Ping Li
Although generating hard samples has proven effective for improving the generalization of models during training, the use of this idea to improve the transferability of adversarial face examples remains unexplored.
no code implementations • 27 Oct 2022 • Mingjie Liu, HaoYu Yang, Zongyi Li, Kumara Sastry, Saumyadip Mukhopadhyay, Selim Dogru, Anima Anandkumar, David Z. Pan, Brucek Khailany, Haoxing Ren
These synthetic mask images will augment the original limited training dataset used to finetune the lithography model for improved performance.
6 code implementations • 11 Jul 2022 • Zongyi Li, Daniel Zhengyu Huang, Burigede Liu, Anima Anandkumar
The resulting geo-FNO model has both the computation efficiency of FFT and the flexibility of handling arbitrary geometries.
no code implementations • 8 Jul 2022 • HaoYu Yang, Zongyi Li, Kumara Sastry, Saumyadip Mukhopadhyay, Anima Anandkumar, Brucek Khailany, Vivek Singh, Haoxing Ren
Machine learning techniques have been extensively studied for mask optimization problems, aiming at better mask printability, shorter turnaround time, better mask manufacturability, and so on.
no code implementations • 12 Mar 2022 • HaoYu Yang, Zongyi Li, Kumara Sastry, Saumyadip Mukhopadhyay, Mark Kilgard, Anima Anandkumar, Brucek Khailany, Vivek Singh, Haoxing Ren
Lithography simulation is a critical step in VLSI design and optimization for manufacturability.
5 code implementations • 22 Feb 2022 • Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, Pedram Hassanzadeh, Karthik Kashinath, Animashree Anandkumar
FourCastNet accurately forecasts high-resolution, fast-timescale variables such as the surface wind speed, precipitation, and atmospheric water vapor.
3 code implementations • 24 Nov 2021 • John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, Bryan Catanzaro
AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution.
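A hedged sketch of the general idea of FFT-based token mixing (not the authors' exact block-diagonal AFNO mixer): tokens on a 2D grid are filtered in the frequency domain, which acts as a global convolution and does not depend on the grid resolution.

```python
# Illustrative FFT-based token mixing on a 2D token grid (channels last).
# `weight` is a shared complex channel-mixing matrix; AFNO's actual mixer is
# a block-diagonal MLP with soft-thresholding, omitted here for brevity.
import torch

def spectral_token_mixing(x, weight):
    # x: (batch, H, W, C) real-valued tokens; weight: (C, C) complex matrix.
    x_hat = torch.fft.rfft2(x, dim=(1, 2))                 # global spectral transform
    x_hat = torch.einsum("bhwc,cd->bhwd", x_hat, weight)   # per-mode channel mixing
    return torch.fft.irfft2(x_hat, s=x.shape[1:3], dim=(1, 2))
```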
4 code implementations • 6 Nov 2021 • Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, Anima Anandkumar
Specifically, in PINO, we combine coarse-resolution training data with PDE constraints imposed at a higher resolution.
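A hedged sketch of such a combined objective, assuming a resolution-invariant operator `model`, a user-supplied `pde_residual` function, and illustrative loss weights; this is not the authors' training code.

```python
# Sketch of a PINO-style loss: supervised data term on coarse-resolution
# samples plus a PDE-residual term evaluated on a finer discretization.
import torch

def pino_loss(model, a_coarse, u_coarse, a_fine, pde_residual,
              w_data=1.0, w_pde=1.0):
    # Data term: match the available coarse-resolution solutions.
    data_loss = torch.mean((model(a_coarse) - u_coarse) ** 2)

    # Physics term: enforce the governing PDE on a higher-resolution grid,
    # which is possible because the operator can be queried at any resolution.
    pde_loss = torch.mean(pde_residual(model(a_fine), a_fine) ** 2)

    return w_data * data_loss + w_pde * pde_loss
```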
no code implementations • ICLR 2022 • John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, Bryan Catanzaro
AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution.
1 code implementation • 3 Sep 2021 • Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, Sally M. Benson
Here we present U-FNO, a novel neural network architecture for solving multiphase flow problems with superior accuracy, speed, and data efficiency.
1 code implementation • EMNLP 2021 • Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh
Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
1 code implementation • 19 Aug 2021 • Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets.
2 code implementations • 13 Jun 2021 • Zongyi Li, Miguel Liu-Schiaffini, Nikola Kovachki, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
Chaotic systems are notoriously challenging to predict because of their sensitivity to perturbations and errors due to time stepping.
1 code implementation • 2 Mar 2021 • Lihao Wang, Zongyi Li, Xiaoqing Zheng
We present an unsupervised word segmentation model in which the learning objective is to maximize the generation probability of a sentence over all of its possible segmentations.
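For intuition, a minimal sketch of how a sentence's probability can be summed over all possible segmentations with a forward recursion; `word_prob` is a placeholder word model, not the paper's neural model.

```python
# Forward recursion over all possible segmentations: alpha[i] is the total
# probability of generating the first i characters, summed over every way
# of splitting them into words of length <= max_word_len.
def sentence_probability(chars, word_prob, max_word_len=4):
    n = len(chars)
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            alpha[i] += alpha[j] * word_prob(chars[j:i])
    return alpha[n]
```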
19 code implementations • ICLR 2021 • Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces.
no code implementations • 16 Sep 2020 • Zongyi Li, Xiaoqing Zheng, Jun He
We present RepRank, an unsupervised graph-based ranking model for extractive multi-document summarization, in which the similarities between words, between sentences, and between words and sentences can be estimated from the distances between their vector representations in a unified vector space.
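A hedged sketch of the generic graph-based ranking step (a TextRank-style power iteration over an embedding-similarity graph); RepRank's specific word-and-sentence graph construction is not reproduced here.

```python
# Illustrative sentence ranking: build a cosine-similarity graph over sentence
# embeddings and run a damped power iteration to obtain salience scores.
import numpy as np

def rank_sentences(sent_vecs, damping=0.85, iters=50):
    v = sent_vecs / (np.linalg.norm(sent_vecs, axis=1, keepdims=True) + 1e-12)
    sim = v @ v.T                                            # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)                               # no self-links
    trans = sim / (sim.sum(axis=1, keepdims=True) + 1e-12)   # row-stochastic transitions
    n = len(sent_vecs)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * trans.T @ scores
    return scores
```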
4 code implementations • NeurIPS 2020 • Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
One of the main challenges in using deep learning-based methods for simulating physical systems and solving partial differential equations (PDEs) is formulating physics-based data in the desired structure for neural networks.
6 code implementations • ICLR Workshop DeepDiffEq 2019 • Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
The classical development of neural networks has been primarily for mappings between a finite-dimensional Euclidean space and a set of classes, or between two finite-dimensional Euclidean spaces.
1 code implementation • 6 Jun 2018 • Diego Calderon, Brendan Juba, Sirui Li, Zongyi Li, Lisa Ruan
Work in machine learning and statistics commonly focuses on building models that capture the vast majority of data, possibly ignoring a segment of the population as outliers.
no code implementations • 13 Nov 2017 • Brendan Juba, Zongyi Li, Evan Miller
The main shortcoming of this formulation of the task is that it assumes access to full-information (i.e., fully specified) examples; relatedly, it offers no role for declarative background knowledge, as such knowledge is rendered redundant in the abduction task by complete information.