no code implementations • 10 Dec 2023 • Oded Ovadia, Menachem Brief, Moshik Mishaeli, Oren Elisha
Large language models (LLMs) encapsulate a vast amount of factual information within their pre-trained weights, as evidenced by their ability to answer diverse questions across different domains.
no code implementations • 18 Jul 2023 • Oded Ovadia, Vivek Oommen, Adar Kahana, Ahmad Peyvan, Eli Turkel, George Em Karniadakis
The proposed method, named Diffusion-inspired Temporal Transformer Operator (DiTTO), draws on latent diffusion models and their conditioning mechanism, which we use to incorporate the temporal evolution of the PDE, combined with elements of the transformer architecture to improve the model's capabilities.
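The conditioning idea borrowed from diffusion models can be illustrated with a minimal sketch: embed the continuous PDE time with sinusoidal features and add the embedding to every spatial token before the transformer layers. This is an assumption-laden toy (the names `timestep_embedding` and `condition_tokens` are illustrative, not from the paper), not the authors' implementation.

```python
import numpy as np

def timestep_embedding(t, dim):
    """Sinusoidal embedding of the continuous PDE time t (diffusion-style)."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    args = np.asarray(t)[:, None] * freqs[None, :]
    return np.concatenate([np.sin(args), np.cos(args)], axis=-1)

def condition_tokens(x, t):
    """Shift every spatial token by the time embedding before any
    transformer layer sees it -- the conditioning mechanism in a nutshell."""
    emb = timestep_embedding(t, x.shape[-1])  # (batch, d_model)
    return x + emb[:, None, :]                # broadcast over tokens

x = np.zeros((2, 16, 64))   # batch of 2 solution states, 16 spatial tokens
t = [0.1, 0.5]              # two query times along the PDE's evolution
y = condition_tokens(x, t)
print(y.shape)              # (2, 16, 64)
```

Because the time enters as a continuous embedding rather than a discrete index, the same trained network can be queried at arbitrary times.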
no code implementations • 8 Jul 2023 • Maria Luisa Taccari, Oded Ovadia, He Wang, Adar Kahana, Xiaohui Chen, Peter K. Jimack
This paper presents a comprehensive comparison of various machine learning models, namely U-Net, U-Net integrated with Vision Transformers (ViT), and Fourier Neural Operator (FNO), for time-dependent forward modelling in groundwater systems.
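The core operation that distinguishes the FNO from the convolutional baselines is its spectral convolution. A minimal one-dimensional, single-channel sketch (the full model adds channel mixing, nonlinearities, and 2-D transforms) looks like:

```python
import numpy as np

def fourier_layer(u, weights, modes):
    """One spectral convolution: FFT the field, keep the lowest `modes`
    frequencies, multiply them by learned complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u, axis=-1)                  # (batch, n//2 + 1)
    out_hat = np.zeros_like(u_hat)
    out_hat[:, :modes] = u_hat[:, :modes] * weights  # learned spectral mixing
    return np.fft.irfft(out_hat, n=u.shape[-1], axis=-1)

rng = np.random.default_rng(0)
u = rng.standard_normal((4, 64))   # e.g. 4 hydraulic-head profiles
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)  # stand-in for trained weights
v = fourier_layer(u, w, modes=8)
print(v.shape)                     # (4, 64)
```

Truncating to a fixed number of Fourier modes makes the learned operator resolution-independent, which is one reason FNOs are attractive for gridded groundwater fields.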
no code implementations • 15 Mar 2023 • Oded Ovadia, Adar Kahana, Panos Stinis, Eli Turkel, George Em Karniadakis
We combine vision transformers with operator learning to solve diverse inverse problems described by partial differential equations (PDEs).
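The vision-transformer front end of such a hybrid reduces a 2-D PDE measurement to a sequence of flattened patches. A sketch of that tokenization step (the `patchify` helper is hypothetical, shown only to illustrate how a field becomes transformer input):

```python
import numpy as np

def patchify(field, p):
    """Split a 2-D field (e.g. a boundary measurement) into p x p patches
    and flatten each one -- the ViT tokenization step."""
    h, w = field.shape
    patches = field.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)  # (num_tokens, p*p)

field = np.arange(64.0).reshape(8, 8)  # toy 8x8 measurement grid
tokens = patchify(field, p=4)
print(tokens.shape)                    # (4, 16)
```

Each token is then linearly projected and passed through attention layers, with the operator-learning head mapping the encoded sequence back to the unknown PDE coefficient or source.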
no code implementations • 22 May 2022 • Oded Ovadia, Adar Kahana, Eli Turkel
We propose an accurate numerical scheme for approximating the solution of the two-dimensional acoustic wave problem.
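For context, the baseline such schemes improve on is the standard explicit second-order leapfrog discretization of the 2-D wave equation. The sketch below is that textbook scheme, not the paper's proposed method; zero boundaries and the CFL factor are simplifying assumptions.

```python
import numpy as np

def wave_step(u_prev, u_curr, c, dt, dx):
    """One explicit second-order time step for u_tt = c^2 (u_xx + u_yy)
    using a 5-point Laplacian; boundaries are held at zero for brevity."""
    lap = np.zeros_like(u_curr)
    lap[1:-1, 1:-1] = (u_curr[2:, 1:-1] + u_curr[:-2, 1:-1] +
                       u_curr[1:-1, 2:] + u_curr[1:-1, :-2] -
                       4.0 * u_curr[1:-1, 1:-1])
    return 2.0 * u_curr - u_prev + (c * dt / dx) ** 2 * lap

n, dx, c = 64, 1.0 / 64, 1.0
dt = 0.5 * dx / c                    # within the 2-D CFL limit (1/sqrt(2))
u0 = np.zeros((n, n))
u0[n // 2, n // 2] = 1.0             # impulsive point source
u1 = u0.copy()
for _ in range(10):
    u0, u1 = u1, wave_step(u0, u1, c, dt, dx)
print(u1.shape)                      # (64, 64)
```

Higher-order or optimized stencils trade extra work per step for smaller dispersion error, which is what "accurate" typically targets in this setting.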