Search Results for author: Tailin Wu

Found 21 papers, 16 papers with code

Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution

2 code implementations · 13 Feb 2024 · Tailin Wu, Willie Neiswanger, Hongtao Zheng, Stefano Ermon, Jure Leskovec

Deep learning-based surrogate models have demonstrated remarkable speed advantages, often achieving speedups of 10 to 1000 times over traditional partial differential equation (PDE) solvers.

Decision Making, Uncertainty Quantification

Compositional Generative Inverse Design

1 code implementation · 24 Jan 2024 · Tailin Wu, Takashi Maruyama, Long Wei, Tao Zhang, Yilun Du, Gianluca Iaccarino, Jure Leskovec

In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes that are more complex than those in the training data.
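Composing generative models at test time is commonly done by summing the score functions of independently trained models, which samples from the product of their densities. A minimal 1D toy sketch of this composition (two analytic Gaussian scores combined via Langevin dynamics; illustrative only, not the paper's implementation):

```python
import numpy as np

def gaussian_score(x, mu, sigma):
    """Score (gradient of the log-density) of N(mu, sigma^2)."""
    return -(x - mu) / sigma**2

def composed_langevin(scores, n_chains=2000, n_steps=2000, eps=0.01, seed=0):
    """Sample from the product of the densities whose scores are given,
    by running unadjusted Langevin dynamics on the summed score."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_chains)
    for _ in range(n_steps):
        s = sum(f(x) for f in scores)
        x = x + 0.5 * eps * s + np.sqrt(eps) * rng.normal(size=n_chains)
    return x

# Compose N(0, 1) and N(4, 1); their normalized product is N(2, 0.5).
samples = composed_langevin([
    lambda x: gaussian_score(x, 0.0, 1.0),
    lambda x: gaussian_score(x, 4.0, 1.0),
])
print(samples.mean(), samples.var())  # ≈ 2.0 and ≈ 0.5
```

The same additivity is what lets a diffusion model trained on simple components be recombined into more complex designs at inference time.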

BENO: Boundary-embedded Neural Operators for Elliptic PDEs

1 code implementation · 17 Jan 2024 · Haixin Wang, Jiaxin Li, Anubhav Dwivedi, Kentaro Hara, Tailin Wu

Here we introduce Boundary-Embedded Neural Operators (BENO), a novel neural operator architecture that embeds the complex geometries and inhomogeneous boundary values into the solving of elliptic PDEs.

How Well Does GPT-4V(ision) Adapt to Distribution Shifts? A Preliminary Investigation

1 code implementation · 12 Dec 2023 · Zhongyi Han, Guanglin Zhou, Rundong He, Jindong Wang, Tailin Wu, Yilong Yin, Salman Khan, Lina Yao, Tongliang Liu, Kun Zhang

We further investigate its adaptability to controlled data perturbations and examine the efficacy of in-context learning as a tool to enhance its adaptation.

Anomaly Detection, Autonomous Driving +6

Learning Controllable Adaptive Simulation for Multi-resolution Physics

1 code implementation · 1 May 2023 · Tailin Wu, Takashi Maruyama, Qingqing Zhao, Gordon Wetzstein, Jure Leskovec

In this work, we introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model that jointly learns the evolution model and optimizes appropriate spatial resolutions that devote more compute to the highly dynamic regions.

ViRel: Unsupervised Visual Relations Discovery with Graph-level Analogy

no code implementations · 4 Jul 2022 · Daniel Zeng, Tailin Wu, Jure Leskovec

Here, we introduce ViRel, a method for unsupervised discovery and learning of Visual Relations with graph-level analogy.

Relation, Relation Classification

ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and Acquisition at Inference Time

1 code implementation · 30 Jun 2022 · Tailin Wu, Megan Tjandrasuwita, Zhengxuan Wu, Xuelin Yang, Kevin Liu, Rok Sosič, Jure Leskovec

In this work, we introduce Zero-shot Concept Recognition and Acquisition (ZeroC), a neuro-symbolic architecture that can recognize and acquire novel concepts in a zero-shot way.

Novel Concepts

Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator

no code implementations · 15 Jun 2022 · Tailin Wu, Qinchen Wang, Yinan Zhang, Rex Ying, Kaidi Cao, Rok Sosič, Ridwan Jalali, Hassan Hamam, Marko Maucec, Jure Leskovec

To model complex reservoir dynamics at both local and global scale, HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.

Decision Making

Learning to Accelerate Partial Differential Equations via Latent Global Evolution

1 code implementation · 15 Jun 2022 · Tailin Wu, Takashi Maruyama, Jure Leskovec

We test our method on a 1D benchmark of nonlinear PDEs, 2D Navier-Stokes flows entering the turbulent phase, and an inverse optimization of boundary conditions in 2D Navier-Stokes flow.

Weather Forecasting

Graph Information Bottleneck

1 code implementation NeurIPS 2020 Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec

We design two sampling algorithms for structural regularization and instantiate the GIB principle with two new models: GIB-Cat and GIB-Bern, and demonstrate the benefits by evaluating the resilience to adversarial attacks.

Representation Learning

AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity

2 code implementations NeurIPS 2020 Silviu-Marian Udrescu, Andrew Tan, Jiahai Feng, Orisvaldo Neto, Tailin Wu, Max Tegmark

We present an improved method for symbolic regression that seeks to fit data to formulas that are Pareto-optimal, in the sense of having the best accuracy for a given complexity.

regression, Symbolic Regression +1
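Pareto-optimality here means no retained formula is dominated by another that is both simpler and more accurate. A small illustrative helper (not the AI Feynman code) that filters candidate formulas, each scored as a (complexity, error) pair, down to the Pareto front:

```python
def pareto_front(candidates):
    """Keep the (complexity, error) pairs not dominated by any other.

    A point is dropped if some other point is <= in both costs and
    strictly < in at least one.
    """
    front = []
    for i, (ci, ei) in enumerate(candidates):
        dominated = any(
            cj <= ci and ej <= ei and (cj < ci or ej < ei)
            for j, (cj, ej) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((ci, ei))
    return sorted(front)

# Three formulas: the middle one is dominated by the first
# (same error, higher complexity), so only two survive.
print(pareto_front([(3, 0.10), (5, 0.10), (8, 0.01)]))
# → [(3, 0.1), (8, 0.01)]
```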

Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning

1 code implementation · 11 Jan 2020 · Tailin Wu

First, how can we make learning models more flexible and efficient, so that agents can learn quickly from fewer examples?

BIG-bench Machine Learning, Causal Discovery +3

Discovering Nonlinear Relations with Minimum Predictive Information Regularization

1 code implementation · 7 Jan 2020 · Tailin Wu, Thomas Breuel, Michael Skuhersky, Jan Kautz

Identifying the underlying directional relations from observational time series with nonlinear interactions and complex relational structures is key to a wide range of applications, yet remains a hard problem.

Time Series, Time Series Analysis

Phase Transitions for the Information Bottleneck in Representation Learning

no code implementations ICLR 2020 Tailin Wu, Ian Fischer

In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation?

Representation Learning
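For reference, the objective the question refers to is the standard Information Bottleneck Lagrangian, in which a representation $Z$ of input $X$ trades off compression against prediction of $Y$, with $\beta$ setting the relative strength of the two terms:

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

Sweeping $\beta$ traces out the compression-prediction trajectory along which the paper studies phase transitions.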

Pareto-optimal data compression for binary classification tasks

1 code implementation · 23 Aug 2019 · Max Tegmark, Tailin Wu

The goal of lossy data compression is to reduce the storage cost of a data set $X$ while retaining as much information as possible about something ($Y$) that you care about.

Binary Classification, Classification +5

Learnability for the Information Bottleneck

no code implementations ICLR Workshop LLD 2019 Tailin Wu, Ian Fischer, Isaac L. Chuang, Max Tegmark

However, in practice, not only is $\beta$ chosen empirically without theoretical guidance, there is also a lack of theoretical understanding between $\beta$, learnability, the intrinsic nature of the dataset and model capacity.

Representation Learning

Neural Causal Discovery with Learnable Input Noise

no code implementations ICLR 2019 Tailin Wu, Thomas Breuel, Jan Kautz

Learning causal relations from observational time series with nonlinear interactions and complex causal structures is a key component of human intelligence, and has a wide range of applications.

Causal Discovery, EEG +2

Toward an AI Physicist for Unsupervised Learning

1 code implementation · 24 Oct 2018 · Tailin Wu, Max Tegmark

We investigate opportunities and challenges for improving unsupervised machine learning using four common strategies with a long history in physics: divide-and-conquer, Occam's razor, unification and lifelong learning.

Meta-learning autoencoders for few-shot prediction

1 code implementation · 26 Jul 2018 · Tailin Wu, John Peurifoy, Isaac L. Chuang, Max Tegmark

Compared to humans, machine learning models generally require significantly more training examples and fail to extrapolate from experience to solve previously unseen challenges.

Meta-Learning

Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels

2 code implementations · 4 May 2017 · Curtis G. Northcutt, Tailin Wu, Isaac L. Chuang

To highlight, RP with a CNN classifier can predict whether an MNIST digit is a "one" or "not" with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples.

Binary Classification, General Classification +2
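The core idea behind Rank Pruning is to use out-of-sample predicted probabilities to flag examples whose observed label is confidently contradicted, then train only on the rest. A heavily simplified sketch of that pruning step (illustrative mean-probability thresholds, not the authors' noise-rate estimators):

```python
import numpy as np

def prune_noisy_labels(probs, labels):
    """Flag likely-mislabeled examples in a binary task.

    probs:  out-of-sample P(y=1 | x) for each example
    labels: observed (possibly noisy) 0/1 labels

    An example labeled 1 is flagged if its probability falls at or
    below the mean probability of the negatively labeled class, and
    vice versa; the returned mask keeps the remaining examples.
    """
    probs, labels = np.asarray(probs), np.asarray(labels)
    t_pos = probs[labels == 1].mean()  # typical score of labeled positives
    t_neg = probs[labels == 0].mean()  # typical score of labeled negatives
    flagged_pos = (labels == 1) & (probs <= t_neg)
    flagged_neg = (labels == 0) & (probs >= t_pos)
    return ~(flagged_pos | flagged_neg)

probs = [0.90, 0.80, 0.10, 0.95, 0.20, 0.15]
labels = [1, 1, 1, 0, 0, 0]
# The third example (labeled 1, p=0.10) and fourth (labeled 0, p=0.95)
# are pruned as likely label errors.
print(prune_noisy_labels(probs, labels).tolist())
# → [True, True, False, False, True, True]
```

A downstream classifier is then retrained on the kept subset, which is what makes the method robust to the noise rates quoted above.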
