Search Results for author: Tailin Wu

Found 15 papers, 8 papers with code

ViRel: Unsupervised Visual Relations Discovery with Graph-level Analogy

no code implementations • 4 Jul 2022 • Daniel Zeng, Tailin Wu, Jure Leskovec

Here, we introduce ViRel, a method for unsupervised discovery and learning of Visual Relations with graph-level analogy.

Relation Classification

ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and Acquisition at Inference Time

1 code implementation • 30 Jun 2022 • Tailin Wu, Megan Tjandrasuwita, Zhengxuan Wu, Xuelin Yang, Kevin Liu, Rok Sosič, Jure Leskovec

In this work, we introduce Zero-shot Concept Recognition and Acquisition (ZeroC), a neuro-symbolic architecture that can recognize and acquire novel concepts in a zero-shot way.

Novel Concepts

Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator

no code implementations • 15 Jun 2022 • Tailin Wu, Qinchen Wang, Yinan Zhang, Rex Ying, Kaidi Cao, Rok Sosič, Ridwan Jalali, Hassan Hamam, Marko Maucec, Jure Leskovec

To model complex reservoir dynamics at both local and global scales, HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.

Decision Making

Learning to Accelerate Partial Differential Equations via Latent Global Evolution

no code implementations • 15 Jun 2022 • Tailin Wu, Takashi Maruyama, Jure Leskovec

We test our method on a 1D benchmark of nonlinear PDEs, 2D Navier-Stokes flows entering the turbulent phase, and an inverse optimization of boundary conditions in 2D Navier-Stokes flow.

Weather Forecasting

Graph Information Bottleneck

1 code implementation • NeurIPS 2020 • Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec

We design two sampling algorithms for structural regularization, instantiate the GIB principle with two new models, GIB-Cat and GIB-Bern, and demonstrate their benefits by evaluating resilience to adversarial attacks.

Representation Learning

AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity

2 code implementations • NeurIPS 2020 • Silviu-Marian Udrescu, Andrew Tan, Jiahai Feng, Orisvaldo Neto, Tailin Wu, Max Tegmark

We present an improved method for symbolic regression that seeks to fit data to formulas that are Pareto-optimal, in the sense of having the best accuracy for a given complexity.

Symbolic Regression • Two-sample testing
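The Pareto-optimality criterion above (best accuracy for a given complexity) amounts to a dominance filter over candidate formulas. The sketch below is an illustrative assumption, not the paper's implementation; the (complexity, error) values are made up:

```python
def pareto_front(candidates):
    """Return the (complexity, error) pairs not dominated by any other
    candidate, i.e. no other candidate is both simpler and more accurate."""
    front = []
    for c, e in candidates:
        dominated = any(
            (c2 <= c and e2 <= e) and (c2 < c or e2 < e)
            for c2, e2 in candidates
        )
        if not dominated:
            front.append((c, e))
    return sorted(front)

# Illustrative candidates: (description length, validation error).
cands = [(3, 0.9), (5, 0.4), (5, 0.7), (12, 0.1)]
print(pareto_front(cands))  # (5, 0.7) is dropped: (5, 0.4) dominates it
```

Only the frontier is kept, so the user can trade off simplicity against accuracy after the search.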

Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning

no code implementations • 11 Jan 2020 • Tailin Wu

First, how can we make learning models more flexible and efficient, so that agents can learn quickly from fewer examples?

BIG-bench Machine Learning • Causal Discovery +3

Phase Transitions for the Information Bottleneck in Representation Learning

no code implementations • ICLR 2020 • Tailin Wu, Ian Fischer

In the Information Bottleneck (IB), when tuning the relative strength between the compression and prediction terms, how do the two terms behave, and what is their relationship with the dataset and the learned representation?

Representation Learning
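For reference, the tradeoff being tuned is the standard IB Lagrangian (this is the conventional formulation, stated here as background rather than quoted from the paper), where $\beta$ sets the strength of the prediction term relative to compression:

```latex
\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} = I(X; Z) - \beta \, I(Z; Y)
```

Small $\beta$ favors compressing $X$ away; large $\beta$ favors retaining information predictive of $Y$, and the phase transitions concern how the learned representation changes as $\beta$ sweeps between these regimes.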

Discovering Nonlinear Relations with Minimum Predictive Information Regularization

1 code implementation • 7 Jan 2020 • Tailin Wu, Thomas Breuel, Michael Skuhersky, Jan Kautz

Identifying the underlying directional relations from observational time series with nonlinear interactions and complex relational structures is key to a wide range of applications, yet remains a hard problem.

Time Series

Pareto-optimal data compression for binary classification tasks

1 code implementation • 23 Aug 2019 • Max Tegmark, Tailin Wu

The goal of lossy data compression is to reduce the storage cost of a data set $X$ while retaining as much information as possible about something ($Y$) that you care about.

Classification • Data Compression +3
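In information-theoretic terms, this tradeoff can be phrased (a standard formulation, not necessarily the paper's exact notation) as maximizing the information an encoding $Z = f(X)$ retains about $Y$ for each storage budget:

```latex
\max_{f} \; I(Z; Y) \quad \text{subject to} \quad H(Z) \le H_{\max}, \qquad Z = f(X)
```

Sweeping the budget $H_{\max}$ traces out the Pareto frontier between storage cost and retained information.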

Learnability for the Information Bottleneck

no code implementations • ICLR Workshop LLD 2019 • Tailin Wu, Ian Fischer, Isaac L. Chuang, Max Tegmark

However, in practice, not only is $\beta$ chosen empirically without theoretical guidance, there is also a lack of theoretical understanding between $\beta$, learnability, the intrinsic nature of the dataset and model capacity.

Representation Learning

Neural Causal Discovery with Learnable Input Noise

no code implementations • ICLR 2019 • Tailin Wu, Thomas Breuel, Jan Kautz

Learning causal relations from observational time series with nonlinear interactions and complex causal structures is a key component of human intelligence, and has a wide range of applications.

Causal Discovery • EEG +1

Toward an AI Physicist for Unsupervised Learning

1 code implementation • 24 Oct 2018 • Tailin Wu, Max Tegmark

We investigate opportunities and challenges for improving unsupervised machine learning using four common strategies with a long history in physics: divide-and-conquer, Occam's razor, unification and lifelong learning.

Meta-learning autoencoders for few-shot prediction

1 code implementation • 26 Jul 2018 • Tailin Wu, John Peurifoy, Isaac L. Chuang, Max Tegmark

Compared to humans, machine learning models generally require significantly more training examples and fail to extrapolate from experience to solve previously unseen challenges.

Meta-Learning

Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels

2 code implementations • 4 May 2017 • Curtis G. Northcutt, Tailin Wu, Isaac L. Chuang

To highlight, RP with a CNN classifier can predict whether an MNIST digit is a "one" or "not" with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples.

General Classification • Noise Estimation +1
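The pruning idea behind RP can be sketched as follows. This is a hypothetical simplification for illustration: the function name and the `rho1`/`rho0` noise-rate inputs are assumptions, and the actual method also estimates these noise rates from the data rather than taking them as given:

```python
import numpy as np

def rank_prune(probs, noisy_labels, rho1, rho0):
    """Sketch of a rank-based pruning step for noisy binary labels.

    probs        -- model-predicted P(y=1 | x) per example
    noisy_labels -- possibly corrupted 0/1 labels
    rho1         -- assumed fraction of observed positives that are mislabeled
    rho0         -- assumed fraction of observed negatives that are mislabeled
    Returns a boolean mask of confident examples to keep for retraining.
    """
    keep = np.ones(len(probs), dtype=bool)
    pos = np.where(noisy_labels == 1)[0]
    neg = np.where(noisy_labels == 0)[0]
    # Prune the observed positives the model ranks least likely positive...
    k_pos = int(rho1 * len(pos))
    if k_pos > 0:
        keep[pos[np.argsort(probs[pos])[:k_pos]]] = False
    # ...and the observed negatives it ranks most likely positive.
    k_neg = int(rho0 * len(neg))
    if k_neg > 0:
        keep[neg[np.argsort(-probs[neg])[:k_neg]]] = False
    return keep

# Toy usage: example 1 is a suspicious positive, example 2 a suspicious negative.
probs = np.array([0.9, 0.1, 0.8, 0.2])
labels = np.array([1, 1, 0, 0])
print(rank_prune(probs, labels, rho1=0.5, rho0=0.5))  # [ True False False  True]
```

Training only on the kept (confident) examples is what gives the method its robustness to label noise.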
