Search Results for author: Tim Z. Xiao

Found 13 papers, 4 papers with code

Can Large Language Models Understand Symbolic Graphics Programs?

no code implementations • 15 Aug 2024 • Zeju Qiu, Weiyang Liu, Haiwen Feng, Zhen Liu, Tim Z. Xiao, Katherine M. Collins, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf

While LLMs exhibit impressive skills in general program synthesis and analysis, symbolic graphics programs offer a new layer of evaluation: they allow us to test an LLM's ability to answer semantic questions, at different levels of granularity, about the corresponding images or 3D geometries without a vision encoder.

Instruction Following • Program Synthesis
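
A minimal sketch of the evaluation setup described above: the LLM sees only the text of a symbolic graphics program, never a rendered image, and must answer a semantic question about what the program depicts. The SVG program, the question, and the `llm` callable are illustrative placeholders, not the paper's benchmark.

```python
# Hypothetical illustration: question answering about a symbolic graphics
# program without any vision encoder.

SYMBOLIC_PROGRAM = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="40" r="20" fill="yellow"/>
  <rect x="45" y="60" width="10" height="30" fill="brown"/>
</svg>"""

QUESTION = "What everyday object does this program most likely depict?"

def build_prompt(program: str, question: str) -> str:
    """Assemble a text-only prompt; the program is shown as source code."""
    return (
        "Below is a symbolic graphics program (SVG source code).\n"
        f"{program}\n\n"
        "Without rendering it, answer the following question about the image "
        f"it would produce:\n{question}"
    )

def answer_with_llm(llm, program: str, question: str) -> str:
    """`llm` is any callable mapping a prompt string to a completion string."""
    return llm(build_prompt(program, question))
```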

Verbalized Machine Learning: Revisiting Machine Learning with Language Models

no code implementations • 6 Jun 2024 • Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu

Motivated by the progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML).

Inductive Bias

Your Finetuned Large Language Model is Already a Powerful Out-of-distribution Detector

no code implementations • 7 Apr 2024 • Andi Zhang, Tim Z. Xiao, Weiyang Liu, Robert Bamler, Damon Wischik

We revisit the likelihood ratio between a pretrained large language model (LLM) and its finetuned variant as a criterion for out-of-distribution (OOD) detection.

Language Modelling • Large Language Model • +3
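
A minimal sketch of the likelihood-ratio criterion described above, using Hugging Face `transformers`. The checkpoint names are placeholders, and scoring by the difference of average per-token log-likelihoods is an assumption about how one might operationalize the criterion, not the paper's exact recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints: a pretrained base model and a finetuned variant of it.
PRETRAINED = "gpt2"             # hypothetical base model
FINETUNED = "your-org/gpt2-ft"  # hypothetical finetuned variant

tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
base = AutoModelForCausalLM.from_pretrained(PRETRAINED).eval()
finetuned = AutoModelForCausalLM.from_pretrained(FINETUNED).eval()

@torch.no_grad()
def avg_log_likelihood(model, text: str) -> float:
    """Average per-token log-likelihood of `text` under `model`."""
    enc = tokenizer(text, return_tensors="pt")
    out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()  # `loss` is the mean negative log-likelihood

def ood_score(text: str) -> float:
    """Likelihood ratio between the finetuned and pretrained models.
    Low values suggest `text` is far from the finetuning distribution,
    i.e., more likely out-of-distribution."""
    return avg_log_likelihood(finetuned, text) - avg_log_likelihood(base, text)
```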

A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry

1 code implementation • 31 Dec 2023 • Tim Z. Xiao, Weiyang Liu, Robert Bamler

Bayesian neural networks (BNNs) are a principled approach to modeling predictive uncertainties in deep learning, which are important in safety-critical applications.

Bayesian Inference • Variational Inference
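
The excerpt above only states the motivation; as a loose illustration of what removing permutation symmetry can mean for a one-hidden-layer MLP, the sketch below maps each posterior weight sample to a canonical ordering of hidden units. This is a simplification for intuition, not the compact representation proposed in the paper.

```python
import numpy as np

def canonicalize(W1: np.ndarray, b1: np.ndarray, W2: np.ndarray):
    """Map the weights of a one-hidden-layer MLP to a canonical form.

    Permuting hidden units (rows of W1 and b1, columns of W2) leaves the
    network function unchanged; fixing one ordering removes that symmetry,
    so functionally equivalent posterior samples map to nearby points.
    """
    # Sort hidden units by the norm of their incoming weights (ties ignored).
    order = np.argsort(np.linalg.norm(W1, axis=1))
    return W1[order], b1[order], W2[:, order]

# Usage: apply to every posterior sample before comparing or compressing them.
# canonical_samples = [canonicalize(W1_s, b1_s, W2_s) for (W1_s, b1_s, W2_s) in samples]
```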

The SVHN Dataset Is Deceptive for Probabilistic Generative Models Due to a Distribution Mismatch

no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler

However, with this work, we aim to warn the community about an issue with the SVHN dataset as a benchmark for generative modeling tasks: we discover that the official training and test sets of the SVHN dataset are not drawn from the same distribution.

Classification

Upgrading VAE Training With Unlimited Data Plans Provided by Diffusion Models

no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler

Variational autoencoders (VAEs) are popular models for representation learning, but their encoders are susceptible to overfitting (Cremer et al., 2018) because they are trained on a finite training set instead of the true (continuous) data distribution $p_{\mathrm{data}}(\mathbf{x})$.

Data Augmentation • Representation Learning
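
A minimal sketch of the idea suggested by the title and excerpt above: augmenting the finite VAE training set with fresh samples from a generative model so that the encoder effectively sees unlimited data. The `sample_from_diffusion_model` helper and the mixing choice are hypothetical placeholders, not the authors' training procedure.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def build_augmented_loader(real_images: torch.Tensor,
                           sample_from_diffusion_model,
                           n_synthetic: int,
                           batch_size: int = 128) -> DataLoader:
    """Mix the finite real training set with synthetic images drawn from a
    diffusion model (any callable returning an (n, C, H, W) tensor), so the
    VAE encoder is less prone to overfitting the real samples."""
    synthetic = sample_from_diffusion_model(n_synthetic)
    dataset = ConcatDataset([TensorDataset(real_images), TensorDataset(synthetic)])
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)

# The VAE is then trained as usual on batches from this loader, e.g.
# for (x,) in build_augmented_loader(train_x, sampler, n_synthetic=len(train_x)):
#     loss = vae_elbo_loss(x); ...
```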

Trading Information between Latents in Hierarchical Variational Autoencoders

1 code implementation • 9 Feb 2023 • Tim Z. Xiao, Robert Bamler

Variational Autoencoders (VAEs) were originally motivated (Kingma & Welling, 2014) as probabilistic generative models in which one performs approximate Bayesian inference.

Bayesian Inference • Data Compression • +1

Iterative Teaching by Data Hallucination

1 code implementation • 31 Oct 2022 • Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf

We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a finite pool of samples), which greatly limits the teacher's capability.

Hallucination

Out-of-Distribution Detection with Class Ratio Estimation

no code implementations • 8 Jun 2022 • Mingtian Zhang, Andi Zhang, Tim Z. Xiao, Yitong Sun, Steven McDonagh

In this work, we propose to unify density-ratio-based methods under a novel framework that builds energy-based models and employs differing base distributions.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection
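
As background for the density-ratio family the excerpt refers to, the sketch below scores inputs by the log-ratio of an in-distribution density model to a base (background) density model. Both models are placeholders, and this illustrates the generic criterion the paper builds on rather than its energy-based framework.

```python
import numpy as np

def density_ratio_score(x, log_p_in, log_p_base):
    """Generic density-ratio OOD score: log p_in(x) - log p_base(x).

    `log_p_in` and `log_p_base` are any callables returning per-example
    log-densities (e.g., a model fit on in-distribution data and a broader
    background model). Low scores flag likely out-of-distribution inputs.
    """
    return np.asarray(log_p_in(x)) - np.asarray(log_p_base(x))

# Usage with two fitted density estimators exposing a scikit-learn-style
# `score_samples` method (hypothetical `p_in` and `p_base`):
# scores = density_ratio_score(x_test, p_in.score_samples, p_base.score_samples)
# is_ood = scores < threshold
```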

Improving VAE-based Representation Learning

no code implementations • 28 May 2022 • Mingtian Zhang, Tim Z. Xiao, Brooks Paige, David Barber

Latent variable models like the Variational Auto-Encoder (VAE) are commonly used to learn representations of images.

Decoder • Representation Learning

Wat zei je? Detecting Out-of-Distribution Translations with Variational Transformers

1 code implementation • 8 Jun 2020 • Tim Z. Xiao, Aidan N. Gomez, Yarin Gal

We detect out-of-training-distribution sentences in Neural Machine Translation using the Bayesian Deep Learning equivalent of Transformer models.

Machine Translation • Sentence • +1
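
A minimal sketch, assuming the "Bayesian Deep Learning equivalent" is approximated with Monte Carlo dropout: dropout stays active at test time, the translation model is run several times, and the spread of its per-sentence log-probabilities serves as an uncertainty signal. The model interface is a placeholder, not the paper's exact scoring rule.

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, src_batch, n_passes: int = 10) -> torch.Tensor:
    """Run several stochastic forward passes with dropout enabled and return
    the variance of the sentence-level log-probabilities as an uncertainty
    score; high variance suggests an out-of-training-distribution sentence.

    `model(src_batch)` is assumed to return per-sentence log-probabilities
    (shape: [batch]) of its own translations.
    """
    model.train()  # keep dropout layers stochastic at test time
    scores = torch.stack([model(src_batch) for _ in range(n_passes)])  # [n_passes, batch]
    model.eval()
    return scores.var(dim=0)
```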
