no code implementations • 15 Aug 2024 • Zeju Qiu, Weiyang Liu, Haiwen Feng, Zhen Liu, Tim Z. Xiao, Katherine M. Collins, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf
While LLMs exhibit impressive skills in general program synthesis and analysis, symbolic graphics programs offer a new layer of evaluation: they allow us to test an LLM's ability to answer semantic questions of varying granularity about the corresponding images or 3D geometries, without a vision encoder.
no code implementations • 16 Jun 2024 • Zijing Ou, Mingtian Zhang, Andi Zhang, Tim Z. Xiao, Yingzhen Li, David Barber
Probabilistic diffusion models have become highly effective across various domains.
no code implementations • 6 Jun 2024 • Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu
Motivated by the progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML).
no code implementations • 7 Apr 2024 • Andi Zhang, Tim Z. Xiao, Weiyang Liu, Robert Bamler, Damon Wischik
We revisit the likelihood ratio between a pretrained large language model (LLM) and its finetuned variant as a criterion for out-of-distribution (OOD) detection.
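As a rough illustration of this criterion (not the paper's exact setup), the sketch below scores a text by the difference in average token log-likelihood between a finetuned model and its pretrained base; the checkpoint names and the decision threshold are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoints for illustration; substitute any pretrained/finetuned causal-LM pair.
PRETRAINED = "gpt2"
FINETUNED = "my-org/gpt2-finetuned"  # hypothetical finetuned variant of the same base

def avg_token_log_likelihood(model, tokenizer, text):
    """Mean per-token log-likelihood of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean token cross-entropy

tok = AutoTokenizer.from_pretrained(PRETRAINED)
base = AutoModelForCausalLM.from_pretrained(PRETRAINED).eval()
sft = AutoModelForCausalLM.from_pretrained(FINETUNED).eval()

text = "An input sentence to be scored."
# Likelihood-ratio score: how much more probable the finetuned model finds the text.
score = avg_token_log_likelihood(sft, tok, text) - avg_token_log_likelihood(base, tok, text)
# A low score suggests the input is unlike the finetuning data (possible OOD);
# in practice the threshold would be calibrated on held-out in-distribution data.
```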
1 code implementation • 31 Dec 2023 • Tim Z. Xiao, Weiyang Liu, Robert Bamler
Bayesian neural networks (BNNs) are a principled approach to modeling predictive uncertainties in deep learning, which are important in safety-critical applications.
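Concretely, the predictive uncertainty comes from marginalizing over the weight posterior rather than relying on a single point estimate: $p(y \mid \mathbf{x}, \mathcal{D}) = \int p(y \mid \mathbf{x}, \mathbf{w})\, p(\mathbf{w} \mid \mathcal{D})\, d\mathbf{w}$.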
no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler
However, with this work, we aim to warn the community about an issue with the SVHN dataset as a benchmark for generative modeling tasks: we discover that the official training and test splits of the SVHN dataset are not drawn from the same distribution.
no code implementations • 30 Oct 2023 • Tim Z. Xiao, Johannes Zenn, Robert Bamler
Variational autoencoders (VAEs) are popular models for representation learning but their encoders are susceptible to overfitting (Cremer et al., 2018) because they are trained on a finite training set instead of the true (continuous) data distribution $p_{\mathrm{data}}(\mathbf{x})$.
1 code implementation • 9 Feb 2023 • Tim Z. Xiao, Robert Bamler
Variational Autoencoders (VAEs) were originally motivated (Kingma & Welling, 2014) as probabilistic generative models in which one performs approximate Bayesian inference.
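For context, the approximate-inference objective this refers to is the standard evidence lower bound (ELBO), maximized jointly over encoder parameters $\phi$ and decoder parameters $\theta$: $\log p_\theta(\mathbf{x}) \geq \mathbb{E}_{q_\phi(\mathbf{z}\mid\mathbf{x})}\left[\log p_\theta(\mathbf{x}\mid\mathbf{z})\right] - \mathrm{KL}\left(q_\phi(\mathbf{z}\mid\mathbf{x}) \,\|\, p(\mathbf{z})\right)$.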
1 code implementation • 31 Oct 2022 • Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf
We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a finite pool of samples), which greatly limits the teacher's capability.
no code implementations • 8 Jun 2022 • Mingtian Zhang, Andi Zhang, Tim Z. Xiao, Yitong Sun, Steven McDonagh
In this work, we propose to unify density ratio based methods under a novel framework that builds energy-based models and employs differing base distributions.
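As a loose illustration of the density-ratio idea (not the paper's specific framework), the sketch below scores a point by the negative log-ratio between an in-distribution density and a base distribution, which plays the role of an energy; both densities here are stand-in Gaussians.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Stand-in densities for illustration only; in practice these would be a learned
# in-distribution model and a chosen base distribution.
p_in = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))          # "in-distribution" model
p_base = multivariate_normal(mean=[0.0, 0.0], cov=4.0 * np.eye(2))  # broader base distribution

def ood_score(x):
    """Negative log density ratio (an energy); larger values suggest x is more likely OOD."""
    return -(p_in.logpdf(x) - p_base.logpdf(x))

print(ood_score([0.1, -0.2]))  # near the in-distribution mode -> low score
print(ood_score([6.0, 6.0]))   # far from it -> high score
```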
Out-of-Distribution Detection · Out of Distribution (OOD) Detection
no code implementations • 28 May 2022 • Mingtian Zhang, Tim Z. Xiao, Brooks Paige, David Barber
Latent variable models like the Variational Auto-Encoder (VAE) are commonly used to learn representations of images.
no code implementations • ICLR Workshop SSL-RL 2021 • Mingtian Zhang, Peter Noel Hayes, Tim Z. Xiao, Andi Zhang, David Barber
We introduce a new model-based reinforcement learning framework that aims to tackle environments with high dimensional state spaces.
Model-based Reinforcement Learning · Reinforcement Learning +2
1 code implementation • 8 Jun 2020 • Tim Z. Xiao, Aidan N. Gomez, Yarin Gal
We detect out-of-training-distribution sentences in Neural Machine Translation using the Bayesian Deep Learning equivalent of Transformer models.
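The specific Bayesian Transformer variant is not spelled out in this snippet; one common approximation is Monte Carlo dropout, sketched below on a toy classifier: dropout stays active at test time, and disagreement across stochastic forward passes serves as an uncertainty signal for flagging OOD inputs.

```python
import torch
import torch.nn as nn

# Toy stand-in for a Transformer with dropout; only the MC-dropout procedure is the point.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(64, 8))

def mc_dropout_predict(model, x, n_samples=20):
    """Run several stochastic forward passes with dropout enabled."""
    model.train()  # keeps dropout active (in practice, enable only the dropout layers)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    # Predictive entropy of the averaged distribution as a simple uncertainty score;
    # high entropy on a source sentence would flag a possible OOD input.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

x = torch.randn(4, 16)  # a batch of toy "sentence" features
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(uncertainty)
```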