Search Results for author: Mingtian Zhang

Found 23 papers, 7 papers with code

Mafin: Enhancing Black-Box Embeddings with Model Augmented Fine-Tuning

no code implementations • 19 Feb 2024 • Mingtian Zhang, Shawn Lan, Peter Hayes, David Barber

Our results demonstrate that Mafin significantly enhances the performance of black-box embeddings while requiring only the training of a small augmented model.

Retrieval
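
The Mafin abstract above describes training only a small model on top of a frozen, black-box embedding. The sketch below illustrates that general idea; the residual combination rule, the module name, and all dimensions are assumptions made for illustration, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class AugmentedEmbedding(nn.Module):
        """Hypothetical sketch: add a small trainable correction to a frozen
        black-box embedding; only this module's parameters are trained."""
        def __init__(self, dim: int, hidden: int = 256):
            super().__init__()
            self.augment = nn.Sequential(
                nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
            )

        def forward(self, black_box_emb: torch.Tensor) -> torch.Tensor:
            # black_box_emb is produced by the external embedding API and never updated.
            return black_box_emb + self.augment(black_box_emb)

Fine-tuning would then apply a standard retrieval objective (e.g. a contrastive loss) to the combined embedding while the black-box model itself stays untouched.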

Active Preference Learning for Large Language Models

no code implementations • 12 Feb 2024 • William Muldrew, Peter Hayes, Mingtian Zhang, David Barber

A key consideration for aligning these models is how to most effectively use human resources, or model resources in the case where LLMs themselves are used as oracles.

Active Learning, Language Modelling

Diffusive Gibbs Sampling

1 code implementation • 5 Feb 2024 • Wenlin Chen, Mingtian Zhang, Brooks Paige, José Miguel Hernández-Lobato, David Barber

The inadequate mixing of conventional Markov Chain Monte Carlo (MCMC) methods for multi-modal distributions presents a significant challenge in practical applications such as Bayesian inference and molecular dynamics.

Bayesian Inference

Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence

1 code implementation • 29 Aug 2023 • Liyuan Wang, Xingxing Zhang, Qian Li, Mingtian Zhang, Hang Su, Jun Zhu, Yi Zhong

Continual learning aims to empower artificial intelligence (AI) with strong adaptability to the real world.

Continual Learning

Moment Matching Denoising Gibbs Sampling

1 code implementation • NeurIPS 2023 • Mingtian Zhang, Alex Hawkins-Hooker, Brooks Paige, David Barber

Energy-Based Models (EBMs) offer a versatile framework for modeling complex data distributions.

Denoising
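
The abstract above takes energy-based models as its starting point; for context, an EBM defines a density through an energy function and an intractable normaliser (a textbook definition, not something specific to this paper):

\[
p_\theta(x) \;=\; \frac{\exp\bigl(-E_\theta(x)\bigr)}{Z(\theta)},
\qquad
Z(\theta) \;=\; \int \exp\bigl(-E_\theta(x)\bigr)\,dx .
\]

The intractable normaliser Z(θ) is what makes training and sampling from EBMs difficult, which is the setting that Gibbs-style samplers such as the one above target.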

Towards Healing the Blindness of Score Matching

no code implementations • 15 Sep 2022 • Mingtian Zhang, Oscar Key, Peter Hayes, David Barber, Brooks Paige, François-Xavier Briol

Score-based divergences have been widely used in machine learning and statistics applications.

Density Estimation
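
As background for the abstract above, the score-based (Fisher) divergence compares two distributions only through the gradients of their log-densities; this is a standard definition rather than a result of the paper:

\[
\mathrm{D}_{\mathrm{F}}(p \,\|\, q)
\;=\;
\mathbb{E}_{x \sim p}\!\left[\tfrac{1}{2}\,\bigl\|\nabla_x \log p(x) - \nabla_x \log q(x)\bigr\|^2\right].
\]

Because only log-density gradients enter, such divergences can be insensitive to how mass is distributed across well-separated modes, which is the kind of "blindness" the title refers to.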

Integrated Weak Learning

no code implementations • 19 Jun 2022 • Peter Hayes, Mingtian Zhang, Raza Habib, Jordan Burgess, Emine Yilmaz, David Barber

We introduce a label model that learns to aggregate weak supervision sources differently for different datapoints and takes the performance of the end-model into consideration during training.
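
A minimal sketch of the per-datapoint aggregation idea mentioned above is given below; the softmax-weighted combination and every name in it are illustrative assumptions, not the label model actually proposed in the paper.

    import torch
    import torch.nn as nn

    class WeightedLabelModel(nn.Module):
        """Hypothetical sketch: weight K weak supervision sources differently
        for each datapoint, based on the datapoint's features."""
        def __init__(self, feat_dim: int, num_sources: int):
            super().__init__()
            self.scorer = nn.Linear(feat_dim, num_sources)

        def forward(self, features: torch.Tensor, weak_labels: torch.Tensor) -> torch.Tensor:
            # features: (B, feat_dim); weak_labels: (B, K, num_classes)
            weights = torch.softmax(self.scorer(features), dim=-1)   # (B, K)
            return torch.einsum("bk,bkc->bc", weights, weak_labels)  # (B, num_classes)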

Out-of-Distribution Detection with Class Ratio Estimation

no code implementations • 8 Jun 2022 • Mingtian Zhang, Andi Zhang, Tim Z. Xiao, Yitong Sun, Steven McDonagh

In this work, we propose to unify density ratio based methods under a novel framework that builds energy-based models and employs differing base distributions.

Out-of-Distribution Detection, Out of Distribution (OOD) Detection
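
For reference, density-ratio methods for OOD detection score an input by contrasting an in-distribution density with a base (background) density, and the choice of base distribution is the main thing that varies across methods. The generic score below is a common form, not the specific construction proposed in the paper:

\[
s(x) \;=\; \log \frac{p_{\mathrm{in}}(x)}{p_{0}(x)},
\]

with larger values of s(x) indicating that x looks more in-distribution.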

Improving VAE-based Representation Learning

no code implementations • 28 May 2022 • Mingtian Zhang, Tim Z. Xiao, Brooks Paige, David Barber

Latent variable models like the Variational Auto-Encoder (VAE) are commonly used to learn representations of images.

Representation Learning

Generalization Gap in Amortized Inference

1 code implementation • 23 May 2022 • Mingtian Zhang, Peter Hayes, David Barber

The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications such as lossless compression.

Parallel Neural Local Lossless Compression

2 code implementations • 13 Jan 2022 • Mingtian Zhang, James Townsend, Ning Kang, David Barber

The recently proposed Neural Local Lossless Compression (NeLLoC), which is based on a local autoregressive model, has achieved state-of-the-art (SOTA) out-of-distribution (OOD) generalization performance in the image compression task.

Image Compression
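
For context on the local autoregressive model mentioned above: rather than conditioning each pixel on all previously decoded pixels, a local model conditions only on a bounded causal neighbourhood, which is what makes it possible to decode spatially distant pixels in parallel. A generic statement of this factorisation (the exact neighbourhood used by NeLLoC may differ) is

\[
p(x) \;=\; \prod_{i} p\bigl(x_i \mid x_{\mathcal{N}(i)}\bigr),
\]

where \mathcal{N}(i) is a fixed-size set of already-decoded pixels near position i.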

AFEC: Active Forgetting of Negative Transfer in Continual Learning

1 code implementation • NeurIPS 2021 • Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong

Without access to the old training samples, knowledge transfer from the old tasks to each new task is difficult to determine, and it might be either positive or negative.

Continual Learning, Transfer Learning

A Practical PAC-Bayes Generalisation Bound for Deep Learning

no code implementations • 29 Sep 2021 • Diego Granziol, Mingtian Zhang, Nicholas Baskerville

Under a PAC-Bayesian framework, we derive an implementation-efficient, parameterisation-invariant metric to measure the difference between the true and empirical risk.
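
For reference alongside the abstract above, PAC-Bayes bounds control the gap between the true and empirical risk of a posterior ρ over hypotheses through its KL divergence from a prior π. A standard member of this family (McAllester's bound, quoted for context and not the bound derived in the paper) is

\[
\mathbb{E}_{h \sim \rho}\bigl[L(h)\bigr]
\;\le\;
\mathbb{E}_{h \sim \rho}\bigl[\hat{L}(h)\bigr]
+ \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
\]

which holds with probability at least 1 − δ over a training sample of size n.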

Spread Flows for Manifold Modelling

no code implementations • 29 Sep 2021 • Mingtian Zhang, Yitong Sun, Chen Zhang, Steven McDonagh

Flow-based models typically define a latent space with dimensionality identical to that of the observational space.
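
The dimensionality constraint mentioned above follows from the change-of-variables formula that normalising flows are built on, which requires the map f to be a bijection between spaces of equal dimension:

\[
\log p_X(x) \;=\; \log p_Z\bigl(f(x)\bigr) + \log \left|\det \frac{\partial f(x)}{\partial x}\right| .
\]

Data concentrated on a lower-dimensional manifold therefore does not fit this construction directly, which motivates manifold-modelling variants such as the one proposed here.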

On the Out-of-distribution Generalization of Probabilistic Image Modelling

2 code implementations • NeurIPS 2021 • Mingtian Zhang, Andi Zhang, Steven McDonagh

Out-of-distribution (OOD) detection and lossless compression constitute two problems that can be solved by the training of probabilistic models on a first dataset with subsequent likelihood evaluation on a second dataset, where data distributions differ.

Out-of-Distribution Generalization, Out of Distribution (OOD) Detection
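
A minimal sketch of the shared procedure the abstract describes: fit a likelihood model on one dataset, then evaluate its negative log-likelihood, e.g. in bits per dimension, on a second dataset. The model.log_prob interface below is an assumption made for illustration.

    import math
    import torch

    @torch.no_grad()
    def bits_per_dim(model, loader, num_dims: int) -> float:
        """Average negative log-likelihood in bits/dim of `model` on data it was
        not trained on (assumes a hypothetical log_prob(x) returning nats per example)."""
        total_nll, count = 0.0, 0
        for x in loader:
            total_nll += -model.log_prob(x).sum().item()
            count += x.shape[0]
        return total_nll / (count * num_dims * math.log(2))

The same quantity serves both roles discussed in the abstract: as an estimate of the achievable lossless compression rate and, compared across datasets, as an out-of-distribution signal.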

On the Latent Space of Flow-based Models

no code implementations • 1 Jan 2021 • Mingtian Zhang, Yitong Sun, Steven McDonagh, Chen Zhang

Flow-based generative models typically define a latent space with dimensionality identical to that of the observational space.

Identifying Informative Latent Variables Learned by GIN via Mutual Information

no code implementations • 1 Jan 2021 • Chen Zhang, Yitong Sun, Mingtian Zhang

However, in this paper, we point out that the method GIN uses to identify informative latent variables is not theoretically supported and can be disproved experimentally.

Adversarial Attack, Disentanglement, +1

Wasserstein Robust Reinforcement Learning

no code implementations • 30 Jul 2019 • Mohammed Amin Abdullah, Hang Ren, Haitham Bou Ammar, Vladimir Milenkovic, Rui Luo, Mingtian Zhang, Jun Wang

Reinforcement learning algorithms, though successful, tend to over-fit to training environments, hampering their application to the real world.

Reinforcement Learning (RL)

Variational f-divergence Minimization

no code implementations • 27 Jul 2019 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution.

Image Generation
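
To make the connection stated in the abstract concrete: an f-divergence is defined by a convex function f with f(1) = 0, and the choice f(u) = u log u recovers the forward KL divergence, whose minimisation in θ is exactly maximum likelihood. These are standard identities rather than the paper's contribution:

\[
\mathrm{D}_f\bigl(p \,\|\, p_\theta\bigr)
\;=\;
\mathbb{E}_{x \sim p_\theta}\!\left[ f\!\left(\frac{p(x)}{p_\theta(x)}\right) \right],
\qquad
f(u) = u \log u
\;\;\Rightarrow\;\;
\mathrm{D}_f\bigl(p \,\|\, p_\theta\bigr) = \mathrm{KL}\bigl(p \,\|\, p_\theta\bigr),
\]

and minimising KL(p ‖ p_θ) over θ is equivalent to maximising the expected log-likelihood E_{x∼p}[log p_θ(x)].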

Spread Divergence

no code implementations • 21 Nov 2018 • Mingtian Zhang, Peter Hayes, Tom Bird, Raza Habib, David Barber

For distributions $\mathbb{P}$ and $\mathbb{Q}$ with different supports or undefined densities, the divergence $\textrm{D}(\mathbb{P}||\mathbb{Q})$ may not exist.
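
A concrete instance of the failure described above is the KL divergence between two distributions with disjoint supports, which is infinite or simply undefined. A construction consistent with the title, stated here as a sketch rather than as the paper's exact definition, is to convolve both distributions with a shared noise kernel so that their "spread" versions have common support:

\[
\tilde{p}(\tilde{x}) = \int k(\tilde{x} \mid x)\, p(x)\, dx,
\qquad
\tilde{q}(\tilde{x}) = \int k(\tilde{x} \mid x)\, q(x)\, dx,
\qquad
\tilde{\mathrm{D}}(\mathbb{P} \,\|\, \mathbb{Q}) \;:=\; \mathrm{D}(\tilde{\mathbb{P}} \,\|\, \tilde{\mathbb{Q}}),
\]

where, for example, a Gaussian kernel $k(\tilde{x} \mid x) = \mathcal{N}(\tilde{x};\, x, \sigma^2 I)$ guarantees that both spread densities exist everywhere.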

Training generative latent models by variational f-divergence minimization

no code implementations • 27 Sep 2018 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific form of f-divergence between the model and data distribution.
