Search Results for author: Jonathan H. Manton

Found 10 papers, 0 papers with code

Recursive Filters as Linear Time-Invariant Systems

no code implementations • 7 Nov 2023 • Jonathan H. Manton

Recursive filters are treated as linear time-invariant (LTI) systems, but they are not: uninitialised, they have an infinite number of possible outputs for any given input, while if initialised, they are not time-invariant.
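The abstract's point can be illustrated with a minimal sketch (hypothetical code, not from the paper): a first-order recursive filter y[n] = a·y[n-1] + x[n] produces different outputs for the same input depending on the initial state y[-1], so the initialised filter is not a single input-output map.

```python
def recursive_filter(x, a=0.5, y_init=0.0):
    """Run y[n] = a*y[n-1] + x[n] starting from the state y[-1] = y_init."""
    y, prev = [], y_init
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y

x = [1.0, 0.0, 0.0, 0.0]               # an impulse input
out_zero = recursive_filter(x, y_init=0.0)  # [1.0, 0.5, 0.25, 0.125]
out_one  = recursive_filter(x, y_init=1.0)  # [1.5, 0.75, 0.375, 0.1875]
assert out_zero != out_one  # same input, different outputs: not one LTI map
```

With y_init = 0 the filter behaves like the textbook LTI system; any other initial state gives a different response to the identical input.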

On the tightness of information-theoretic bounds on generalization error of learning algorithms

no code implementations • 26 Mar 2023 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

However, such a learning rate is typically considered to be "slow", compared to a "fast rate" of $O(\lambda/n)$ in many learning scenarios.

An Information-Theoretic Analysis for Transfer Learning: Error Bounds and Applications

no code implementations • 12 Jul 2022 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

Specifically, we provide generalization error upper bounds for the empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase.

Domain Adaptation Transfer Learning

On Causality in Domain Adaptation and Semi-Supervised Learning: an Information-Theoretic Analysis

no code implementations • 10 May 2022 • Xuetong Wu, Mingming Gong, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

We show that in causal learning, the excess risk depends on the size of the source sample at a rate of O(1/m) only if the labelling distribution between the source and target domains remains unchanged.

Unsupervised Domain Adaptation

Fast Rate Generalization Error Bounds: Variations on a Theme

no code implementations • 6 May 2022 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

However, such a learning rate is typically considered to be "slow", compared to a "fast rate" of O(1/n) in many learning scenarios.
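A rough numeric sketch (illustrative only, not from the paper) shows why the distinction matters: with a bound of the form C/sqrt(n) (slow) versus C/n (fast), taking C = 1, the sample size needed to reach a target error eps grows quadratically in 1/eps for the slow rate but only linearly for the fast rate.

```python
import math

def n_slow(eps, C=1.0):
    """Smallest n with C/sqrt(n) <= eps, i.e. n >= (C/eps)**2."""
    return math.ceil((C / eps) ** 2)

def n_fast(eps, C=1.0):
    """Smallest n with C/n <= eps, i.e. n >= C/eps."""
    return math.ceil(C / eps)

for eps in (0.25, 0.1):
    print(eps, n_slow(eps), n_fast(eps))
# eps=0.25: slow rate needs 16 samples, fast rate needs 4
# eps=0.1 : slow rate needs 100 samples, fast rate needs 10
```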

A Bayesian Approach to (Online) Transfer Learning: Theory and Algorithms

no code implementations • 3 Sep 2021 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

Transfer learning is a machine learning paradigm where knowledge from one problem is utilized to solve a new but related problem.

Learning Theory Transfer Learning

Online Transfer Learning: Negative Transfer and Effect of Prior Knowledge

no code implementations • 4 May 2021 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

On the one hand, it is conceivable that knowledge from one task could be useful for solving a related problem.

Transfer Learning

Hidden Markov chains and fields with observations in Riemannian manifolds

no code implementations • 11 Jan 2021 • Salem Said, Nicolas Le Bihan, Jonathan H. Manton

Hidden Markov chain or Markov field models with observations in a Euclidean space play a major role across signal and image processing.

Statistics Theory

New Insights on Learning Rules for Hopfield Networks: Memory and Objective Function Minimisation

no code implementations • 4 Oct 2020 • Pavel Tolmachev, Jonathan H. Manton

Hopfield neural networks are a possible basis for modelling associative memory in living organisms.
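The classical learning rule for such associative memories can be sketched in a few lines (an illustrative example, not the paper's code): the Hebbian rule W[i][j] = Σ_p x_p[i]·x_p[j] with zero diagonal stores ±1 patterns, and sign updates recall a stored pattern from a corrupted cue.

```python
def hebbian_weights(patterns):
    """Hebbian rule: W[i][j] = sum over patterns of p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Synchronous sign updates: each neuron takes the sign of its local field."""
    n = len(state)
    for _ in range(n and steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

patterns = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
W = hebbian_weights(patterns)
cue = [-1, 1, 1, -1, -1, -1]   # first pattern with one bit flipped
print(recall(W, cue))          # → [1, 1, 1, -1, -1, -1]
```

Stored patterns are fixed points of the update, and a mildly corrupted cue converges back to the nearest stored memory, which is the associative-memory behaviour the abstract refers to.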

Information-theoretic analysis for transfer learning

no code implementations • 18 May 2020 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

Specifically, we provide generalization error upper bounds for general transfer learning algorithms and extend the results to a specific empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase.

Domain Adaptation Transfer Learning
