Search Results for author: Tom Diethe

Found 18 papers, 6 papers with code

Continual Density Ratio Estimation in an Online Setting

no code implementations • 9 Mar 2021 • Yu Chen, Song Liu, Tom Diethe, Peter Flach

To the best of our knowledge, there is no existing method that can evaluate generative models in continual learning without storing samples from the original distribution.

Continual Learning · Decision Making · +1

Discriminative Representation Loss (DRL): A More Efficient Approach than Gradient Re-Projection in Continual Learning

no code implementations • 28 Sep 2020 • Yu Chen, Tom Diethe, Peter Flach

The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting.

Continual Learning · Metric Learning
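This entry and the next both build on episodic (replay) memory. As background, here is a minimal sketch of the generic replay pattern with a reservoir-sampled buffer; the names are illustrative, and this is not the paper's DRL method itself:

```python
import random

class EpisodicMemory:
    """Fixed-size buffer filled by reservoir sampling over the data stream."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples observed so far

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep each observed example with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

# Usage: mix a replayed mini-batch into each gradient step so earlier
# tasks keep contributing to the loss, mitigating catastrophic forgetting.
memory = EpisodicMemory(capacity=200)
for batch_example in range(1000):  # stand-in for a sequential task stream
    memory.add(batch_example)
    replay = memory.sample(10)     # train on current batch + replay
```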

Semi-Discriminative Representation Loss for Online Continual Learning

1 code implementation • 19 Jun 2020 • Yu Chen, Tom Diethe, Peter Flach

The use of episodic memory in continual learning has demonstrated effectiveness for alleviating catastrophic forgetting.

Continual Learning · Metric Learning

Optimal Continual Learning has Perfect Memory and is NP-hard

no code implementations • ICML 2020 • Jeremias Knoblauch, Hisham Husain, Tom Diethe

Continual Learning (CL) algorithms incrementally learn a predictor or representation across multiple sequentially observed tasks.

Continual Learning

Similarity of Neural Networks with Gradients

3 code implementations • 25 Mar 2020 • Shuai Tang, Wesley J. Maddox, Charlie Dickens, Tom Diethe, Andreas Damianou

A suitable similarity index for comparing learnt neural networks plays an important role in understanding the behaviour of these highly non-linear functions, and can provide insights for further theoretical analysis and empirical studies.

Network Pruning

Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text

no code implementations • 20 Oct 2019 • Oluwaseyi Feyisetan, Tom Diethe, Thomas Drake

In this work, we explore word representations in hyperbolic space as a means of preserving privacy in text.

BIG-bench Machine Learning
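For context, distances in the Poincaré-ball model of hyperbolic space (the usual setting for hierarchical word representations) have a simple closed form. A small sketch, independent of the paper's specific privacy mechanism:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points u, v inside the unit Poincare ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

# Points near the boundary are exponentially far apart, which is what
# makes the space well suited to embedding hierarchies.
u = np.array([0.1, 0.2])
v = np.array([0.7, -0.6])
print(poincare_distance(u, v))
```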

Privacy- and Utility-Preserving Textual Analysis via Calibrated Multivariate Perturbations

1 code implementation • 20 Oct 2019 • Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, Tom Diethe

We conduct privacy-audit experiments against two baseline models and utility experiments on three datasets to demonstrate the trade-off between privacy and utility for varying values of $\epsilon$ on different task types.

Privacy Preserving
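The mechanism family used here perturbs a word's embedding with noise whose density decays as exp(−ε‖z‖), then releases the vocabulary word nearest the perturbed vector. A hedged sketch of that idea; the toy vocabulary and embedding matrix are invented for illustration, and the paper's exact calibration may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embeddings (illustrative only).
vocab = ["cat", "dog", "car", "bus"]
emb = rng.normal(size=(len(vocab), 50))

def sample_noise(d, epsilon):
    """Noise with density proportional to exp(-epsilon * ||z||):
    uniform direction, Gamma(d, 1/epsilon)-distributed magnitude."""
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=1.0 / epsilon)
    return radius * direction

def privatize(word, epsilon):
    z = emb[vocab.index(word)] + sample_noise(emb.shape[1], epsilon)
    # Release the vocabulary word closest to the perturbed vector.
    return vocab[int(np.argmin(np.linalg.norm(emb - z, axis=1)))]

print(privatize("cat", epsilon=10.0))  # larger epsilon => less noise
```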

Continual Density Ratio Estimation (CDRE): A new method for evaluating generative models in continual learning

no code implementations • 25 Sep 2019 • Yu Chen, Song Liu, Tom Diethe, Peter Flach

We propose Continual Density Ratio Estimation (CDRE), a new method that estimates density ratios between a target distribution of real samples and the distribution of samples generated by a model, even while the model changes over time and data from the target distribution becomes unavailable after a certain time point.

Continual Learning · Density Ratio Estimation
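CDRE itself is designed for the continual setting, but the underlying primitive, estimating a density ratio p(x)/q(x) from samples, is commonly done with a probabilistic classifier. A minimal sketch of that standard trick (not the paper's estimator):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p_samples = rng.normal(0.0, 1.0, size=(1000, 1))  # target distribution p
q_samples = rng.normal(0.5, 1.2, size=(1000, 1))  # model distribution q

# Label p-samples 1 and q-samples 0; with balanced classes,
# p(x) / q(x) = P(y=1 | x) / P(y=0 | x).
X = np.vstack([p_samples, q_samples])
y = np.concatenate([np.ones(1000), np.zeros(1000)])
clf = LogisticRegression().fit(X, y)

def density_ratio(x):
    proba = clf.predict_proba(np.atleast_2d(x))[0]
    return proba[1] / proba[0]

print(density_ratio([0.0]))  # > 1 where p puts more mass than q
```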

HyperStream: a Workflow Engine for Streaming Data

1 code implementation • 7 Aug 2019 • Tom Diethe, Meelis Kull, Niall Twomey, Kacper Sokol, Hao Song, Miquel Perello-Nieto, Emma Tonkin, Peter Flach

This paper describes HyperStream, a large-scale, flexible and robust software package, written in Python, for processing streaming data, with workflow-creation capabilities.

BIG-bench Machine Learning

Automatic Discovery of Privacy-Utility Pareto Fronts

1 code implementation • 26 May 2019 • Brendan Avent, Javier Gonzalez, Tom Diethe, Andrei Paleyes, Borja Balle

Differential privacy is a mathematical framework for privacy-preserving data analysis.

Privacy Preserving
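As a concrete instance of that framework, the classic Laplace mechanism releases a statistic with noise scaled to its sensitivity over ε. A minimal, standard sketch (not tied to the paper's Pareto-front search):

```python
import numpy as np

rng = np.random.default_rng(0)

def private_mean(data, lower, upper, epsilon):
    """epsilon-DP mean of values clipped to [lower, upper] (Laplace mechanism)."""
    clipped = np.clip(data, lower, upper)
    # Sensitivity of the mean of n bounded values is (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    return clipped.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

data = rng.uniform(0, 100, size=1000)
print(private_mean(data, 0, 100, epsilon=0.5))
```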

Distribution Calibration for Regression

no code implementations • 15 May 2019 • Hao Song, Tom Diethe, Meelis Kull, Peter Flach

We are concerned with obtaining well-calibrated output distributions from regression models.

Gaussian Processes · regression
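For regression, "well-calibrated" is usually checked through the probability integral transform (PIT): if the predicted CDF evaluated at the observed targets is uniform on [0, 1], the predictive distributions are calibrated. A small sketch of that check for Gaussian predictions (illustrative, not the paper's recalibration method):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Pretend a model predicts N(mu, sigma) for every test point.
y_true = rng.normal(0.0, 1.0, size=5000)
mu, sigma = 0.0, 1.5  # overestimated spread => miscalibrated

pit = norm.cdf(y_true, loc=mu, scale=sigma)  # probability integral transform

# Under perfect calibration, each decile of [0, 1] holds ~10% of PIT values.
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print(hist / len(pit))  # here, mass piles up in the middle bins
```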

Facilitating Bayesian Continual Learning by Natural Gradients and Stein Gradients

no code implementations • 24 Apr 2019 • Yu Chen, Tom Diethe, Neil Lawrence

Conventional models tend to forget the knowledge of previous tasks while learning a new task, a phenomenon known as catastrophic forgetting.

Continual Learning

Privacy-preserving Active Learning on Sensitive Data for User Intent Classification

no code implementations • 26 Mar 2019 • Oluwaseyi Feyisetan, Thomas Drake, Borja Balle, Tom Diethe

Active learning holds the promise of significantly reducing data annotation costs while maintaining reasonable model performance.

Active Learning · General Classification · +3
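The standard active-learning loop behind such savings is uncertainty sampling: repeatedly label the points the current model is least sure about. A generic sketch (nothing here is specific to the paper's privacy-preserving variant):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(2000, 5))
y_pool = (X_pool[:, 0] + 0.3 * rng.normal(size=2000) > 0).astype(int)

labelled = list(range(20))  # small seed set of annotated points
for _ in range(5):          # five annotation rounds
    clf = LogisticRegression().fit(X_pool[labelled], y_pool[labelled])
    proba = clf.predict_proba(X_pool)[:, 1]
    ranked = np.argsort(np.abs(proba - 0.5))  # most uncertain first
    chosen = set(labelled)
    new = [int(i) for i in ranked if int(i) not in chosen][:10]
    labelled.extend(new)    # "annotate" the 10 queried points

print(clf.score(X_pool, y_pool))
```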

Continual Learning in Practice

no code implementations • 12 Mar 2019 • Tom Diethe, Tom Borchert, Eno Thereska, Borja Balle, Neil Lawrence

This paper describes a reference architecture for self-maintaining systems that can learn continually, as data arrives.

AutoML · BIG-bench Machine Learning · +1

$\beta^3$-IRT: A New Item Response Model and its Applications

1 code implementation • 10 Mar 2019 • Yu Chen, Telmo Silva Filho, Ricardo B. C. Prudêncio, Tom Diethe, Peter Flach

Item Response Theory (IRT) aims to assess latent abilities of respondents based on the correctness of their answers in aptitude test items with different difficulty levels.
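As background, the classical two-parameter logistic (2PL) IRT model gives the probability of a correct response from respondent ability theta and item difficulty delta with discrimination a. A small sketch of that baseline only; $\beta^3$-IRT itself replaces it with a Beta-distributed response model, which is not reproduced here:

```python
import numpy as np

def irt_2pl(theta, delta, a=1.0):
    """Classical 2PL item response curve: P(correct | ability, difficulty)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - delta)))

# A more able respondent is more likely to answer a hard item correctly.
for theta in (-1.0, 0.0, 2.0):
    print(theta, irt_2pl(theta, delta=1.0, a=1.5))
```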

Probabilistic Sensor Fusion for Ambient Assisted Living

no code implementations • 4 Feb 2017 • Tom Diethe, Niall Twomey, Meelis Kull, Peter Flach, Ian Craddock

There is a widely-accepted need to revise current forms of health-care provision, with particular interest in sensing systems in the home.

Activity Recognition

A Note on the Kullback-Leibler Divergence for the von Mises-Fisher distribution

no code implementations • 25 Feb 2015 • Tom Diethe

We present a derivation of the Kullback-Leibler (KL) divergence (also known as relative entropy) for the von Mises-Fisher (vMF) distribution in $d$ dimensions.
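For reference, writing $C_d(\kappa)$ for the vMF normalising constant and $I_\nu$ for the modified Bessel function of the first kind, the closed form is, up to notation, the standard one for this family; it is stated here from general knowledge of the vMF distribution rather than quoted from the paper:

```latex
% vMF density on the unit sphere in R^d:
%   f(x; \mu, \kappa) = C_d(\kappa) \exp(\kappa \mu^\top x), with
%   C_d(\kappa) = \kappa^{d/2-1} / ((2\pi)^{d/2} I_{d/2-1}(\kappa)).
\mathrm{KL}\bigl(\mathrm{vMF}(\mu_1,\kappa_1) \,\|\, \mathrm{vMF}(\mu_2,\kappa_2)\bigr)
  = \log\frac{C_d(\kappa_1)}{C_d(\kappa_2)}
  + \bigl(\kappa_1 - \kappa_2\,\mu_2^{\top}\mu_1\bigr)
    \frac{I_{d/2}(\kappa_1)}{I_{d/2-1}(\kappa_1)}
```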
