Search Results for author: Bernhard Pfahringer

Found 28 papers, 13 papers with code

Adaptive XGBoost for Evolving Data Streams

1 code implementation • 15 May 2020 • Jacob Montiel, Rory Mitchell, Eibe Frank, Bernhard Pfahringer, Talel Abdessalem, Albert Bifet

The proposed method creates new members of the ensemble from mini-batches of data as new data becomes available (see the sketch below).

General Classification
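
The mini-batch idea is easy to picture in code. This is a minimal sketch, not the authors' AXGB implementation: the ensemble size, batch size, oldest-member replacement policy, and the scikit-learn-style API are illustrative assumptions, and the vote assumes binary 0/1 labels.

```python
# Minimal sketch: grow a boosted ensemble from mini-batches of a stream.
# NOT the authors' AXGB; sizes and the replacement policy are assumptions.
import numpy as np
from xgboost import XGBClassifier

class MiniBatchEnsemble:
    def __init__(self, max_members=10, batch_size=256):
        self.max_members, self.batch_size = max_members, batch_size
        self.members, self._X, self._y = [], [], []

    def partial_fit(self, x, y):
        # Buffer incoming instances until a full mini-batch is available.
        self._X.append(x)
        self._y.append(y)
        if len(self._X) >= self.batch_size:
            model = XGBClassifier(n_estimators=20, max_depth=4)
            model.fit(np.asarray(self._X), np.asarray(self._y))
            self.members.append(model)       # new member from this batch
            if len(self.members) > self.max_members:
                self.members.pop(0)          # forget the oldest member
            self._X, self._y = [], []

    def predict(self, X):
        # Majority vote over current members (binary 0/1 labels assumed).
        votes = np.stack([m.predict(X) for m in self.members])
        return np.round(votes.mean(axis=0)).astype(int)
```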

Regularisation of Neural Networks by Enforcing Lipschitz Continuity

1 code implementation • 12 Apr 2018 • Henry Gouk, Eibe Frank, Bernhard Pfahringer, Michael J. Cree

We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with respect to their inputs.
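
The enforcement can be sketched as a projection step: after each optimiser update, any weight matrix whose operator norm exceeds the chosen bound is rescaled back inside it. A minimal PyTorch sketch, assuming the spectral (2-)norm and a per-step projection; the paper also treats other norms and layer types.

```python
# Minimal sketch: project a weight matrix back inside a Lipschitz ball
# after each optimiser step. Assumes the spectral norm; other norms and
# convolutional layers are handled differently in the paper.
import torch

def project_lipschitz_(layer: torch.nn.Linear, max_norm: float = 1.0):
    with torch.no_grad():
        # Largest singular value = operator (spectral) norm of the weight.
        sigma = torch.linalg.matrix_norm(layer.weight, ord=2)
        if sigma > max_norm:
            layer.weight.mul_(max_norm / sigma)  # rescale so norm <= bound

# Usage: after optimizer.step(), call project_lipschitz_ on each Linear layer.
```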

PolyLM: Learning about Polysemy through Language Modeling

1 code implementation • EACL 2021 • Alan Ansell, Felipe Bravo-Marquez, Bernhard Pfahringer

To avoid the "meaning conflation deficiency" of word embeddings, a number of models have aimed to embed individual word senses.

Language Modelling • Word Embeddings +1

A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal

1 code implementation • 28 Sep 2022 • Yaqian Zhang, Bernhard Pfahringer, Eibe Frank, Albert Bifet, Nick Jin Sean Lim, Yunzhe Jia

Despite their strong empirical performance, rehearsal methods still suffer from a poor approximation of the loss landscape of past data with memory samples.

Continual Learning • Reinforcement Learning (RL)

Stochastic Gradient Trees

1 code implementation • 23 Jan 2019 • Henry Gouk, Bernhard Pfahringer, Eibe Frank

We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision.

Classification • General Classification +3

Semi-Supervised Learning using Siamese Networks

2 code implementations • 2 Sep 2021 • Attaullah Sahito, Eibe Frank, Bernhard Pfahringer

This work explores a new training method for semi-supervised learning, based on learning a similarity function with a Siamese network to obtain a suitable embedding.

Better Self-training for Image Classification through Self-supervision

3 code implementations • 2 Sep 2021 • Attaullah Sahito, Eibe Frank, Bernhard Pfahringer

Self-training is a simple semi-supervised learning approach: unlabelled examples that attract high-confidence predictions are labelled with those predictions and added to the training set, and this process is repeated multiple times (see the sketch below).

Classification • Image Classification
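
The loop described above is easy to state in code. A minimal sketch, assuming a scikit-learn-style classifier with fit/predict_proba and an illustrative 0.95 confidence threshold (not the paper's exact settings):

```python
# Minimal self-training loop: pseudo-label confident unlabelled examples
# and retrain. Threshold and round count are illustrative assumptions.
import numpy as np

def self_train(model, X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        conf, pseudo = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= threshold            # high-confidence predictions only
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])      # add pseudo-labelled
        y_lab = np.concatenate([y_lab, pseudo[keep]])  # examples to train set
        X_unlab = X_unlab[~keep]
    return model
```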

Transfer of Pretrained Model Weights Substantially Improves Semi-Supervised Image Classification

2 code implementations • 2 Sep 2021 • Attaullah Sahito, Eibe Frank, Bernhard Pfahringer

Deep neural networks produce state-of-the-art results when trained on large numbers of labeled examples but tend to overfit when only small numbers of labeled examples are available for training.

Metric Learning • Self-Learning +2

Seeing The Whole Patient: Using Multi-Label Medical Text Classification Techniques to Enhance Predictions of Medical Codes

2 code implementations • 29 Mar 2020 • Vithya Yogarajan, Jacob Montiel, Tony Smith, Bernhard Pfahringer

We also show that high-dimensional embeddings pre-trained on health-related data yield a significant improvement in the multi-label setting, much as they do for binary classification.

Binary Classification • General Classification +2

Improving Predictions of Tail-end Labels using Concatenated BioMed-Transformers for Long Medical Documents

1 code implementation • 3 Dec 2021 • Vithya Yogarajan, Bernhard Pfahringer, Tony Smith, Jacob Montiel

Improving the tail-end label predictions in multi-label classifications of medical text enables the potential to understand patients better and improve care.

Multi-Label Classification • Multi-Label Learning

MaxGain: Regularisation of Neural Networks by Constraining Activation Magnitudes

no code implementations • 16 Apr 2018 • Henry Gouk, Bernhard Pfahringer, Eibe Frank, Michael Cree

Effective regularisation of neural networks is essential to combat overfitting due to the large number of parameters involved.

Building Ensembles of Adaptive Nested Dichotomies with Random-Pair Selection

no code implementations • 7 Apr 2016 • Tim Leathart, Bernhard Pfahringer, Eibe Frank

A system of nested dichotomies is a method of decomposing a multi-class problem into a collection of binary problems (see the sketch below).

General Classification
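
To make the decomposition concrete, here is a minimal sketch that recursively splits the class set in two and fits one binary classifier per split. Random splits and logistic regression are illustrative assumptions; the paper's contribution is the random-pair selection heuristic for choosing the splits.

```python
# Minimal nested-dichotomy builder: recursively split the class set and
# fit one binary classifier per internal node. Random splits stand in
# for the paper's random-pair selection.
import random
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def build_dichotomy(classes, X, y, rng, base=LogisticRegression()):
    classes = list(classes)
    if len(classes) == 1:
        return classes[0]                          # leaf: a single class
    rng.shuffle(classes)
    left, right = classes[:len(classes) // 2], classes[len(classes) // 2:]
    mask = np.isin(y, classes)                     # rows of these classes only
    clf = clone(base).fit(X[mask], np.isin(y[mask], right).astype(int))
    return (clf,
            build_dichotomy(left, X, y, rng, base),
            build_dichotomy(right, X, y, rng, base))

# Usage: tree = build_dichotomy(np.unique(y), X, y, random.Random(1)).
# Class probabilities come from multiplying the binary-branch
# probabilities along each root-to-leaf path.
```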

Fast Metric Learning For Deep Neural Networks

no code implementations • 19 Nov 2015 • Henry Gouk, Bernhard Pfahringer, Michael Cree

Similarity metrics are a core component of many information retrieval and machine learning systems.

General Classification • Information Retrieval +3

Use of Ensembles of Fourier Spectra in Capturing Recurrent Concepts in Data Streams

no code implementations • 23 Apr 2015 • Sripirakas Sakthithasan, Russel Pears, Albert Bifet, Bernhard Pfahringer

In this research, we apply ensembles of Fourier-encoded spectra to capture and mine recurring concepts in a data stream environment.

General Classification

Probability Calibration Trees

no code implementations • 31 Jul 2018 • Tim Leathart, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer

Obtaining accurate and well calibrated probability estimates from classifiers is useful in many applications, for example, when minimising the expected cost of classifications.

Regression

On the Calibration of Nested Dichotomies for Large Multiclass Tasks

no code implementations • 8 Sep 2018 • Tim Leathart, Eibe Frank, Bernhard Pfahringer, Geoffrey Holmes

Nested dichotomies are used as a method of transforming a multiclass classification problem into a series of binary problems.

Binary Classification • General Classification

Ensembles of Nested Dichotomies with Multiple Subset Evaluation

no code implementations • 8 Sep 2018 • Tim Leathart, Eibe Frank, Bernhard Pfahringer, Geoffrey Holmes

A system of nested dichotomies is a method of decomposing a multi-class problem into a collection of binary problems.

A survey of automatic de-identification of longitudinal clinical narratives

no code implementations • 16 Oct 2018 • Vithya Yogarajan, Michael Mayo, Bernhard Pfahringer

The use of medical data, also known as electronic health records, in research helps to develop and advance medical science.

De-identification

Automatic end-to-end De-identification: Is high accuracy the only metric?

no code implementations • 27 Jan 2019 • Vithya Yogarajan, Bernhard Pfahringer, Michael Mayo

De-identification of electronic health records (EHR) is a vital step towards advancing health informatics research and maximising the use of available data.

De-identification

Classifier Chains: A Review and Perspectives

no code implementations • 26 Dec 2019 • Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank

This performance led to further studies of how exactly it works and how it could be improved. Over the past decade, numerous studies have explored the mechanisms of classifier chains at a theoretical level, and many improvements have been made to the training and inference procedures, such that the method remains among the state-of-the-art options for multi-label learning (see the sketch below).

Multi-Label Classification • Multi-Label Learning
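
For readers unfamiliar with the method under review, the standard classifier-chain construction is short: each label gets a binary classifier that sees the original features plus all earlier labels in the chain, with true labels used at training time and predicted labels at test time. A generic sketch of that textbook formulation (not code from the paper; the base learner is an illustrative choice):

```python
# Textbook classifier chain: one binary classifier per label, each link
# augmented with the labels earlier in the chain. Generic sketch only.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def fit_chain(X, Y, base=LogisticRegression()):
    chain = []
    for j in range(Y.shape[1]):
        X_aug = np.hstack([X, Y[:, :j]])  # true earlier labels at train time
        chain.append(clone(base).fit(X_aug, Y[:, j]))
    return chain

def predict_chain(chain, X):
    preds = np.empty((X.shape[0], len(chain)), dtype=int)
    for j, clf in enumerate(chain):
        # Predicted earlier labels stand in for the true ones at test time.
        preds[:, j] = clf.predict(np.hstack([X, preds[:, :j]]))
    return preds
```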

Machine Learning (In) Security: A Stream of Problems

no code implementations • 30 Oct 2020 • Fabrício Ceschin, Marcus Botacin, Albert Bifet, Bernhard Pfahringer, Luiz S. Oliveira, Heitor Murilo Gomes, André Grégio

Machine Learning (ML) has been widely applied to cybersecurity and is considered state-of-the-art for solving many of the open issues in that field.

BIG-bench Machine Learning

Closed-loop Control for Online Continual Learning

no code implementations • 29 Sep 2021 • Yaqian Zhang, Eibe Frank, Bernhard Pfahringer, Albert Bifet, Nick Jin Sean Lim, Alvin Jia

To address the non-stationarity in the continual learning environment, we employ a Q function with task-specific and task-shared components to support fast adaptation.

Continual Learning

Improving the performance of bagging ensembles for data streams through mini-batching

no code implementations • 18 Dec 2021 • Guilherme Cassales, Heitor Gomes, Albert Bifet, Bernhard Pfahringer, Hermes Senger

This paper proposes a mini-batching strategy that can improve memory-access locality and the performance of several ensemble algorithms for stream mining in multi-core environments (see the sketch below).

Ensemble Learning • Incremental Learning
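
The locality argument is easiest to see by contrasting the two processing orders. A minimal sketch with an illustrative learn_one API (the paper targets Java/MOA ensembles; the names here are assumptions):

```python
# Why mini-batching helps locality: instance-major order touches every
# member's state per instance; member-major order over a buffered batch
# keeps one member's state hot in cache. learn_one is an illustrative API.
def train_instance_major(ensemble, stream):
    for x, y in stream:
        for member in ensemble:          # all members touched per instance
            member.learn_one(x, y)

def train_mini_batched(ensemble, stream, batch_size=64):
    batch = []
    for x, y in stream:
        batch.append((x, y))
        if len(batch) == batch_size:
            for member in ensemble:      # one member at a time ...
                for x_i, y_i in batch:   # ... over the whole mini-batch
                    member.learn_one(x_i, y_i)
            batch.clear()
```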

Feature Extractor Stacking for Cross-domain Few-shot Learning

1 code implementation • 12 May 2022 • Hongyu Wang, Eibe Frank, Bernhard Pfahringer, Michael Mayo, Geoffrey Holmes

Recently published CDFSL methods generally construct a universal model that combines knowledge of multiple source domains into one feature extractor.

Cross-Domain Few-Shot Learning • Image Classification

Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class Incremental Learning

1 code implementation • 30 Oct 2023 • Anton Lee, Yaqian Zhang, Heitor Murilo Gomes, Albert Bifet, Bernhard Pfahringer

A common solution to both problems is "replay", where a limited buffer of past instances is utilized to learn cross-task knowledge and mitigate catastrophic interference (see the sketch below).

Anomaly Detection • Class Incremental Learning +1
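
The replay baseline that SurpriseNet aims to avoid can be sketched in a few lines; reservoir sampling keeps the bounded buffer an approximately uniform sample of the stream. Capacity and sampling scheme are illustrative assumptions.

```python
# Minimal replay buffer with reservoir sampling: a bounded, roughly
# uniform sample of past instances to mix into new-task batches.
import random

class ReplayBuffer:
    def __init__(self, capacity=1000, seed=0):
        self.capacity, self.seen = capacity, 0
        self.buffer, self.rng = [], random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)   # reservoir sampling step
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw up to k stored examples to interleave with the current batch.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```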
