no code implementations • 7 Apr 2016 • Tim Leathart, Bernhard Pfahringer, Eibe Frank
A system of nested dichotomies is a method of decomposing a multi-class problem into a collection of binary problems.
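A minimal sketch of the idea (illustrative only, not the authors' implementation): the class set is split recursively into two subsets, a binary classifier is trained at each internal node, and the probability of a class is the product of the branch probabilities along its root-to-leaf path. The balanced split and the choice of base learner here are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class NestedDichotomy:
    def __init__(self, classes):
        self.classes = list(classes)
        if len(self.classes) > 1:
            mid = len(self.classes) // 2          # arbitrary balanced split
            self.left = NestedDichotomy(self.classes[:mid])
            self.right = NestedDichotomy(self.classes[mid:])
            self.clf = LogisticRegression()

    def fit(self, X, y):
        if len(self.classes) == 1:                # leaf: nothing to train
            return self
        # binary problem at this node; assumes every class occurs in y
        in_left = np.isin(y, self.left.classes)
        self.clf.fit(X, in_left.astype(int))
        self.left.fit(X[in_left], y[in_left])
        self.right.fit(X[~in_left], y[~in_left])
        return self

    def prob(self, x, c):
        """P(class = c | x) as the product of branch probabilities."""
        if self.classes == [c]:
            return 1.0
        p_left = self.clf.predict_proba(x.reshape(1, -1))[0, 1]
        if c in self.left.classes:
            return p_left * self.left.prob(x, c)
        return (1.0 - p_left) * self.right.prob(x, c)
```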
no code implementations • 16 Jul 2017 • Michael Mayo, Eibe Frank
To investigate this question we use population-based optimisation algorithms to generate artificial surrogate training data for naive Bayes for regression.
1 code implementation • 12 Apr 2018 • Henry Gouk, Eibe Frank, Bernhard Pfahringer, Michael J. Cree
We investigate the effect of explicitly enforcing the Lipschitz continuity of neural networks with respect to their inputs.
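One common way to impose such a constraint, sketched here as an assumption rather than the paper's exact procedure, is to project each weight matrix back onto a spectral-norm ball after every gradient step:

```python
import torch

def project_spectral_norm(module, k=1.0):
    """Rescale module.weight in place so its largest singular value <= k.

    For convolutions, flattening the kernel this way is only a common proxy
    for the true operator norm of the convolution.
    """
    with torch.no_grad():
        W = module.weight
        sigma = torch.linalg.matrix_norm(W.flatten(1), ord=2)  # largest singular value
        if sigma > k:
            W.mul_(k / sigma)

# usage after each optimizer.step():
#   for layer in model.modules():
#       if isinstance(layer, torch.nn.Linear):
#           project_spectral_norm(layer, k=1.0)
```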
no code implementations • 16 Apr 2018 • Henry Gouk, Bernhard Pfahringer, Eibe Frank, Michael Cree
Effective regularisation of neural networks is essential to combat overfitting due to the large number of parameters involved.
1 code implementation • 29 Jun 2018 • Rory Mitchell, Andrey Adinets, Thejaswi Rao, Eibe Frank
We describe the multi-GPU gradient boosting algorithm implemented in the XGBoost library (https://github.com/dmlc/xgboost).
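For illustration, here is how a user invokes XGBoost's GPU-accelerated histogram tree method that this line of work contributed to (single-GPU call shown; the multi-GPU setup is omitted, the data is synthetic, and parameter names may differ across XGBoost versions):

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(10000, 20)
y = np.random.randint(0, 2, size=10000)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "tree_method": "gpu_hist",   # GPU histogram algorithm; newer releases
}                                # use tree_method="hist" with device="cuda"
booster = xgb.train(params, dtrain, num_boost_round=50)
```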
no code implementations • 31 Jul 2018 • Tim Leathart, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer
Obtaining accurate and well-calibrated probability estimates from classifiers is useful in many applications, for example, when minimising the expected cost of classifications.
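A toy worked example of the cost-minimisation point (numbers invented for illustration): with calibrated probabilities and an asymmetric cost matrix, the cost-optimal prediction can differ from the most probable class, which is why calibration matters.

```python
import numpy as np

p = np.array([0.7, 0.3])                 # calibrated estimates P(class = j | x)
C = np.array([[0.0, 10.0],               # predicting 0 when truth is 1 costs 10
              [1.0,  0.0]])              # predicting 1 when truth is 0 costs 1

expected_cost = C @ p                    # expected cost of each action
print(expected_cost)                     # [3.0, 0.7]
print(int(np.argmin(expected_cost)))     # 1: predict class 1, despite class 0
                                         # being the more probable class
```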
no code implementations • 8 Sep 2018 • Tim Leathart, Eibe Frank, Bernhard Pfahringer, Geoffrey Holmes
Nested dichotomies are used as a method of transforming a multiclass classification problem into a series of binary problems.
no code implementations • 8 Sep 2018 • Tim Leathart, Eibe Frank, Bernhard Pfahringer, Geoffrey Holmes
A system of nested dichotomies is a method of decomposing a multi-class problem into a collection of binary problems.
1 code implementation • 23 Jan 2019 • Henry Gouk, Bernhard Pfahringer, Eibe Frank
We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision.
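As a hedged illustration of the general idea of fitting a tree from gradient signal, and not the algorithm from the paper, here is a differentiable "soft" decision stump whose sigmoid split lets the whole model be trained with SGD:

```python
import torch

class SoftStump(torch.nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.split = torch.nn.Linear(n_features, 1)   # soft split: sigmoid gate
        self.leaf_left = torch.nn.Parameter(torch.zeros(1))
        self.leaf_right = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):
        g = torch.sigmoid(self.split(x))              # probability of going left
        return g * self.leaf_left + (1 - g) * self.leaf_right

model = SoftStump(n_features=5)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
X, y = torch.randn(256, 5), torch.randn(256, 1)       # synthetic stand-in data
for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()                                   # stochastic gradient signal
    opt.step()
```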
no code implementations • 26 Dec 2019 • Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank
This performance led to further studies of how exactly it works and how it could be improved. In the past decade, numerous studies have explored the mechanisms of classifier chains on a theoretical level, and many improvements have been made to the training and inference procedures, such that this method remains among the state-of-the-art options for multi-label learning.
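Classifier chains are available off the shelf; a minimal scikit-learn example (dataset and base learner chosen arbitrarily), in which each binary classifier in the chain receives the original features plus the predictions of the classifiers earlier in the chain:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=500, n_labels=3, random_state=0)
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0)
chain.fit(X, Y)
print(chain.predict(X[:5]))   # one row of binary label indicators per example
```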
1 code implementation • 6 Apr 2020 • Rhys Compton, Eibe Frank, Panos Patros, Abigail Koay
code2vec is a recently released embedding approach that uses the proxy task of method name prediction to map Java methods to feature vectors.
1 code implementation • 24 Apr 2020 • Bill Cassidy, Neil D. Reeves, Pappachan Joseph, David Gillespie, Claire O'Shea, Satyan Rajbhandari, Arun G. Maiya, Eibe Frank, Andrew Boulton, David Armstrong, Bijan Najafi, Justina Wu, Moi Hoon Yap
Every 20 seconds, a limb is amputated somewhere in the world due to diabetes.
1 code implementation • 15 May 2020 • Jacob Montiel, Rory Mitchell, Eibe Frank, Bernhard Pfahringer, Talel Abdessalem, Albert Bifet
The proposed method creates new members of the ensemble from mini-batches of data as new data becomes available.
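A simplified sketch of this batch-incremental pattern (the paper's actual method, gradient boosting adapted to streams, is more involved): each arriving mini-batch trains one new member of a bounded ensemble, and predictions aggregate the members.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class MiniBatchEnsemble:
    def __init__(self, max_members=10):
        self.members, self.max_members = [], max_members

    def partial_fit(self, X_batch, y_batch):
        m = DecisionTreeClassifier(max_depth=3).fit(X_batch, y_batch)
        self.members.append(m)                       # one member per mini-batch
        if len(self.members) > self.max_members:
            self.members.pop(0)                      # drop the oldest member

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.members])
        # majority vote, assuming binary 0/1 labels
        return np.round(votes.mean(axis=0)).astype(int)
```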
no code implementations • 7 Oct 2020 • Moi Hoon Yap, Ryo Hachiuma, Azadeh Alavi, Raphael Brungel, Bill Cassidy, Manu Goyal, Hongtao Zhu, Johannes Ruckert, Moshe Olshansky, Xiao Huang, Hideo Saito, Saeed Hassanpour, Christoph M. Friedrich, David Ascher, Anping Song, Hiroki Kajita, David Gillespie, Neil D. Reeves, Joseph Pappachan, Claire O'Shea, Eibe Frank
DFUC2020 provided participants with a comprehensive dataset consisting of 2,000 images for training and 2,000 images for testing.
4 code implementations • 27 Oct 2020 • Rory Mitchell, Eibe Frank, Geoffrey Holmes
SHAP (SHapley Additive exPlanation) values provide a game theoretic interpretation of the predictions of machine learning models based on Shapley values.
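Typical usage of SHAP values for a tree ensemble via the shap package, which is the kind of computation the paper's GPU algorithm accelerates (the model and data below are placeholders):

```python
import numpy as np
import shap
import xgboost as xgb

X = np.random.rand(1000, 10)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one additive attribution per
print(shap_values.shape)                 # example and feature
```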
no code implementations • 25 Apr 2021 • Rory Mitchell, Joshua Cooper, Eibe Frank, Geoffrey Holmes
Game-theoretic attribution techniques based on Shapley values are used to interpret black-box machine learning models, but their exact calculation is generally NP-hard, requiring approximation methods for non-trivial models.
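The baseline that such sampling techniques improve on is the classical Monte Carlo permutation estimator, sketched below with a toy value function (`value_fn` is a stand-in for an actual model evaluation): each feature's Shapley value is its marginal contribution averaged over random feature orderings.

```python
import numpy as np

def shapley_mc(value_fn, n_features, n_samples=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        perm = rng.permutation(n_features)
        coalition = set()
        prev = value_fn(coalition)
        for f in perm:
            coalition.add(f)
            cur = value_fn(coalition)
            phi[f] += cur - prev             # marginal contribution of f
            prev = cur
    return phi / n_samples

# toy value function: the value of a coalition is the sum of its indices
print(shapley_mc(lambda s: sum(s), n_features=4))  # ~[0, 1, 2, 3]
```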
2 code implementations • 2 Sep 2021 • Attaullah Sahito, Eibe Frank, Bernhard Pfahringer
Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples but tend to overfit when only small numbers of labeled examples are available for training.
3 code implementations • 2 Sep 2021 • Attaullah Sahito, Eibe Frank, Bernhard Pfahringer
Self-training is a simple semi-supervised learning approach: Unlabelled examples that attract high-confidence predictions are labelled with their predictions and added to the training set, with this process being repeated multiple times.
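The loop itself fits in a few lines; in this sketch (the threshold, round count, base learner, and synthetic data are arbitrary choices), high-confidence unlabelled examples are pseudo-labelled, added to the training set, and the model is refit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(clf, X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold          # high-confidence only
        if not keep.any():
            break
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[keep]])      # grow the labelled set
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~keep]
    return clf

X = np.random.randn(300, 4)
y = (X[:, 0] > 0).astype(int)
self_train(LogisticRegression(), X[:50], y[:50], X[50:])
```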
2 code implementations • 2 Sep 2021 • Attaullah Sahito, Eibe Frank, Bernhard Pfahringer
This work explores a new training method for semi-supervised learning that is based on similarity function learning using a Siamese network to obtain a suitable embedding.
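A minimal sketch of this kind of similarity learning, using a shared embedding network and a triplet loss (the architecture, the loss, and the random tensors standing in for image batches are all illustrative assumptions, not the paper's setup):

```python
import torch

embed = torch.nn.Sequential(             # shared embedding network
    torch.nn.Linear(784, 128), torch.nn.ReLU(), torch.nn.Linear(128, 32)
)
loss_fn = torch.nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

anchor = torch.randn(64, 784)            # stand-ins for real image batches
positive = torch.randn(64, 784)          # same class as the anchors
negative = torch.randn(64, 784)          # different class from the anchors

opt.zero_grad()
# pull anchor-positive pairs together, push anchor-negative pairs apart
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()
opt.step()
```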
no code implementations • 29 Sep 2021 • Yaqian Zhang, Eibe Frank, Bernhard Pfahringer, Albert Bifet, Nick Jin Sean Lim, Alvin Jia
To address the non-stationarity in the continual learning environment, we employ a Q function with task-specific and task-shared components to support fast adaptation.
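One simple way to realise such a decomposition, shown purely as an assumption about the general pattern rather than the paper's architecture, is a shared trunk with one Q-value head per task:

```python
import torch

class MultiHeadQ(torch.nn.Module):
    def __init__(self, state_dim, n_actions, n_tasks):
        super().__init__()
        self.shared = torch.nn.Sequential(        # task-shared component
            torch.nn.Linear(state_dim, 64), torch.nn.ReLU()
        )
        self.heads = torch.nn.ModuleList(         # task-specific components
            torch.nn.Linear(64, n_actions) for _ in range(n_tasks)
        )

    def forward(self, state, task_id):
        return self.heads[task_id](self.shared(state))

q = MultiHeadQ(state_dim=8, n_actions=4, n_tasks=3)
print(q(torch.randn(2, 8), task_id=1).shape)      # torch.Size([2, 4])
```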
1 code implementation • 7 Oct 2021 • Zac Pullar-Strecker, Katharina Dost, Eibe Frank, Jörg Wicker
This work enables practitioners to employ active learning by providing actionable recommendations for which stopping criteria are best for a given real-world scenario.
1 code implementation • 12 May 2022 • Hongyu Wang, Eibe Frank, Bernhard Pfahringer, Michael Mayo, Geoffrey Holmes
Recently published cross-domain few-shot learning (CDFSL) methods generally construct a universal model that combines knowledge of multiple source domains into one feature extractor.
1 code implementation • 28 Sep 2022 • Yaqian Zhang, Bernhard Pfahringer, Eibe Frank, Albert Bifet, Nick Jin Sean Lim, Yunzhe Jia
Despite their strong empirical performance, rehearsal methods still suffer from a poor approximation of the loss landscape of past data when only memory samples are available.