no code implementations • 18 Jun 2014 • Anthony Bagnall, Jason Lines
Specifically, we compare 1-NN classifiers with Euclidean and DTW distance to standard classifiers; examine whether the performance of 1-NN Euclidean approaches that of 1-NN DTW as the number of cases increases; assess whether there is any benefit in setting $k$ for $k$-NN through cross validation; ask whether it is worth setting the warping window for DTW through cross validation; and, finally, ask whether it is better to use a window or a weighting for DTW.
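The comparisons above hinge on the DTW distance used inside a 1-NN classifier. A minimal sketch of both, with full-window DTW and illustrative data (names and values are not from the paper):

```python
def dtw(a, b):
    """Full-window dynamic time warping distance with squared pointwise cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest warping path into cell (i, j)
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def nn_classify(query, train_X, train_y, dist=dtw):
    """1-NN: return the label of the closest training series under `dist`."""
    best = min(range(len(train_X)), key=lambda i: dist(query, train_X[i]))
    return train_y[best]
```

Setting the warping window via cross validation, as studied in the paper, amounts to restricting `|i - j|` in the inner loop and selecting the restriction that minimises training error.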
no code implementations • 18 Jun 2014 • Anthony Bagnall, Luke Davis
Our approach to automated bone age assessment is to modularise the algorithm into the following three stages: segment and verify hand outline; segment and verify bones; use the bone outlines to construct models of age.
no code implementations • 14 Jul 2014 • Anthony Bagnall, Jon Hills, Jason Lines
Two are greedy algorithms based on pairwise comparison, and the third uses a heuristic measure of set quality to find the motif set directly.
no code implementations • 17 Sep 2014 • Anthony Bagnall, Reda Younsi
We propose two ensemble methods tailored to the RSC classifier: $\alpha \beta$RSE, an ensemble based on instance resampling, and $\alpha$RSSE, a subspace ensemble.
no code implementations • 4 Feb 2016 • Anthony Bagnall, Aaron Bostrom, James Large, Jason Lines
These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive.
no code implementations • 20 Mar 2017 • Anthony Bagnall, Gavin C. Cawley
We demonstrate that, for a range of state-of-the-art machine learning algorithms, the differences in generalisation performance obtained using default parameter settings and using parameters tuned via cross-validation can be similar in magnitude to the differences in performance observed between state-of-the-art and uncompetitive learning systems.
no code implementations • 28 Mar 2017 • Anthony Bagnall, Aaron Bostrom, James Large, Jason Lines
We describe what results we expected from each class of algorithm and data representation, then observe whether these prior beliefs are supported by the experimental evidence.
no code implementations • 25 Oct 2017 • James Large, Jason Lines, Anthony Bagnall
We show that the Heterogeneous Ensembles of Standard Classification Algorithms (HESCA), which weights its component classifiers using error estimates formed on the train data, is significantly better (in terms of error, balanced error, negative log likelihood and area under the ROC curve) than its individual components, than picking the single component that is best on the train data, and than a support vector machine tuned over 1089 different parameter configurations.
no code implementations • 18 Dec 2017 • Aaron Bostrom, Anthony Bagnall
Shapelets are phase independent subsequences designed for time series classification.
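The core primitive behind shapelets is a phase-independent subsequence distance: the distance from a candidate shapelet to a series is the minimum over all alignments. A minimal sketch (names are illustrative):

```python
def subsequence_dist(shapelet, series):
    """Minimum squared Euclidean distance from `shapelet` to any
    equal-length window of `series` (phase independence: the best
    alignment is taken, wherever it occurs)."""
    L = len(shapelet)
    best = float("inf")
    for start in range(len(series) - L + 1):
        d = sum((shapelet[k] - series[start + k]) ** 2 for k in range(L))
        best = min(best, d)
    return best
```

Shapelet-based classifiers use such distances, to a set of discovered shapelets, as features for a downstream classifier.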
no code implementations • 18 Sep 2018 • James Large, Anthony Bagnall, Simon Malinowski, Romain Tavenard
We find that whilst ensembling is a key component for both algorithms, the effect of the other components is mixed and more complex.
2 code implementations • 17 Oct 2018 • Hoang Anh Dau, Anthony Bagnall, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, Eamonn Keogh
This paper introduces, and will focus on, the new data expansion from 85 to 128 data sets.
1 code implementation • 31 Oct 2018 • Anthony Bagnall, Hoang Anh Dau, Jason Lines, Michael Flynn, James Large, Aaron Bostrom, Paul Southam, Eamonn Keogh
In 2002, the UCR time series classification archive was first released with sixteen datasets.
no code implementations • 1 Nov 2018 • James Large, Paul Southam, Anthony Bagnall
tl;dr: no, it cannot, at least not on average on the standard archive problems.
1 code implementation • 26 Jul 2019 • Matthew Middlehurst, William Vickers, Anthony Bagnall
Dictionary based classifiers are a family of algorithms for time series classification (TSC) that focus on capturing the frequency of pattern occurrences in a time series.
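The dictionary idea can be sketched in a few lines: discretise the series into symbols, slide a window to extract words, and count them. The fixed-threshold discretisation below is a stand-in for the SAX-style breakpoints real dictionary classifiers use; all names are illustrative:

```python
from collections import Counter

def discretize(x, thresholds=(-0.5, 0.5)):
    """Map each value to a symbol via fixed thresholds
    (an illustrative stand-in for SAX breakpoints)."""
    out = []
    for v in x:
        if v < thresholds[0]:
            out.append("a")
        elif v < thresholds[1]:
            out.append("b")
        else:
            out.append("c")
    return "".join(out)

def bag_of_words(series, window=4):
    """Count the symbolic words produced by sliding a window over the series."""
    sym = discretize(series)
    return Counter(sym[i:i + window] for i in range(len(sym) - window + 1))
```

Classification then reduces to comparing these word-count histograms, e.g. with a histogram distance inside a nearest-neighbour scheme.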
no code implementations • 12 Sep 2019 • Anthony Bagnall, Franz Király, Markus Löning, Matthew Middlehurst, George Oastler
We demonstrate correctness through equivalence of accuracy on a range of standard test problems and compare the build time of the different implementations.
no code implementations • 17 Sep 2019 • Markus Löning, Anthony Bagnall, Sajaysurya Ganesh, Viktor Kazakov, Jason Lines, Franz J. Király
We present sktime -- a new scikit-learn compatible Python library with a unified interface for machine learning with time series.
no code implementations • 27 Nov 2019 • Anthony Bagnall, James Large, Matthew Middlehurst
We call this type of approach to TSC dictionary based classification.
no code implementations • 13 Apr 2020 • Anthony Bagnall, Michael Flynn, James Large, Jason Lines, Matthew Middlehurst
The Hierarchical Vote Collective of Transformation-based Ensembles (HIVE-COTE) is a heterogeneous meta ensemble for time series classification.
no code implementations • 25 Apr 2020 • Anthony Bagnall, Paul Southam, James Large, Richard Harvey
Given the massive volume of luggage that needs to be screened for this threat, the best way to automate the detection is to first filter whether a bag contains an electric device or not, and if it does, to identify the number of devices and their location.
no code implementations • 26 Jul 2020 • Alejandro Pasos Ruiz, Michael Flynn, Anthony Bagnall
The simplest approach to MTSC is to ensemble univariate classifiers over the multivariate dimensions.
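That simplest approach, ensembling univariate classifiers over the dimensions, can be sketched as: fit one classifier per channel, then majority-vote. The 1-NN Euclidean base classifier here is an illustrative stand-in for any univariate TSC algorithm:

```python
from collections import Counter

def euclid(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def nn1(train_X, train_y, query):
    """1-NN Euclidean on a single dimension (illustrative base classifier)."""
    best = min(range(len(train_X)), key=lambda i: euclid(query, train_X[i]))
    return train_y[best]

def ensemble_predict(train_X, train_y, query):
    """train_X: list of cases, each a list of dimensions (each a list of values).
    Predict each dimension independently and majority-vote the labels."""
    votes = Counter()
    for d in range(len(query)):
        channel_train = [case[d] for case in train_X]
        votes[nn1(channel_train, train_y, query[d])] += 1
    return votes.most_common(1)[0][0]
```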
no code implementations • 20 Aug 2020 • Matthew Middlehurst, James Large, Anthony Bagnall
We propose combining TSF and catch22 to form a new classifier, the Canonical Interval Forest (CIF).
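The interval idea underlying TSF and CIF can be sketched as: draw random intervals and summarise each with a handful of statistics, concatenated into a feature vector for a tree ensemble. The three simple statistics below are an illustrative stand-in for the catch22 feature set:

```python
import random
import statistics

def interval_features(series, n_intervals=3, rng=None):
    """Extract random intervals and summarise each with simple statistics
    (mean, standard deviation, range) as a stand-in for catch22 features."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    feats = []
    n = len(series)
    for _ in range(n_intervals):
        start = rng.randrange(0, n - 2)
        end = rng.randrange(start + 2, n + 1)  # interval of length >= 2
        window = series[start:end]
        feats.extend([statistics.mean(window),
                      statistics.stdev(window),
                      max(window) - min(window)])
    return feats
```

In CIF itself, each tree in the forest gets its own random intervals and a random subset of the catch22 features; here a single feature vector illustrates the transform.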
1 code implementation • 15 Apr 2021 • Matthew Middlehurst, James Large, Michael Flynn, Jason Lines, Aaron Bostrom, Anthony Bagnall
Since it was first proposed in 2016, the algorithm has remained state of the art for accuracy on the UCR time series classification archive.
no code implementations • 9 May 2021 • Matthew Middlehurst, James Large, Gavin Cawley, Anthony Bagnall
We demonstrate that the temporal dictionary ensemble (TDE) is more accurate than other dictionary based approaches.
no code implementations • 28 Jan 2022 • Matthew Middlehurst, Anthony Bagnall
There have recently been significant advances in the accuracy of algorithms proposed for time series classification (TSC).
no code implementations • 30 May 2022 • Chris Holder, Matthew Middlehurst, Anthony Bagnall
Our conclusion is to recommend MSM with k-medoids as the benchmark algorithm for clustering time series with elastic distance measures.
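The recommended combination pairs the Move-Split-Merge (MSM) elastic distance with medoid-based clustering. A minimal sketch of the MSM dynamic program (following the Stefan et al. recurrence, with edit cost `c`) and of medoid selection; the example data are illustrative:

```python
def msm(x, y, c=1.0):
    """Move-Split-Merge distance: a full dynamic program where off-diagonal
    moves pay a split/merge cost."""
    def cost(new, a, b):
        # a split/merge costs c if `new` lies between its neighbours,
        # else c plus the distance to the nearer neighbour
        if a <= new <= b or a >= new >= b:
            return c
        return c + min(abs(new - a), abs(new - b))

    n, m = len(x), len(y)
    D = [[0.0] * m for _ in range(n)]
    D[0][0] = abs(x[0] - y[0])
    for i in range(1, n):
        D[i][0] = D[i - 1][0] + cost(x[i], x[i - 1], y[0])
    for j in range(1, m):
        D[0][j] = D[0][j - 1] + cost(y[j], x[0], y[j - 1])
    for i in range(1, n):
        for j in range(1, m):
            D[i][j] = min(D[i - 1][j - 1] + abs(x[i] - y[j]),   # match (move)
                          D[i - 1][j] + cost(x[i], x[i - 1], y[j]),
                          D[i][j - 1] + cost(y[j], x[i], y[j - 1]))
    return D[n - 1][m - 1]

def medoid(collection, dist=msm):
    """The medoid minimises the total distance to all series in the collection;
    k-medoids iterates this per cluster."""
    return min(collection, key=lambda s: sum(dist(s, t) for t in collection))
```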
1 code implementation • 25 Apr 2023 • Matthew Middlehurst, Patrick Schäfer, Anthony Bagnall
We introduce 30 classification datasets either recently donated to the archive or reformatted to the TSC format, and use these to further evaluate the best performing algorithm from each category.
no code implementations • 2 May 2023 • David Guijo-Rubio, Matthew Middlehurst, Guilherme Arcencio, Diego Furtado Silva, Anthony Bagnall
FreshPRINCE is a pipeline estimator consisting of a transform into a wide range of summary features followed by a rotation forest regressor.
no code implementations • 16 Jun 2023 • Rafael Ayllón-Gavilán, David Guijo-Rubio, Pedro Antonio Gutiérrez, Anthony Bagnall, César Hervás-Martínez
Hence, this paper presents a first benchmarking of TSOC methodologies, exploiting the ordering of the target labels to boost performance over the current TSC state of the art.
1 code implementation • Advanced Analytics and Learning on Temporal Data 2023 • Arik Ermshaus, Patrick Schäfer, Anthony Bagnall, Thomas Guyet, Georgiana Ifrim, Vincent Lemaire, Ulf Leser, Colin Leverger, Simon Malinowski
Despite its importance, existing methods demonstrate limited efficacy on real-world multivariate time series data.