1 code implementation • 28 Jan 2024 • Angus Dempster, Geoffrey I. Webb, Daniel F. Schmidt
Logistic regression is a ubiquitous method for probabilistic classification.
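As a reminder of the baseline the paper builds on, here is a minimal sketch of probabilistic classification with logistic regression, fit by plain gradient descent on the negative log-likelihood. This is a generic illustration in NumPy, not the paper's method; the toy data and hyperparameters are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=500):
    # gradient descent on the negative log-likelihood
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)          # predicted class-1 probabilities
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# toy 1-D data: class 1 for larger x
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w, b = fit_logistic(X, y)
probs = sigmoid(X @ w + b)              # probabilistic outputs, not just labels
```

The probabilistic outputs (rather than hard labels) are what make logistic regression attractive wherever calibrated class probabilities matter.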
1 code implementation • 2 Aug 2023 • Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb
We show that it is possible to achieve the same accuracy, on average, as the most accurate existing interval methods for time series classification on a standard set of benchmark datasets using a single type of feature (quantiles), fixed intervals, and an 'off the shelf' classifier.
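The core idea — quantiles computed over fixed intervals — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation; the interval count and quantile choices here are placeholders.

```python
import numpy as np

def quantile_features(ts, num_intervals=4, quantiles=(0.25, 0.5, 0.75)):
    # split the series into fixed, equal(-ish) length intervals and
    # use the quantiles of each interval as features
    parts = np.array_split(np.asarray(ts, dtype=float), num_intervals)
    return np.concatenate([np.quantile(p, quantiles) for p in parts])

feats = quantile_features(np.arange(16))  # 4 intervals x 3 quantiles = 12 features
```

The resulting feature vector would then be passed to any 'off the shelf' classifier, with no learned or randomised intervals involved.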
2 code implementations • 19 May 2023 • Ali Ismail-Fawaz, Angus Dempster, Chang Wei Tan, Matthieu Herrmann, Lynn Miller, Daniel F. Schmidt, Stefano Berretti, Jonathan Weber, Maxime Devanne, Germain Forestier, Geoffrey I. Webb
Measuring progress using benchmark evaluations is ubiquitous in computer science and machine learning.
1 code implementation • 25 Mar 2022 • Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb
We present HYDRA, a simple, fast, and accurate dictionary method for time series classification using competing convolutional kernels, combining key aspects of both ROCKET and conventional dictionary methods.
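One way to picture "competing convolutional kernels" is dictionary-style counting: convolve the series with a group of kernels and, at each time step, record which kernel responds most strongly, producing a histogram of wins. The sketch below illustrates that intuition only; it is not the HYDRA implementation, and the kernel sizes and counts are arbitrary.

```python
import numpy as np

def competing_kernel_counts(ts, kernels):
    # convolve the series with each kernel in the group, then at each
    # time step count a "win" for the kernel with the strongest response
    responses = np.stack([np.convolve(ts, k, mode="valid") for k in kernels])
    winners = responses.argmax(axis=0)
    return np.bincount(winners, minlength=len(kernels))  # histogram of wins

rng = np.random.default_rng(0)
kernels = [rng.standard_normal(9) for _ in range(8)]  # illustrative sizes
counts = competing_kernel_counts(rng.standard_normal(100), kernels)
```

The win counts play the role of a dictionary method's symbol histogram, while the kernels themselves are convolutional as in ROCKET.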
1 code implementation • 31 Jan 2021 • Chang Wei Tan, Angus Dempster, Christoph Bergmeir, Geoffrey I. Webb
We propose MultiRocket, a fast time series classification (TSC) algorithm that achieves state-of-the-art performance in a tiny fraction of the time required by, and without the complex ensembling structure of, many state-of-the-art methods.
2 code implementations • 16 Dec 2020 • Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb
ROCKET achieves state-of-the-art accuracy with a fraction of the computational expense of most existing methods by transforming input time series using random convolutional kernels, and using the transformed features to train a linear classifier.
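The ROCKET-style pipeline described here — random convolutional kernels, simple pooling, then a linear classifier — can be sketched as follows. This is a simplified illustration (it keeps the max and proportion-of-positive-values pooling, but omits details such as dilation, bias, and padding), not the published implementation.

```python
import numpy as np

def rocket_transform(ts, kernels):
    # convolve with each random kernel and pool each response into two
    # features: the maximum value and the proportion of positive values (PPV)
    feats = []
    for k in kernels:
        conv = np.convolve(ts, k, mode="valid")
        feats.extend([conv.max(), (conv > 0).mean()])
    return np.array(feats)

rng = np.random.default_rng(42)
# random kernels of random (odd) lengths, weights from a standard normal
kernels = [rng.standard_normal(rng.choice([7, 9, 11])) for _ in range(100)]
features = rocket_transform(rng.standard_normal(150), kernels)  # 200 features
```

The transformed features would then be used to train a linear classifier (e.g. ridge regression or logistic regression); only that final linear stage is learned, which is where the computational savings come from.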
6 code implementations • 29 Oct 2019 • Angus Dempster, François Petitjean, Geoffrey I. Webb
Most methods for time series classification that attain state-of-the-art accuracy have high computational complexity, requiring significant training time even for smaller datasets, and are intractable for larger datasets.