Search Results for author: Daniel F. Schmidt

Found 14 papers, 9 papers with code

GRASP: Grouped Regression with Adaptive Shrinkage Priors

1 code implementation22 Jun 2025 Shu Yu Tew, Daniel F. Schmidt, Mario Boley

Extending the non-tail-adaptive grouped half-Cauchy hierarchy of Xu et al., GRASP assigns the NBP prior to both the local and the group shrinkage parameters, allowing adaptive sparsity both within and across groups.
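For orientation, a grouped global-local hierarchy with beta-prime (NBP-style) priors on both the local and group scales can be written roughly as below; this parameterisation is a generic sketch, and GRASP's exact hierarchy and hyperparameters may differ.

% Generic grouped global-local shrinkage hierarchy with NBP-style
% (beta-prime) priors on both scales; illustrative, not necessarily
% GRASP's exact form.
\begin{align*}
  \beta_{gj} \mid \lambda_{gj}, \delta_g, \tau &\sim \mathcal{N}\!\left(0,\ \tau^2 \,\delta_g^2\, \lambda_{gj}^2\right), \\
  \lambda_{gj}^2 &\sim \mathrm{BetaPrime}(a_{\lambda}, b_{\lambda})
    \quad \text{(local: sparsity within group } g\text{)}, \\
  \delta_g^2 &\sim \mathrm{BetaPrime}(a_{\delta}, b_{\delta})
    \quad \text{(group: sparsity across groups)}.
\end{align*}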

regression

Efficient Parameter Estimation for Bayesian Network Classifiers using Hierarchical Linear Smoothing

1 code implementation29 May 2025 Connor Cooper, Geoffrey I. Webb, Daniel F. Schmidt

Bayesian network classifiers (BNCs) possess a number of properties desirable for a modern classifier: they are easily interpretable, highly scalable, and offer adaptable complexity.

parameter estimation

Improving Random Forests by Smoothing

no code implementations11 May 2025 Ziyi Liu, Phuc Luong, Mario Boley, Daniel F. Schmidt

Gaussian process regression is a popular model in the small-data regime due to its sound uncertainty quantification and its exploitation of the smoothness of the regression function encountered in a wide range of practical problems.

Gaussian Processes, regression +1

Computing Marginal and Conditional Divergences between Decomposable Models with Applications

no code implementations13 Oct 2023 Loong Kuan Lee, Geoffrey I. Webb, Daniel F. Schmidt, Nico Piatkowski

Doing so tractably is non-trivial, as we need to decompose the divergence between these distributions and therefore require a decomposition over the marginal and conditional distributions of these models.
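For the KL divergence, for instance, the chain rule gives exactly this kind of split of a joint divergence into a marginal term plus an expected conditional term; how the paper exploits the clique structure of decomposable models is more refined, but the basic identity is:

% Chain rule for the KL divergence between joint distributions.
\begin{align*}
  D_{\mathrm{KL}}\big(P(X, Y) \,\|\, Q(X, Y)\big)
  = D_{\mathrm{KL}}\big(P(X) \,\|\, Q(X)\big)
  + \mathbb{E}_{P(X)}\!\Big[ D_{\mathrm{KL}}\big(P(Y \mid X) \,\|\, Q(Y \mid X)\big) \Big].
\end{align*}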

QUANT: A Minimalist Interval Method for Time Series Classification

1 code implementation2 Aug 2023 Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb

We show that it is possible to achieve the same accuracy, on average, as the most accurate existing interval methods for time series classification on a standard set of benchmark datasets using a single type of feature (quantiles), fixed intervals, and an 'off the shelf' classifier.
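To illustrate the flavour of this recipe (quantile features over fixed intervals fed to an off-the-shelf classifier), a minimal Python sketch might look like the following; the dyadic interval scheme, the number of quantiles, and the ExtraTreesClassifier are illustrative choices, not QUANT's exact configuration.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def quantile_features(X, depth=3, n_quantiles=4):
    # Quantiles over fixed dyadic intervals of each series
    # (illustrative scheme, not QUANT's exact configuration).
    qs = np.linspace(0.0, 1.0, n_quantiles)
    feats = []
    for d in range(depth + 1):
        for interval in np.array_split(np.arange(X.shape[1]), 2 ** d):
            feats.append(np.quantile(X[:, interval], qs, axis=1).T)
    return np.hstack(feats)

# toy usage: X_* are (n_series, series_length) arrays, y_train the labels
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(20, 64)), rng.integers(0, 2, size=20)
X_test = rng.normal(size=(5, 64))
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(quantile_features(X_train), y_train)
pred = clf.predict(quantile_features(X_test))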

Classification, CPU +2

Sparse Horseshoe Estimation via Expectation-Maximisation

1 code implementation7 Nov 2022 Shu Yu Tew, Daniel F. Schmidt, Enes Makalic

A particular strength of our approach is that the M-step depends only on the form of the prior and is independent of the form of the likelihood.
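The separability behind this claim reflects the additive structure of the complete-data log-posterior under a global-local (scale-mixture) prior; the following is a generic sketch of that structure, not the paper's specific derivation.

% Under a scale-mixture prior with latent local scales lambda, the
% complete-data log-posterior splits additively into a likelihood
% term and prior-only terms:
\begin{align*}
  \log p(\boldsymbol{\beta}, \boldsymbol{\lambda} \mid \mathbf{y})
  = \underbrace{\log p(\mathbf{y} \mid \boldsymbol{\beta})}_{\text{likelihood}}
  + \underbrace{\log p(\boldsymbol{\beta} \mid \boldsymbol{\lambda})
  + \log p(\boldsymbol{\lambda})}_{\text{prior only}}
  + \mathrm{const},
\end{align*}
% so the EM update derived from the prior terms can be worked out once
% and reused unchanged when the likelihood is swapped out.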

Form

HYDRA: Competing convolutional kernels for fast and accurate time series classification

1 code implementation25 Mar 2022 Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb

We present HYDRA, a simple, fast, and accurate dictionary method for time series classification using competing convolutional kernels, combining key aspects of both ROCKET and conventional dictionary methods.
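A toy illustration of the "competing kernels" idea described here: within each group of random kernels, the kernel with the strongest response at each position "wins", and per-kernel win counts become dictionary-like features for a linear classifier. This is a simplified sketch, not HYDRA's actual implementation.

import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def competing_kernel_features(X, n_groups=8, kernels_per_group=8, kernel_len=9, seed=0):
    # For each group of random kernels, count how often each kernel has the
    # largest response across positions (toy version of 'competing kernels').
    rng = np.random.default_rng(seed)
    n_series = X.shape[0]
    feats = np.zeros((n_series, n_groups * kernels_per_group))
    for g in range(n_groups):
        W = rng.normal(size=(kernels_per_group, kernel_len))
        W -= W.mean(axis=1, keepdims=True)  # zero-mean kernels
        for i in range(n_series):
            windows = np.lib.stride_tricks.sliding_window_view(X[i], kernel_len)
            responses = W @ windows.T                # (kernels, positions)
            wins = np.bincount(responses.argmax(axis=0), minlength=kernels_per_group)
            feats[i, g * kernels_per_group:(g + 1) * kernels_per_group] = wins
    return feats

# toy usage with random data
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(20, 100)), rng.integers(0, 2, size=20)
clf = RidgeClassifierCV().fit(competing_kernel_features(X_train), y_train)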

Time Series, Time Series Anomaly Detection +1

MINIROCKET: A Very Fast (Almost) Deterministic Transform for Time Series Classification

2 code implementations16 Dec 2020 Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb

ROCKET achieves state-of-the-art accuracy with a fraction of the computational expense of most existing methods by transforming input time series using random convolutional kernels, and using the transformed features to train a linear classifier.
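A minimal numpy sketch of the kind of transform described above: random convolutional kernels, one simple pooled feature per kernel (here the proportion of positive values, PPV), and a ridge classifier trained on the transformed features. The kernel sampling and features are greatly simplified relative to ROCKET/MINIROCKET.

import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def random_kernel_transform(X, n_kernels=500, kernel_len=9, seed=0):
    # Convolve each series with random kernels and pool each kernel's output
    # to a single PPV feature (simplified ROCKET-style transform).
    rng = np.random.default_rng(seed)
    kernels = rng.normal(size=(n_kernels, kernel_len))
    biases = rng.uniform(-1.0, 1.0, size=n_kernels)
    windows = np.lib.stride_tricks.sliding_window_view(X, kernel_len, axis=1)
    responses = windows @ kernels.T + biases         # (series, positions, kernels)
    return (responses > 0).mean(axis=1)              # proportion of positive values

# toy usage: transform, then fit a linear (ridge) classifier on the features
rng = np.random.default_rng(2)
X_train, y_train = rng.normal(size=(30, 128)), rng.integers(0, 3, size=30)
X_test = rng.normal(size=(10, 128))
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(random_kernel_transform(X_train), y_train)
pred = clf.predict(random_kernel_transform(X_test))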

General Classification, Time Series +2

Log-Scale Shrinkage Priors and Adaptive Bayesian Global-Local Shrinkage Estimation

no code implementations8 Jan 2018 Daniel F. Schmidt, Enes Makalic

Simulations show that the adaptive log-$t$ procedure appears to always perform well, irrespective of the level of sparsity or signal-to-noise ratio of the underlying model.
