Search Results for author: Dirk Tasche

Found 13 papers, 1 paper with code

Invariance assumptions for class distribution estimation

no code implementations · 28 Nov 2023 · Dirk Tasche

Assumptions of invariance between the training (source) joint distribution of features and labels and the test (target) distribution can considerably facilitate class distribution estimation.

Sparse joint shift in multinomial classification

no code implementations · 29 Mar 2023 · Dirk Tasche

We present new results on the transmission of SJS from sets of features to larger sets of features, a conditional correction formula for the class posterior probabilities under the target distribution, identifiability of SJS, and the relationship between SJS and covariate shift.

Tasks: Classification

Factorizable Joint Shift in Multinomial Classification

no code implementations · 29 Jul 2022 · Dirk Tasche

Factorizable joint shift (FJS) was recently proposed as a type of dataset shift for which the complete characteristics can be estimated from feature data observations on the test dataset by a method called Joint Importance Aligning.

Tasks: Classification, Multi-class Classification

Class Prior Estimation under Covariate Shift: No Problem?

no code implementations · 6 Jun 2022 · Dirk Tasche

We show that in the context of classification the property of source and target distributions to be related by covariate shift may be lost if the information content captured in the covariates is reduced, for instance by dropping components or mapping into a lower-dimensional or finite space.

Minimising quantifier variance under prior probability shift

no code implementations · 17 Jul 2021 · Dirk Tasche

For the binary prevalence quantification problem under prior probability shift, we determine the asymptotic variance of the maximum likelihood estimator.

Tasks: Regression

Calibrating sufficiently

no code implementations · 15 May 2021 · Dirk Tasche

When probabilistic classifiers are trained and calibrated, the so-called grouping loss component of the calibration loss can easily be overlooked.

Proving prediction prudence

no code implementations · 7 May 2020 · Dirk Tasche

We study how to perform tests on samples of pairs of observations and predictions in order to assess whether or not the predictions are prudent.
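A minimal sketch of one such test (illustrative only, not necessarily the paper's construction): an exact one-sided binomial test of whether more defaults occurred than a predicted probability allows. The obligor counts and the 2% prediction are hypothetical.

```python
from math import comb

def binomial_pvalue_upper(k, n, p0):
    # P(X >= k) for X ~ Binomial(n, p0): a small value indicates that
    # more events occurred than the predicted probability p0 allows,
    # i.e. the prediction was not prudent.
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical check: a predicted default probability of 2% on 400
# obligors, against 12 observed defaults.
p_value = binomial_pvalue_upper(k=12, n=400, p0=0.02)
print(round(p_value, 4))
```

A large p-value means the observed count is compatible with the prediction; rejecting only for small p-values gives the prediction the benefit of the doubt, which matches the prudential framing.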

Confidence intervals for class prevalences under prior probability shift

no code implementations · 10 Jun 2019 · Dirk Tasche

Less attention has been paid to the construction of confidence and prediction intervals for estimates of class prevalences.

Tasks: Prediction Intervals
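One generic way to construct such an interval is a percentile bootstrap over the prevalence estimate; the sketch below is illustrative only (toy data, not necessarily one of the constructions studied in the paper).

```python
import numpy as np

# Hypothetical hard predictions on a target sample; the prevalence
# estimate is the share of predicted positives.
rng = np.random.default_rng(1)
preds = rng.binomial(1, 0.25, size=400)

# Percentile bootstrap: resample the predictions, recompute the
# estimate, and take the empirical 2.5% / 97.5% quantiles.
boot = [rng.choice(preds, size=preds.size, replace=True).mean()
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate {preds.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```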

A plug-in approach to maximising precision at the top and recall at the top

no code implementations · 9 Apr 2018 · Dirk Tasche

For information retrieval and binary classification, we show that precision at the top (or precision at k) and recall at the top (or recall at k) are maximised by thresholding the posterior probability of the positive class.

Tasks: Binary Classification, Classification, +3 more
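The plug-in rule the abstract describes can be sketched in a few lines: rank items by the estimated posterior probability of the positive class and keep the top k. The function name and toy data are illustrative.

```python
import numpy as np

def precision_at_k(scores, labels, k):
    """Precision at the top: fraction of true positives among the k
    highest-scored items (ties broken arbitrarily by the sort)."""
    top_k = np.argsort(scores)[::-1][:k]
    return labels[top_k].mean()

# Toy data: scores are assumed to approximate P(Y=1 | X), which is
# exactly the quantity the plug-in approach thresholds.
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])
print(precision_at_k(scores, labels, k=3))  # 2 of the top 3 are positive
```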

Fisher consistency for prior probability shift

1 code implementation · 19 Jan 2017 · Dirk Tasche

Lack of Fisher consistency could be used as a criterion to dismiss estimators that are unlikely to deliver precise estimates in test datasets under prior probability and more general dataset shift.

Does quantification without adjustments work?

no code implementations · 28 Feb 2016 · Dirk Tasche

In contrast to classification, quantification has been defined as the task of determining the prevalences of the different class labels in a target dataset.

Tasks: Binary Quantification, General Classification
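The "without adjustments" baseline in this literature is classify-and-count; the standard adjusted variant corrects it using the classifier's true- and false-positive rates. A hedged sketch of both (generic textbook estimators, not necessarily the paper's exact ones):

```python
import numpy as np

def classify_and_count(preds):
    # Unadjusted prevalence estimate: share of positive predictions.
    return preds.mean()

def adjusted_count(preds, tpr, fpr):
    # Standard correction: invert P(pred=1) = tpr * p + fpr * (1 - p)
    # for the prevalence p, then clip to [0, 1].
    p = (preds.mean() - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))

# Hypothetical hard predictions on a target sample, from a classifier
# with known tpr/fpr estimated on the source data.
preds = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])
print(classify_and_count(preds))                 # 0.4
print(adjusted_count(preds, tpr=0.8, fpr=0.1))
```

Under prior probability shift the unadjusted count is biased whenever the classifier is imperfect, which is exactly why the adjustment exists.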

Exact fit of simple finite mixture models

no code implementations · 23 Jun 2014 · Dirk Tasche

A classical approach to this problem consists of fitting a mixture of the conditional score distributions observed last year to the current score distribution.
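The classical mixture-fit idea can be sketched on toy data (illustrative only, and a plain least-squares fit rather than the paper's exact-fit method): approximate last year's conditional score densities by histograms, then choose the mixture weight p so that p * f_bad + (1 - p) * f_good best matches this year's score distribution.

```python
import numpy as np

# Toy score samples; the true current mixture weight is 0.3.
rng = np.random.default_rng(0)
scores_bad = rng.normal(0.0, 1.0, 5000)    # last year's class-0 scores
scores_good = rng.normal(2.0, 1.0, 5000)   # last year's class-1 scores
scores_now = np.concatenate([rng.normal(0.0, 1.0, 600),
                             rng.normal(2.0, 1.0, 1400)])

# Histogram approximations of the three densities on a common binning.
bins = np.linspace(-4.0, 6.0, 41)
f_bad = np.histogram(scores_bad, bins, density=True)[0]
f_good = np.histogram(scores_good, bins, density=True)[0]
f_now = np.histogram(scores_now, bins, density=True)[0]

# Least-squares fit of the mixture weight over a grid on [0, 1].
grid = np.linspace(0.0, 1.0, 1001)
errors = [np.sum((p * f_bad + (1 - p) * f_good - f_now) ** 2) for p in grid]
p_hat = grid[int(np.argmin(errors))]
print(round(p_hat, 3))  # close to the true weight of 0.3
```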

The Law of Total Odds

no code implementations · 2 Dec 2013 · Dirk Tasche

We quantify the bias of the total probability estimator of the unconditional class probabilities and show that the total odds estimator is unbiased.

Tasks: Binary Classification, General Classification
