Conformal prediction is a distribution-free technique for constructing prediction intervals with valid finite-sample coverage guarantees.
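As a hedged illustration of how such intervals are obtained, the following sketch implements split conformal prediction for regression; the least-squares model, the absolute-residual nonconformity score, and all variable names are illustrative assumptions, not details from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative): y = 2x + noise.
x = rng.uniform(0, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# Split into a proper training set and a calibration set.
x_train, y_train = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

# Fit any point predictor on the training split (least squares here).
slope, intercept = np.polyfit(x_train, y_train, 1)
predict = lambda t: slope * t + intercept

# Nonconformity scores on the calibration split: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# For miscoverage level alpha, take the ceil((n+1)(1-alpha))/n empirical
# quantile of the calibration scores.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new input x0: [predict(x0) - q, predict(x0) + q].
def interval(x0):
    return predict(x0) - q, predict(x0) + q
```

Under exchangeability of calibration and test points, intervals built this way cover the true response with probability at least 1 - alpha, regardless of the data distribution or the quality of the fitted predictor.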
Across applications spanning supervised classification and sequential control, deep learning has been reported to find "shortcut" solutions that fail catastrophically under minor changes in the data distribution.
Deep neural networks are known to be vulnerable to unseen data: they may wrongly assign high confidence scores to out-of-distribution samples.
By considering the entire training trajectory and focusing on early-stopped iterates, compatibility exploits information from both the data and the algorithm, and is therefore a more suitable notion for generalization.
It is challenging to deal with censored data, where we only have access to incomplete information about the survival time rather than its exact value.
Pretext-based self-supervised learning learns semantic representations via a handcrafted pretext task on unlabeled data and then uses the learned representations for downstream tasks; under the Conditional Independence (CI) condition, this effectively reduces the sample complexity of downstream tasks.
First, we apply a machine learning method to fit the ground-truth function on the training set and compute its linear approximation.
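The two steps above can be sketched as follows; since the source does not specify the learning method or the linearization point, the cubic polynomial surrogate, the sine ground truth, and the reference point `x0` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data from an unknown ground-truth function (sin, for illustration).
x_train = rng.uniform(-2, 2, 300)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 300)

# Step 1: fit a surrogate to the training set (a cubic polynomial stands in
# for "a machine learning method").
coeffs = np.polyfit(x_train, y_train, 3)
f_hat = np.poly1d(coeffs)

# Step 2: first-order Taylor (linear) approximation of the fitted surrogate
# around a reference point x0: L(x) = f_hat(x0) + f_hat'(x0) * (x - x0).
x0 = 0.5
f_prime = f_hat.deriv()
linear = lambda x: f_hat(x0) + f_prime(x0) * (x - x0)
```

The linear approximation agrees with the fitted model at `x0` and tracks it closely nearby, with error shrinking quadratically in the distance from `x0`.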