We contribute a formal analysis of why the robustness of the PoL protocol against spoofing adversaries can be neither proven nor disproven.
Selective classification is the task of rejecting inputs that a model would otherwise predict incorrectly, trading off input-space coverage against model accuracy.
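This coverage-accuracy trade-off can be sketched as follows; a common baseline is to reject inputs whose top-class confidence falls below a threshold. The function name, threshold, and toy data below are illustrative assumptions, not from the original.

```python
import numpy as np

def selective_classify(probs, labels, threshold):
    """Accept inputs whose top-class confidence is at least `threshold`;
    return coverage (fraction accepted) and accuracy on accepted inputs."""
    confidence = probs.max(axis=1)          # top-class probability per input
    accepted = confidence >= threshold      # boolean mask of accepted inputs
    coverage = float(accepted.mean())
    if not accepted.any():
        return coverage, float("nan")       # no inputs accepted
    preds = probs.argmax(axis=1)
    accuracy = float((preds[accepted] == labels[accepted]).mean())
    return coverage, accuracy

# Toy predicted probabilities for 4 inputs over 3 classes (hypothetical).
probs = np.array([
    [0.90, 0.05, 0.05],   # confident and correct
    [0.40, 0.35, 0.25],   # uncertain -> rejected at threshold 0.7
    [0.10, 0.80, 0.10],   # confident and correct
    [0.50, 0.45, 0.05],   # uncertain and wrong -> rejected
])
labels = np.array([0, 0, 1, 1])

cov, acc = selective_classify(probs, labels, threshold=0.7)
```

Raising the threshold lowers coverage but typically raises accuracy on the accepted inputs, which is the trade-off described above.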
Machine unlearning, i.e., having a model forget some of its training data, has become increasingly important as privacy legislation promotes variants of the right to be forgotten.
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society.
In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.