Existing work on trustworthy machine learning (ML) often concentrates on individual aspects of trust, such as fairness or privacy.
Deploying ML models often requires both fairness and privacy guarantees.
They argued empirically for the benefit of this approach by showing that spoofing, i.e., computing a proof for a stolen model, is as expensive as obtaining the proof honestly by training the model.
We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations to compute $p$-values associated with the end-to-end model prediction.
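To make the mechanics concrete, the sketch below computes a single-layer conformal $p$-value from $k$-nearest-neighbor label disagreement; it is a minimal illustration under our own naming (`layer_pvalue`, `calib_scores` for nonconformity scores precomputed on held-out calibration data), not the paper's implementation.

```python
# Minimal sketch: a conformal p-value for one hidden layer, based on how many
# of the k nearest training representations disagree with the model's prediction.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def layer_pvalue(train_reps, train_labels, calib_scores, test_rep, test_pred, k=10):
    """p-value for one layer: fraction of calibration nonconformity scores
    at least as large as the test input's score."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_reps)
    _, idx = nn.kneighbors(test_rep.reshape(1, -1))
    # Nonconformity: neighbors whose label disagrees with the model's prediction.
    score = np.sum(train_labels[idx[0]] != test_pred)
    return (np.sum(calib_scores >= score) + 1) / (len(calib_scores) + 1)
```

Per-layer $p$-values computed this way can then be aggregated into a single statistic for the end-to-end prediction.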
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning.
The application of ML in computer systems introduces not only many benefits but also risks to society.
We thus introduce *dataset inference*, the process of identifying whether a suspected model copy has private knowledge from the original model's dataset, as a defense against model stealing.
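A hedged sketch of one way such a test could look: compare the suspect model's confidence margins on the victim's private training points against its margins on public points, and flag the model when the former are significantly larger. The margin statistic and the $t$-test below are our simplification, not the paper's exact procedure.

```python
# Sketch of a dataset-inference style hypothesis test: a stolen model tends to
# be more confident on the victim's private training data than on public data.
import numpy as np
from scipy.stats import ttest_ind

def margin(probs, labels):
    """Confidence margin: p(true class) minus the best other class."""
    true = probs[np.arange(len(labels)), labels]
    probs = probs.copy()
    probs[np.arange(len(labels)), labels] = -np.inf
    return true - probs.max(axis=1)

def dataset_inference(probs_private, labels_private, probs_public, labels_public,
                      alpha=0.01):
    m_priv = margin(probs_private, labels_private)
    m_pub = margin(probs_public, labels_public)
    # One-sided test: stolen models show larger margins on the private set.
    _, p = ttest_ind(m_priv, m_pub, alternative="greater", equal_var=False)
    return p < alpha
```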
In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.
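The intuition is that verification re-executes training: a proof consists of logged checkpoints, and the verifier recomputes selected update steps between them, so a valid proof must encode a genuine descent trajectory. The snippet below is our illustrative rendering of one such verification step; `train_step`, `dist`, and the tolerance are hypothetical placeholders, not the protocol's exact specification.

```python
# Illustrative check of one logged update in a proof-of-learning: re-run the
# recorded training step from the earlier checkpoint and confirm it lands
# (within tolerance) on the later one.
def verify_step(w_before, w_after, batch, train_step, dist, tol=1e-3):
    w_reproduced = train_step(w_before, batch)  # redo the claimed gradient step
    return dist(w_reproduced, w_after) <= tol   # accept iff checkpoints line up
```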
Our family of fairness notions corresponds to a new interpretation of economic models of Equality of Opportunity (EOP), and it includes most existing notions of fairness as special cases.
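For instance, the equal-opportunity criterion of Hardt et al. arises as one such special case; written for two groups $a$ and $b$ (a simplified illustration, not the general formulation):
$$\Pr[\hat{Y} = 1 \mid A = a,\; Y = 1] \;=\; \Pr[\hat{Y} = 1 \mid A = b,\; Y = 1].$$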
Differential privacy bounds disparate vulnerability to membership inference, but it can significantly reduce the accuracy of the model.
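As a rough quantitative rendering of the first clause (our simplification, assuming nonnegative per-group membership advantage): under $\varepsilon$-differential privacy, every subgroup's membership-inference advantage obeys the standard bound $\mathrm{Adv} \le e^{\varepsilon} - 1$, so the gap between any two subgroups $a$ and $b$ is bounded as well:
$$\big|\mathrm{Adv}(a) - \mathrm{Adv}(b)\big| \;\le\; e^{\varepsilon} - 1.$$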