Search Results for author: Scott Alexander

Found 1 paper, 0 papers with code

Subspace Methods That Are Resistant to a Limited Number of Features Corrupted by an Adversary

no code implementations · 19 Feb 2019 · Chris Mesterharm, Rauf Izmailov, Scott Alexander, Simon Tsang

In this paper, we consider batch supervised learning where an adversary is allowed to corrupt instances with arbitrarily large noise.
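The abstract's setting — an adversary that may corrupt a limited number of features with arbitrarily large noise — can be illustrated with a toy sketch. This is not the paper's algorithm; it is a hypothetical majority-vote-over-features example showing why aggregating predictions across feature subspaces can tolerate a bounded number of corrupted features (all names and values below are illustrative assumptions):

```python
import random

def majority_vote_predict(x):
    # Each feature casts a vote by its sign; a few wildly
    # corrupted features cannot flip the overall majority.
    votes = sum(1 if v > 0 else -1 for v in x)
    return 1 if votes > 0 else -1

random.seed(0)
d = 11  # number of features
k = 3   # features the adversary may corrupt (k < d / 2)

# Clean instance with true label +1: every feature is positive.
x = [random.uniform(0.5, 1.5) for _ in range(d)]

# Adversary replaces k features with arbitrarily large noise.
x_corrupted = list(x)
for i in range(k):
    x_corrupted[i] = -1e9

assert majority_vote_predict(x) == 1
assert majority_vote_predict(x_corrupted) == 1  # 8 clean votes outweigh 3 corrupted
```

As long as fewer than half of the per-feature votes are corrupted, the aggregate prediction is unchanged regardless of how large the injected noise is.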
