no code implementations • 24 Mar 2018 • Tegjyot Singh Sethi, Mehmed Kantardzic
While traditional partially labeled concept drift detection methodologies fail to detect adversarial drifts, the proposed framework is able to detect such drifts and operates with <6% labeled data, on average.
no code implementations • 24 Mar 2018 • Tegjyot Singh Sethi, Mehmed Kantardzic, Joung Woo Ryu
The adversary assumes a black-box view of the defender's classifier and can launch indiscriminate attacks on it, without knowledge of the defender's model type, training data, or application domain.
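Such a black-box attack can be illustrated with a minimal sketch: the adversary only sees the classifier's predict() output and blends a flagged sample toward a known-benign one until the oracle accepts it. The linear defender, the `blend_toward_benign` helper, and the specific sample points are all hypothetical choices for illustration, not the attack from the paper.

```python
import numpy as np

# Hypothetical defender: a linear boundary hidden behind a predict-only
# interface; the adversary never sees these weights.
_w = np.array([1.0, -1.0])

def defender_predict(x):
    """Black-box oracle: 1 = flagged malicious, 0 = benign."""
    return int(x @ _w > 0.0)

def blend_toward_benign(x_mal, x_benign, steps=20):
    """Move a flagged sample toward a known-benign one, querying the
    oracle at each blend ratio, and return the first accepted point.
    Only predict() outputs are used -- no model type, training data,
    or domain knowledge."""
    for t in np.linspace(0.0, 1.0, steps + 1):
        cand = (1 - t) * x_mal + t * x_benign
        if defender_predict(cand) == 0:
            return cand
    return None

adv = blend_toward_benign(np.array([2.0, 0.0]), np.array([-1.0, 1.0]))
```

The sketch shows why probing-based evasion needs no information about the model: the oracle's accept/reject answers alone are enough to steer a sample across the decision boundary.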
no code implementations • 24 Mar 2018 • Tegjyot Singh Sethi, Mehmed Kantardzic, Lingyu Lyu, Jiashun Chen
While most work in the security of machine learning has concentrated on the evasion-resistance problem (a), there is little work on reacting to attacks (b and c).
2 code implementations • 31 Mar 2017 • Tegjyot Singh Sethi, Mehmed Kantardzic
On the other hand, unsupervised change detection techniques are unreliable, as they produce a large number of false alarms.
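The false-alarm problem can be made concrete with a small sketch: an unsupervised detector that alarms when a window's mean drifts beyond a few standard errors of the reference batch will also fire on a harmless covariate shift that leaves the true labels untouched. The `mean_shift_alarm` detector and the simulated data are illustrative assumptions, not the method from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_shift_alarm(reference, window, k=3.0):
    """Unsupervised detector: alarm when the window mean drifts more
    than k standard errors from the reference mean (no labels used)."""
    se = reference.std(ddof=1) / np.sqrt(len(window))
    return abs(window.mean() - reference.mean()) > k * se

# Reference batch, and a new window whose distribution shifted in a way
# that does not affect the true labels (a purely virtual drift).
reference = rng.normal(0.0, 1.0, 1000)
window = rng.normal(0.5, 1.0, 200)  # harmless covariate shift

alarm = mean_shift_alarm(reference, window)  # fires: a false alarm
```

Because the detector never consults labels, it cannot distinguish this benign shift from a real change in the labeling concept, which is exactly why purely unsupervised schemes over-alarm.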
no code implementations • 30 Mar 2017 • Lingyu Lyu, Mehmed Kantardzic
However, for complex tasks that require specific skills and effort to grade, crowdsourcing is limited by the insufficient knowledge of the workers in the crowd.
no code implementations • 23 Mar 2017 • Tegjyot Singh Sethi, Mehmed Kantardzic
In this paper, an adversary's viewpoint of a classification-based system is presented.