no code implementations • 5 Jul 2023 • Osman Emre Dai, Daniel Cullina, Negar Kiyavash
We study an instance of the database alignment problem with multivariate Gaussian features and derive results that apply both for database alignment and for planted matching, demonstrating the connection between them.
no code implementations • 21 Feb 2023 • Sihui Dai, Wenxin Ding, Arjun Nitin Bhagoji, Daniel Cullina, Ben Y. Zhao, Haitao Zheng, Prateek Mittal
Finding classifiers robust to adversarial examples is critical for their safe deployment.
no code implementations • 29 Sep 2021 • Arjun Nitin Bhagoji, Daniel Cullina, Ben Zhao
In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provides bounds on the robustness of any classifier trained on top of them.
1 code implementation • 16 Apr 2021 • Arjun Nitin Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal
In particular, it is critical to determine classifier-agnostic bounds on the training loss to establish when learning is possible.
1 code implementation • NeurIPS 2019 • Arjun Nitin Bhagoji, Daniel Cullina, Prateek Mittal
In this paper, we use optimal transport to characterize the minimum possible loss in an adversarial classification scenario.
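The abstract relates adversarial loss to an optimal transport cost between the two class-conditional distributions. As a toy sketch (my construction, not the paper's exact formulation): for two small discrete distributions with uniform weights, use a 0/1 transport cost that is 0 exactly when an adversary with budget eps can push a pair of points onto each other, and solve the resulting assignment problem by brute force.

```python
import itertools

# Toy discrete optimal transport between two classes of n points each on the
# real line, uniform weights. Cost is 0 when an adversary with L_inf budget
# eps can collide a matched pair (|x - y| <= 2*eps), else 1.
# Illustrative only; the paper's cost function and loss bound are more general.

def ot_cost(xs, ys, eps):
    n = len(xs)
    # With equal-size supports and uniform marginals, discrete OT reduces to
    # an assignment problem; brute-force over permutations for tiny n.
    best = min(
        sum(0.0 if abs(xs[i] - ys[p[i]]) <= 2 * eps else 1.0 for i in range(n))
        for p in itertools.permutations(range(n))
    )
    return best / n

class0 = [0.0, 0.1, 0.2]
class1 = [0.15, 1.0, 2.0]
print(ot_cost(class0, class1, eps=0.1))  # fraction of mass the adversary cannot collide
```

Only one point of class1 (0.15) lies within 2·eps of any class0 point, so two of the three matched pairs must pay cost 1.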
no code implementations • 5 May 2019 • Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, Prateek Mittal
A large body of recent work has investigated the phenomenon of evasion attacks using adversarial examples for deep learning systems, where the addition of norm-bounded perturbations to the test inputs leads to incorrect output classification.
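A minimal sketch of the attack model described above (an FGSM-style gradient-sign perturbation on a linear classifier — my illustrative choice, not a method from these papers): each coordinate is moved by eps against the true label's margin direction, so the perturbation is norm-bounded in L-infinity.

```python
# Sketch of a norm-bounded evasion attack on a linear classifier
# (FGSM-style L_inf perturbation; illustrative assumption, not the papers' setup).

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def evade(w, b, x, y, eps):
    # Shift each coordinate by eps against the true label's margin direction.
    return [xi - y * eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [1.0, -2.0], 0.0
x, y = [0.5, 0.1], 1                   # correctly classified: 0.5 - 0.2 = 0.3 > 0
x_adv = evade(w, b, x, y, eps=0.2)
print(predict(w, b, x), predict(w, b, x_adv))  # prediction flips: 1 -1
```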
no code implementations • 4 Mar 2019 • Osman Emre Dai, Daniel Cullina, Negar Kiyavash
We consider the problem of aligning a pair of databases with jointly Gaussian features.
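To make the alignment task concrete, here is a toy sketch (my construction, with deterministic "noise" so the example is reproducible): database B is a perturbed, row-shuffled copy of database A, and the alignment is recovered by maximizing the total inner product between matched rows, which is the shape of the maximum-likelihood rule for correlated Gaussian features.

```python
import itertools

# Toy database alignment: B is a noisy, row-shuffled copy of A. Recover the
# shuffle by maximizing the total inner product between matched rows
# (a sketch of the ML alignment idea; the paper analyzes it rigorously).

A = [[1.0, 0.2, -0.5],
     [-0.8, 1.1, 0.3],
     [0.1, -1.2, 0.9],
     [0.7, 0.6, -1.0]]
true_perm = [2, 0, 3, 1]           # row i of B corresponds to row true_perm[i] of A
noise = 0.05                       # small deterministic perturbation
B = [[a + noise for a in A[true_perm[i]]] for i in range(4)]

def score(perm):
    return sum(sum(b * a for b, a in zip(B[i], A[perm[i]])) for i in range(4))

best = max(itertools.permutations(range(4)), key=score)
print(list(best))  # -> [2, 0, 3, 1], the true correspondence
```

Brute force over permutations is only feasible for tiny databases; for larger n the same objective is an assignment problem solvable in polynomial time.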
no code implementations • NeurIPS 2018 • Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal
We then explicitly derive the adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimension, closing an open question.
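For context (a standard fact, not taken from the abstract above): halfspace classifiers over d-dimensional inputs have VC-dimension d + 1, so the result says the adversarial VC-dimension matches this value.

```latex
% Standard VC-dimension of halfspaces with bias, which the adversarial
% VC-dimension is shown to equal:
\mathrm{VC}\bigl(\{\,x \mapsto \operatorname{sign}(w^\top x + b) : w \in \mathbb{R}^d,\ b \in \mathbb{R}\,\}\bigr) = d + 1
```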
no code implementations • 10 Sep 2018 • Daniel Cullina, Negar Kiyavash, Prateek Mittal, H. Vincent Poor
This estimator searches for an alignment in which the intersection of the correlated graphs using this alignment has a minimum degree of k.
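The criterion in the sentence above can be sketched directly (a minimal illustration; the paper analyzes when thresholding this quantity succeeds): map one graph's edges through a candidate alignment, intersect with the other graph's edges, and compute the minimum degree of the intersection graph.

```python
# Sketch of the k-core alignment criterion: under a candidate alignment pi,
# form the intersection of the two edge sets and check its minimum degree.

def min_degree_of_intersection(n, edges_a, edges_b, pi):
    # Map graph B's edges through the alignment, then intersect with A's edges.
    mapped = {frozenset((pi[u], pi[v])) for u, v in edges_b}
    common = {frozenset(e) for e in edges_a} & mapped
    deg = [0] * n
    for e in common:
        for v in e:
            deg[v] += 1
    return min(deg)

# A 4-cycle in both graphs; B's vertex labels are A's relabeled by pi.
edges_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
pi = {0: 1, 1: 2, 2: 3, 3: 0}
edges_b = [(3, 0), (0, 1), (1, 2), (2, 3)]   # the same cycle in B's labeling
print(min_degree_of_intersection(4, edges_a, edges_b, pi))  # -> 2
```

Under the correct alignment every edge survives the intersection, so the minimum degree stays high; a wrong alignment destroys most common edges.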
no code implementations • 25 Apr 2018 • Osman Emre Dai, Daniel Cullina, Negar Kiyavash, Matthias Grossglauser
Graph alignment in two correlated random graphs refers to the task of identifying the correspondence between the vertex sets of the two graphs.
no code implementations • 18 Nov 2017 • Daniel Cullina, Negar Kiyavash
We consider the problem of perfectly recovering the vertex correspondence between two correlated Erdős-Rényi (ER) graphs on the same vertex set.
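A standard way to construct such a correlated pair (a sketch of the usual subsampling model; the papers' exact parameterization may differ) is to sample a parent G(n, p) graph and keep each parent edge in each child graph independently with probability s, giving two marginally G(n, p·s) graphs whose edge sets are correlated through the shared parent.

```python
import random

# Correlated Erdos-Renyi pair via subsampling a parent graph (sketch).

def correlated_er_pair(n, p, s, rng):
    parent = [(i, j) for i in range(n) for j in range(i + 1, n)
              if rng.random() < p]
    # Each child keeps every parent edge independently with probability s.
    g1 = [e for e in parent if rng.random() < s]
    g2 = [e for e in parent if rng.random() < s]
    return parent, g1, g2

rng = random.Random(0)
parent, g1, g2 = correlated_er_pair(n=50, p=0.3, s=0.8, rng=rng)
print(len(parent), len(g1), len(g2))  # child edge sets are subsets of the parent's
```

In the recovery problem, the vertex labels of one child are additionally scrambled by an unknown permutation, which the estimator must undo.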
no code implementations • 9 Apr 2017 • Arjun Nitin Bhagoji, Daniel Cullina, Chawin Sitawarin, Prateek Mittal
We propose the use of data transformations as a defense against evasion attacks on ML classifiers.
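The abstract doesn't specify the transformation, so as an illustrative assumption here is one simple instance: linearly projecting inputs onto a fixed low-dimensional orthonormal basis before classification, discarding directions the classifier (and hence the attacker) cannot use.

```python
# Sketch of a data-transformation defense (illustrative assumption): project
# inputs onto a fixed orthonormal basis before classifying.

def project(x, basis):
    # Coefficients of x on the basis vectors, then reconstruction in the
    # original space; components outside the span are discarded.
    coeffs = [sum(bi * xi for bi, xi in zip(b, x)) for b in basis]
    return [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(len(x))]

basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # keep the first two coordinates
x = [0.3, -0.7, 5.0]                          # the third coordinate is dropped
print(project(x, basis))  # -> [0.3, -0.7, 0.0]
```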
no code implementations • 25 Mar 2016 • Daniel Cullina, Kushagra Singhal, Negar Kiyavash, Prateek Mittal
We ask the question "Does there exist a regime where the network cannot be deanonymized perfectly, yet the community structure can be learned?"
no code implementations • 2 Feb 2016 • Daniel Cullina, Negar Kiyavash
For a pair of correlated graphs on the same vertex set, the correspondence between the vertices can be obscured by randomly permuting the vertex labels of one of the graphs.