no code implementations • 21 Feb 2020 • Sharon Qian, Dimitris Kalimeris, Gal Kaplun, Yaron Singer
Despite the vast success of Deep Neural Networks in numerous application domains, it has been shown that such models are not robust, i.e., they are vulnerable to small adversarial perturbations of the input.
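As a minimal illustration of such a perturbation (not the paper's own method), the sketch below computes an FGSM-style attack against an assumed differentiable PyTorch classifier `model`; the function name and the budget `eps` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Return x plus a small adversarial perturbation with L-infinity norm at most eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, clipped to the sign of the gradient.
    x_adv = x + eps * x.grad.sign()
    return x_adv.detach()
```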
1 code implementation • NeurIPS 2019 • Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak
We perform an experimental study of the dynamics of Stochastic Gradient Descent (SGD) in learning deep neural networks for several real and synthetic classification tasks.
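A minimal sketch of this kind of experiment, assuming a PyTorch classifier `model` and standard data loaders: train with plain SGD and record test accuracy after each epoch to trace the learning dynamics. The hyperparameters and names are illustrative, not the paper's actual setup.

```python
import torch
import torch.nn.functional as F

def track_sgd_dynamics(model, train_loader, test_loader, epochs=10, lr=0.1):
    """Train with plain SGD and record test accuracy after every epoch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    history = []
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        # Evaluate the partially trained model to follow how performance evolves.
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in test_loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        history.append(correct / total)
    return history
```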
no code implementations • 9 Mar 2019 • Dimitris Kalimeris, Gal Kaplun, Yaron Singer
A recent and fast-growing line of research in influence maximization focuses on the case where the edge probabilities of the graph are not arbitrary but are generated as a function of the users' features and a global hyperparameter.
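As a minimal sketch of one common way such feature-based edge probabilities can be parametrized, the snippet below maps each edge's feature vector to an activation probability through a logistic model with a shared parameter vector; the sigmoid form and all names are assumptions for illustration, not necessarily the paper's model.

```python
import numpy as np

def edge_probabilities(edge_features, theta):
    """Map edge features (num_edges, d) to activation probabilities via a
    global parameter vector theta (d,) using a sigmoid of the inner product."""
    logits = edge_features @ theta
    return 1.0 / (1.0 + np.exp(-logits))

# Illustrative usage: 5 edges with 3-dimensional features and a fixed theta.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
theta = np.array([0.5, -1.0, 0.2])
print(edge_probabilities(X, theta))
```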
no code implementations • ICML 2018 • Dimitris Kalimeris, Yaron Singer, Karthik Subbian, Udi Weinsberg
Despite this obstacle, we can shrink the best-known sample complexity bound for learning the Independent Cascade (IC) model by a factor of |E|/d, where |E| is the number of edges in the graph and d is the dimension of the hyperparameter.