no code implementations • 14 Aug 2017 • Kiana Roshan Zamir, Ali Shafahi, Ali Haghani
We also define two indices based on stations' shortages and surpluses that reflect the degree of balancing aid a station needs.
no code implementations • 1 Nov 2017 • Zhongxiang Wang, Ali Shafahi, Ali Haghani
A novel decomposition algorithm is proposed to solve the integrated model.
no code implementations • 1 Nov 2017 • Ali Shafahi, Zhongxiang Wang, Ali Haghani
By importing the trips generated by the routing problems into the bus scheduling (blocking) problem, we show that the proposed model uses up to 13% fewer buses than common traditional routing models.
4 code implementations • NeurIPS 2018 • Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
The proposed attacks use "clean-labels"; they don't require the attacker to have any control over the labeling of training data.
no code implementations • ICLR 2019 • Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein
Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.
no code implementations • 27 Nov 2018 • Ali Shafahi, Mahyar Najibi, Zheng Xu, John Dickerson, Larry S. Davis, Tom Goldstein
Standard adversarial attacks change the predicted class label of a selected image by adding specially tailored small perturbations to its pixels.
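As a minimal illustration of the kind of perturbation-based attack described here (not the method proposed in this paper), a one-step FGSM-style perturbation in PyTorch might look like the sketch below; `model`, `image`, and `label` are assumed placeholders.

```python
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """One-step FGSM: nudge each pixel in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb along the sign of the input gradient, then keep pixels in a valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```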
6 code implementations • NeurIPS 2019 • Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein
Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks.
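For context, a standard adversarial-training update looks roughly like the sketch below: craft a perturbed batch, then take the weight-update step on it instead of on the clean batch. This is the generic procedure the snippet refers to, not the accelerated variant this paper proposes; `model`, `optimizer`, `x`, and `y` are placeholders, and a single-step perturbation stands in for the multi-step attacks usually used in practice.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """One adversarial-training update: train on a perturbed batch, not the clean one."""
    # Craft a single-step perturbation (multi-step PGD is more common in practice).
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x + eps * x_pert.grad.sign()).clamp(0, 1).detach()

    # Update the model on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```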
1 code implementation • 15 May 2019 • Chen Zhu, W. Ronny Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
Clean-label poisoning attacks inject innocuous-looking (and "correctly" labeled) poison images into training data, causing a model to misclassify a targeted image after being trained on this data.
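One well-known way to craft such a poison is feature collision: keep the poison visually close to a harmlessly labeled base image while pushing its feature representation toward the target image. The sketch below is a hypothetical illustration of that idea under assumed inputs (`feature_extractor`, `base_img`, `target_img`), not this paper's exact attack.

```python
import torch

def craft_feature_collision_poison(feature_extractor, base_img, target_img,
                                   beta=0.1, lr=0.01, iters=200):
    """Hypothetical feature-collision poison: stay visually close to the base image
    while matching the target image in feature space."""
    poison = base_img.clone().detach().requires_grad_(True)
    target_feat = feature_extractor(target_img).detach()
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        feature_loss = (feature_extractor(poison) - target_feat).pow(2).sum()
        image_loss = beta * (poison - base_img).pow(2).sum()
        (feature_loss + image_loss).backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0, 1)  # keep the poison a valid image
    return poison.detach()
```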
1 code implementation • ICLR 2020 • Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, Tom Goldstein
By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks.
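A minimal sketch of this transfer setup, assuming a placeholder `robust_backbone` with feature dimension `feat_dim`: freeze the robustly trained feature extractor and fit only a new classification head on the downstream data. This illustrates the general recipe rather than the paper's exact training configuration.

```python
import torch
import torch.nn as nn

def build_robust_transfer_model(robust_backbone, feat_dim, num_classes):
    """Freeze a robustly trained feature extractor and train only a new linear head,
    so the downstream classifier inherits the backbone's robust features."""
    for p in robust_backbone.parameters():
        p.requires_grad = False  # keep the robust features fixed
    head = nn.Linear(feat_dim, num_classes)
    model = nn.Sequential(robust_backbone, head)
    optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    return model, optimizer
```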
no code implementations • ICML 2020 • Parsa Saadatpanah, Ali Shafahi, Tom Goldstein
Our goal is to raise awareness of the threats posed by adversarial examples in this space, and to highlight the importance of hardening copyright detection systems to attacks.
no code implementations • 25 Oct 2019 • Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein
Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization.
no code implementations • 18 Nov 2019 • Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi
State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.
1 code implementation • ICLR 2020 • Amin Ghiasi, Ali Shafahi, Tom Goldstein
To deflect adversarial attacks, a range of "certified" classifiers have been proposed.
no code implementations • 30 May 2020 • Zheng Xu, Ali Shafahi, Tom Goldstein
Our adaptive networks also outperform larger widened non-adaptive architectures that have 1.5 times more parameters.
no code implementations • 14 Oct 2020 • Chen Zhu, Zheng Xu, Ali Shafahi, Manli Shu, Amin Ghiasi, Tom Goldstein
Further, we demonstrate that the compact structure and corresponding initialization from the Lottery Ticket Hypothesis can also help in data-free training.
no code implementations • NeurIPS 2023 • Amin Ghiasi, Ali Shafahi, Reza Ardekani
We propose adaptive weight decay, which automatically tunes the hyper-parameter for weight decay during each training iteration.
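One plausible way to realize per-iteration tuning is to rescale the weight-decay strength by the current ratio of loss-gradient norm to weight norm, so the penalty tracks how strongly the data loss is pulling on the weights. The sketch below is an assumption-laden illustration of that idea (the update rule, constant `awd_coeff`, and helper name are all hypothetical), not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def adaptive_weight_decay_step(model, optimizer, x, y, awd_coeff=0.01):
    """Hypothetical sketch: rescale the weight-decay penalty every iteration so it
    stays proportional to the current gradient-to-weight-norm ratio."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    with torch.no_grad():
        grad_norm = torch.norm(torch.cat([p.grad.flatten() for p in model.parameters()
                                          if p.grad is not None]))
        weight_norm = torch.norm(torch.cat([p.flatten() for p in model.parameters()]))
        lambda_t = awd_coeff * grad_norm / (weight_norm + 1e-12)
        # Add the adaptively scaled decay term to the gradients before the update.
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(lambda_t * p)
    optimizer.step()
    return loss.item(), lambda_t.item()
```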