Search Results for author: Ali Shafahi

Found 16 papers, 5 papers with code

Understanding and Visualizing the District of Columbia Capital Bikeshare System Using Data Analysis for Balancing Purposes

no code implementations • 14 Aug 2017 • Kiana Roshan Zamir, Ali Shafahi, Ali Haghani

We also define two indices based on stations' shortages and surpluses that reflect the degree of balancing aid a station needs.

Management
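The excerpt above does not spell out the two indices, but the idea can be illustrated with a hypothetical computation over a station's availability time series (the function and thresholds below are assumptions, not the paper's definitions):

```python
import numpy as np

# Hypothetical shortage/surplus indices for one station, computed from a
# time series of available bikes; NOT the paper's exact definitions.
def station_balance_indices(bikes_available, capacity):
    obs = np.asarray(bikes_available)
    shortage_index = np.mean(obs == 0)         # fraction of time with no bikes to rent
    surplus_index = np.mean(obs == capacity)   # fraction of time with no open docks
    return shortage_index, surplus_index
```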

School bus routing by maximizing trip compatibility

no code implementations • 1 Nov 2017 • Ali Shafahi, Zhongxiang Wang, Ali Haghani

By importing the generated trips from the routing problem into the bus scheduling (blocking) problem, we show that the proposed model uses up to 13% fewer buses than common traditional routing models.

Blocking • Scheduling
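As a rough illustration of what "trip compatibility" means in this entry, the check below marks two trips as compatible when one bus can finish the first and still reach the second in time (the field names and deadhead term are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Trip:
    start_time: float   # minutes after midnight
    end_time: float

# Hypothetical compatibility check: trip_b can follow trip_a on the same
# bus if the bus can deadhead from a's end to b's start in time.
def compatible(trip_a: Trip, trip_b: Trip, deadhead_minutes: float) -> bool:
    return trip_a.end_time + deadhead_minutes <= trip_b.start_time
```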

Are adversarial examples inevitable?

no code implementations • ICLR 2019 • Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein

Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.

Universal Adversarial Training

no code implementations • 27 Nov 2018 • Ali Shafahi, Mahyar Najibi, Zheng Xu, John Dickerson, Larry S. Davis, Tom Goldstein

Standard adversarial attacks change the predicted class label of a selected image by adding specially tailored small perturbations to its pixels.
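In contrast to these per-image attacks, the paper studies universal perturbations: a single perturbation shared across all images. A minimal PyTorch-style sketch of jointly updating the model and one shared perturbation follows; the shapes, learning rates, and alternating update schedule are illustrative assumptions, not the authors' released code:

```python
import torch

# Sketch: train against one "universal" perturbation shared across images,
# alternating ascent on the perturbation with descent on the weights.
def universal_adversarial_training(model, loader, eps=0.03, lr_delta=0.01,
                                   lr_model=0.1, device="cpu"):
    delta = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr_model)
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = loss_fn(model((x + delta).clamp(0, 1)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()                                 # descend on model weights
        with torch.no_grad():
            delta += lr_delta * delta.grad.sign()  # ascend on the shared perturbation
            delta.clamp_(-eps, eps)                # keep it small (L_inf ball)
        delta.grad = None
    return delta.detach()
```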

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

1 code implementation • 15 May 2019 • Chen Zhu, W. Ronny Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein

Clean-label poisoning attacks inject innocuous-looking (and "correctly" labeled) poison images into training data, causing a model to misclassify a targeted image after being trained on this data.

Transfer Learning
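A common way to craft such poisons is a feature-collision objective: perturb a base image so its deep features match the target's while staying visually close to the base. The sketch below illustrates that objective; `feature_extractor`, `beta`, and the step budget are illustrative assumptions, and the listed paper extends this idea so the attack transfers across models:

```python
import torch

# Sketch of a feature-collision poisoning step: match the target's deep
# features while staying close to the base image in pixel space.
def craft_poison(feature_extractor, base, target, beta=0.1, steps=100, lr=0.01):
    poison = base.clone().requires_grad_(True)
    with torch.no_grad():
        target_feat = feature_extractor(target)
    for _ in range(steps):
        feat_loss = (feature_extractor(poison) - target_feat).pow(2).sum()
        pixel_loss = beta * (poison - base).pow(2).sum()  # keeps the poison innocuous-looking
        (feat_loss + pixel_loss).backward()
        with torch.no_grad():
            poison -= lr * poison.grad
            poison.clamp_(0, 1)
        poison.grad = None
    return poison.detach()
```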

Adversarially robust transfer learning

1 code implementation • ICLR 2020 • Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, Tom Goldstein

By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks.

Transfer Learning
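A minimal sketch of the recipe described in the excerpt: keep a robustly pre-trained feature extractor frozen and train only a new classification head (function and argument names here are assumptions):

```python
import torch.nn as nn

# Sketch: reuse a robustly pre-trained backbone as a frozen feature
# extractor; only the new linear head is trained, so the classifier
# inherits the robustness of the parent network's features.
def build_transfer_model(robust_backbone, feat_dim, num_classes):
    for p in robust_backbone.parameters():
        p.requires_grad = False
    head = nn.Linear(feat_dim, num_classes)
    return nn.Sequential(robust_backbone, head)
```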

Adversarial attacks on Copyright Detection Systems

no code implementations • ICML 2020 • Parsa Saadatpanah, Ali Shafahi, Tom Goldstein

Our goal is to raise awareness of the threats posed by adversarial examples in this space, and to highlight the importance of hardening copyright detection systems to attacks.

Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?

no code implementations • 25 Oct 2019 • Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein

Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization.

Adversarial Robustness
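Unlike adversarial training, both regularizers in the title are cheap to add to a standard loss. The sketch below combines label smoothing with a logit-squeezing penalty; the coefficients are illustrative, not the paper's settings:

```python
import torch
import torch.nn.functional as F

# Sketch: label smoothing plus logit squeezing as a single training loss.
def smoothed_squeezed_loss(logits, targets, smoothing=0.1, squeeze_coef=0.05):
    # Label smoothing: soften one-hot targets toward the uniform distribution.
    ls_loss = F.cross_entropy(logits, targets, label_smoothing=smoothing)
    # Logit squeezing: penalize large logit norms to damp overconfident
    # predictions, mimicking one effect of adversarial training.
    squeeze = squeeze_coef * logits.norm(p=2, dim=1).pow(2).mean()
    return ls_loss + squeeze
```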

WITCHcraft: Efficient PGD attacks with random step size

no code implementations • 18 Nov 2019 • Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi

State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points.

Computational Efficiency
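A sketch of the core idea, an L_inf PGD attack whose step size is redrawn at random each iteration instead of held fixed, assuming a PyTorch setup (epsilon, the step count, and the sampling range are illustrative):

```python
import torch

# Sketch: L_inf PGD with a step size drawn at random each iteration.
def pgd_random_step(model, x, y, eps=8/255, steps=20):
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        alpha = float(torch.empty(()).uniform_(0.5, 2.0)) * eps / steps  # random step size
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)  # project back
    return x_adv.detach()
```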

Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training

no code implementations • 30 May 2020 • Zheng Xu, Ali Shafahi, Tom Goldstein

Our adaptive networks also outperform larger widened non-adaptive architectures that have 1.5 times more parameters.

Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer

no code implementations • 14 Oct 2020 • Chen Zhu, Zheng Xu, Ali Shafahi, Manli Shu, Amin Ghiasi, Tom Goldstein

Further, we demonstrate that the compact structure and corresponding initialization from the Lottery Ticket Hypothesis can also help in data-free training.

Data Free Quantization • Transfer Learning
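The lottery-ticket initialization mentioned in the excerpt can be sketched as magnitude pruning plus rewinding the surviving weights to their values at initialization (per-tensor pruning and the sparsity level below are illustrative assumptions):

```python
import torch

# Sketch: lottery-ticket style init. `trained` and `init` are two copies of
# the same architecture: one after training, one at initialization.
def lottery_ticket_init(trained, init, sparsity=0.8):
    for p_t, p_0 in zip(trained.parameters(), init.parameters()):
        if p_t.dim() < 2:
            continue                       # leave biases/norm params dense
        k = max(1, int(p_t.numel() * sparsity))
        thresh = p_t.abs().flatten().kthvalue(k).values
        mask = (p_t.abs() > thresh).to(p_t.dtype)
        p_t.data.copy_(p_0.data * mask)    # masked subnetwork, rewound to init
```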

Improving Robustness with Adaptive Weight Decay

no code implementations • NeurIPS 2023 • Amin Ghiasi, Ali Shafahi, Reza Ardekani

We propose adaptive weight decay, which automatically tunes the hyper-parameter for weight decay during each training iteration.

Adversarial Robustness
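One way to read "automatically tunes the hyper-parameter for weight decay during each training iteration" is to rescale the decay coefficient every step by the ratio of the loss-gradient norm to the weight norm. The sketch below implements that reading; the rule and the `lambda_awd` value are a paraphrase, not necessarily the paper's exact formula:

```python
import torch

# Sketch of an adaptive weight-decay step: the decay coefficient is
# re-derived each iteration from the gradient-to-weight norm ratio.
def adaptive_wd_step(model, loss, opt, lambda_awd=0.02):
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():
        params = [p for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
        w_norm = torch.norm(torch.stack([p.norm() for p in params]))
        wd = lambda_awd * grad_norm / (w_norm + 1e-12)  # tuned on the fly
        for p in params:
            p.grad.add_(p, alpha=wd.item())             # add the decay gradient
    opt.step()
```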
