Search Results for author: Ayesha Siddique

Found 5 papers, 0 papers with code

RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks

no code implementations • 25 Jan 2023 • Ayesha Siddique, Ripan Kumar Kundu, Gautam Raj Mode, Khaza Anuarul Hoque

We observe that approximate adversarial training can significantly improve the robustness of PdM models (by up to 54X) and that it outperforms state-of-the-art PdM defense methods by offering 3X more robustness.

Adversarial Defense
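
The RobustPdM entry above refers to approximate adversarial training for predictive maintenance (PdM) models. The sketch below is not the paper's method; it only illustrates a plain FGSM-style adversarial-training step for a hypothetical PdM regression model (e.g., remaining-useful-life prediction), where the model, optimizer, batch, and epsilon are all assumptions, and the approximate-hardware aspect of the paper is not modeled.

```python
import torch
import torch.nn as nn

def fgsm_adversarial_training_step(model, x, y, optimizer, epsilon=0.05):
    """One adversarial-training step: perturb the inputs with FGSM, then
    train on the perturbed batch. Generic sketch, not RobustPdM itself."""
    loss_fn = nn.MSELoss()  # PdM is typically regression (e.g., remaining useful life)

    # Craft the FGSM perturbation: sign of the input gradient of the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Standard training update on the adversarial batch.
    optimizer.zero_grad()
    adv_loss = loss_fn(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```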

Security-Aware Approximate Spiking Neural Networks

no code implementations • 12 Jan 2023 • Syed Tihaam Ahmad, Ayesha Siddique, Khaza Anuarul Hoque

Researchers have therefore extensively studied the robustness and defense of DNNs and SNNs under adversarial attacks in recent years.

Quantization

Improving Reliability of Spiking Neural Networks through Fault Aware Threshold Voltage Optimization

no code implementations • 12 Jan 2023 • Ayesha Siddique, Khaza Anuarul Hoque

Our proposed FalVolt mitigation method improves the performance of systolicSNNs by enabling them to operate at fault rates of up to 60%, with a negligible drop in classification accuracy (as low as 0.1%).
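
The entry above reports that FalVolt keeps systolic-array SNNs usable at fault rates of up to 60%. FalVolt itself (fault-aware threshold voltage optimization) is not reproduced here; the sketch below only illustrates what a weight-level fault rate means, by zeroing a random fraction of weights as a crude stuck-at-0 fault model. The function and parameter names are hypothetical.

```python
import torch

def inject_stuck_at_zero_faults(model, fault_rate=0.6, seed=0):
    """Zero out a random fraction of each weight tensor to emulate
    stuck-at-0 hardware faults at the given fault rate (sketch only)."""
    gen = torch.Generator().manual_seed(seed)
    with torch.no_grad():
        for name, param in model.named_parameters():
            if "weight" in name:
                # Keep a weight with probability (1 - fault_rate); faulty cells read back as 0.
                mask = torch.rand(param.shape, generator=gen) >= fault_rate
                param.mul_(mask.to(param.dtype))
    return model
```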

Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?

no code implementations • 2 Dec 2021 • Ayesha Siddique, Khaza Anuarul Hoque

Approximate computing is known for its effectiveness in improving the energy efficiency of deep neural network (DNN) accelerators at the cost of a slight accuracy loss.

Adversarial Robustness
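
The entry above describes approximate computing as trading a small accuracy loss for energy efficiency in DNN accelerators. The papers study hardware approximate multipliers, which are not modeled here; as a loose software proxy, the sketch below rounds weights to 8-bit levels so the accuracy cost can be compared against the exact model, with the `evaluate` helper assumed.

```python
import torch

def simulate_int8_weight_approximation(model):
    """Round each weight tensor to 8-bit levels (per-tensor, symmetric)
    as a crude software stand-in for approximate arithmetic."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if "weight" not in name:
                continue
            scale = param.abs().max() / 127.0
            if scale > 0:
                param.copy_((param / scale).round().clamp(-128, 127) * scale)
    return model

# Hypothetical usage: compare accuracy of the exact and approximated model.
# acc_exact  = evaluate(model, test_loader)                                   # assumed helper
# acc_approx = evaluate(simulate_int8_weight_approximation(model), test_loader)
```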

Exploring Fault-Energy Trade-offs in Approximate DNN Hardware Accelerators

no code implementations • 8 Jan 2021 • Ayesha Siddique, Kanad Basu, Khaza Anuarul Hoque

Our quantitative analysis shows that permanent faults exacerbate the accuracy loss in AxDNNs compared to accurate DNN accelerators.
