no code implementations • 18 Jul 2022 • Changyong Oh, Roberto Bondesan, Dana Kianfar, Rehan Ahmed, Rishubh Khurana, Payal Agarwal, Romain Lepert, Mysore Sriram, Max Welling
Macro placement is the problem of placing memory blocks on a chip canvas.
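Macro placement is typically scored by a wirelength objective. As a minimal illustrative sketch (the cost function and data below are assumptions, not taken from the paper), half-perimeter wirelength (HPWL) is a standard placement proxy:

```python
# Hypothetical sketch: half-perimeter wirelength (HPWL), a common placement
# cost. A net's HPWL is the half-perimeter of the bounding box of its pins.
def hpwl(net_pins):
    """net_pins: list of (x, y) pin coordinates belonging to one net."""
    xs = [x for x, _ in net_pins]
    ys = [y for _, y in net_pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Total wirelength of a placement is the sum over all nets (toy data).
nets = [[(0, 0), (3, 4)], [(1, 1), (2, 5), (4, 2)]]
total = sum(hpwl(net) for net in nets)  # 7 + 7 = 14
```

A placer moves macros to shrink this total while respecting overlap and canvas constraints.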
no code implementations • 11 May 2022 • Matheus Schmitz, Rehan Ahmed, Jimi Cao
Numerous studies have shown that machine learning algorithms can latch onto protected attributes such as race and gender and generate predictions that systematically discriminate against one or more groups.
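One common way to quantify such discrimination is the demographic-parity gap: the difference in positive-prediction rates between groups. The sketch below uses synthetic predictions and a hypothetical helper name, purely for illustration:

```python
# Hypothetical sketch: demographic parity gap between two protected groups.
# Data and function name are illustrative, not from the paper.
def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rate between groups 0 and 1."""
    def rate(g):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # binary predictions
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # protected-attribute group per sample
gap = demographic_parity_gap(preds, groups)  # |3/4 - 1/4| = 0.5
```

A gap near zero suggests the classifier's positive rate does not depend on group membership; a large gap signals the systematic disparity described above.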
no code implementations • 28 May 2021 • Alison O'Shea, Rehan Ahmed, Gordon Lightbody, Sean Mathieson, Elena Pavlidis, Rhodri Lloyd, Francesco Pisani, Willian Marnane, Geraldine Boylan, Andriy Temko
An AUC of 88.3% was obtained when tested on preterm EEG, compared to 96.6% when tested on term EEG.
1 code implementation • 29 Jan 2019 • Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
To address this limitation, decision-based attacks have been proposed, which can estimate the model but require several thousand queries to generate a single untargeted attack image.
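To see why decision-based attacks are so query-hungry: they observe only the model's hard label, so every probe costs one query. The toy sketch below (a 1-D threshold "model" and a boundary-style random walk, all names and values assumed for illustration) counts the queries spent shrinking a perturbation while keeping the adversarial label:

```python
import random

random.seed(0)

def model(x):
    """Black-box toy classifier: returns only a hard label."""
    return int(x > 0.5)

def decision_attack(x0, x_adv, steps=2000, step_size=0.01):
    """Shrink the perturbation toward x0 while the hard-label decision stays flipped.
    Each call to model() is one query -- thousands are needed even in 1-D."""
    adv_label = model(x_adv)
    queries = 1
    for _ in range(steps):
        # drift toward the original input, plus a small random exploration term
        candidate = x_adv + step_size * (x0 - x_adv) + random.uniform(-0.001, 0.001)
        queries += 1
        if model(candidate) == adv_label:   # accept only moves that stay adversarial
            x_adv = candidate
    return x_adv, queries

x_adv, n_queries = decision_attack(x0=0.3, x_adv=0.9)  # n_queries == 2001
```

The walk converges to just above the decision boundary (0.5), having spent one query per step — the per-image cost the sentence above refers to.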
1 code implementation • 4 Nov 2018 • Hassan Ali, Faiq Khalid, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
In this paper, we introduce a novel Secure Selective Convolutional (SSC) technique in the training loop that increases the robustness of a given DNN by allowing it to learn the data distribution from the important edges in the input image.
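The edge-emphasis idea can be sketched with a standard Sobel edge map — note this is a generic illustration of extracting "important edges" from an input image, not the paper's actual SSC implementation:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel filters (borders left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # a vertical step edge at column 4
edges = sobel_edges(img)         # strong response along the step, zero elsewhere
```

Feeding such an edge map (or using it to weight inputs) during training is one plausible way to bias a network toward structurally important features, which is the intuition the sentence above describes.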
1 code implementation • 4 Nov 2018 • Faiq Khalid, Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to convolutional neural networks (CNNs).
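A minimal sketch of how such examples are crafted is the fast gradient sign method (FGSM): perturb the input in the direction of the sign of the loss gradient. The model below is a toy logistic regression with hand-picked weights, and the epsilon is exaggerated so the flip is visible on this toy:

```python
import numpy as np

# Toy logistic-regression "victim" model (weights are illustrative).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability of class 1."""
    return 1 / (1 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """FGSM: step in the sign of the cross-entropy gradient w.r.t. the input.
    For logistic regression, d(loss)/dx = (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 1.0])      # correctly classified as class 1
x_adv = fgsm(x, y=1, eps=0.7)       # now classified as class 0
```

The perturbation is aligned with the gradient sign rather than its magnitude, which is what makes FGSM a single-step, very cheap attack on CNNs and other differentiable models.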
no code implementations • 2 Nov 2018 • Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be mitigated by preprocessing during inference or identified during the validation phase.