Search Results for author: Rehan Ahmed

Found 7 papers, 0 papers with code

Bias and Fairness on Multimodal Emotion Detection Algorithms

no code implementations11 May 2022 Matheus Schmitz, Rehan Ahmed, Jimi Cao

Numerous studies have shown that machine learning algorithms can latch onto protected attributes such as race and gender and generate predictions that systematically discriminate against one or more groups.

Fairness, Multimodal Emotion Recognition

RED-Attack: Resource Efficient Decision based Attack for Machine Learning

no code implementations29 Jan 2019 Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

To address this limitation, decision-based attacks have been proposed that can estimate the model, but they require several thousand queries to generate a single untargeted attack image.

BIG-bench Machine Learning, General Classification +1
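The abstract above describes the hard-label (decision-based) threat model, where the attacker sees only the predicted class and no gradients or confidence scores. A minimal sketch of one standard primitive in that setting, a binary search toward the decision boundary, is below; the function and parameter names are illustrative and do not reproduce the paper's RED-Attack algorithm.

```python
def boundary_binary_search(model_label, x_orig, x_adv, orig_label, steps=30):
    """Shrink the gap between a clean input and a known-misclassified one,
    using only hard-label queries (one model query per step).

    model_label : callable returning only the predicted class (the
                  decision-based oracle; name is hypothetical).
    x_orig      : clean input, classified as orig_label.
    x_adv       : any input already misclassified by the model.
    Returns an input just past the decision boundary, close to x_orig.
    """
    lo, hi = 0.0, 1.0  # fraction of the way from x_orig to x_adv
    for _ in range(steps):
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_orig + mid * x_adv
        if model_label(x_mid) == orig_label:
            lo = mid   # still correctly classified: move outward
        else:
            hi = mid   # already misclassified: tighten toward x_orig
    return (1 - hi) * x_orig + hi * x_adv
```

Each iteration costs exactly one query, which is why decision-based attacks built from primitives like this can need thousands of queries overall, the inefficiency the paper targets.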

QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks

no code implementations4 Nov 2018 Faiq Khalid, Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

Adversarial examples have emerged as a significant threat to machine learning algorithms, especially convolutional neural networks (CNNs).

Quantization
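As the title indicates, QuSecNets defends by quantizing inputs so that small adversarial perturbations collapse back onto the same quantization level. A minimal NumPy sketch of the idea; the function name and the `levels` parameter are assumptions for illustration, not the paper's API:

```python
import numpy as np

def quantize_inputs(x, levels=4):
    """Snap pixel intensities in [0, 1] onto a uniform grid of `levels`
    values. Perturbations smaller than half a quantization bin are
    mapped back to the same level and thus removed before inference."""
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * (levels - 1)) / (levels - 1)
```

Coarser quantization absorbs larger perturbations but also discards more legitimate image detail, so the number of levels is a robustness/accuracy trade-off.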

SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters

no code implementations4 Nov 2018 Hassan Ali, Faiq Khalid, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

In this paper, we introduce a novel technique based on Secure Selective Convolutional (SSC) filters in the training loop, which increases the robustness of a given DNN by allowing it to learn the data distribution based on the important edges in the input image.
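The abstract's core idea is to emphasize important edges in the input before the network learns from it. A hedged sketch of edge extraction using a Sobel filter, a standard edge detector used here only as a stand-in for the paper's secure selective filters; all names and the threshold are illustrative:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_mask(img, threshold=0.5):
    """Binary mask of strong edges in a 2-D grayscale image, via Sobel
    gradient magnitude with 'valid' padding (output shrinks by 2)."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * SOBEL_X)  # horizontal gradient
            gy[i, j] = np.sum(patch * SOBEL_Y)  # vertical gradient
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-9) > threshold
```

Training on edge-emphasized inputs forces the network to rely on structural features rather than low-amplitude textures, which is where many adversarial perturbations live.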

TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks

no code implementations2 Nov 2018 Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be mitigated by preprocessing during inference or identified during the validation phase.

Autonomous Driving, Data Poisoning +4
