no code implementations • 1 Apr 2025 • Jaechul Roh, Virat Shejwalkar, Amir Houmansadr
Large Audio Language Models (LALMs) have significantly advanced audio understanding but introduce critical security risks, particularly through audio jailbreaks.
1 code implementation • 4 Feb 2025 • Abhinav Kumar, Jaechul Roh, Ali Naseh, Marzena Karpinska, Mohit Iyyer, Amir Houmansadr, Eugene Bagdasarian
We evaluated our attack across closed-weights (OpenAI o1, o1-mini, o3-mini) and open-weights (DeepSeek R1) reasoning models on the FreshQA and SQuAD datasets.
no code implementations • 3 Feb 2025 • Momin Ahmad Khan, Virat Shejwalkar, Yasra Chandio, Amir Houmansadr, Fatima Muhammad Anwar
While the community has designed various defenses to counter the threat of poisoning attacks in Federated Learning (FL), there are no guidelines for evaluating these defenses.
no code implementations • 1 Feb 2025 • Ali Naseh, Yuefeng Peng, Anshuman Suri, Harsh Chaudhari, Alina Oprea, Amir Houmansadr
Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to generate grounded responses by leveraging external knowledge databases without altering model parameters.
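To ground the idea, here is a minimal retrieve-then-prompt sketch: documents are embedded, the nearest ones to a query are retrieved, and they are prepended to the prompt while the model's parameters stay untouched. The hashed bag-of-words embedding, the toy knowledge base, and the prompt template are all illustrative placeholders, not any particular system's pipeline.

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones to a query, and
# prepend them to the prompt. The hashed bag-of-words "embedding" is a toy stand-in.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hashed bag-of-words, L2-normalized."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

knowledge_base = [
    "Tor routes traffic through three relays to hide the user's IP address.",
    "Federated learning trains a shared model without centralizing raw data.",
    "Differential privacy adds calibrated noise to bound information leakage.",
]
doc_vecs = np.stack([embed(d) for d in knowledge_base])

def retrieve(query: str, k: int = 2) -> list[str]:
    sims = doc_vecs @ embed(query)        # cosine similarity (vectors are unit-norm)
    top = np.argsort(sims)[::-1][:k]
    return [knowledge_base[i] for i in top]

query = "How does federated learning protect data?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM; its weights stay unchanged
```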
no code implementations • 15 Nov 2024 • Hesam Hosseini, Ghazal Hosseini Mighan, Amirabbas Afzali, Sajjad Amini, Amir Houmansadr
Based on this framework, we demonstrate that zero-shot unsupervised semantic segmentation can be performed effectively without any fine-tuning using a model pre-trained for tasks other than segmentation.
no code implementations • 3 Nov 2024 • Yuefeng Peng, Junda Wang, Hong Yu, Amir Houmansadr
For example, on Gemma-2B-IT, we show that with only 5% poisoned data, our method achieves an average success rate of 94.1% for verbatim extraction (ROUGE-L score: 82.1) and 63.6% for paraphrased extraction (average ROUGE score: 66.4) across four datasets.
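For readers unfamiliar with the metric, a small sketch of how extraction success can be scored with ROUGE-L (via the rouge-score package) follows; the strings and the decision threshold are invented for illustration and are not the paper's data or rule.

```python
# Hedged sketch: scoring how closely a model's output reproduces a private training
# record using ROUGE-L, as in the reported metrics. Strings and the 0.8 threshold
# are made up for illustration.
from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

secret_record = "Patient John Doe was prescribed 20mg of drug X on March 3."
model_output  = "Patient John Doe was prescribed 20 mg of drug X on March 3."

score = scorer.score(secret_record, model_output)["rougeL"].fmeasure
print(f"ROUGE-L F1 = {score:.3f}")
extracted = score > 0.8   # illustrative threshold for calling it a verbatim extraction
```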
no code implementations • 15 Oct 2024 • Hyejun Jeong, Shiqing Ma, Amir Houmansadr
In generative AI, such as Large Language Models, the impact of bias is even more profound than in classification models.
no code implementations • 21 Jun 2024 • Ali Naseh, Jaechul Roh, Eugene Bagdasaryan, Amir Houmansadr
Furthermore, we show how current state-of-the-art generative models make this attack both cheap and feasible for any adversary, with costs ranging between $12 and $18.
1 code implementation • 20 Jun 2024 • Yapei Chang, Kalpesh Krishna, Amir Houmansadr, John Wieting, Mohit Iyyer
The most effective techniques to detect LLM-generated text rely on inserting a detectable signature -- or watermark -- during the model's decoding process.
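As a concrete (if generic) illustration of such a decoding-time signature, the sketch below implements a keyed green-list detector in the spirit of Kirchenbauer et al.; it is not the specific scheme studied in this paper, and the vocabulary, key, and green fraction are placeholders.

```python
# Generic sketch of a decoding-time "green list" watermark check. A keyed hash of the
# previous token partitions the vocabulary; watermarked decoding favors "green" tokens,
# so detection counts how many observed token transitions are green.
import hashlib

KEY = b"secret-watermark-key"   # shared between generator and detector
GREEN_FRACTION = 0.5

def is_green(prev_token: int, token: int) -> bool:
    h = hashlib.sha256(KEY + prev_token.to_bytes(4, "big") + token.to_bytes(4, "big"))
    return int.from_bytes(h.digest()[:4], "big") / 2**32 < GREEN_FRACTION

def green_rate(token_ids: list[int]) -> float:
    hits = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    return hits / max(len(token_ids) - 1, 1)

# Unwatermarked text should score near GREEN_FRACTION; watermarked text well above it.
print(green_rate([17, 4021, 883, 905, 33, 12, 7, 9981]))
```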
1 code implementation • 9 Jun 2024 • Sajjad Amini, Mohammadreza Teymoorianfard, Shiqing Ma, Amir Houmansadr
We present a simple yet effective method to improve the robustness of both Convolutional and attention-based Neural Networks against adversarial examples by post-processing an adversarially trained model.
no code implementations • 27 May 2024 • Yuefeng Peng, Jaechul Roh, Subhransu Maji, Amir Houmansadr
The core idea is that a member sample exhibits more resistance to adversarial perturbations than a non-member.
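A toy rendering of that intuition: estimate how large a perturbation is needed to flip the model's prediction and guess "member" when the distance is large. The linear stand-in model, the FGSM-style grid search, and the threshold below are placeholders rather than the paper's actual attack.

```python
# Toy sketch of perturbation-based membership inference: samples that need a larger
# perturbation to change the prediction are guessed to be training members.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(20, 2)   # stand-in for the target classifier
model.eval()

def distance_to_flip(x: torch.Tensor, max_eps: float = 2.0, steps: int = 50) -> float:
    """Smallest FGSM-style budget (on a grid) that changes the predicted class."""
    x = x.clone().requires_grad_(True)
    base = model(x).argmax(dim=-1)
    loss = F.cross_entropy(model(x), base)
    loss.backward()
    direction = x.grad.sign()
    for eps in torch.linspace(0, max_eps, steps):
        if model(x + eps * direction).argmax(dim=-1) != base:
            return float(eps)
    return float(max_eps)

sample = torch.randn(1, 20)
is_member_guess = distance_to_flip(sample) > 0.5   # illustrative threshold
```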
no code implementations • 21 Apr 2024 • Ali Naseh, Katherine Thai, Mohit Iyyer, Amir Houmansadr
With the digital imagery landscape rapidly evolving, image stocks and AI-generated image marketplaces have become central to visual media.
no code implementations • 10 Mar 2024 • Hamid Mozaffari, Sunav Choudhary, Amir Houmansadr
Federated learning (FL) is a distributed machine learning paradigm that enables training models on decentralized data.
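For context, a bare-bones FedAvg-style aggregation step, the canonical way such decentralized training is realized, is sketched below; the shapes, client counts, and dataset sizes are illustrative only.

```python
# Minimal FedAvg-style aggregation: the server averages client model updates weighted
# by local dataset size, so raw data never leaves the clients.
import numpy as np

def fedavg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

updates = [np.random.randn(1000) for _ in range(5)]   # flattened model updates
sizes = [120, 80, 200, 50, 150]                       # local dataset sizes
global_update = fedavg(updates, sizes)
```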
no code implementations • 4 Mar 2024 • Hyejun Jeong, Shiqing Ma, Amir Houmansadr
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
no code implementations • 7 Dec 2023 • Yuefeng Peng, Ali Naseh, Amir Houmansadr
A unique feature of DIFFENCE is that it works on input samples only, without modifying the training or inference phase of the target model.
no code implementations • 6 Dec 2023 • Ali Naseh, Jaechul Roh, Amir Houmansadr
Diffusion-based models, such as the Stable Diffusion model, have revolutionized text-to-image synthesis with their ability to produce high-quality, high-resolution images.
no code implementations • 6 Dec 2023 • Ali Naseh, Jaechul Roh, Amir Houmansadr
Multimodal machine learning, especially text-to-image models like Stable Diffusion and DALL-E 3, has gained significance for transforming text into detailed images.
1 code implementation • 29 Oct 2023 • Dzung Pham, Shreyas Kulkarni, Amir Houmansadr
We introduce RAIFLE, a novel optimization-based attack framework where the server actively manipulates the features of the items presented to users to increase the success rate of reconstruction.
1 code implementation • 18 Sep 2023 • Alireza Bahramali, Ardavan Bozorgi, Amir Houmansadr
Our extensive open-world and closed-world experiments demonstrate that, under practical evaluation settings, our WF attacks outperform the state of the art; this is because they are trained on augmented network traces, which allows them to learn the features of target traffic in unobserved settings.
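A toy illustration of the augmentation idea: randomly inject dummy packets and jitter packet sizes so each training trace yields many variants. This is a simplified stand-in, not the specific augmentation strategy used in the paper.

```python
# Toy augmentation of a network trace for WF training: jitter packet sizes and insert
# dummy packets so each training trace yields many variants.
import random

def augment_trace(trace, insert_prob=0.1, size_jitter=50):
    """trace: list of (direction, size) tuples; direction is +1 (out) or -1 (in)."""
    out = []
    for direction, size in trace:
        out.append((direction, max(1, size + random.randint(-size_jitter, size_jitter))))
        if random.random() < insert_prob:                 # inject a dummy packet
            out.append((random.choice([-1, 1]), random.randint(60, 1500)))
    return out

original = [(1, 580), (-1, 1500), (-1, 1500), (1, 74)]
print(augment_trace(original))
```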
1 code implementation • 8 Mar 2023 • Ali Naseh, Kalpesh Krishna, Mohit Iyyer, Amir Houmansadr
A key component of generating text from modern language models (LMs) is the selection and tuning of decoding algorithms.
no code implementations • 4 Dec 2022 • Momin Ahmad Khan, Virat Shejwalkar, Amir Houmansadr, Fatima Muhammad Anwar
We observe that model updates in SplitFed have significantly lower dimensionality than in FL, which is known to suffer from the curse of dimensionality.
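One quick way to see that gap is to count the parameters a client shares under split learning (only the layers before the cut) versus full FL (the entire model); the architecture and cut point below are arbitrary examples, not the paper's setup.

```python
# In SplitFed a client only holds (and shares updates for) the layers before the cut,
# whereas in FL it shares the full model. Counting parameters shows the size gap.
import torch.nn as nn

full_model = nn.Sequential(
    nn.Conv2d(3, 32, 3), nn.ReLU(),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 28 * 28, 512), nn.ReLU(),
    nn.Linear(512, 10),
)
client_part = full_model[:4]   # layers before the cut stay on the client

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"FL update size:       {count(full_model):,} parameters")
print(f"SplitFed update size: {count(client_part):,} parameters")
```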
no code implementations • 20 May 2022 • Hamid Mozaffari, Amir Houmansadr
Federated Learning (FL) enables data owners to train a shared global model without sharing their private data.
no code implementations • 15 Oct 2021 • Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal
The goal of this work is to train ML models that have high membership privacy while largely preserving their utility. We therefore aim for an empirical membership privacy guarantee, as opposed to the provable guarantees offered by techniques like differential privacy, which are known to deteriorate model utility.
no code implementations • 8 Oct 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr
The FRL server uses a voting mechanism to aggregate the parameter rankings submitted by clients in each training epoch and generate the global ranking for the next epoch.
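A simplified sketch of such rank aggregation, summing per-client ranks Borda-style and sorting, is shown below; FRL's exact voting rule may differ in detail, and the sizes are illustrative.

```python
# Simplified vote-based rank aggregation: each client submits a ranking of parameter
# (edge) indices, the server sums the ranks and sorts to obtain the global ranking
# for the next round. This Borda-style rule is an illustration only.
import numpy as np

def aggregate_rankings(client_rankings: np.ndarray) -> np.ndarray:
    """client_rankings[c, i] = rank client c assigns to parameter i (higher = more useful)."""
    scores = client_rankings.sum(axis=0)   # total votes per parameter
    return np.argsort(-scores)             # global ordering, most useful first

n_clients, n_params = 4, 8
rng = np.random.default_rng(0)
rankings = np.stack([rng.permutation(n_params) for _ in range(n_clients)])
print(aggregate_rankings(rankings))
```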
no code implementations • 29 Sep 2021 • Hamid Mozaffari, Virat Shejwalkar, Amir Houmansadr
FSL clients share local subnetworks in the form of rankings of network edges; more useful edges have higher ranks.
1 code implementation • 23 Aug 2021 • Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage
While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.
no code implementations • 1 Feb 2021 • Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley
We show that in the presence of defense mechanisms deployed by the communicating parties, our attack performs significantly better compared to existing attacks against DNN-based wireless systems.
no code implementations • 22 Jul 2020 • Milad Nasr, Reza Shokri, Amir Houmansadr
We show that our mechanism outperforms the state-of-the-art DPSGD; for instance, for the same model accuracy of $96.1\%$ on MNIST, our technique results in a privacy bound of $\epsilon=3.2$ compared to $\epsilon=6$ for DPSGD, which is a significant improvement.
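For reference, the DP-SGD baseline being compared against performs per-example gradient clipping followed by Gaussian noising; a bare-bones sketch with arbitrary hyperparameters is below (the paper's own mechanism differs, which is the point of the comparison).

```python
# Bare-bones DP-SGD step: clip each per-example gradient to norm C, add Gaussian noise
# scaled by the noise multiplier, then average. Hyperparameters are arbitrary.
import numpy as np

def dpsgd_step(per_example_grads: np.ndarray, clip_norm: float = 1.0,
               noise_multiplier: float = 1.1) -> np.ndarray:
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / norms)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(clipped)

grads = np.random.randn(32, 1000)   # one gradient per example in the batch
noisy_avg_grad = dpsgd_step(grads)
```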
1 code implementation • 16 Feb 2020 • Milad Nasr, Alireza Bahramali, Amir Houmansadr
Deep Neural Networks (DNNs) are commonly used for various traffic analysis problems, such as website fingerprinting and flow correlation, as they outperform traditional (e.g., statistical) techniques by large margins.
no code implementations • 24 Dec 2019 • Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr
Collaborative (federated) learning enables multiple parties to train a model without sharing their private data, but through repeated sharing of the parameters of their local models.
no code implementations • 15 Jun 2019 • Virat Shejwalkar, Amir Houmansadr
Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which aim to infer whether the target sample is a member of the target model's training dataset.
4 code implementations • 3 Dec 2018 • Milad Nasr, Reza Shokri, Amir Houmansadr
Deep neural networks are susceptible to various inference attacks as they remember information about their training data.
no code implementations • 22 Aug 2018 • Milad Nasr, Alireza Bahramali, Amir Houmansadr
Flow correlation is the core technique used in a multitude of deanonymization attacks on Tor.
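A toy flow-correlation sketch, correlating inter-packet delays of an ingress and an egress flow and thresholding the score, is below; real attacks (including the DNN-based ones in this line of work) use far richer features, and the threshold is arbitrary.

```python
# Toy flow correlation: correlate inter-packet delays of a flow observed entering Tor
# with one observed exiting, and flag high correlation as a match.
import numpy as np

def correlate_flows(ingress_times: np.ndarray, egress_times: np.ndarray) -> float:
    ipd_in, ipd_out = np.diff(ingress_times), np.diff(egress_times)
    n = min(len(ipd_in), len(ipd_out))
    return float(np.corrcoef(ipd_in[:n], ipd_out[:n])[0, 1])

t_in = np.cumsum(np.random.exponential(0.05, size=200))     # synthetic packet timestamps
t_out = t_in + 0.12 + np.random.normal(0, 0.005, size=200)  # same flow, shifted + jittered
print(correlate_flows(t_in, t_out))                          # close to 1 for matched flows
same_flow = correlate_flows(t_in, t_out) > 0.7               # illustrative decision rule
```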
1 code implementation • 16 Jul 2018 • Milad Nasr, Reza Shokri, Amir Houmansadr
In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters.
no code implementations • 30 Sep 2017 • Nazanin Takbiri, Amir Houmansadr, Dennis L. Goeckel, Hossein Pishro-Nik
Here we derive the fundamental limits of user privacy when both anonymization and obfuscation-based protection mechanisms are applied to users' time series of data.
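A small sketch of the two mechanisms named, anonymization as random re-assignment of pseudonyms and obfuscation as per-sample noise, follows; the noise level and data are illustrative only.

```python
# Anonymization as a random re-assignment of pseudonyms, and obfuscation as noise
# added to each user's time series of data points. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_steps = 5, 100
data = rng.normal(size=(n_users, n_steps))   # each row: one user's time series

def anonymize(series: np.ndarray) -> np.ndarray:
    """Randomly permute which pseudonym each user's series is published under."""
    return series[rng.permutation(len(series))]

def obfuscate(series: np.ndarray, noise_std: float = 0.5) -> np.ndarray:
    """Add independent noise to every reported data point."""
    return series + rng.normal(scale=noise_std, size=series.shape)

released = obfuscate(anonymize(data))
```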
1 code implementation • 14 Nov 2012 • Amir Houmansadr, Wenxuan Zhou, Matthew Caesar, Nikita Borisov
As the operation of SWEET is not bound to specific email providers, we argue that a censor would need to block all email communications to disrupt SWEET, which is infeasible since email constitutes an important part of today's Internet.