no code implementations • 21 Feb 2025 • Pedram Zaree, Md Abdullah Al Mamun, Quazi Mishkatul Alam, Yue Dong, Ihsen Alouani, Nael Abu-Ghazaleh
Recent research has shown that carefully crafted jailbreak inputs can induce large language models to produce harmful outputs, despite safety measures such as alignment.
no code implementations • 30 Nov 2023 • Bilel Tarchoun, Quazi Mishkatul Alam, Nael Abu-Ghazaleh, Ihsen Alouani
Adversarial patches are a tangible, real-world manifestation of the threat that adversarial attacks pose to Machine Learning (ML) models.
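To make the threat concrete: a patch attack physically or digitally overlays a crafted region onto the input the model sees. The sketch below shows only the generic patch-application step; the function name, sizes, and placement are illustrative assumptions, not this paper's method.

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Overlay a (crafted) patch onto an image at the given location.

    Illustrative only: real patch attacks optimize the patch contents
    to maximize the victim model's error.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Example: paste a random 50x50 "patch" onto a 224x224 RGB image.
image = np.zeros((224, 224, 3), dtype=np.float32)
patch = np.random.rand(50, 50, 3).astype(np.float32)
patched = apply_patch(image, patch, top=80, left=80)
```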
no code implementations • 21 Nov 2023 • Quazi Mishkatul Alam, Bilel Tarchoun, Ihsen Alouani, Nael Abu-Ghazaleh
The latest generation of transformer-based vision models has proven superior to Convolutional Neural Network (CNN)-based models across several vision tasks, a gain largely attributed to their strength in relation modeling.
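The relation modeling credited here is the self-attention mechanism underlying vision transformers, which scores every pairwise interaction between patch tokens. A minimal numpy sketch of standard scaled dot-product attention (token count and embedding size are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention: every token attends to all others."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d)           # pairwise token relations
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)              # softmax over tokens
    return weights @ v                                     # relation-weighted mix of values

# Example: 16 image-patch tokens with 64-dimensional embeddings (self-attention).
tokens = np.random.rand(16, 64)
out = scaled_dot_product_attention(tokens, tokens, tokens)
```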
no code implementations • 22 Jul 2023 • Quazi Mishkatul Alam, Israat Haque, Nael Abu-Ghazaleh
In this paper, we introduce LtC, a collaborative framework between the video source and the analytics server that efficiently learns to reduce video streams within an analytics pipeline.
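As a purely hypothetical illustration of source-side stream reduction (the learned policy LtC actually uses is not detailed in this excerpt), a source could skip frames that barely differ from the last transmitted one; `keep_frame` and its threshold below are assumptions for illustration only.

```python
import numpy as np

def keep_frame(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.05) -> bool:
    """Send a frame to the analytics server only if it differs enough
    from the last transmitted frame (simple frame differencing)."""
    change = np.mean(np.abs(curr.astype(np.float32) - prev.astype(np.float32))) / 255.0
    return change > threshold

# Example: a frame with mean pixel change ~0.157 exceeds the 0.05 threshold.
prev = np.zeros((720, 1280, 3), dtype=np.uint8)
curr = np.full((720, 1280, 3), 40, dtype=np.uint8)
print(keep_frame(prev, curr))  # True -> transmit this frame
```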
no code implementations • 17 Jul 2023 • Md Abdullah Al Mamun, Quazi Mishkatul Alam, Erfan Shayegani, Pedram Zaree, Ihsen Alouani, Nael Abu-Ghazaleh
In this paper, we propose a new information-theoretic perspective on the problem: we consider the ML model as a storage channel whose capacity increases with overparameterization.
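For reference, the storage-channel framing borrows the standard information-theoretic notion of capacity: the maximum mutual information between channel input and output. A textbook statement (not necessarily the paper's exact formulation):

```latex
% Shannon channel capacity: the maximum mutual information between the
% channel input X and output Y, over all input distributions p(x).
C = \max_{p(x)} I(X; Y)
  = \max_{p(x)} \sum_{x, y} p(x)\, p(y \mid x) \log \frac{p(y \mid x)}{p(y)}
```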