no code implementations • 11 Nov 2024 • Bacui Li, Tansu Alpcan, Chandra Thapa, Udaya Parampalli
However, machine learning models are well-documented to be vulnerable to malicious manipulations, and this susceptibility extends to QML models.
no code implementations • 25 Sep 2024 • Hevish Cowlessur, Chandra Thapa, Tansu Alpcan, Seyit Camtepe
HQSL enables classical clients to train models with a hybrid quantum server and curtails reconstruction attacks.
no code implementations • 17 Sep 2024 • Wei Shao, Chandra Thapa, Rayne Holland, Sarah Ali Siddiqui, Seyit Camtepe
In this paper, we introduce a reinforcement learning-based side-channel cache attack framework specifically designed for network slicing environments.
1 code implementation • 5 Aug 2024 • William Holland, Chandra Thapa, Sarah Ali Siddiqui, Wei Shao, Seyit Camtepe
Thus, high-fidelity distilled data can support the efficient deployment of machine learning applications in distributed network environments.
no code implementations • 4 Jun 2024 • Wei Shao, Rongyi Zhu, Cai Yang, Chandra Thapa, Muhammad Ejaz Ahmed, Seyit Camtepe, Rui Zhang, Duyong Kim, Hamid Menouar, Flora D. Salim
Spatiotemporal data is prevalent in a wide range of edge devices, such as those used in personal communication and financial transactions.
1 code implementation • 15 Dec 2023 • Falih Gozi Febrinanto, Kristen Moore, Chandra Thapa, Mujie Liu, Vidya Saikrishna, Jiangang Ma, Feng Xia
Many multivariate time series anomaly detection frameworks have been proposed and widely applied.
no code implementations • 13 Dec 2023 • Yanqiu Wu, Eromanga Adermann, Chandra Thapa, Seyit Camtepe, Hajime Suzuki, Muhammad Usman
Our extensive simulation results show that attacks generated on QVCs transfer well to CNN models, indicating that these adversarial examples can fool neural networks they were not explicitly designed to attack.
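As a hedged illustration of how such transferable adversarial examples are typically crafted, the sketch below uses the fast gradient sign method (FGSM), a standard perturbation technique; the gradient values are toy stand-ins, not the paper's QVC-based attack.

```python
# FGSM sketch: perturb each input feature by a small step eps in the
# direction of the sign of the loss gradient. The gradient below is a
# toy stand-in, not computed from a real model.

def fgsm_perturb(x, grad, eps):
    """Return an adversarial copy of x, stepped by eps along sign(grad)."""
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x = [0.5, 0.5, 0.5]
grad = [1.0, -1.0, 0.0]                 # toy loss gradient w.r.t. x
x_adv = fgsm_perturb(x, grad, eps=0.25)
print(x_adv)  # [0.75, 0.25, 0.5]
```

The same perturbed input can then be fed to a second model to probe transferability, as the entry above describes for QVC-generated attacks against CNNs.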
no code implementations • 25 Jul 2023 • Praveen Joshi, Chandra Thapa, Mohammed Hasanuzzaman, Ted Scully, Haithem Afli
Among various techniques in a DCML framework, federated split learning, known as splitfed learning (SFL), is the most suitable for efficient training and testing when devices have limited computational capabilities.
no code implementations • 18 Jul 2023 • Anahita Namvar, Chandra Thapa, Salil S. Kanhere
IoT device identification is the process of recognizing and verifying IoT devices connected to the network.
no code implementations • 3 Feb 2023 • Qun Li, Chandra Thapa, Lawrence Ong, Yifeng Zheng, Hua Ma, Seyit A. Camtepe, Anmin Fu, Yansong Gao
In a number of practical scenarios, VFL is more relevant than HFL as different companies (e.g., bank and retailer) hold different features (e.g., credit history and shopping history) for the same set of customers.
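The VFL data partition described above can be illustrated in a few lines; the customer records and feature names below are invented for the example.

```python
# VFL setting: two parties hold the SAME customers but DIFFERENT features,
# so training requires aligning records on customer identity.
# All values here are illustrative.

bank = {"alice": {"credit": 700}, "bob": {"credit": 650}}
retailer = {"alice": {"spend": 120.0}, "bob": {"spend": 80.5}}

# Join on the shared customer IDs to form the full (vertical) feature set.
vfl_joined = {
    cid: {**bank[cid], **retailer[cid]}
    for cid in bank.keys() & retailer.keys()
}
print(vfl_joined["alice"])  # {'credit': 700, 'spend': 120.0}
```

In HFL, by contrast, the parties would hold the same features for disjoint sets of customers, so no such per-customer join is needed.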
no code implementations • 7 Apr 2022 • Praveen Joshi, Mohammed Hasanuzzaman, Chandra Thapa, Haithem Afli, Ted Scully
Secondly, this paper presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers.
no code implementations • 7 Apr 2022 • Chandra Thapa, Seung Ick Jang, Muhammad Ejaz Ahmed, Seyit Camtepe, Josef Pieprzyk, Surya Nepal
The large transformer-based language models demonstrate excellent performance in natural language processing.
no code implementations • 22 Feb 2022 • Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal
Lifelong learning methods that enable continuous learning in regular domains like images and text cannot be directly applied to continuously evolving graph data, due to its irregular structure.
no code implementations • 19 Sep 2021 • Praveen Joshi, Chandra Thapa, Seyit Camtepe, Mohammed Hasanuzzaman, Ted Scully, Haithem Afli
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning that are gaining attention due to their ability to preserve the privacy of raw data.
no code implementations • 9 Jun 2021 • Chandra Thapa, Kallol Krishna Karmakar, Alberto Huertas Celdran, Seyit Camtepe, Vijay Varadharajan, Surya Nepal
FedDICE integrates federated learning (FL), a privacy-preserving learning approach, into an SDN-oriented security architecture to enable collaborative learning, detection, and mitigation of ransomware attacks.
1 code implementation • 3 Mar 2021 • Yansong Gao, Minki Kim, Chandra Thapa, Sharif Abuadbba, Zhi Zhang, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal
Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques to enable machine learning training without accessing raw data on clients or end devices.
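The FL side of this comparison rests on weighted aggregation of client model updates; a minimal sketch of federated averaging (FedAvg, the standard FL aggregation rule, not necessarily this paper's exact variant) follows, with toy one-parameter models.

```python
# Minimal FedAvg sketch: average client parameters, weighted by each
# client's local dataset size, so raw data never leaves the clients.

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists.

    client_weights: list of parameter lists, one per client
    client_sizes: number of local training samples per client
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with a single-parameter model; client 2 holds 3x the data.
avg = fed_avg([[1.0], [3.0]], [10, 30])
print(avg)  # [2.5]
```

SL differs in that the model itself, not just the data, is partitioned between client and server, which is the source of the trade-offs the entry above evaluates.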
no code implementations • 25 Nov 2020 • Chandra Thapa, M. A. P. Chamikara, Seyit A. Camtepe
In practical scenarios, not all clients have sufficient computing resources (e.g., Internet of Things devices), the machine learning model has millions of parameters, and model privacy between the server and the clients during training/testing is a prime concern (e.g., rival parties).
no code implementations • 24 Aug 2020 • Chandra Thapa, Seyit Camtepe
Secondly, this paper investigates secure and privacy-preserving machine learning methods suitable for the computation of precision health data along with their usage in relevant health projects.
no code implementations • 27 Jul 2020 • Chandra Thapa, Jun Wen Tang, Alsharif Abuadbba, Yansong Gao, Seyit Camtepe, Surya Nepal, Mahathir Almashor, Yifeng Zheng
For a fixed total email dataset, the global RNN based model suffers by a 1.8% accuracy drop when increasing organizational counts from 2 to 10.
2 code implementations • 25 Apr 2020 • Chandra Thapa, M. A. P. Chamikara, Seyit Camtepe, Lichao Sun
SL provides better model privacy than FL due to the machine learning model architecture split between clients and the server.
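The architecture split behind this model-privacy property can be sketched as follows; the layer functions are toy stand-ins for the early (client-side) and remaining (server-side) layers.

```python
# Split learning sketch: the client runs only the early layers and sends
# intermediate activations ("smashed data") to the server. Neither the
# raw inputs nor the full model ever sits in one place.

def client_layers(x):
    # early layers held by the client; raw data x never leaves here
    return [2 * v for v in x]           # toy activations (smashed data)

def server_layers(smashed):
    # remaining layers held by the server
    return sum(smashed)                 # toy "head" producing a score

raw = [1.0, 2.0, 3.0]
activation = client_layers(raw)         # only this crosses the network
output = server_layers(activation)
print(output)  # 12.0
```

Because the server sees only `activation`, not `raw` or the client's layer parameters, splitting the architecture limits what either party can reconstruct about the other's share.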
1 code implementation • 30 Mar 2020 • Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal
For learning performance, which is specified by the model accuracy and convergence speed metrics, we empirically evaluate both FL and SplitNN under different types of data distributions such as imbalanced and non-independent and identically distributed (non-IID) data.
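One common way to construct the non-IID partitions used in such evaluations is label skew, where each client sees only a subset of classes; the sketch below is a generic illustration, with invented function names, not the paper's exact partitioning scheme.

```python
# Label-skew partition sketch: assign sample indices to clients so that
# each client holds data from only `classes_per_client` classes, giving
# a non-IID distribution across clients.

from collections import defaultdict

def label_skew_partition(labels, n_clients, classes_per_client):
    """Map each client id to a list of sample indices with skewed labels."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    shards = {}
    for c in range(n_clients):
        # rotate through the class list so clients get different subsets
        owned = [classes[(c + k) % len(classes)] for k in range(classes_per_client)]
        shards[c] = [i for cls in owned for i in by_class[cls]]
    return shards

labels = [0, 0, 1, 1, 2, 2]
shards = label_skew_partition(labels, n_clients=3, classes_per_client=1)
print(shards)  # {0: [0, 1], 1: [2, 3], 2: [4, 5]}
```

With `classes_per_client` equal to the total number of classes, the same function degenerates toward an IID-like split, which makes it convenient for sweeping between the two regimes.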
1 code implementation • 16 Mar 2020 • Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A. Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal
We observed that the 1D CNN model under split learning can achieve the same 98.9% accuracy as the original (non-split) model.