Search Results for author: Chandra Thapa

Found 17 papers, 4 papers with code

Radio Signal Classification by Adversarially Robust Quantum Machine Learning

no code implementations · 13 Dec 2023 · Yanqiu Wu, Eromanga Adermann, Chandra Thapa, Seyit Camtepe, Hajime Suzuki, Muhammad Usman

Our extensive simulation results show that attacks generated on QVCs transfer well to CNN models, indicating that these adversarial examples can fool neural networks they were not explicitly designed to attack.

Classification · Image Classification · +1

Federated Split Learning with Only Positive Labels for resource-constrained IoT environment

no code implementations · 25 Jul 2023 · Praveen Joshi, Chandra Thapa, Mohammed Hasanuzzaman, Ted Scully, Haithem Afli

Among various techniques in a DCML framework, federated split learning, known as splitfed learning (SFL), is the most suitable for efficient training and testing when devices have limited computational capabilities.

Discretization-based ensemble model for robust learning in IoT

no code implementations · 18 Jul 2023 · Anahita Namvar, Chandra Thapa, Salil S. Kanhere

IoT device identification is the process of recognizing and verifying IoT devices connected to the network.

Management

Vertical Federated Learning: Taxonomies, Threats, and Prospects

no code implementations · 3 Feb 2023 · Qun Li, Chandra Thapa, Lawrence Ong, Yifeng Zheng, Hua Ma, Seyit A. Camtepe, Anmin Fu, Yansong Gao

In a number of practical scenarios, VFL is more relevant than HFL as different companies (e.g., bank and retailer) hold different features (e.g., credit history and shopping history) for the same set of customers.

Vertical Federated Learning

Enabling All In-Edge Deep Learning: A Literature Review

no code implementations · 7 Apr 2022 · Praveen Joshi, Mohammed Hasanuzzaman, Chandra Thapa, Haithem Afli, Ted Scully

Secondly, this paper presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers.

Edge-computing · Model Compression · +3

Graph Lifelong Learning: A Survey

no code implementations · 22 Feb 2022 · Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, Charu Aggarwal

Lifelong learning methods that enable continuous learning in regular domains like images and text cannot be directly applied to continuously evolving graph data, due to its irregular structure.

Graph Learning · Recommendation Systems

Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance

no code implementations · 19 Sep 2021 · Praveen Joshi, Chandra Thapa, Seyit Camtepe, Mohammed Hasanuzzaman, Ted Scully, Haithem Afli

Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning that are gaining attention due to their ability to preserve the privacy of raw data.
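The FL component of these approaches aggregates locally trained client models by weighted averaging (FedAvg). A minimal numpy sketch of that aggregation step, assuming toy weight matrices and sample counts (all names here are illustrative, not taken from the papers):

```python
import numpy as np

# Hedged sketch of FedAvg-style aggregation: the global weights are the
# average of client weights, weighted by each client's sample count.
client_weights = [np.full((3, 3), 1.0), np.full((3, 3), 3.0)]
n_samples = np.array([100, 300])             # samples held by each client

fractions = n_samples / n_samples.sum()      # 0.25 and 0.75
global_w = sum(f * w for f, w in zip(fractions, client_weights))
print(global_w[0, 0])                        # 0.25 * 1 + 0.75 * 3 = 2.5
```

Weighting by sample count keeps the aggregate unbiased when clients hold datasets of very different sizes.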

Federated Learning · Image Classification · +1

FedDICE: A ransomware spread detection in a distributed integrated clinical environment using federated learning and SDN based mitigation

no code implementations · 9 Jun 2021 · Chandra Thapa, Kallol Krishna Karmakar, Alberto Huertas Celdran, Seyit Camtepe, Vijay Varadharajan, Surya Nepal

FedDICE integrates federated learning (FL), a privacy-preserving learning approach, into an SDN-oriented security architecture to enable collaborative learning, detection, and mitigation of ransomware attacks.

Federated Learning · Privacy Preserving

Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things

1 code implementation · 3 Mar 2021 · Yansong Gao, Minki Kim, Chandra Thapa, Sharif Abuadbba, Zhi Zhang, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal

Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques to enable machine learning training without accessing raw data on clients or end devices.

BIG-bench Machine Learning · Federated Learning

Advancements of federated learning towards privacy preservation: from federated learning to split learning

no code implementations · 25 Nov 2020 · Chandra Thapa, M. A. P. Chamikara, Seyit A. Camtepe

In practical scenarios, not all clients have sufficient computing resources (e.g., Internet of Things devices), the machine learning model has millions of parameters, and model privacy between the server and the clients during training/testing is a prime concern (e.g., against rival parties).

BIG-bench Machine Learning · Federated Learning

Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy

no code implementations · 24 Aug 2020 · Chandra Thapa, Seyit Camtepe

Secondly, this paper investigates secure and privacy-preserving machine learning methods suitable for the computation of precision health data along with their usage in relevant health projects.

BIG-bench Machine Learning · Ethics · +2

Evaluation of Federated Learning in Phishing Email Detection

no code implementations · 27 Jul 2020 · Chandra Thapa, Jun Wen Tang, Alsharif Abuadbba, Yansong Gao, Seyit Camtepe, Surya Nepal, Mahathir Almashor, Yifeng Zheng

For a fixed total email dataset, the global RNN-based model suffers a 1.8% accuracy drop when the number of organizations increases from 2 to 10.

Distributed Computing · Federated Learning · +2

SplitFed: When Federated Learning Meets Split Learning

2 code implementations · 25 Apr 2020 · Chandra Thapa, M. A. P. Chamikara, Seyit Camtepe, Lichao Sun

SL provides better model privacy than FL because the machine learning model architecture is split between the clients and the server.
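The split can be pictured as a "cut layer": each client computes activations up to the cut and sends only that intermediate output ("smashed data") to the server, which completes the forward pass. A toy numpy sketch under that assumption (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(size=(4, 8))   # client-side layer weights (before the cut)
W_server = rng.normal(size=(8, 3))   # server-side layer weights (after the cut)

def client_forward(x):
    # client computes up to the cut layer; only this output leaves the device
    return np.maximum(x @ W_client, 0.0)   # linear layer + ReLU

def server_forward(smashed):
    # server finishes the forward pass from the smashed data
    return smashed @ W_server

x = rng.normal(size=(2, 4))          # a batch of two raw samples (never shared)
smashed = client_forward(x)
logits = server_forward(smashed)
print(smashed.shape, logits.shape)   # (2, 8) (2, 3)
```

Neither side ever holds the full model: the server sees only the smashed activations, not the raw inputs or the client-side weights.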

BIG-bench Machine Learning · Federated Learning

End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things

1 code implementation · 30 Mar 2020 · Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, Surya Nepal

For learning performance, measured by model accuracy and convergence speed, we empirically evaluate both FL and SplitNN under different data distributions, such as imbalanced and non-independent and identically distributed (non-IID) data.
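A common way to construct such non-IID client splits is shard-based label partitioning: sort samples by label and deal contiguous shards to clients, so each client sees only a few classes. A hedged sketch (shard and client counts are assumptions for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 10, size=1000)      # toy labels for 1000 samples
order = np.argsort(labels)                   # indices grouped by class
shards = np.array_split(order, 20)           # 20 contiguous shards of ~50
rng.shuffle(shards)                          # deal shards out at random

# each of 10 clients receives 2 shards -> only a few distinct classes each
clients = [np.concatenate(shards[2 * i:2 * i + 2]) for i in range(10)]
print([len(np.unique(labels[c])) for c in clients])
```

With an IID split every client would see all 10 classes; here each client typically sees only two to four, which is what stresses convergence in the evaluated settings.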

Federated Learning

Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?

1 code implementation · 16 Mar 2020 · Sharif Abuadbba, Kyuyeon Kim, Minki Kim, Chandra Thapa, Seyit A. Camtepe, Yansong Gao, Hyoungshick Kim, Surya Nepal

We observed that the 1D CNN model under split learning can achieve the same 98.9% accuracy as the original (non-split) model.

Privacy Preserving
