Cellular networks (LTE, 5G, and beyond) are growing rapidly under high consumer demand, and their advanced telecommunication technologies make them more promising than other wireless networks.
Over the last few decades, extensive use of information and communication technologies has been the main driver of the digitalization of power systems.
The key motivation of NextG networks is to meet the high demand for those applications by improving and optimizing network functions.
In this study, the proposed algorithm is evaluated on a real-world medical dataset in terms of model performance.
This paper presents the security vulnerabilities of deep neural network (DNN) based beamforming prediction in 6G wireless networks, treating beamforming prediction as a multi-output regression problem.
In this study, we show both mathematically and experimentally that using some widely known activation functions in the output layer of the model with high temperature values has the effect of zeroing out the gradients for both targeted and untargeted attack cases, preventing attackers from exploiting the model's loss function to craft adversarial samples.
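As a rough illustration of that gradient-masking effect, the NumPy sketch below (with illustrative logit values, not values from the study) shows how the cross-entropy gradient at the softmax output collapses toward zero as the logits are scaled up, which is the effect of evaluating a model trained at a high temperature T at T = 1:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

# For cross-entropy loss, the gradient w.r.t. the logits is p - y.
y = np.array([1.0, 0.0, 0.0])          # one-hot true label
logits = np.array([3.0, 1.0, 0.5])     # hypothetical clean logits

for scale in (1.0, 10.0, 100.0):
    # A model trained at temperature T but evaluated at T = 1 effectively
    # emits logits scaled up by T; large scales saturate the softmax.
    p = softmax(scale * logits)
    grad = p - y
    print(f"scale={scale:6.1f}  max |dL/dz| = {np.abs(grad).max():.3e}")
```

Once the softmax saturates, the gradient magnitude underflows toward zero, so gradient-based attackers get no usable direction from the loss.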
Recently, the rapid growth of Internet-of-Things (IoT) technology has driven many innovations in healthcare, bringing significant advances to the health sector and improving daily human life.
Object detection in autonomous cars is commonly based on camera images and Lidar inputs, which are often used to train prediction models such as deep artificial neural networks for decision-making tasks such as object recognition and speed adjustment.
We also evaluate the performance of the adversarial learning mitigation method for 6G security in the mmWave beam prediction application under the fast gradient sign method (FGSM) attack.
This paper proposes adversarial learning as a mitigation method for adversarial attacks against 6G machine learning models for millimeter-wave (mmWave) beam prediction.
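A minimal sketch of both sides of this setup, the FGSM attack and the adversarial-learning defense, on a toy linear regressor standing in for the beam-prediction model (all shapes, step sizes, and values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a beam-prediction regressor, y_hat = W @ x.
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
t = W @ x + rng.normal(scale=0.1, size=4)     # "ground-truth" beam vector

def loss(W, x, t):
    return np.sum((W @ x - t) ** 2)

def grad_x(W, x, t):
    # d/dx ||W x - t||^2 = 2 W^T (W x - t)
    return 2.0 * W.T @ (W @ x - t)

# FGSM: perturb the input in the sign direction of the loss gradient.
eps = 0.1
x_adv = x + eps * np.sign(grad_x(W, x, t))
clean_loss, adv_loss = loss(W, x, t), loss(W, x_adv, t)

# Adversarial learning: retrain on clean and adversarial samples together.
lr = 1e-3
for _ in range(200):
    for xi in (x, x_adv):
        # d/dW ||W xi - t||^2 = 2 (W xi - t) xi^T
        W -= lr * 2.0 * np.outer(W @ xi - t, xi)

print(clean_loss, adv_loss, loss(W, x_adv, t))
```

The FGSM step increases the regression loss on the perturbed input; retraining on the mixture of clean and adversarial samples then pulls the adversarial loss back down, which is the essence of the adversarial-learning defense.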
Deep neural network architectures are considered to be robust to random perturbations.
During the last two decades, distributed energy systems, especially renewable energy sources (RES), have become more economically viable, with increasing market share and penetration levels in power systems.
While state-of-the-art Deep Neural Network (DNN) models are considered robust to random perturbations, it has been shown that these architectures are highly vulnerable to deliberately crafted perturbations, albeit quasi-imperceptible ones.
The paper's model structure consists of three main components: image generation from malware samples, image augmentation, and classification of malware families using a convolutional neural network model.
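The image-generation step is commonly done by interpreting the raw bytes of a sample as grayscale pixels. A minimal sketch under that assumption (the 64-pixel row width is an arbitrary choice, not the paper's):

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Map a raw binary to a 2-D grayscale image, one byte per pixel."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = -(-buf.size // width)              # ceiling division
    img = np.zeros(rows * width, dtype=np.uint8)
    img[:buf.size] = buf                      # zero-pad the final row
    return img.reshape(rows, width)

img = bytes_to_image(bytes(range(200)), width=64)
print(img.shape)  # (4, 64): 200 bytes padded to 4 rows of 64
```

The resulting arrays can then be augmented and fed to a CNN like any other image dataset.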
The use of operating system API calls is a promising approach for detecting PE-type malware on the Windows operating system.
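One common way to turn an API-call trace into features for such a detector is n-gram counting over the ordered call sequence; a small sketch (the trace below is a made-up example, not from any dataset):

```python
from collections import Counter

def api_ngram_features(calls, n=2):
    """Count n-grams over an ordered API-call trace."""
    return Counter(tuple(calls[i:i + n]) for i in range(len(calls) - n + 1))

trace = ["CreateFileW", "WriteFile", "WriteFile", "CloseHandle"]
feats = api_ngram_features(trace)
print(feats)
```

The resulting counts can serve as a sparse feature vector for a conventional classifier.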