A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions

With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not only immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, existing B5G ML-security surveys tend to place more emphasis on AI/ML model performance and accuracy than on the models' accountability and trustworthiness. In contrast, this paper explores the potential of Explainable AI (XAI) methods, which would allow B5G stakeholders to inspect the intelligent black-box systems used to secure B5G networks. The goal of using XAI in the security domain of B5G is to make the decision-making processes of ML-based security systems transparent and comprehensible to B5G stakeholders, thereby making the systems accountable for automated actions. This survey emphasizes the role of XAI across every facet of the forthcoming B5G era, including B5G technologies such as RAN, zero-touch network management, and E2E slicing, as well as the use cases that end users would ultimately enjoy. Furthermore, we present lessons learned from recent efforts and future research directions, building on currently conducted projects involving XAI.
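To make the idea of inspecting a black-box security model concrete, the following is a minimal sketch of one common XAI technique, a global surrogate: a shallow, human-readable decision tree is trained to mimic an opaque classifier's decisions, and its fidelity (agreement with the black box) is measured. The data, feature setup, and model choices here are invented for illustration and are not drawn from the surveyed systems.

```python
# Hypothetical sketch of a global-surrogate explanation for a black-box
# classifier (e.g., an ML-based intrusion detector). All data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for network-flow features (packet rate, duration, etc.).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# "Black box": the opaque model actually making security decisions.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's own predictions,
# so its splits approximate the black box's decision logic in readable form.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of inputs where the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

A stakeholder can then audit the surrogate's few splits (e.g., via `sklearn.tree.export_text`) to understand roughly which features drive automated actions, which is the kind of accountability the survey argues for.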
