1 code implementation • 26 Apr 2024 • Triet H. M. Le, M. Ali Babar, Tung Hoang Thai
Aims: We conduct an empirical study to evaluate the impact of SV data scarcity in emerging languages on the state-of-the-art SV prediction model and investigate potential solutions to enhance its performance.
1 code implementation • 20 Jan 2024 • Triet H. M. Le, Xiaoning Du, M. Ali Babar
To bridge these gaps, we conduct a large-scale study on the latent vulnerable functions in two commonly used SV datasets and their utilization for function-level and line-level SV predictions.
no code implementations • 11 Dec 2023 • Sangwon Hyun, Mingyu Guo, M. Ali Babar
Through the experiments conducted with three prominent LLMs, we have confirmed that the METAL framework effectively evaluates essential QAs on primary LLM tasks and reveals the quality risks in LLMs.
no code implementations • 3 Jul 2023 • Bushra Sabir, M. Ali Babar, Sharif Abuadbba
It focuses on interpretability and transparency in detecting and transforming textual adversarial examples.
1 code implementation • 16 Mar 2022 • Triet H. M. Le, M. Ali Babar
We show that vulnerable statements are 5.8 times smaller in size, yet exhibit 7.5-114.5% stronger assessment performance (Matthews Correlation Coefficient (MCC)) than non-vulnerable statements.
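The MCC metric cited in this entry can be computed directly from confusion-matrix counts; a minimal sketch (the function name and sample counts below are illustrative, not from the paper):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from confusion-matrix counts.

    Returns a value in [-1, 1]: 1 is perfect prediction, 0 is random,
    -1 is total disagreement. Returns 0.0 when any marginal is empty.
    """
    numerator = tp * tn - fp * fn
    denominator = math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return numerator / denominator if denominator else 0.0
```

MCC is often preferred over accuracy for SV prediction because vulnerable statements are a small minority class, and MCC accounts for all four confusion-matrix cells.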
no code implementations • 9 Sep 2021 • Xuanyu Duan, Mengmeng Ge, Triet H. M. Le, Faheem Ullah, Shang Gao, Xuequan Lu, M. Ali Babar
This security model automatically assesses the security of the IoT network by capturing potential attack paths.
1 code implementation • 18 Aug 2021 • Triet H. M. Le, David Hin, Roland Croft, M. Ali Babar
Identifying Software Vulnerabilities (SVs) in code commits is increasingly advocated as a way to give early warnings about potential security risks.
1 code implementation • 18 Jul 2021 • Triet H. M. Le, Huaming Chen, M. Ali Babar
Software Vulnerabilities (SVs) are increasing in complexity and scale, posing great security risks to many software systems.
no code implementations • NAACL 2021 • Bushra Sabir, M. Ali Babar, Raj Gaire
Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models.
no code implementations • 25 Jan 2021 • Bakheet Aljedaani, Aakash Ahmad, Mansooreh Zahedi, M. Ali Babar
Findings indicate that the majority of end-users are aware of the existing security features provided by the apps (e.g., restricted app permissions); however, they desire usable security (e.g., biometric authentication) and are concerned about the privacy of their health information (e.g., data anonymization).
Cryptography and Security • Software Engineering
no code implementations • 17 Dec 2020 • Bushra Sabir, Faheem Ullah, M. Ali Babar, Raj Gaire
Objective: This paper aims at systematically reviewing ML-based data exfiltration countermeasures to identify and classify ML approaches, feature engineering techniques, evaluation datasets, and performance metrics used for these countermeasures.
no code implementations • 11 Aug 2020 • Mubin Ul Haque, Leonardo Horn Iwaya, M. Ali Babar
As Docker is a fast-growing technology, it is important to identify the most popular Docker-related topics as well as the existing challenges and difficulties that developers face.
1 code implementation • 18 May 2020 • Bushra Sabir, M. Ali Babar, Raj Gaire, Alsharif Abuadbba
Therefore, the security vulnerabilities of these systems generally remain unknown, which calls for testing the robustness of these systems.
1 code implementation • 8 Mar 2020 • Triet H. M. Le, David Hin, Roland Croft, M. Ali Babar
Using PUMiner, we provide the largest and most up-to-date collection of security content on Q&A websites for practitioners and researchers.
1 code implementation • 13 Feb 2020 • Triet H. M. Le, Hao Chen, M. Ali Babar
Deep Learning (DL) techniques for Natural Language Processing have been evolving remarkably fast.