A Case Study of Large Language Models (ChatGPT and CodeBERT) for Security-Oriented Code Analysis

24 Jul 2023  ·  Zhilong Wang, Lan Zhang, Chen Cao, Nanqing Luo, Peng Liu

LLMs can be applied to code analysis tasks such as code review and vulnerability analysis. However, the strengths and limitations of adopting these LLMs for code analysis remain unclear. In this paper, we delve into LLMs' capabilities in security-oriented program analysis, considering perspectives from both attackers and security analysts. We focus on two representative LLMs, ChatGPT and CodeBERT, and evaluate their performance on typical analytic tasks of varying difficulty. Our study demonstrates the LLMs' efficiency in learning high-level semantics from code, positioning ChatGPT as a potential asset in security-oriented contexts. However, certain limitations must be acknowledged: the performance of these LLMs relies heavily on well-defined variable and function names, so they are unable to learn from anonymized code. We believe that the concerns raised in this case study deserve in-depth investigation in the future.
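To make the "anonymized code" limitation concrete, the sketch below shows one simple way such anonymization can be performed: user-defined identifiers are replaced with opaque placeholders while built-ins are preserved, stripping the semantic hints (like `check_password`) that the abstract says these LLMs depend on. This is an illustrative assumption about the anonymization procedure, not the paper's exact method; it uses Python's standard `ast` module (`ast.unparse` requires Python 3.9+).

```python
import ast
import builtins

class Anonymizer(ast.NodeTransformer):
    """Rename user-defined function and variable names to opaque placeholders."""
    def __init__(self):
        self.mapping = {}

    def _rename(self, name):
        if hasattr(builtins, name):  # keep built-ins (e.g. hash) untouched
            return name
        if name not in self.mapping:
            self.mapping[name] = f"v{len(self.mapping)}"
        return self.mapping[name]

    def visit_FunctionDef(self, node):
        node.name = self._rename(node.name)
        for arg in node.args.args:
            arg.arg = self._rename(arg.arg)
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        node.id = self._rename(node.id)
        return node

src = """
def check_password(user_input, stored_hash):
    return hash(user_input) == stored_hash
"""

anonymized = ast.unparse(Anonymizer().visit(ast.parse(src)))
print(anonymized)
# The descriptive names are gone, e.g.:
#   def v0(v1, v2):
#       return hash(v1) == v2
```

After this transformation the code is semantically identical, but a model that leans on identifier names as cues has far less to work with, which is the failure mode the study highlights.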


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Code Generation | MBPP | GPT-4 (ChatGPT Plus) | Accuracy | 87.5 | #3 |
| Code Generation | MBPP | GPT-3.5 Turbo (ChatGPT) | Accuracy | 83.2 | #5 |
| Code Generation | MBPP | GPT-4 (Bing Chat) | Accuracy | 82 | #7 |
| Code Generation | MBPP | Bard (PaLM 2/chat-bison-001) | Accuracy | 76.2 | #13 |
| Code Generation | MBPP | Claude | Accuracy | 71.4 | #15 |
