1 code implementation • 27 Mar 2024 • Yangruibo Ding, Yanjun Fu, Omniyyah Ibrahim, Chawin Sitawarin, Xinyun Chen, Basel Alomair, David Wagner, Baishakhi Ray, Yizheng Chen
Evaluating code LMs on PrimeVul reveals that existing benchmarks significantly overestimate the performance of these models.
no code implementations • 29 Jan 2024 • Yizheng Chen, Rengan Xie, Qi Ye, Sen Yang, Zixuan Xie, Tianxiao Chen, Rong Li, Yuchi Huo
Specifically, we first decouple the shading information from the generated images to reduce the impact of inconsistent lighting; then, we introduce a mono prior with view-dependent transient encoding to enhance the reconstructed normal; and finally, we design a view augmentation fusion strategy that minimizes pixel-level loss in the generated sparse views and semantic loss in the augmented random views, resulting in view-consistent geometry and detailed textures.
1 code implementation • 1 Apr 2023 • Yizheng Chen, Zhoujie Ding, Lamya Alowain, Xinyun Chen, David Wagner
Combining our new dataset with previous datasets, we present an analysis of the challenges and promising research directions of using deep learning for detecting software vulnerabilities.
2 code implementations • 8 Feb 2023 • Yizheng Chen, Zhoujie Ding, David Wagner
We propose a new hierarchical contrastive learning scheme, and a new sample selection technique to continuously train the Android malware classifier.
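The paper's hierarchical scheme and sample selection technique are not reproduced here, but the core mechanism of contrastive training can be sketched with a standard pairwise contrastive loss (a minimal, hypothetical illustration, not the authors' implementation):

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Pairwise contrastive loss: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart."""
    dist = np.linalg.norm(emb_a - emb_b)
    if same_class:
        return dist ** 2
    return max(0.0, margin - dist) ** 2

# Toy embeddings for two samples, e.g. two apps of the same malware family
a = np.array([0.1, 0.2])
b = np.array([0.1, 0.25])
loss_pos = contrastive_loss(a, b, same_class=True)   # small: pair is already close
loss_neg = contrastive_loss(a, b, same_class=False)  # large: pair is too close for negatives
```

Minimizing this loss over labeled pairs shapes the embedding space so that samples from the same class cluster together, which is the property continual retraining of the classifier relies on.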
1 code implementation • 15 Sep 2022 • Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, David Wagner
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks by introducing a part-based model for object classification.
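The idea of a part-based model can be sketched as a two-stage pipeline: pool features per semantic part, then classify from the part features. This is a toy illustration under assumed shapes (the masks, weights, and pooling here are hypothetical, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def part_features(image, part_masks):
    """Pool a feature per part: average the pixels under each part mask."""
    return np.array([image[mask].mean() for mask in part_masks])

def classify(parts_vec, weights, bias):
    """Linear classifier over the pooled part features."""
    logits = weights @ parts_vec + bias
    return int(np.argmax(logits))

# Toy 4x4 "image" and two hypothetical part masks (e.g., "head" and "body")
image = np.arange(16, dtype=float).reshape(4, 4)
masks = [image < 8, image >= 8]        # boolean masks selecting each part
weights = rng.normal(size=(3, 2))      # 3 classes, 2 parts
feats = part_features(image, masks)
pred = classify(feats, weights, bias=np.zeros(3))
```

The design intuition from the abstract: forcing the prediction to go through human-interpretable parts constrains the hypothesis space, which can make the end-to-end model harder to fool with imperceptible perturbations.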
1 code implementation • 24 May 2021 • Yizheng Chen, Shiqi Wang, Yue Qin, Xiaojing Liao, Suman Jana, David Wagner
Since data distribution shift is very common in security applications (e.g., it is often observed in malware detection), local robustness cannot guarantee that the property holds for inputs unseen at the time the classifier is deployed.
2 code implementations • 3 Dec 2019 • Yizheng Chen, Shiqi Wang, Weifan Jiang, Asaf Cidon, Suman Jana
Attackers incur varying costs to manipulate different features of security classifiers.
no code implementations • 8 Jul 2019 • Dongdong She, Yizheng Chen, Baishakhi Ray, Suman Jana
Dynamic taint analysis (DTA) is widely used by various applications to track information flow during runtime execution.
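The essence of dynamic taint analysis can be shown with a value wrapper whose taint flag propagates through operations at runtime (a toy sketch for intuition, not a real DTA engine and not the system this paper studies):

```python
class Tainted:
    """Minimal dynamic taint tracking: wrap a value with a taint flag
    that propagates through arithmetic on the wrapped values."""
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        other_value = other.value if isinstance(other, Tainted) else other
        other_taint = other.tainted if isinstance(other, Tainted) else False
        # The result is tainted if either operand was tainted.
        return Tainted(self.value + other_value, self.tainted or other_taint)

user_input = Tainted(41, tainted=True)   # e.g., data read from the network
derived = user_input + 1                 # taint propagates to derived values
clean = Tainted(1) + 2                   # no tainted operand, so still clean
```

Real DTA engines do this at the instruction level for every operation and memory access, which is where the runtime cost the paper addresses comes from.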
no code implementations • 5 Jun 2019 • Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
In this paper, we present interval attacks, a new technique to find adversarial examples to evaluate the robustness of neural networks.
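The interval analysis underlying such attacks can be sketched by propagating an input interval through one linear layer and a ReLU; splitting the weight matrix into its positive and negative parts gives sound output bounds. This is standard interval arithmetic (the network, radius, and shapes below are made up for illustration):

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b: positive weights
    take the matching bound, negative weights take the opposite one."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 1-layer network and an L-infinity ball of radius 0.1 around x
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.zeros(2)
x = np.array([0.5, 0.5])
lo, hi = interval_relu(*interval_linear(x - 0.1, x + 0.1, W, b))
```

Bounds like these identify input regions where the network's output can change class, which an attack can then search for a concrete adversarial example.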
1 code implementation • 6 Apr 2019 • Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana
A practically useful malware classifier must be robust against evasion attacks.
1 code implementation • 6 Nov 2018 • Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
Making neural networks robust against adversarial inputs has resulted in an arms race between new defenses and attacks.
no code implementations • 29 Aug 2017 • Yizheng Chen, Yacin Nadji, Athanasios Kountouras, Fabian Monrose, Roberto Perdisci, Manos Antonakakis, Nikolaos Vasiloglou
Graph modeling allows numerous security problems to be tackled in a general way; however, little work has been done to understand how well graph-based models withstand adversarial attacks.