Search Results for author: Anant Kharkar

Found 7 papers, 2 papers with code

Transformer-based Vulnerability Detection in Code at EditTime: Zero-shot, Few-shot, or Fine-tuning?

no code implementations23 May 2023 Aaron Chan, Anant Kharkar, Roshanak Zilouchian Moghaddam, Yevhen Mohylevskyy, Alec Helyar, Eslam Kamal, Mohamed Elkamhawy, Neel Sundaresan

We recognize that the current advances in machine learning can be used to detect vulnerable code patterns on syntactically incomplete code snippets as the developer is writing the code at EditTime.

Vulnerability Detection

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models

1 code implementation6 Jan 2023 Hojjat Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, Ben Zorn, Robert Sim

To achieve this, prior attacks explicitly inject the insecure code payload into the training data, making the poison data detectable by static analysis tools that can remove such malicious data from the training set.

Data Poisoning

Learning to Reduce False Positives in Analytic Bug Detectors

no code implementations8 Mar 2022 Anant Kharkar, Roshanak Zilouchian Moghaddam, Matthew Jin, Xiaoyu Liu, Xin Shi, Colin Clement, Neel Sundaresan

Due to increasingly complex software design and rapid iterative development, code defects and security vulnerabilities are prevalent in modern software.

Privileged Zero-Shot AutoML

no code implementations25 Jun 2021 Nikhil Singh, Brandon Kates, Jeff Mentch, Anant Kharkar, Madeleine Udell, Iddo Drori

This work improves the quality of automated machine learning (AutoML) systems by using dataset and function descriptions while significantly decreasing computation time from minutes to milliseconds by using a zero-shot approach.

AutoML BIG-bench Machine Learning +1

Real-Time AutoML

no code implementations1 Jan 2021 Iddo Drori, Brandon Kates, Anant Kharkar, Lu Liu, Qiang Ma, Jonah Deykin, Nihar Sidhu, Madeleine Udell

We train a graph neural network in which each node represents a dataset to predict the best machine learning pipeline for a new test dataset.

AutoML BIG-bench Machine Learning +1

Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning

4 code implementations arXiv 2018 Hyrum S. Anderson, Anant Kharkar, Bobby Filar, David Evans, Phil Roth

We show in experiments that our method can attack a gradient-boosted machine learning model with evasion rates that are substantial and appear to be strongly dependent on the dataset.

Cryptography and Security

Cannot find the paper you are looking for? You can Submit a new open access paper.