Search Results for author: Aditya Kanade

Found 13 papers, 7 papers with code

Stateful Detection of Model Extraction Attacks

1 code implementation • 12 Jul 2021 • Soham Pal, Yash Gupta, Aditya Kanade, Shirish Shevade

Machine-Learning-as-a-Service providers expose machine learning (ML) models through application programming interfaces (APIs) to developers.

Model extraction

ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data

1 code implementation • 7 Feb 2020 • Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, Shirish Shevade, Vinod Ganapathy

We demonstrate that (1) it is possible to use ACTIVETHIEF to extract deep classifiers trained on a variety of datasets from image and text domains, while querying the model with as few as 10-30% of samples from public datasets, (2) the resulting model exhibits a higher transferability success rate of adversarial examples than prior work, and (3) the attack evades detection by the state-of-the-art model extraction detection method, PRADA.

Active Learning • Model extraction
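The query-efficiency claim rests on an active-learning loop: train a substitute on victim-labeled data, then spend the remaining query budget on the pool points the substitute is least certain about. The sketch below illustrates that loop only in spirit; the 1-D "victim", the threshold substitute, and all names are made up for illustration and are not from the paper:

```python
import math
import random

# Toy stand-ins (not the paper's setup): a 1-D black-box "victim",
# a threshold substitute, and entropy-based uncertainty sampling.

def victim_predict(x):
    # The attacker sees only labels returned by this black-box API.
    return 1 if x > 0.5 else 0

class ThresholdSubstitute:
    """Toy substitute model: learns a 1-D decision threshold."""
    def __init__(self):
        self.threshold = 0.0

    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y == 1]
        neg = [x for x, y in zip(xs, ys) if y == 0]
        if pos and neg:
            self.threshold = (min(pos) + max(neg)) / 2

    def predict_proba(self, x):
        p1 = 1 / (1 + math.exp(-20 * (x - self.threshold)))
        return [1 - p1, p1]

def entropy(probs):
    return -sum(p * math.log(p + 1e-12) for p in probs)

random.seed(0)
pool = [random.random() for _ in range(200)]   # "public" unannotated data

# Seed round: query the victim on a small random batch.
batch, pool = pool[:10], pool[10:]
labeled_x, labeled_y = list(batch), [victim_predict(x) for x in batch]
substitute = ThresholdSubstitute()
substitute.fit(labeled_x, labeled_y)

# Active-learning rounds: query only where the substitute is uncertain.
for _ in range(5):
    pool.sort(key=lambda x: -entropy(substitute.predict_proba(x)))
    batch, pool = pool[:10], pool[10:]
    labeled_x += batch
    labeled_y += [victim_predict(x) for x in batch]
    substitute.fit(labeled_x, labeled_y)

agreement = sum(
    victim_predict(x) == (1 if substitute.predict_proba(x)[1] > 0.5 else 0)
    for x in pool
) / len(pool)
print(f"queried {len(labeled_x)}/200 points, victim agreement {agreement:.2f}")
```

Even in this toy, the substitute ends up agreeing with the victim on nearly all held-out points after querying only 30% of the pool, because uncertainty sampling concentrates queries near the decision boundary.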

Learning and Evaluating Contextual Embedding of Source Code

2 code implementations • ICML 2020 • Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, Kensen Shi

We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples.

Contextual Embedding for Source Code • Exception type • +7

Neural Attribution for Semantic Bug-Localization in Student Programs

1 code implementation • NeurIPS 2019 • Rahul Gupta, Aditya Kanade, Shirish Shevade

In this work, we present NeuralBugLocator, a deep-learning-based technique that can localize the bugs in a faulty program with respect to a failing test, without even running the program.

Fault localization
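The key idea is to train a network to predict a test's outcome from the program, then use a prediction-attribution method to credit the "fail" prediction back to individual lines. A minimal sketch of the attribution step, using a hand-set linear scorer where input-times-gradient attribution reduces to weight times feature (the program, features, and weights below are all hypothetical, not the paper's model):

```python
# Illustrative sketch: attribute a "fails the test" prediction back to
# individual lines. For a linear model f(x) = w . x, the gradient w.r.t.
# input i is w_i, so input-times-gradient attribution is w_i * x_i.

program = [
    "n = int(input())",
    "total = 0",
    "for i in range(n):",
    "    total += i",      # suppose the intended code was total += i*i
    "print(total)",
]

# Per-line feature values and trained weights (both made up here).
features = [0.1, 0.05, 0.2, 0.9, 0.1]
weights  = [0.3, 0.2, 0.4, 1.5, 0.1]

attributions = [w * x for w, x in zip(weights, features)]
ranked = sorted(range(len(program)), key=lambda i: -attributions[i])

print("most suspicious line:", ranked[0] + 1, "->", program[ranked[0]].strip())
```

Ranking lines by attribution score yields a suspiciousness ordering without ever executing the student program, which is the property the abstract highlights.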

Pre-trained Contextual Embedding of Source Code

no code implementations • 25 Sep 2019 • Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, Kensen Shi

A major advancement in natural-language understanding has been the use of pre-trained token embeddings; BERT and other works have further shown that pre-trained contextual embeddings can be extremely powerful and can be fine-tuned effectively for a variety of downstream supervised tasks.

Language understanding • Natural Language Understanding

Scalable Neural Learning for Verifiable Consistency with Temporal Specifications

no code implementations • 25 Sep 2019 • Sumanth Dathathri, Johannes Welbl, Krishnamurthy (Dj) Dvijotham, Ramana Kumar, Aditya Kanade, Jonathan Uesato, Sven Gowal, Po-Sen Huang, Pushmeet Kohli

Formal verification of machine learning models has attracted attention recently, and significant progress has been made on proving simple properties like robustness to small perturbations of the input features.

Adversarial Robustness • Language Modelling

Deep Learning for Bug-Localization in Student Programs

no code implementations • 28 May 2019 • Rahul Gupta, Aditya Kanade, Shirish Shevade

To localize the bugs, we analyze the trained network using a state-of-the-art neural prediction attribution technique and see which lines of the programs make it predict the test outcomes.

Neural Program Repair by Jointly Learning to Localize and Repair

2 code implementations • ICLR 2019 • Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, Rishabh Singh

We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.

Variable misuse
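Joint localization and repair can be pictured as a model with two output heads: one distribution over token positions (where is the misused variable?) and one over in-scope variables (what should it be?). The decode step of such a model can be sketched as follows; the token sequence and all logits are invented stand-ins for a trained network's outputs, not values from the paper:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Token sequence with a variable-misuse bug: `j` used where `i` was meant.
tokens = ["for", "i", "in", "range", "(", "n", ")", ":", "print", "(", "j", ")"]
variables = ["i", "j", "n"]

# Hypothetical model outputs: one head scores each token position as the
# bug location, the other scores each in-scope variable as the repair.
loc_logits    = [-2, -1, -3, -3, -4, -1, -4, -4, -2, -4, 3.0, -4]
repair_logits = [2.5, -1.0, 0.2]

loc_probs = softmax(loc_logits)
rep_probs = softmax(repair_logits)
bug_pos = max(range(len(tokens)), key=lambda i: loc_probs[i])
fix_var = variables[max(range(len(variables)), key=lambda i: rep_probs[i])]

print(f"replace '{tokens[bug_pos]}' at position {bug_pos} with '{fix_var}'")
```

Training both heads jointly lets the repair signal inform localization and vice versa, which is the benefit the abstract claims over localizing and repairing in separate stages.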

Greybox fuzzing as a contextual bandits problem

no code implementations • 11 Jun 2018 • Ketan Patil, Aditya Kanade

AFL performs extremely well at fuzz testing large applications and finding critical vulnerabilities, but it relies on many heuristics when deciding which test cases to favor, which test cases to skip during fuzzing, and how many fuzzing iterations to assign to each test case.

Multi-Armed Bandits
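Framing seed scheduling as a bandit problem means treating each queued seed as an arm whose reward is new coverage from mutating it. A toy epsilon-greedy version of that framing (not AFL's actual logic, and not necessarily the paper's algorithm; the seeds and their coverage probabilities are made up):

```python
import random

# Each seed in the fuzzing queue is an arm; the reward for fuzzing it is
# whether mutation uncovered new coverage. The probabilities below are
# made-up stand-ins for real program behaviour.
random.seed(1)
seeds = {"seed_a": 0.05, "seed_b": 0.30, "seed_c": 0.10}  # P(new coverage)

counts  = {s: 0 for s in seeds}
rewards = {s: 0.0 for s in seeds}

def choose(eps=0.1):
    # Epsilon-greedy: mostly exploit the seed with the best observed
    # coverage rate, sometimes explore a random one.
    if random.random() < eps or all(c == 0 for c in counts.values()):
        return random.choice(list(seeds))
    return max(seeds, key=lambda s: rewards[s] / max(counts[s], 1))

for _ in range(2000):                      # fuzzing iterations
    s = choose()
    reward = 1.0 if random.random() < seeds[s] else 0.0
    counts[s] += 1
    rewards[s] += reward

best = max(seeds, key=lambda s: counts[s])
print("most-fuzzed seed:", best, counts)
```

The bandit view replaces AFL's hand-tuned favoring/skipping heuristics with a single explore-exploit trade-off, letting observed coverage yield decide where fuzzing iterations go.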

Active Learning for Efficient Testing of Student Programs

no code implementations • 13 Apr 2018 • Ishan Rastogi, Aditya Kanade, Shirish Shevade

In this work, we propose an automated method to identify semantic bugs in student programs, called ATAS, which builds upon the recent advances in both symbolic execution and active learning.

Active Learning

DeepFix: Fixing Common C Language Errors by Deep Learning

1 code implementation • 4 Feb 2017 • Rahul Gupta, Soham Pal, Aditya Kanade, Shirish Shevade

The problem of automatically fixing programming errors is a very active research topic in software engineering.

Program Repair
