Search Results for author: Aditya Kanade

Found 22 papers, 10 papers with code

NoFunEval: Funny How Code LMs Falter on Requirements Beyond Functional Correctness

no code implementations29 Jan 2024 Manav Singhal, Tushar Aggarwal, Abhijeet Awasthi, Nagarajan Natarajan, Aditya Kanade

We propose a new benchmark, NoFunEval, to evaluate code LMs on non-functional requirements, as well as on simple classification instances for both functional and non-functional requirements.

Frustrated with Code Quality Issues? LLMs can Help!

no code implementations22 Sep 2023 Nalin Wadhwa, Jui Pradhan, Atharv Sonwane, Surya Prakash Sahu, Nagarajan Natarajan, Aditya Kanade, Suresh Parthasarathy, Sriram Rajamani

We present a tool, CORE (short for COde REvisions), architected as a pair of LLMs: a proposer and a ranker.

Instruction Following Program Repair
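The proposer/ranker pairing described above can be illustrated with a minimal sketch. The two "LLMs" are stubbed as plain functions and all names are hypothetical, not CORE's actual interface:

```python
# Illustrative propose-then-rank pipeline in the style of CORE.
# Both "LLMs" are stubbed; a real system would prompt models instead.

def propose_revisions(code: str, issue: str) -> list[str]:
    """Stub proposer: return candidate revisions for a flagged issue."""
    # A real proposer would generate diverse candidates with an LLM.
    return [code.replace("== None", "is None"), code]

def rank_revisions(candidates: list[str], issue: str) -> list[str]:
    """Stub ranker: order candidates, preferring ones that change the code."""
    original = candidates[-1]
    return sorted(candidates, key=lambda c: c == original)

issue = "comparison to None should use 'is'"
code = "if x == None:\n    pass"
best = rank_revisions(propose_revisions(code, issue), issue)[0]
print(best)  # the candidate addressing the issue is ranked first
```

The split mirrors the paper's design: generation and judgment are separated so the ranker can filter out spurious proposals before they reach a developer.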

GrACE: Generation using Associated Code Edits

no code implementations23 May 2023 Priyanshu Gupta, Avishree Khare, Yasharth Bajpai, Saikat Chakraborty, Sumit Gulwani, Aditya Kanade, Arjun Radhakrishna, Gustavo Soares, Ashish Tiwari

In our experiments with two datasets, the knowledge of prior edits boosts the performance of the LLMs significantly and enables them to generate 29% and 54% more correctly edited code in top-1 suggestions relative to the current state-of-the-art symbolic and neural approaches, respectively.

Bug fixing Code Generation
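The core idea of conditioning generation on associated prior edits can be sketched as prompt construction. This is an illustrative serialization, not GrACE's actual prompt format:

```python
# Hypothetical sketch: serialize prior (before, after) edits as diff-style
# context, then ask the model for the next edit in the same session.

def build_prompt(prior_edits: list[tuple[str, str]], target: str) -> str:
    """Prepend associated prior edits as context before the target code."""
    parts = []
    for before, after in prior_edits:
        parts.append(f"- {before}\n+ {after}")
    parts.append(f"Edit next:\n{target}")
    return "\n".join(parts)

prompt = build_prompt(
    [("log.info(msg)", "logger.info(msg)")],
    "log.warn(msg)",
)
print(prompt)
```

Supplying the earlier `log` → `logger` edit gives the model the pattern it needs to complete the analogous change on the target line, which is the intuition behind the reported top-1 gains.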

BNSynth: Bounded Boolean Functional Synthesis

1 code implementation15 Dec 2022 Ravi Raja, Stanly Samuel, Chiranjib Bhattacharyya, Deepak D'Souza, Aditya Kanade

In this paper, we introduce BNSynth, the first tool to solve the Boolean functional synthesis (BFS) problem under a given bound on the solution space.

CodeQueries: A Dataset of Semantic Queries over Code

1 code implementation17 Sep 2022 Surya Prakash Sahu, Madhurima Mandal, Shikhar Bharadwaj, Aditya Kanade, Petros Maniatis, Shirish Shevade

Compared to existing datasets, in CodeQueries the queries are about code semantics, the context is file-level, and the answers are code spans.

Attribute Extractive Question-Answering +3

A Robust and Scalable Attention Guided Deep Learning Framework for Movement Quality Assessment

no code implementations16 Apr 2022 Aditya Kanade, Mansi Sharma, Manivannan Muniyandi

We propose and study four novel feature extractors that allow the transformer network to operate on skeletal data.

Data Augmentation

Tele-EvalNet: A Low-cost, Teleconsultation System for Home based Rehabilitation of Stroke Survivors using Multiscale CNN-LSTM Architecture

no code implementations6 Dec 2021 Aditya Kanade, Mansi Sharma, M. Manivannan

We propose Tele-EvalNet, a novel system consisting of two components: a live feedback model and an overall performance evaluation model.

Stateful Detection of Model Extraction Attacks

1 code implementation12 Jul 2021 Soham Pal, Yash Gupta, Aditya Kanade, Shirish Shevade

Machine-Learning-as-a-Service providers expose machine learning (ML) models through application programming interfaces (APIs) to developers.

BIG-bench Machine Learning Model extraction
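A stateful defense of this kind keeps per-client query history and flags anomalous patterns. The sketch below is illustrative only (a crude density heuristic, not the paper's detector):

```python
# Minimal illustrative stateful check: flag a client whose queries cluster
# unusually densely, a crude proxy for systematic extraction probing.

def flag_client(queries: list[float], min_gap: float = 0.05) -> bool:
    """Flag if successive sorted queries are suspiciously close together."""
    if len(queries) < 3:
        return False
    q = sorted(queries)
    gaps = [b - a for a, b in zip(q, q[1:])]
    return sum(gaps) / len(gaps) < min_gap

benign = [0.9, 0.1, 0.5, 0.75]     # scattered, natural-looking queries
attack = [0.50, 0.51, 0.52, 0.53]  # dense probing near a decision boundary
print(flag_client(benign), flag_client(attack))
```

The point of statefulness is that neither query looks malicious alone; only the accumulated per-client history reveals the probing pattern.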

ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data

1 code implementation7 Feb 2020 Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, Shirish Shevade, Vinod Ganapathy

We demonstrate that (1) it is possible to use ACTIVETHIEF to extract deep classifiers trained on a variety of datasets from image and text domains, while querying the model with as few as 10-30% of samples from public datasets, (2) the resulting model exhibits a higher transferability success rate of adversarial examples than prior work, and (3) the attack evades detection by the state-of-the-art model extraction detection method, PRADA.

Active Learning BIG-bench Machine Learning +1
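The active-learning extraction loop can be sketched schematically. The victim API, the uncertainty measure, and the budget below are all stand-ins, not ACTIVETHIEF's actual components:

```python
# Schematic extraction loop in the spirit of ACTIVETHIEF: repeatedly pick
# the most informative unlabeled public sample and query the victim API.

def victim_predict(x: float) -> int:
    """Stub black-box model being extracted."""
    return int(x > 0.5)

def uncertainty(x: float) -> float:
    """Stub acquisition function: points near 0.5 are most uncertain."""
    return -abs(x - 0.5)

pool = [i / 10 for i in range(11)]  # unannotated public data
labeled: dict[float, int] = {}
for _ in range(3):                  # small query budget
    x = max((p for p in pool if p not in labeled), key=uncertainty)
    labeled[x] = victim_predict(x)  # one paid query to the victim API
print(sorted(labeled))              # queries concentrate near the boundary
```

Selecting by uncertainty is what lets the attack succeed with only 10-30% of the public samples: queries cluster where the victim's decision boundary is least predictable.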

Learning and Evaluating Contextual Embedding of Source Code

2 code implementations ICML 2020 Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, Kensen Shi

We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples.

Contextual Embedding for Source Code Exception type +5

Neural Attribution for Semantic Bug-Localization in Student Programs

1 code implementation NeurIPS 2019 Rahul Gupta, Aditya Kanade, Shirish Shevade

In this work, we present NeuralBugLocator, a deep-learning-based technique that can localize the bugs in a faulty program with respect to a failing test, without even running the program.

Fault localization

Scalable Neural Learning for Verifiable Consistency with Temporal Specifications

no code implementations25 Sep 2019 Sumanth Dathathri, Johannes Welbl, Krishnamurthy (Dj) Dvijotham, Ramana Kumar, Aditya Kanade, Jonathan Uesato, Sven Gowal, Po-Sen Huang, Pushmeet Kohli

Formal verification of machine learning models has attracted attention recently, and significant progress has been made on proving simple properties like robustness to small perturbations of the input features.

Adversarial Robustness Language Modelling

Pre-trained Contextual Embedding of Source Code

no code implementations25 Sep 2019 Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, Kensen Shi

A major advancement in natural-language understanding has been the use of pre-trained token embeddings; BERT and other works have further shown that pre-trained contextual embeddings can be extremely powerful and can be fine-tuned effectively for a variety of downstream supervised tasks.

Natural Language Understanding

Deep Learning for Bug-Localization in Student Programs

no code implementations28 May 2019 Rahul Gupta, Aditya Kanade, Shirish Shevade

To localize the bugs, we analyze the trained network using a state-of-the-art neural prediction attribution technique and see which lines of the programs make it predict the test outcomes.

Neural Program Repair by Jointly Learning to Localize and Repair

2 code implementations ICLR 2019 Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, Rishabh Singh

We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.

Variable misuse
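As a loose analogy for the variable-misuse task, the toy below localizes a wrong identifier and repairs it by substituting the closest in-scope name. The paper trains a joint pointer network for this; the rule-based code is purely illustrative:

```python
# Toy localize-and-repair for a misused identifier (illustrative only;
# the paper learns this jointly rather than using string-matching rules).
import difflib

def localize_and_repair(tokens: list[str], defined: set[str]) -> list[str]:
    """Replace the first identifier not in scope with the closest defined
    name -- a crude stand-in for the learned localization + repair heads."""
    keywords = {"return"}
    for i, t in enumerate(tokens):
        if t.isidentifier() and t not in defined and t not in keywords:
            match = difflib.get_close_matches(t, defined, n=1)
            if match:
                return tokens[:i] + [match[0]] + tokens[i + 1:]
    return tokens

buggy = ["return", "countr", "+", "1"]
fixed = localize_and_repair(buggy, {"count", "total"})
print(fixed)
```

The value of doing both steps jointly, as the paper argues, is that the repair candidate informs where the bug is and vice versa, rather than treating localization as a separate pipeline stage.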

Greybox fuzzing as a contextual bandits problem

no code implementations11 Jun 2018 Ketan Patil, Aditya Kanade

AFL performs extremely well at fuzz testing large applications and finding critical vulnerabilities, but it relies on many heuristics for deciding the favored test case(s), skipping test cases during fuzzing, and assigning fuzzing iterations to test case(s).

Multi-Armed Bandits
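Framing seed scheduling as a bandit problem can be sketched with a greedy policy over coverage rewards. This simplification drops the "contextual" part of the paper's formulation, and the reward values are invented for illustration:

```python
# Toy bandit view of seed scheduling: each seed test case is an arm, and
# the reward of fuzzing it is the new coverage gained (simulated here).

def pick(seeds: dict[str, list[int]]) -> str:
    """Play each arm once, then greedily pick the best average reward."""
    for s, rewards in seeds.items():
        if not rewards:
            return s
    return max(seeds, key=lambda s: sum(seeds[s]) / len(seeds[s]))

seeds = {"a": [], "b": []}
coverage_gain = {"a": [3, 2, 4], "b": [0, 1]}  # simulated per-run outcomes
for _ in range(4):
    s = pick(seeds)
    seeds[s].append(coverage_gain[s].pop(0))   # fuzz chosen seed, observe reward
print({k: sum(v) for k, v in seeds.items()})   # budget flows to the productive seed
```

After one exploratory run per seed, the scheduler routes the remaining budget to seed "a", which keeps yielding coverage; this replaces AFL's hand-tuned favored-seed heuristics with a learned allocation.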

Active Learning for Efficient Testing of Student Programs

no code implementations13 Apr 2018 Ishan Rastogi, Aditya Kanade, Shirish Shevade

In this work, we propose an automated method to identify semantic bugs in student programs, called ATAS, which builds upon the recent advances in both symbolic execution and active learning.

Active Learning

DeepFix: Fixing Common C Language Errors by Deep Learning

1 code implementation4 Feb 2017 Rahul Gupta, Soham Pal, Aditya Kanade, Shirish Shevade

The problem of automatically fixing programming errors is a very active research topic in software engineering.

Program Repair
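DeepFix repairs programs iteratively, fixing one error at a time and re-checking with the compiler as an oracle. The loop below sketches that structure with a stub oracle and a stub fixer in place of the trained network:

```python
# Schematic of an iterative repair loop in the style of DeepFix: fix the
# first reported error, re-run the oracle, repeat until clean or budget ends.

def compiler_errors(program: list[str]) -> list[int]:
    """Stub oracle: statement lines missing a ';' are reported as errors."""
    return [i for i, line in enumerate(program)
            if line and not line.rstrip().endswith((";", "{", "}"))]

def predict_fix(program: list[str], line_no: int) -> str:
    """Stub for the seq2seq network's predicted fix for one line."""
    return program[line_no].rstrip() + ";"

program = ["int main() {", "int a = 1", "return a;", "}"]
for _ in range(10):                # bounded number of repair iterations
    errs = compiler_errors(program)
    if not errs:
        break
    program[errs[0]] = predict_fix(program, errs[0])
print(program)
```

The compiler-in-the-loop design matters: each candidate fix is validated before the next error is attempted, so the network never has to repair the whole program in one shot.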
