Search Results for author: Gustavo Soares

Found 10 papers, 3 papers with code

Exploring Interaction Patterns for Debugging: Enhancing Conversational Capabilities of AI-assistants

no code implementations • 9 Feb 2024 • Bhavya Chopra, Yasharth Bajpai, Param Biyani, Gustavo Soares, Arjun Radhakrishna, Chris Parnin, Sumit Gulwani

The widespread availability of Large Language Models (LLMs) within Integrated Development Environments (IDEs) has led to their speedy adoption.

Fault localization

Generative AI for Programming Education: Benchmarking ChatGPT, GPT-4, and Human Tutors

no code implementations • 29 Jun 2023 • Tung Phung, Victor-Alexandru Pădurean, José Cambronero, Sumit Gulwani, Tobias Kohn, Rupak Majumdar, Adish Singla, Gustavo Soares

In our work, we systematically evaluate two models, ChatGPT (based on GPT-3.5) and GPT-4, and compare their performance with human tutors for a variety of scenarios.

Benchmarking

GrACE: Generation using Associated Code Edits

no code implementations • 23 May 2023 • Priyanshu Gupta, Avishree Khare, Yasharth Bajpai, Saikat Chakraborty, Sumit Gulwani, Aditya Kanade, Arjun Radhakrishna, Gustavo Soares, Ashish Tiwari

In our experiments with two datasets, the knowledge of prior edits boosts the performance of the LLMs significantly and enables them to generate 29% and 54% more correctly edited code in top-1 suggestions relative to the current state-of-the-art symbolic and neural approaches, respectively.

Bug fixing Code Generation
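The prior-edit conditioning described above can be sketched as a prompt builder that shows the model recent before/after edit pairs ahead of the target location. The tag layout and helper names below are illustrative assumptions, not GrACE's actual prompt format:

```python
def format_edit(before: str, after: str) -> str:
    """Render one prior edit as a before/after pair."""
    return f"<before>\n{before}\n<after>\n{after}"

def build_edit_prompt(prior_edits, target_before):
    """Condition code-edit generation on associated prior edits.

    prior_edits is a list of (before, after) snippets; the model is
    expected to continue the pattern after the final <after> tag.
    (Hypothetical format for illustration only.)
    """
    context = "\n\n".join(format_edit(b, a) for b, a in prior_edits)
    return f"{context}\n\n<before>\n{target_before}\n<after>\n"

prompt = build_edit_prompt(
    [("x = foo(a)", "x = foo(a, timeout=5)")],
    "y = foo(b)",
)
```

A model primed this way can pick up the repeated edit (here, adding a `timeout` argument) and apply it to the new location.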

Generating High-Precision Feedback for Programming Syntax Errors using Large Language Models

1 code implementation • 24 Jan 2023 • Tung Phung, José Cambronero, Sumit Gulwani, Tobias Kohn, Rupak Majumdar, Adish Singla, Gustavo Soares

We investigate using LLMs to generate feedback for fixing syntax errors in Python programs, a key scenario in introductory programming.
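One reading of "high-precision" is that generated feedback is surfaced only when it passes a validation check. The gate below is a minimal sketch of that idea, assuming a compile-based check; the function names and the check itself are illustrative, not the paper's actual pipeline:

```python
import ast

def parses(source: str) -> bool:
    """Check whether a Python program is syntactically valid."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def gate_feedback(buggy: str, candidate_fix: str, feedback: str):
    """Surface feedback only when its accompanying fix actually
    repairs the syntax error (a stand-in for precision checks)."""
    if not parses(buggy) and parses(candidate_fix):
        return feedback
    return None

ok = gate_feedback("print('hi'", "print('hi')", "Close the parenthesis.")
rejected = gate_feedback("print('hi'", "print('hi'", "Close the parenthesis.")
```

Feedback whose proposed fix still fails to parse is suppressed rather than shown to the student.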

Repairing Bugs in Python Assignments Using Large Language Models

no code implementations • 29 Sep 2022 • Jialu Zhang, José Cambronero, Sumit Gulwani, Vu Le, Ruzica Piskac, Gustavo Soares, Gust Verbruggen

We propose to use a large language model trained on code, such as Codex, to build an APR system -- MMAPR -- for introductory Python programming assignments.

Chunking Language Modelling +2
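The approach above can be sketched as a generate-and-validate loop, with a stub standing in for the code LLM. This is a simplification, not MMAPR's actual pipeline (which combines multi-modal prompts and richer validation); the names are illustrative:

```python
import ast

def repair(buggy_source: str, propose_fix, max_attempts: int = 3):
    """Generate-and-validate loop in the spirit of LLM-based APR.

    propose_fix stands in for a code LLM such as Codex. Here a parse
    check validates candidates; a real APR system would run tests.
    """
    for attempt in range(max_attempts):
        candidate = propose_fix(buggy_source, attempt)
        try:
            ast.parse(candidate)  # cheap validity check
            return candidate
        except SyntaxError:
            continue
    return None

# Toy stand-in "model": closes the missing parenthesis on retry.
fixes = ["print('hi'", "print('hi')"]
fixed = repair("print('hi'", lambda src, i: fixes[i])
```

The loop keeps sampling until a candidate passes validation or the attempt budget runs out.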

Overwatch: Learning Patterns in Code Edit Sequences

no code implementations • 25 Jul 2022 • Yuhao Zhang, Yasharth Bajpai, Priyanshu Gupta, Ameya Ketkar, Miltiadis Allamanis, Titus Barik, Sumit Gulwani, Arjun Radhakrishna, Mohammad Raza, Gustavo Soares, Ashish Tiwari

Our experiments show that Overwatch achieves 78% precision, and that it not only completed edits when developers missed the opportunity to use the IDE's tool support but also predicted new edits that had no tool support in the IDE.

Synchromesh: Reliable code generation from pre-trained language models

1 code implementation ICLR 2022 Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, Sumit Gulwani

Then, Synchromesh feeds the examples to a pre-trained language model and samples programs using Constrained Semantic Decoding (CSD): a general framework for constraining the output to a set of valid programs in the target language.

Code Generation Language Modelling +1
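The CSD idea can be illustrated with a toy greedy decoder in which a completion engine restricts the candidate tokens at every step. The interfaces below are assumptions for illustration, not Synchromesh's actual API:

```python
def constrained_decode(lm_scores, allowed_next, eos="</s>", max_len=10):
    """Greedy decoding where a completion engine restricts candidates.

    lm_scores(prefix) returns {token: score}; allowed_next(prefix)
    returns the set of tokens valid in the target language. This is a
    toy sketch of the CSD idea, not Synchromesh's implementation.
    """
    prefix = []
    for _ in range(max_len):
        valid = allowed_next(tuple(prefix))
        if not valid:
            break
        scores = lm_scores(tuple(prefix))
        # Pick the best-scoring token among those the engine accepts.
        token = max(valid, key=lambda t: scores.get(t, float("-inf")))
        if token == eos:
            break
        prefix.append(token)
    return prefix

# Toy grammar: SELECT <col>; the unconstrained LM prefers "DELETE".
def allowed(prefix):
    if prefix == ():
        return {"SELECT"}
    if prefix == ("SELECT",):
        return {"name", "age"}
    return {"</s>"}

def scores(prefix):
    return {"DELETE": 0.9, "SELECT": 0.1, "name": 0.6, "age": 0.4, "</s>": 1.0}

out = constrained_decode(scores, allowed)
```

Even though the language model scores the invalid token highest, the completion engine masks it out, so the decoder can only emit programs the target language accepts.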

Multi-modal Program Inference: a Marriage of Pre-trained Language Models and Component-based Synthesis

no code implementations • 3 Sep 2021 • Kia Rahmani, Mohammad Raza, Sumit Gulwani, Vu Le, Daniel Morris, Arjun Radhakrishna, Gustavo Soares, Ashish Tiwari

Examples provide a precise but incomplete specification, and natural language provides an ambiguous but more "complete" task description.

Program Synthesis

Learning Syntactic Program Transformations from Examples

no code implementations • 31 Aug 2016 • Reudismam Rolim, Gustavo Soares, Loris D'Antoni, Oleksandr Polozov, Sumit Gulwani, Rohit Gheyi, Ryo Suzuki, Bjoern Hartmann

In the second domain, we use repetitive edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code.
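The repetitive-edit idea can be sketched with a toy one-example rule learner that strips the common prefix and suffix of a before/after pair and replays the literal change at other locations. The actual work synthesizes transformations over syntax trees from examples, so this string-level version only illustrates the apply-elsewhere idea:

```python
def learn_rewrite(before: str, after: str):
    """Generalize one example edit into a literal (old, new) rule by
    stripping the longest common prefix and suffix."""
    i = 0
    while i < min(len(before), len(after)) and before[i] == after[i]:
        i += 1
    j = 0
    while (j < min(len(before), len(after)) - i
           and before[len(before) - 1 - j] == after[len(after) - 1 - j]):
        j += 1
    return before[i:len(before) - j], after[i:len(after) - j]

def apply_rewrite(rule, source: str) -> str:
    """Replay the learned edit at every other location it matches."""
    old, new = rule
    if not old:  # pure insertion: too ambiguous to replay literally
        return source
    return source.replace(old, new)

rule = learn_rewrite("print x", "print(x)")
patched = apply_rewrite(rule, "if debug: print x")
```

From the single example, the learned rule rewrites `print x` to `print(x)`, and `apply_rewrite` propagates it to other occurrences in the code.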
