Search Results for author: Siddhartha Jain

Found 8 papers, 4 papers with code

Code-Aware Prompting: A study of Coverage Guided Test Generation in Regression Setting using LLM

no code implementations 31 Jan 2024 Gabriel Ryan, Siddhartha Jain, Mingyue Shang, Shiqi Wang, Xiaofei Ma, Murali Krishna Ramanathan, Baishakhi Ray

Recent works using large language models (LLMs) for test generation have focused on improving generation quality by optimizing the test generation context and correcting errors in model outputs, but they use fixed prompting strategies that ask the model to generate tests without additional guidance.

Lightweight reranking for language model generations

no code implementations 11 Jul 2023 Siddhartha Jain, Xiaofei Ma, Anoop Deoras, Bing Xiang

We show strong improvements for selecting the best k generations on code generation tasks, as well as robust improvements for selecting the single best generation on autoformalization, summarization, and translation.

Code Generation · Language Modelling
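A minimal sketch of similarity-based reranking (an illustrative reading of the idea, not the paper's exact method): sample several generations, score each by its average pairwise similarity to the other samples, and return the highest-scoring one. `SequenceMatcher` here is a stand-in similarity measure; any task-specific similarity could be plugged in.

```python
from difflib import SequenceMatcher

def rerank(generations):
    """Return the generation most similar, on average, to the others.

    SequenceMatcher.ratio() is a generic string similarity used purely
    for illustration; a real reranker would use a task-aware measure.
    """
    def sim(a, b):
        return SequenceMatcher(None, a, b).ratio()

    scores = []
    for i, g in enumerate(generations):
        # average similarity of g to every other sampled generation
        others = [sim(g, h) for j, h in enumerate(generations) if j != i]
        scores.append(sum(others) / len(others))
    return generations[scores.index(max(scores))]
```

The intuition is self-consistency: a generation that agrees with many other samples is more likely to be correct than an outlier.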

Multi-lingual Evaluation of Code Generation Models

2 code implementations 26 Oct 2022 Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang

Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and we discover the generalization ability of language models on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even in mono-lingual settings.

Code Completion · Code Translation +1

Unambiguous DNFs and Alon-Saks-Seymour

no code implementations 16 Feb 2021 Shalev Ben-David, Mika Göös, Siddhartha Jain, Robin Kothari

We exhibit an unambiguous k-DNF formula that requires CNF width $\tilde{\Omega}(k^2)$, which is optimal up to logarithmic factors.

Computational Complexity

Overinterpretation reveals image classification model pathologies

2 code implementations NeurIPS 2021 Brandon Carter, Siddhartha Jain, Jonas Mueller, David Gifford

Here, we demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation, and we find that models trained on CIFAR-10 make confident predictions even when 95% of each input image is masked and humans cannot discern salient features in the remaining pixel subsets.

Classification · General Classification +2
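The masking experiment described above can be sketched as follows (a minimal probe, not the paper's pipeline): zero out all but a small fraction of pixels and re-query the model. Here `model_fn` is a hypothetical callable mapping an image to class probabilities.

```python
import numpy as np

def masked_confidence(model_fn, image, keep_frac=0.05, seed=0):
    """Keep roughly `keep_frac` of the pixels (zeroing the rest) and
    return the model's top-class confidence on the masked input.

    A persistently high confidence on a ~5% pixel subset is the
    symptom of overinterpretation the abstract describes.
    `model_fn` is an assumed interface: image -> probability vector.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(image.shape[:2]) < keep_frac   # per-pixel keep mask
    masked = image * mask[..., None]                 # broadcast over channels
    return float(model_fn(masked).max())
```

Comparing this score against the confidence on the unmasked image gives a quick per-example check for reliance on sparse, humanly uninterpretable pixel subsets.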

Information Condensing Active Learning

1 code implementation 18 Feb 2020 Siddhartha Jain, Ge Liu, David Gifford

We introduce Information Condensing Active Learning (ICAL), a batch-mode, model-agnostic Active Learning (AL) method targeted at Deep Bayesian Active Learning, which acquires labels for the points that carry as much information as possible about the still-unacquired points.

Active Learning

Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles

no code implementations 18 Jun 2019 Siddhartha Jain, Ge Liu, Jonas Mueller, David Gifford

The inaccuracy of neural network models on inputs that do not stem from the training data distribution is both problematic and at times unrecognized.

Bayesian Optimization
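For context, a standard way deep ensembles express uncertainty on out-of-distribution inputs is the predictive entropy of the averaged member distribution (a generic sketch, not this paper's diversity-maximizing training objective):

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Predictive entropy of the ensemble-averaged class distribution.

    member_probs: array-like of shape (n_members, n_classes), one
    probability vector per ensemble member. Higher entropy means the
    ensemble is less certain, which is the signal one hopes to see on
    inputs far from the training distribution.
    """
    mean = np.mean(member_probs, axis=0)              # average over members
    return float(-(mean * np.log(mean + 1e-12)).sum())  # entropy in nats
```

The paper's contribution is to train the members so that this kind of score stays well-calibrated off-distribution; the scoring rule itself is unchanged.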

What made you do this? Understanding black-box decisions with sufficient input subsets

1 code implementation 9 Oct 2018 Brandon Carter, Jonas Mueller, Siddhartha Jain, David Gifford

Local explanation frameworks aim to rationalize particular decisions made by a black-box prediction model.

Decision Making
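One simple way to realize the "sufficient input subset" idea (a heuristic sketch under assumed interfaces, not the authors' exact algorithm): greedily mask features as long as the model's confidence in its original prediction stays above a threshold; whatever survives is a small subset sufficient for the decision. `predict` is a hypothetical callable returning class probabilities.

```python
import numpy as np

def sufficient_subset(predict, x, threshold):
    """Greedy backward elimination of input features.

    Masks (zeroes) features one at a time, keeping a removal only if
    the probability of the originally predicted class stays at or
    above `threshold`. Returns the indices of the surviving features.
    """
    x = np.asarray(x, dtype=float)
    cls = int(np.argmax(predict(x)))        # class to rationalize
    keep = np.ones(len(x), dtype=bool)
    for i in np.argsort(np.abs(x)):         # try dropping small features first
        trial = keep.copy()
        trial[i] = False
        if predict(np.where(trial, x, 0.0))[cls] >= threshold:
            keep = trial                    # removal was safe; commit it
    return np.flatnonzero(keep)
```

The returned indices act as a local explanation: the prediction is (approximately) supported by this subset alone.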
