Search Results for author: Varun Kumar

Found 20 papers, 5 papers with code

MyCrunchGPT: A chatGPT assisted framework for scientific machine learning

no code implementations27 Jun 2023 Varun Kumar, Leonard Gleyzer, Adar Kahana, Khemraj Shukla, George Em Karniadakis

To demonstrate the flow of MyCrunchGPT and create an infrastructure that can facilitate a broader vision, we built a webapp-based guided user interface that includes options for a comprehensive summary report.

Code Generation Geophysics

Real-Time Prediction of Gas Flow Dynamics in Diesel Engines using a Deep Neural Operator Framework

no code implementations2 Apr 2023 Varun Kumar, Somdatta Goswami, Daniel J. Smith, George Em Karniadakis

As an alternative to physics-based models, we develop an operator-based regression model (DeepONet) to learn the relevant output states for a mean-value gas flow engine model, using the engine operating conditions as input variables.
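A DeepONet pairs a branch network, which encodes the input function sampled at fixed sensor points, with a trunk network, which encodes the query coordinate; their dot product gives the operator output G(u)(y). A minimal, untrained NumPy sketch of that structure (the layer sizes, sensor count, and sine input below are illustrative assumptions, not the paper's engine-model configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP parameters (illustrative, untrained)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Apply tanh MLP layers; linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 16, 8                      # sensor points, latent width
branch = mlp([m, 32, p])          # encodes the input function u
trunk = mlp([1, 32, p])           # encodes the query coordinate y

u_sensors = np.sin(np.linspace(0, 1, m))[None, :]  # one input function
y = np.array([[0.5]])                               # one query point
# Operator output: dot product of branch and trunk embeddings.
G_u_y = np.sum(forward(branch, u_sensors) * forward(trunk, y), axis=-1)
print(G_u_y.shape)  # (1,)
```

In training, the branch and trunk weights would be fit jointly against observed input/output pairs of the operator.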

ReCode: Robustness Evaluation of Code Generation Models

2 code implementations20 Dec 2022 Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, Bing Xiang

Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area, and to date there is no comprehensive benchmark for robustness in code generation.

Code Generation

Multi-lingual Evaluation of Code Generation Models

1 code implementation26 Oct 2022 Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang

Using these benchmarks, we assess the performance of code generation models in a multi-lingual fashion, and find that language models generalize to out-of-domain languages, that multi-lingual models hold advantages over mono-lingual ones, that few-shot prompting can teach the model new languages, and that zero-shot translation abilities emerge even in mono-lingual settings.

Code Completion Code Translation +1

An Analysis of the Effects of Decoding Algorithms on Fairness in Open-Ended Language Generation

no code implementations7 Oct 2022 Jwala Dhamala, Varun Kumar, Rahul Gupta, Kai-Wei Chang, Aram Galstyan

We present a systematic analysis of the impact of decoding algorithms on LM fairness, and analyze the trade-off between fairness, diversity and quality.

Fairness Text Generation

Industry Scale Semi-Supervised Learning for Natural Language Understanding

no code implementations NAACL 2021 Luoxin Chen, Francisco Garcia, Varun Kumar, He Xie, Jianhua Lu

This paper presents a production Semi-Supervised Learning (SSL) pipeline based on the student-teacher framework, which leverages millions of unlabeled examples to improve Natural Language Understanding (NLU) tasks.

Intent Classification +6

ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification

no code implementations28 Jan 2021 Manoj Kumar, Varun Kumar, Hadrien Glaude, Cyprien de Lichy, Aman Alok, Rahul Gupta

We make use of a conditional generator for data augmentation that is trained directly using the meta-learning objective and simultaneously with prototypical networks, hence ensuring that data augmentation is customized to the task.

Classification Data Augmentation +8
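The meta-learning objective mentioned above is that of prototypical networks: each class is represented by the mean of its support-set embeddings, and queries are classified by distance to these prototypes. A short NumPy sketch of that classification step (random embeddings stand in for an encoder's output; ProtoDA's conditional generator is omitted here):

```python
import numpy as np

def prototypical_logits(support, support_labels, queries, n_classes):
    """Negative squared Euclidean distance from each query embedding
    to each class prototype (the mean of that class's support embeddings)."""
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_classes)])
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return -d2  # higher means closer to that class's prototype

rng = np.random.default_rng(0)
support = rng.standard_normal((10, 4))   # 10 support embeddings, dim 4
labels = np.repeat(np.arange(2), 5)      # 2 classes, 5 shots each
queries = rng.standard_normal((3, 4))
pred = prototypical_logits(support, labels, queries, n_classes=2).argmax(-1)
print(pred.shape)  # (3,)
```

In ProtoDA, generated (augmented) embeddings would be added to the support set before the prototypes are computed, with the generator trained end-to-end on this same objective.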

BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation

1 code implementation27 Jan 2021 Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta

To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology.

Benchmarking Text Generation

Data Augmentation using Pre-trained Transformer Models

3 code implementations AACL (lifelongnlp) 2020 Varun Kumar, Ashutosh Choudhary, Eunah Cho

Language model based pre-trained models such as BERT have provided significant gains across different NLP tasks.

Data Augmentation Language Modelling

A Closer Look At Feature Space Data Augmentation For Few-Shot Intent Classification

no code implementations WS 2019 Varun Kumar, Hadrien Glaude, Cyprien de Lichy, William Campbell

In particular, we show that (a) upsampling in latent space is a competitive baseline for feature space augmentation (b) adding the difference between two examples to a new example is a simple yet effective data augmentation method.

Data Augmentation General Classification +4
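The difference-based augmentation described in point (b) above can be sketched directly: a new example is shifted in feature space by the difference between two other examples of the same class. A minimal NumPy version (the scaling factor `lam` and the same-class sampling are assumptions of this sketch):

```python
import numpy as np

def extrapolate(x_new, x_a, x_b, lam=0.5):
    """Feature-space augmentation: perturb a new example by a scaled
    difference between two same-class feature vectors."""
    return x_new + lam * (x_a - x_b)

rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 5))   # three same-class feature vectors
aug = extrapolate(feats[0], feats[1], feats[2])
print(aug.shape)  # (5,)
```

The augmented vector keeps the class label of `x_new`, so it can be appended to the training set for the few-shot classifier.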

Why Didn't You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models

no code implementations ACL 2019 Varun Kumar, Alison Smith-Renner, Leah Findlater, Kevin Seppi, Jordan Boyd-Graber

To address the lack of comparative evaluation of Human-in-the-Loop Topic Modeling (HLTM) systems, we implement and evaluate three contrasting HLTM modeling approaches using simulation experiments.

Topic Models

SpaceNet MVOI: a Multi-View Overhead Imagery Dataset

no code implementations ICCV 2019 Nicholas Weir, David Lindenbaum, Alexei Bastidas, Adam Van Etten, Sean McPherson, Jacob Shermeyer, Varun Kumar, Hanlin Tang

To address this problem, we present an open source Multi-View Overhead Imagery dataset, termed SpaceNet MVOI, with 27 unique looks from a broad range of viewing angles (-32.5 to 54.0 degrees).

Object Detection

Intel nGraph: An Intermediate Representation, Compiler, and Executor for Deep Learning

1 code implementation24 Jan 2018 Scott Cyphers, Arjun K. Bansal, Anahita Bhiwandiwalla, Jayaram Bobba, Matthew Brookhart, Avijit Chakraborty, Will Constable, Christian Convey, Leona Cook, Omar Kanawi, Robert Kimball, Jason Knight, Nikolay Korovaiko, Varun Kumar, Yixing Lao, Christopher R. Lishka, Jaikrishnan Menon, Jennifer Myers, Sandeep Aswath Narayana, Adam Procter, Tristan J. Webb

The current approach, which we call "direct optimization", requires deep changes within each framework to improve the training performance for each hardware backend (CPUs, GPUs, FPGAs, ASICs) and requires $\mathcal{O}(fp)$ effort; where $f$ is the number of frameworks and $p$ is the number of platforms.

graph partitioning Management
