Search Results for author: Junlin Wang

Found 8 papers, 4 papers with code

LLM-Resistant Math Word Problem Generation via Adversarial Attacks

1 code implementation • 27 Feb 2024 • Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra

Large language models (LLMs) have significantly transformed the educational landscape.

Math

Maestro: A Gamified Platform for Teaching AI Robustness

no code implementations • 14 Jun 2023 • Margarita Geleta, Jiacen Xu, Manikanta Loya, Junlin Wang, Sameer Singh, Zhou Li, Sergio Gago-Masague

We assessed Maestro's influence on students' engagement, motivation, and learning success in robust AI.

Active Learning

NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge

1 code implementation • 8 May 2023 • Phillip Howard, Junlin Wang, Vasudev Lal, Gadi Singer, Yejin Choi, Swabha Swayamdipta

We introduce NeuroComparatives, a novel framework for comparative knowledge distillation in which comparative knowledge is overgenerated from language models such as GPT-variants and LLaMA, followed by stringent filtering of the generated knowledge.

Knowledge Distillation +1

GAP-Gen: Guided Automatic Python Code Generation

1 code implementation • 19 Jan 2022 • Junchen Zhao, Yurun Song, Junlin Wang, Ian G. Harris

In this work, we propose GAP-Gen, a Guided Automatic Python Code Generation method based on Python syntactic constraints and semantic constraints.

Ranked #2 on Code Generation on CodeXGLUE - CodeSearchNet (using extra training data)

Code Generation

Gradient-based Analysis of NLP Models is Manipulable

no code implementations • Findings of the Association for Computational Linguistics 2020 • Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh

Gradient-based analysis methods, such as saliency map visualizations and adversarial input perturbations, have found widespread use in interpreting neural NLP models due to their simplicity, flexibility, and most importantly, their faithfulness.

Text Classification

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

1 code implementation • IJCNLP 2019 • Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh

Neural NLP models are increasingly accurate but remain imperfect and opaque: they break in counterintuitive ways and leave end users puzzled by their behavior.

Language Modelling • Masked Language Modeling +1
