Search Results for author: Miryung Kim

Found 4 papers, 4 papers with code

Human-in-the-Loop Synthetic Text Data Inspection with Provenance Tracking

1 code implementation • 29 Apr 2024 • Hong Jin Kang, Fabrice Harel-Canada, Muhammad Ali Gulzar, Violet Peng, Miryung Kim

INSPECTOR allows users to group related texts by their transformation provenance, i.e., the transformations applied to the original text, or by feature provenance, i.e., the linguistic features of the original text.
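The grouping idea can be sketched in a few lines: synthetic texts that share a provenance key are reviewed together. The record fields and transformation names below are illustrative assumptions, not INSPECTOR's actual API.

```python
from collections import defaultdict

# Hypothetical synthetic-text records; each carries its transformation
# provenance (the chain of transformations applied to the original text).
records = [
    {"text": "I realy like it", "provenance": ("typo_injection",)},
    {"text": "I like it a lot", "provenance": ("synonym_swap",)},
    {"text": "I relly like it!", "provenance": ("typo_injection",)},
]

def group_by_provenance(records):
    """Group synthetic texts by the transformations that produced them."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["provenance"]].append(rec["text"])
    return dict(groups)

groups = group_by_provenance(records)
# Texts produced by the same transformation chain land in the same group,
# so a human inspector can accept or reject them as a batch.
```

The same function applies unchanged to feature provenance: swap the `provenance` key for a tuple of linguistic features.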

Data Augmentation · Hate Speech Detection +3

Sibylvariant Transformations for Robust Text Classification

1 code implementation • Findings (ACL) 2022 • Fabrice Harel-Canada, Muhammad Ali Gulzar, Nanyun Peng, Miryung Kim

The vast majority of text transformation techniques in NLP are inherently limited in their ability to expand input space coverage due to an implicit constraint to preserve the original class label.
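The contrast can be illustrated with two toy transforms: a conventional one that perturbs the surface form and keeps the label, and a label-changing ("sibylvariant") one that negates the sentence and flips a binary sentiment label. Function names and the negation template are hypothetical, not the paper's actual API.

```python
def invariant_typo(text, label):
    """Label-preserving transform: perturb surface form only.
    The class label is assumed unchanged."""
    return text.replace("e", "3"), label

def sibylvariant_negate(text, label):
    """Sibylvariant transform: negate the sentence and flip a
    binary sentiment label accordingly, expanding input coverage
    beyond the original class."""
    flipped = "negative" if label == "positive" else "positive"
    return "It is not true that " + text.lower(), flipped

t1, l1 = invariant_typo("great movie", "positive")      # label stays "positive"
t2, l2 = sibylvariant_negate("Great movie", "positive") # label becomes "negative"
```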

Adversarial Robustness · Defect Detection +2

Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads

1 code implementation • 24 May 2021 • John Thorpe, Yifan Qiao, Jonathan Eyolfson, Shen Teng, Guanzhou Hu, Zhihao Jia, Jinliang Wei, Keval Vora, Ravi Netravali, Miryung Kim, Guoqing Harry Xu

Computation separation makes it possible to construct a deep, bounded-asynchronous pipeline where graph and tensor parallel tasks can fully overlap, effectively hiding the network latency incurred by Lambdas.
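The overlap idea can be sketched with a toy bounded-asynchronous pipeline: while up to a fixed number of "tensor" tasks (standing in for serverless Lambda computations) are in flight, the next "graph" task (standing in for CPU-side gather work) proceeds in parallel. All names, stages, and the bound are illustrative assumptions, not Dorylus code.

```python
from concurrent.futures import ThreadPoolExecutor

def graph_task(chunk):
    # CPU-side graph work, e.g., gathering neighbor features.
    return [x + 1 for x in chunk]

def tensor_task(chunk):
    # Compute-heavy tensor work that would run on a Lambda.
    return sum(chunk)

def pipeline(chunks, bound=2):
    """Overlap the graph and tensor stages, keeping at most `bound`
    tensor tasks in flight (bounded asynchrony)."""
    results = []
    with ThreadPoolExecutor(max_workers=bound) as pool:
        futures = []
        for chunk in chunks:
            gathered = graph_task(chunk)                        # stage 1, main thread
            futures.append(pool.submit(tensor_task, gathered))  # stage 2 overlaps
            while len(futures) > bound:                         # enforce the bound
                results.append(futures.pop(0).result())
        results.extend(f.result() for f in futures)
    return results

out = pipeline([[1, 2], [3, 4], [5, 6]])  # -> [5, 9, 13]
```

While a tensor task waits on (simulated) network latency, the main thread keeps feeding graph work, which is the overlap the snippet describes.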

An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models

1 code implementation • 6 Feb 2020 • Yao Deng, Xi Zheng, Tianyi Zhang, Chen Chen, Guannan Lou, Miryung Kim

We derive several implications for system and middleware builders: (1) when adding a defense component against adversarial attacks, it is important to deploy multiple defense methods in tandem to achieve good coverage of various attacks; (2) a black-box attack is much less effective than a white-box attack, implying that it is important to keep model details (e.g., model architecture, hyperparameters) confidential via model obfuscation; and (3) driving models with a complex architecture are preferred if computing resources permit, as they are more resilient to adversarial attacks than simple models.

Autonomous Driving
