no code implementations • Findings (EMNLP) 2021 • Joonghyuk Hahn, Hyunjoon Cheon, Kyuyeol Han, Cheongjae Lee, Junseok Kim, Yo-Sub Han
We propose to use rules of grammar in self-training as a more reliable pseudo-labeling mechanism, especially when there are few labeled data.
no code implementations • Findings (EMNLP) 2021 • HyeonTae Seo, Yo-Sub Han, Sang-Ki Ko
We consider the problem of learning to repair erroneous C programs by learning optimal alignments with correct programs.
1 code implementation • COLING 2022 • Jikyoeng Son, Joonghyuk Hahn, HyeonTae Seo, Yo-Sub Han
We find that a program dependency graph (PDG) can represent the structure of a code more effectively.
1 code implementation • COLING 2022 • Youngwook Kim, Shinwoo Park, Yo-Sub Han
However, identifying implicit hate speech is challenging when its meaning depends on nuance or context and lexical cues are insufficient.
1 code implementation • 3 Jul 2024 • SeungYeop Baik, Sicheol Sung, Yo-Sub Han
We present a framework that provides a simple and intuitive way to build QFAs and maximize the simulation accuracy.
1 code implementation • EMNLP 2023 • JB. Kim, Hazel Kim, Joonghyuk Hahn, Yo-Sub Han
Solving math word problems depends on how to articulate the problems, the lens through which models view human linguistic expressions.
Ranked #1 on Math Word Problem Solving on ASDiv-A
no code implementations • 20 May 2022 • Su-Hyeon Kim, Hyunjoon Cheon, Yo-Sub Han, Sang-Ki Ko
We tackle the problem of learning regexes faster from positive and negative strings by relying on a novel approach called 'neural example splitting'.
no code implementations • 5 Feb 2022 • Hazel Kim, Jaeman Son, Yo-Sub Han
Self-training provides an effective means of using an extremely small amount of labeled data to create pseudo-labels for unlabeled data.
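Self-training in general follows a simple loop: train on the small labeled set, pseudo-label the unlabeled pool, keep only confident pseudo-labels, and retrain. The sketch below is a toy illustration of that generic loop (a nearest-centroid classifier on 1-D points with a margin-based confidence threshold), not the paper's actual model or data.

```python
# Toy self-training loop (illustrative setup, not the paper's method):
# a nearest-centroid classifier pseudo-labels unlabeled points, adopting
# only confident labels, then retrains on the enlarged labeled set.

def nearest_centroid(labeled):
    # One centroid per class, computed from (x, y) pairs.
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def self_train(labeled, unlabeled, threshold=2.0, rounds=3):
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        centroids = nearest_centroid(labeled)
        still_unlabeled = []
        for x in pool:
            # "Confidence" is the margin between the two closest centroids.
            dists = sorted((abs(x - c), y) for y, c in centroids.items())
            best_dist, best_label = dists[0]
            margin = dists[1][0] - best_dist if len(dists) > 1 else float("inf")
            if margin >= threshold:      # confident: adopt the pseudo-label
                labeled.append((x, best_label))
            else:                        # uncertain: defer to a later round
                still_unlabeled.append(x)
        pool = still_unlabeled
    return labeled, pool

seed = [(0.0, "neg"), (10.0, "pos")]     # extremely small labeled set
grown, leftover = self_train(seed, [1.0, 2.0, 8.5, 9.0, 5.1])
```

The ambiguous point near the middle (5.1) never clears the margin threshold and stays unlabeled, which is exactly the failure mode that makes pseudo-label reliability a concern in low-resource settings.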
no code implementations • 16 Dec 2021 • Hazel Kim, Daecheol Woo, Seong Joon Oh, Jeong-Won Cha, Yo-Sub Han
Taken together, our contributions on the data augmentation strategies yield a strong training recipe for few-shot text classification tasks.
no code implementations • 29 Sep 2021 • Su-Hyeon Kim, Hyunjoon Cheon, Yo-Sub Han, Sang-Ki Ko
SplitRegex is a divide-and-conquer framework for learning target regexes: it splits (= divides) positive strings and infers partial regexes for the resulting parts, which is much more accurate than inferring from whole strings, and then concatenates (= conquers) the inferred regexes while still rejecting the negative strings.
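The split-then-concatenate idea can be shown with a deliberately simple toy: split each positive string at a fixed point, "infer" a regex for each part (here just an alternation of the observed substrings, far cruder than any real inference), concatenate the parts, and verify the result against the negative strings. This is an illustration of the framework's shape, not the paper's algorithm.

```python
import re

def infer_part(parts):
    # Crudest possible partial inference: alternation of distinct substrings.
    return "(?:" + "|".join(sorted(set(map(re.escape, parts)))) + ")"

def split_infer(positives, negatives):
    # Divide: split every positive string at a common midpoint.
    mid = min(len(s) for s in positives) // 2
    left = infer_part([s[:mid] for s in positives])
    right = infer_part([s[mid:] for s in positives])
    # Conquer: concatenate partial regexes, then check the negatives.
    pattern = "^" + left + right + "$"
    ok = all(re.match(pattern, p) for p in positives) and \
         not any(re.match(pattern, n) for n in negatives)
    return pattern, ok

pat, ok = split_infer(["ab1", "ab2", "cd1"], ["zz9", "ab"])
```

Even this crude version generalizes beyond the positives (e.g. it accepts "cb2" by recombining parts from different examples) while rejecting the negatives, which is the payoff of inferring parts independently.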
no code implementations • IJCNLP 2019 • Jun-U Park, Sang-Ki Ko, Marco Cognetta, Yo-Sub Han
We continue the study of generating semantically correct regular expressions from natural language (NL) descriptions.
no code implementations • WS 2019 • Ju-Hyoung Lee, Jun-U Park, Jeong-Won Cha, Yo-Sub Han
Our model outperforms all previous models for detecting abusiveness in texts that contain no explicitly abusive words.
no code implementations • ACL 2019 • Marco Cognetta, Yo-Sub Han, Soon Chan Kwon
Probabilistic finite automata (PFAs) are common statistical language models in natural language and speech processing.
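The probability a PFA assigns to a string is computed by the standard forward pass: propagate state probabilities symbol by symbol, then weight by final-state probabilities. A minimal sketch, with an illustrative two-state PFA (the states, transition weights, and final weights are made up for the example):

```python
def pfa_probability(string, init, trans, final):
    """init: {state: prob}; trans: {(state, symbol): [(next_state, prob)]};
    final: {state: prob}. Returns the total probability of `string`."""
    forward = dict(init)                  # prob mass of reaching each state
    for sym in string:
        nxt = {}
        for q, p in forward.items():
            for q2, tp in trans.get((q, sym), []):
                nxt[q2] = nxt.get(q2, 0.0) + p * tp
        forward = nxt
    # Weight surviving mass by each state's probability of stopping.
    return sum(p * final.get(q, 0.0) for q, p in forward.items())

# Toy 2-state PFA over the alphabet {a, b}
init = {0: 1.0}
trans = {(0, "a"): [(0, 0.5), (1, 0.3)], (1, "b"): [(1, 0.6)]}
final = {0: 0.2, 1: 0.4}
p_ab = pfa_probability("ab", init, trans, final)   # 1.0 * 0.3 * 0.6 * 0.4
```

This forward computation runs in time linear in the string length times the number of transitions, which is what makes PFAs practical as language models.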
no code implementations • 4 Oct 2018 • Sang-Min Choi, Jiho Park, Quan Nguyen, Andre Cronje, Kiyoung Jang, Hyunjoon Cheon, Yo-Sub Han, Byung-Ik Ahn
Each event block is signed by the hashes of the creating node and its $k$ peers.
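The idea that an event block commits to the hashes of earlier blocks (its creator's latest block and those of $k$ peers) can be sketched with a plain hash chain. Everything below (function names, payloads, the choice of SHA-256, sorting the parent set) is illustrative, not the paper's protocol.

```python
import hashlib

def block_hash(payload: bytes, parent_hashes: list) -> str:
    # An event block's identity commits to its payload and the set of
    # parent block hashes it references (sorted: an order-independent set).
    h = hashlib.sha256()
    for p in sorted(parent_hashes):
        h.update(bytes.fromhex(p))
    h.update(payload)
    return h.hexdigest()

genesis = block_hash(b"genesis", [])
peer1 = block_hash(b"peer event", [genesis])
# A new event block references its creator's latest block and k=1 peer block.
event = block_hash(b"tx data", [genesis, peer1])
```

Because each block hash covers its parents' hashes, tampering with any referenced block changes every downstream block's hash, which is what lets nodes detect inconsistent histories.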
Distributed, Parallel, and Cluster Computing
no code implementations • EMNLP 2018 • Marco Cognetta, Yo-Sub Han, Soon Chan Kwon
The problem of computing infix probabilities of strings, when the pattern distribution is given by a probabilistic context-free grammar or a probabilistic finite automaton, has already been solved; however, computing infix probabilities in an incremental manner remained open.