no code implementations • 7 Feb 2024 • Ahmet Alacaoglu, Donghwan Kim, Stephen J. Wright
With a simple argument, we obtain optimal or best-known complexity guarantees with cohypomonotonicity or weak MVI conditions for $\rho < \frac{1}{L}$.
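For context, the two conditions named here have standard formulations in the minimax/inclusion literature; the following statements are those standard definitions, not taken from this listing, so the exact constants may differ from the paper's conventions.

```latex
% An operator F is \rho-cohypomonotone if, for all x, y,
\langle F(x) - F(y),\, x - y \rangle \;\ge\; -\rho\,\|F(x) - F(y)\|^2 .

% F satisfies the weak Minty variational inequality (weak MVI) if
% there exists a solution x^\star such that, for all x,
\langle F(x),\, x - x^\star \rangle \;\ge\; -\tfrac{\rho}{2}\,\|F(x)\|^2 .
```

Both conditions relax monotonicity, permitting a negative lower bound governed by the parameter $\rho$; the guarantee above applies in the regime $\rho < \frac{1}{L}$, where $L$ is the Lipschitz constant of $F$.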
no code implementations • 7 Dec 2023 • Jae Hyung Ju, Jaiyoung Park, Jongmin Kim, Donghwan Kim, Jung Ho Ahn
NeuJeans accelerates conv2d by up to 5.68x compared to state-of-the-art FHE-based PI work and performs PI of an ImageNet-scale CNN (ResNet18) within just a few seconds.
no code implementations • 25 May 2023 • Jiseok Chae, Kyuwon Kim, Donghwan Kim
Minimax problems are notoriously challenging to optimize.
no code implementations • 5 Feb 2023 • Donghwan Kim, Jaiyoung Park, Jongmin Kim, Sangpyo Kim, Jung Ho Ahn
Convolutional neural network (CNN) inference using fully homomorphic encryption (FHE) is a promising private inference (PI) solution due to the capability of FHE that enables offloading the whole computation process to the server while protecting the privacy of sensitive user data.
no code implementations • 22 Nov 2021 • SangHun Im, Gibaeg Kim, Heung-Seon Oh, Seongung Jo, Donghwan Kim
Consequently, these models become difficult to implement for large-scale hierarchies, since the model structure, and hence the parameter count, grows with the hierarchy size.
no code implementations • 12 Aug 2020 • Myoungha Song, Jeongho Lee, Donghwan Kim
GAP is designed to attend to important geometric information, while CAP is designed to attend to important channel information.
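The channel-attention idea can be illustrated with a minimal NumPy sketch in the squeeze-and-excitation style: pool each channel to a descriptor, pass it through a small gating network, and reweight the channels. The weight matrices and reduction ratio below are hypothetical placeholders, not the paper's actual CAP module.

```python
import numpy as np

def channel_attention(x, reduction=4, seed=0):
    """Toy channel attention: x has shape (C, N) -- C channels, N points."""
    c, _ = x.shape
    # Squeeze: global average over points gives one descriptor per channel.
    s = x.mean(axis=1)                        # shape (C,)
    # Excitation: two illustrative (randomly initialized) weight matrices.
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    h = np.maximum(w1 @ s, 0.0)               # ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # sigmoid gate in (0, 1)
    # Reweight each channel by its attention score.
    return x * a[:, None]

x = np.random.default_rng(1).standard_normal((8, 16))
y = channel_attention(x)
```

A geometric (spatial) attention module would follow the same pattern with the pooling and gating applied over points rather than channels.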
1 code implementation • ACL 2020 • Dong Bok Lee, Seanie Lee, Woo Tae Jeong, Donghwan Kim, Sung Ju Hwang
We validate our Information Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE) against state-of-the-art baselines on several benchmark datasets, evaluating a QA model (BERT-base) trained either on the generated QA pairs alone (QA-based evaluation) or on both generated and human-labeled pairs (semi-supervised learning).
Ranked #1 on Question Generation on Natural Questions
no code implementations • LREC 2020 • Sangha Nam, Minho Lee, Donghwan Kim, Kijong Han, Kuntae Kim, Sooji Yoon, Eun-Kyung Kim, Key-Sun Choi
Information extraction from unstructured texts plays a vital role in the field of natural language processing.