Search Results for author: Donghwan Kim

Found 8 papers, 1 paper with code

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

no code implementations • 7 Feb 2024 • Ahmet Alacaoglu, Donghwan Kim, Stephen J. Wright

With a simple argument, we obtain optimal or best-known complexity guarantees with cohypomonotonicity or weak MVI conditions for $\rho < \frac{1}{L}$.
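For context, and not quoted from the paper (the constants in these conditions vary by convention): an operator $F$ is $\rho$-cohypomonotone if $\langle F(x) - F(y), x - y \rangle \ge -\rho \|F(x) - F(y)\|^2$ for all $x, y$, while the weak Minty variational inequality (weak MVI) condition asks for a point $x^*$ with $\langle F(x), x - x^* \rangle \ge -\rho \|F(x)\|^2$ for all $x$; the guarantee above concerns the regime $\rho < \frac{1}{L}$, with $L$ the Lipschitz constant of $F$.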

NeuJeans: Private Neural Network Inference with Joint Optimization of Convolution and Bootstrapping

no code implementations • 7 Dec 2023 • Jae Hyung Ju, Jaiyoung Park, Jongmin Kim, Donghwan Kim, Jung Ho Ahn

NeuJeans accelerates conv2d by up to 5.68 times compared to state-of-the-art FHE-based PI work, and performs PI of a CNN at the scale of ImageNet (ResNet18) within a few seconds.

HyPHEN: A Hybrid Packing Method and Optimizations for Homomorphic Encryption-Based Neural Networks

no code implementations • 5 Feb 2023 • Donghwan Kim, Jaiyoung Park, Jongmin Kim, Sangpyo Kim, Jung Ho Ahn

Convolutional neural network (CNN) inference using fully homomorphic encryption (FHE) is a promising private inference (PI) solution, because FHE enables the entire computation to be offloaded to the server while protecting the privacy of sensitive user data.
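A minimal sketch of the PI flow described above, using the TenSEAL library with the CKKS scheme (an illustrative choice, not the paper's implementation, which builds on full RNS-CKKS FHE with its own packing and bootstrapping):

    # Minimal sketch of the FHE private-inference flow: the client encrypts its
    # input, the server computes on ciphertexts only, and the client decrypts.
    # TenSEAL/CKKS is used here purely for illustration.
    import tenseal as ts

    # Client side: create encryption keys and encrypt the sensitive input.
    context = ts.context(ts.SCHEME_TYPE.CKKS,
                         poly_modulus_degree=8192,
                         coeff_mod_bit_sizes=[60, 40, 40, 60])
    context.global_scale = 2 ** 40
    context.generate_galois_keys()
    enc_input = ts.ckks_vector(context, [0.1, 0.2, 0.3, 0.4])

    # Server side: evaluates a plaintext-weight linear layer on the ciphertext.
    weights = [0.5, -1.0, 0.25, 2.0]
    enc_output = enc_input.dot(weights)

    # Client side: decrypts the result; the server never sees the raw data.
    print(enc_output.decrypt())  # approximately [0.725]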

Hierarchical Text Classification As Sub-Hierarchy Sequence Generation

no code implementations • 22 Nov 2021 • SangHun Im, Gibaeg Kim, Heung-Seon Oh, Seongung Jo, Donghwan Kim

Consequently, these models become difficult to implement for large-scale hierarchies, because the model structure, and hence the number of parameters, grows with the hierarchy size.

Multi-Label Classification, Text Classification +1
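As a rough illustration of the "sub-hierarchy sequence generation" framing in the title above, the toy sketch below linearizes a label sub-hierarchy into a flat token sequence via depth-first traversal; the special tokens and the exact linearization used by the paper are assumptions here and may differ.

    # Toy illustration (not the paper's exact scheme): flatten a label
    # sub-hierarchy into a token sequence so that hierarchical text
    # classification can be treated as sequence generation.
    def linearize(hierarchy, root):
        """hierarchy: dict mapping a label to its child labels."""
        tokens = [root]
        for child in hierarchy.get(root, []):
            tokens += ["<down>"] + linearize(hierarchy, child) + ["<up>"]
        return tokens

    sub_hierarchy = {"Science": ["Physics", "Biology"], "Physics": ["Optics"]}
    print(linearize(sub_hierarchy, "Science"))
    # ['Science', '<down>', 'Physics', '<down>', 'Optics', '<up>', '<up>', '<down>', 'Biology', '<up>']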

PAM: Point-wise Attention Module for 6D Object Pose Estimation

no code implementations • 12 Aug 2020 • Myoungha Song, Jeongho Lee, Donghwan Kim

GAP is designed to attend to the important parts of the geometric information, while CAP is designed to attend to the important parts of the channel information.

6D Pose Estimation, 6D Pose Estimation using RGB +3
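A generic illustration of a channel-attention path over per-point features (a squeeze-and-excitation-style sketch, assumed for illustration; the paper's GAP/CAP design is not detailed in this excerpt and may differ):

    # Generic channel attention over per-point features: pool over points,
    # predict per-channel weights, and reweight the feature channels.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_points, channels) per-point features
            weights = self.fc(x.mean(dim=1))   # pool over points -> (batch, channels)
            return x * weights.unsqueeze(1)    # reweight each channel

    features = torch.randn(2, 1024, 64)        # 2 point clouds, 1024 points, 64 channels
    attended = ChannelAttention(64)(features)
    print(attended.shape)                      # torch.Size([2, 1024, 64])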

Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs

1 code implementation • ACL 2020 • Dong Bok Lee, Seanie Lee, Woo Tae Jeong, Donghwan Kim, Sung Ju Hwang

We validate our Information Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE) on several benchmark datasets by evaluating the performance of the QA model (BERT-base) using only the generated QA pairs (QA-based evaluation) or by using both the generated and human-labeled pairs (semi-supervised learning) for training, against state-of-the-art baseline models.

Question-Answer-Generation, Question Answering +1
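To illustrate the "QA-based evaluation" mentioned above: a QA model is trained only on the generated pairs and its predictions are then scored with the usual exact-match/F1 metrics. The snippet below sketches a simplified SQuAD-style token-level F1 (a common convention, assumed here rather than taken from the paper):

    # Simplified SQuAD-style answer scoring: normalize, then token-level EM/F1.
    import re
    import string
    from collections import Counter

    def normalize(text):
        text = text.lower()
        text = "".join(ch for ch in text if ch not in string.punctuation)
        text = re.sub(r"\b(a|an|the)\b", " ", text)
        return " ".join(text.split())

    def exact_match(pred, gold):
        return float(normalize(pred) == normalize(gold))

    def f1(pred, gold):
        p, g = normalize(pred).split(), normalize(gold).split()
        common = Counter(p) & Counter(g)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(p), overlap / len(g)
        return 2 * precision * recall / (precision + recall)

    print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1.0 after normalization
    print(f1("the Eiffel Tower", "Eiffel Tower"))           # 1.0 after normalization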
