1 code implementation • 25 Oct 2022 • Jaehee Jang, Heonseok Ha, Dahuin Jung, Sungroh Yoon
While the existing methods require the collection of auxiliary data or model weights to generate a counterpart, FedClassAvg only requires clients to communicate a couple of fully connected layers, which is highly communication-efficient.
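The communication pattern described above — clients with heterogeneous models sharing only their classifier heads (fully connected layers), which the server averages — can be sketched as follows. This is a minimal illustration under assumed parameter names (`fc.weight`, `fc.bias`), not the authors' implementation:

```python
import numpy as np

def average_heads(client_heads):
    """Average classifier-head parameters elementwise across clients.

    Only these small fully connected layers are communicated;
    each client's feature extractor stays local.
    """
    return {name: np.mean([head[name] for head in client_heads], axis=0)
            for name in client_heads[0]}

# Three hypothetical clients, each holding a classifier head that maps
# 64-dimensional features to 10 classes (names are illustrative).
rng = np.random.default_rng(0)
clients = [{"fc.weight": rng.normal(size=(10, 64)),
            "fc.bias": rng.normal(size=10)} for _ in range(3)]

global_head = average_heads(clients)
print(global_head["fc.weight"].shape)  # (10, 64)
```

Because only the head parameters (here 10×64 + 10 values per client) cross the network instead of full model weights, the per-round communication cost is small and independent of each client's backbone architecture.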
no code implementations • 23 Oct 2021 • Byunggook Na, Jaehee Jang, Seongsik Park, Seijoon Kim, Joonoo Kim, Moon Sik Jeong, Kwang Choon Kim, Seon Heo, Yoonsang Kim, Sungroh Yoon
We implemented large-batch synchronous training of DNNs based on Caffe, a deep learning library.
no code implementations • 31 Jul 2018 • Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
Furthermore, the privacy of the data involved in model training is also threatened by attacks such as the model-inversion attack, or by dishonest service providers of AI applications.
no code implementations • 21 May 2018 • Seongsik Park, Jaehee Jang, Seijoon Kim, Sungroh Yoon
Memory-augmented neural networks (MANNs) are designed for question-answering tasks.
no code implementations • 28 Nov 2017 • Jaehee Jang, Byunggook Na, Sungroh Yoon
Distributed training of deep neural networks has received significant research interest, and its major approaches include implementations on multiple GPUs and clusters.
no code implementations • 26 Feb 2016 • Hanjoo Kim, Jae-hong Park, Jaehee Jang, Sungroh Yoon
The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training.