no code implementations • 29 Jul 2024 • Kanghyun Choi, Hye Yoon Lee, Dain Kwon, Sunjong Park, Kyuyeun Kim, Noseong Park, Jinho Lee
Data-free quantization (DFQ) is a technique that creates a lightweight network from its full-precision counterpart without the original training data, often through a synthetic dataset.
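A minimal sketch of the common DFQ recipe referenced here (illustrative, not this paper's specific method): synthetic inputs are optimized so that their BatchNorm statistics match those stored in the full-precision network, and the resulting batch serves as a stand-in training set for the quantized copy.

```python
import torch
import torch.nn as nn

def synthesize_batch(fp_model: nn.Module, shape=(32, 3, 32, 32), steps=200):
    """Optimize noise so its BatchNorm statistics match the trained model's."""
    fp_model.eval()
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=0.1)
    bns = [m for m in fp_model.modules() if isinstance(m, nn.BatchNorm2d)]
    feats = []
    hooks = [m.register_forward_hook(lambda _m, inp, _o: feats.append(inp[0]))
             for m in bns]
    for _ in range(steps):
        feats.clear()
        fp_model(x)
        # match per-channel mean/variance of each BN layer's input
        loss = sum((f.mean(dim=(0, 2, 3)) - bn.running_mean).pow(2).sum()
                   + (f.var(dim=(0, 2, 3)) - bn.running_var).pow(2).sum()
                   for f, bn in zip(feats, bns))
        opt.zero_grad(); loss.backward(); opt.step()
    for h in hooks:
        h.remove()
    return x.detach()
```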
no code implementations • 18 Jul 2024 • Donghee Choi, Jinkyu Kim, Mogan Gim, Jinho Lee, Jaewoo Kang
To integrate the forecasting model into a deep reinforcement learning-driven portfolio selection framework, we introduced a two-step strategy: first, pre-training the time-series model on market data, followed by fine-tuning the portfolio selection architecture using this model.
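A minimal sketch of the two-step strategy, with illustrative module names (`Forecaster` and `PortfolioPolicy` are not from the paper): the forecaster is pre-trained on market data, then reused as the feature extractor of the portfolio-selection head.

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    """Step 1: pre-trained on market data with a next-step return loss."""
    def __init__(self, n_assets, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_assets, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_assets)

    def forward(self, prices):                    # (batch, time, n_assets)
        feat, _ = self.rnn(prices)
        return self.head(feat[:, -1]), feat[:, -1]

class PortfolioPolicy(nn.Module):
    """Step 2: fine-tuned end-to-end with the RL portfolio objective."""
    def __init__(self, forecaster: Forecaster, hidden=64):
        super().__init__()
        self.forecaster = forecaster              # pre-trained weights reused
        self.alloc = nn.Linear(hidden, forecaster.head.out_features)

    def forward(self, prices):
        _, feat = self.forecaster(prices)
        return torch.softmax(self.alloc(feat), dim=-1)  # portfolio weights
```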
no code implementations • 21 Jun 2024 • Hyeyoon Lee, Kanghyun Choi, Dain Kwon, Sunjong Park, Mayoore Selvarasa Jaiswal, Noseong Park, Jonghyun Choi, Jinho Lee
Recent advances in adversarial robustness rely on an abundant set of training data, where using external or additional datasets has become a common setting.
1 code implementation • 28 May 2024 • Jinkyu Yim, Jaeyong Song, Yerim Choi, Jaebeen Lee, Jaewon Jung, Hongsun Jang, Jinho Lee
In addition, they often fail to consider the per-GPU memory requirement, recommending solutions that cannot actually be executed.
1 code implementation • 11 Mar 2024 • Hongsun Jang, Jaeyong Song, Jaewon Jung, Jaeyoung Park, Youngsok Kim, Jinho Lee
Our work, Smart-Infinity, addresses the storage bandwidth bottleneck of storage-offloaded LLM training using near-storage processing devices on a real system.
1 code implementation • CVPR 2024 • Jaewon Jung, Hongsun Jang, Jaeyong Song, Jinho Lee
In this situation, adversarial distillation is a promising option that distills the robustness of a teacher network into a small student network.
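A hedged sketch of the generic adversarial-distillation step this refers to (the recipe is standard; the PGD inner loop and hyperparameters are illustrative): adversarial examples are crafted against the student, and the student is trained to match the robust teacher's outputs on them.

```python
import torch
import torch.nn.functional as F

def adv_distill_loss(teacher, student, x, y, eps=8/255, alpha=2/255, steps=10):
    teacher.eval()
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):                        # PGD attack on the student
        loss = F.cross_entropy(student(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    x_adv = (x + delta.detach()).clamp(0, 1)
    with torch.no_grad():
        t_prob = F.softmax(teacher(x_adv), dim=1)
    # the student mimics the robust teacher on adversarial inputs
    return F.kl_div(F.log_softmax(student(x_adv), dim=1), t_prob,
                    reduction='batchmean')
```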
1 code implementation • 12 Nov 2023 • Jaeyong Song, Hongsun Jang, Jaewon Jung, Youngsok Kim, Jinho Lee
Lastly, we introduce One-Hop Graph Masking, a computation and communication structure to realize the above methods in multi-server environments.
1 code implementation • 29 Jan 2023 • Hongsun Jang, Jaewon Jung, Jaeyong Song, Joonsang Yu, Youngsok Kim, Jinho Lee
However, this results in a high overhead of redundant teacher execution, low GPU utilization, and extra data loading.
1 code implementation • 25 Jan 2023 • Mingi Yoo, Jaeyong Song, Jounghoo Lee, Namhyung Kim, Youngsok Kim, Jinho Lee
A GCN takes as input an arbitrarily structured graph and executes a series of layers which exploit the graph's structure to calculate their output features.
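For reference, a minimal sketch of one such layer in its standard (Kipf & Welling) form: features are aggregated over the symmetrically normalized adjacency, then linearly transformed.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, h):
        # adj: (N, N) adjacency with self-loops; h: (N, in_dim) node features
        deg = adj.sum(dim=1)
        norm = deg.clamp(min=1).rsqrt()
        a_hat = norm[:, None] * adj * norm[None, :]   # D^-1/2 (A+I) D^-1/2
        return torch.relu(self.lin(a_hat @ h))        # aggregate, then transform
```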
no code implementations • 24 Jan 2023 • Mingi Yoo, Jaeyong Song, Hyeyoon Lee, Jounghoo Lee, Namhyung Kim, Youngsok Kim, Jinho Lee
Graph convolutional networks (GCNs) are becoming increasingly popular as they can process a wide variety of data formats that prior deep neural networks cannot easily support.
no code implementations • 24 Jan 2023 • Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, Hyung-Jin Kim, Youngsok Kim, Jinho Lee
Compressing the communication is one way to mitigate the overhead by reducing the inter-node traffic volume; however, existing compression techniques have critical limitations when applied to NLP models with 3D parallelism: 1) they target only the data-parallelism traffic, and 2) they already harm the model quality too much.
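A minimal sketch of the kind of scheme limitation 1) points at: top-k sparsification with error feedback, applied only to data-parallel gradient traffic (illustrative, not this paper's method).

```python
import torch

def topk_compress(grad: torch.Tensor, residual: torch.Tensor, ratio=0.01):
    """Keep the top-k gradient entries; accumulate the rest as error feedback."""
    acc = (grad + residual).flatten()
    k = max(1, int(acc.numel() * ratio))
    _, idx = acc.abs().topk(k)
    sparse = torch.zeros_like(acc)
    sparse[idx] = acc[idx]                        # only these values are sent
    new_residual = (acc - sparse).view_as(grad)   # carried to the next step
    return sparse.view_as(grad), new_residual
```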
no code implementations • 23 Jan 2023 • Deokki Hong, Kanghyun Choi, Hye Yoon Lee, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee
Co-exploration of an optimal neural architecture and its hardware accelerator is an approach of rising interest that addresses the computational cost problem, especially in low-profile systems.
no code implementations • 4 Jul 2022 • Jinho Lee, Sungwoo Park, Jungyu Ahn, Jonghun Kwak
Therefore, we train our neural networks on individual-stock data to predict the future performance of individual stocks, and use these predictions together with the portfolio deposit file (PDF) to construct a portfolio of ETFs.
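A minimal sketch of the stock-to-ETF mapping (function and variable names are illustrative, and the aggregation rule is an assumption): per-stock predictions are combined into per-ETF scores through the PDF holding weights, and a simple top-k rule forms the portfolio.

```python
import numpy as np

def etf_scores(stock_scores: np.ndarray, pdf_weights: np.ndarray) -> np.ndarray:
    # stock_scores: (n_stocks,) predicted performance per stock
    # pdf_weights:  (n_etfs, n_stocks) weight of each stock in each ETF's PDF
    return pdf_weights @ stock_scores

def top_k_portfolio(scores: np.ndarray, k: int = 5) -> np.ndarray:
    w = np.zeros_like(scores)
    w[np.argsort(scores)[-k:]] = 1.0 / k          # equal-weight the top-k ETFs
    return w
```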
no code implementations • 1 Jul 2022 • Jonghun Kwak, Jungyu Ahn, Jinho Lee, Sungwoo Park
The finance industry has adopted machine learning (ML) as a form of quantitative research to support better investment decisions, yet there are several challenges often overlooked in practice.
1 code implementation • CVPR 2022 • Kanghyun Choi, Hye Yoon Lee, Deokki Hong, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee
To deal with the performance drop induced by quantization errors, a popular method is to use training data to fine-tune quantized networks.
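For context, the popular fine-tuning baseline mentioned here typically relies on fake quantization with a straight-through estimator; a minimal sketch of that baseline, not of this paper's method:

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Uniform symmetric fake quantization with a straight-through gradient."""
    @staticmethod
    def forward(ctx, w, n_bits=8):
        scale = w.abs().max() / (2 ** (n_bits - 1) - 1)
        return torch.round(w / scale) * scale
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None                     # STE: gradient passes through

# during fine-tuning, use FakeQuant.apply(layer.weight) in the forward pass
```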
2 code implementations • NeurIPS 2021 • Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee
We find that this is often insufficient to capture the distribution of the original data, especially around the decision boundaries.
Ranked #1 on Data Free Quantization on CIFAR-100
no code implementations • 29 Sep 2021 • Deokki Hong, Kanghyun Choi, Hye Yoon Lee, Joonsang Yu, Youngsok Kim, Noseong Park, Jinho Lee
To handle the hard constraint problem of differentiable co-exploration, we propose ConCoDE, which searches for hard-constrained solutions without compromising the global design objectives.
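A hedged sketch of one way to fold a hard constraint (e.g., a latency budget) into a differentiable search objective; the penalty form is illustrative and not necessarily ConCoDE's exact formulation.

```python
import torch

def constrained_objective(task_loss: torch.Tensor, latency: torch.Tensor,
                          budget: float, penalty_weight: float = 10.0):
    violation = torch.relu(latency - budget)      # zero once the budget is met
    return task_loss + penalty_weight * violation
```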
no code implementations • 18 Aug 2021 • Zhu Baozhou, Peter Hofstee, Jinho Lee, Zaid Al-Ars
To solve the two problems together, we first propose an attention module for convolutional neural networks by developing an AW-convolution, where the shape of the attention maps matches that of the weights rather than the activations.
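A minimal sketch of the AW-convolution idea as stated (the attention generator here is illustrative): the attention map has the shape of the convolution weights and modulates the kernel itself rather than the activations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AWConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        # produces one attention value per weight element
        self.attn = nn.Linear(in_ch, out_ch * in_ch * k * k)

    def forward(self, x):
        ctx = x.mean(dim=(0, 2, 3))                    # global channel context
        a = torch.sigmoid(self.attn(ctx)).view_as(self.weight)
        return F.conv2d(x, self.weight * a, padding=1)  # attended weights
```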
1 code implementation • 25 May 2021 • Baozhou Zhu, Peter Hofstee, Johan Peltenburg, Jinho Lee, Zaid Al-Ars
Thus, a common approach is to compute a reconstructed training dataset before compression.
no code implementations • 15 Feb 2021 • Heesu Kim, Hanmin Park, Taehyun Kim, Kwanheum Cho, Eojin Lee, Soojung Ryu, Hyuk-Jae Lee, Kiyoung Choi, Jinho Lee
In this paper, we present GradPIM, a processing-in-memory architecture that accelerates the parameter updates of deep neural network training.
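The updates GradPIM targets are the memory-bound optimizer steps; in software terms, the per-tensor work to be executed near memory looks like plain SGD with momentum (a sketch of the kind of operation, not of the paper's hardware; the optimizer choice is illustrative):

```python
import torch

@torch.no_grad()
def sgd_momentum_update(param, grad, velocity, lr=0.1, momentum=0.9):
    velocity.mul_(momentum).add_(grad)   # v = m*v + g   (read-modify-write)
    param.add_(velocity, alpha=-lr)      # p = p - lr*v  (streaming, low reuse)
```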
no code implementations • 25 Jan 2021 • Ze-Bin Wu, Daniel Putzky, Asish K. Kundu, Hui Li, Shize Yang, Zengyi Du, Sang Hyun Joo, Jinho Lee, Yimei Zhu, Gennady Logvenov, Bernhard Keimer, Kazuhiro Fujita, Tonica Valla, Ivan Bozovic, Ilya K. Drozdov
This indicates that the pseudogap and superconductivity are of different origins.
Superconductivity • Materials Science
1 code implementation • 2 Oct 2020 • Sunghyeon Kim, Hyeyoon Lee, Sunjong Park, Jinho Lee, Keunwoo Choi
In this study, we train deep neural networks to classify composers in the symbolic domain.
no code implementations • 14 Sep 2020 • Kanghyun Choi, Deokki Hong, Hojae Yoon, Joonsang Yu, Youngsok Kim, Jinho Lee
In such circumstances, this work presents DANCE, a differentiable approach towards the co-exploration of the hardware accelerator and network architecture design.
no code implementations • 11 Sep 2020 • Zhu Baozhou, Peter Hofstee, Jinho Lee, Zaid Al-Ars
Inspired by the shortcuts and fractal architectures, we propose two Shortcut-based Fractal Architectures (SoFAr) specifically designed for BCNNs: 1. residual connection-based fractal architectures for binary ResNet, and 2. dense connection-based fractal architectures for binary DenseNet.
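A hedged sketch of the first ingredient, a residual shortcut in a binary CNN (the sign binarization and clipped STE are the standard choices, not necessarily SoFAr's exact block): weights and activations pass through a sign function while the shortcut stays full-precision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarySign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)
    @staticmethod
    def backward(ctx, g):
        x, = ctx.saved_tensors
        return g * (x.abs() <= 1).float()         # clipped straight-through

class BinaryResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(ch)

    def forward(self, x):
        xb = BinarySign.apply(x)
        wb = BinarySign.apply(self.conv.weight)
        return x + self.bn(F.conv2d(xb, wb, padding=1))  # FP shortcut
```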
no code implementations • 10 Jul 2020 • Jinho Lee, Raehyun Kim, Seok-Won Yi, Jaewoo Kang
Generating an investment strategy using advanced deep learning methods in stock markets has recently been a topic of interest.
no code implementations • 14 Jan 2020 • Inseok Hwang, Jinho Lee, Frank Liu, Minsik Cho
Our intuition is that the more an unknown data sample resembles the known data that an autoencoder was trained on, the better the chance that the autoencoder applies its trained knowledge and reconstructs an output closer to the original.
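A minimal sketch of that intuition (model and names are illustrative): an autoencoder trained on known data scores an unknown sample by how well it can reconstruct it.

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, dim, code=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, code), nn.ReLU())
        self.dec = nn.Linear(code, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

@torch.no_grad()
def similarity_score(ae: TinyAE, x: torch.Tensor) -> torch.Tensor:
    # lower reconstruction error = more similar to the AE's training data
    return -((ae(x) - x) ** 2).mean(dim=1)
```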
no code implementations • 15 Oct 2019 • Mayoore S. Jaiswal, Bumboo Kang, Jinho Lee, Minsik Cho
Target encoding is an effective technique to deliver better performance for conventional machine learning methods, and recently, for deep neural networks as well.
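For reference, a minimal sketch of the standard smoothed target-encoding recipe: each categorical level is replaced by its target mean, shrunk toward the global mean for rare levels.

```python
import pandas as pd

def target_encode(train: pd.DataFrame, col: str, target: str, smoothing=10.0):
    global_mean = train[target].mean()
    stats = train.groupby(col)[target].agg(['mean', 'count'])
    shrink = stats['count'] / (stats['count'] + smoothing)
    enc = shrink * stats['mean'] + (1 - shrink) * global_mean
    return train[col].map(enc).fillna(global_mean)
```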
1 code implementation • 28 Feb 2019 • Jinho Lee, Raehyun Kim, Yookyung Koh, Jaewoo Kang
Moreover, the results show that future stock prices can be predicted even if the training and testing procedures are done in different countries.
no code implementations • 25 Dec 2016 • Jinho Lee, Brian Kenji Iwana, Shouta Ide, Seiichi Uchida
Thus, we propose a new and robust tracking method using a Fully Convolutional Network (FCN) to obtain an object probability map and Dynamic Programming (DP) to seek the globally optimal path through all frames of video.
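A minimal sketch of the DP step (positions are simplified to one dimension for brevity; the actual tracker works on 2D probability maps): given per-frame object probabilities from the FCN, find the globally optimal path under a per-frame motion bound.

```python
import numpy as np

def optimal_path(prob_maps: np.ndarray, max_move: int = 1):
    # prob_maps: (T, N) object probability over N candidate positions per frame
    T, N = prob_maps.shape
    score = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    score[0] = np.log(prob_maps[0] + 1e-9)
    for t in range(1, T):
        for j in range(N):
            lo, hi = max(0, j - max_move), min(N, j + max_move + 1)
            k = lo + int(np.argmax(score[t - 1, lo:hi]))
            back[t, j] = k
            score[t, j] = score[t - 1, k] + np.log(prob_maps[t, j] + 1e-9)
    path = [int(np.argmax(score[-1]))]            # backtrack the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```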