no code implementations • 4 Jul 2022 • Jinho Lee, Sungwoo Park, Jungyu Ahn, Jonghun Kwak
Therefore, we train our neural networks on data from individual stocks to predict the future performance of individual stocks, and we combine these predictions with the portfolio deposit file (PDF) to construct a portfolio of ETFs.
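To make the pipeline above concrete, here is a minimal sketch of rolling stock-level predictions up to ETF scores through PDF constituent weights. The tickers, weights, and long-only allocation rule are illustrative assumptions, not from the paper:

```python
import numpy as np

# Hypothetical predicted next-period returns for individual stocks
# (in practice these come from the trained neural networks).
stock_scores = {"AAA": 0.021, "BBB": -0.004, "CCC": 0.013}

# Hypothetical portfolio deposit files: constituent weights per ETF.
pdf = {
    "ETF1": {"AAA": 0.6, "BBB": 0.4},
    "ETF2": {"BBB": 0.5, "CCC": 0.5},
}

# Roll stock-level predictions up to ETF-level scores.
etf_scores = {
    etf: sum(w * stock_scores[s] for s, w in holdings.items())
    for etf, holdings in pdf.items()
}

# Illustrative allocation: long-only weights proportional to positive scores.
positive = {e: max(s, 0.0) for e, s in etf_scores.items()}
total = sum(positive.values()) or 1.0
weights = {e: s / total for e, s in positive.items()}
print(etf_scores, weights)
```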
no code implementations • 1 Jul 2022 • Jonghun Kwak, Jungyu Ahn, Jinho Lee, Sungwoo Park
The finance industry has adopted machine learning (ML) as a form of quantitative research to support better investment decisions, yet there are several challenges often overlooked in practice.
1 code implementation • CVPR 2022 • Kanghyun Choi, Hye Yoon Lee, Deokki Hong, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee
To deal with the performance drop induced by quantization errors, a popular method is to use training data to fine-tune quantized networks.
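For context, the most common form of such fine-tuning is quantization-aware training with a straight-through estimator (STE). The sketch below is a generic illustration with a toy model and 4-bit uniform fake quantization, not the paper's method:

```python
import torch
import torch.nn as nn

class FakeQuant(torch.autograd.Function):
    """Uniform fake quantization with a straight-through gradient."""
    @staticmethod
    def forward(ctx, w, n_bits):
        qmax = 2 ** (n_bits - 1) - 1
        scale = w.abs().max() / qmax + 1e-8
        return torch.clamp((w / scale).round(), -qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # straight-through; no gradient for n_bits

class QuantLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, FakeQuant.apply(self.weight, 4), self.bias)

# Fine-tune the quantized model on (real) training data to recover accuracy.
model = nn.Sequential(QuantLinear(16, 32), nn.ReLU(), QuantLinear(32, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 16), torch.randint(0, 10, (64,))
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```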
2 code implementations • NeurIPS 2021 • Kanghyun Choi, Deokki Hong, Noseong Park, Youngsok Kim, Jinho Lee
We find that this is often insufficient to capture the distribution of the original data, especially around the decision boundaries.
Ranked #1 on Data Free Quantization on CIFAR10
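A minimal sketch of the boundary-sample idea from the entry above: interpolating class embeddings before a generator so that synthetic samples fall between classes. The generator, embedding sizes, and mixing rule here are illustrative assumptions, not the paper's exact construction:

```python
import torch
import torch.nn as nn

n_classes, z_dim = 10, 64
class_emb = nn.Embedding(n_classes, z_dim)   # one latent anchor per class
generator = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 32 * 32), nn.Tanh())

def boundary_batch(a, b, batch=32):
    """Generate samples whose latents sit between classes a and b."""
    lam = torch.rand(batch, 1)               # random mixing coefficients
    za = class_emb(torch.full((batch,), a))
    zb = class_emb(torch.full((batch,), b))
    z = lam * za + (1 - lam) * zb            # superposed latent embedding
    return generator(z).view(batch, 3, 32, 32)

x = boundary_batch(3, 7)   # synthetic images near the 3-vs-7 boundary
print(x.shape)
```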
no code implementations • 29 Sep 2021 • Deokki Hong, Kanghyun Choi, Hye Yoon Lee, Joonsang Yu, Youngsok Kim, Noseong Park, Jinho Lee
To handle the hard-constraint problem of differentiable co-exploration, we propose ConCoDE, which searches for solutions that satisfy hard constraints without compromising the global design objectives.
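One standard way to keep a differentiable search feasible under a hard constraint is a penalty on the constraint violation. The toy objective, latency model, and penalty weight below are assumptions for illustration, not ConCoDE itself:

```python
import torch

# Toy co-design variable: alpha trades task quality against latency.
alpha = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([alpha], lr=0.05)
LATENCY_BUDGET = 1.2          # hard constraint: latency(alpha) <= budget

def task_loss(a):             # stand-in for the global design objective
    return (a - 2.0) ** 2     # unconstrained optimum (a = 2) is infeasible

def latency(a):               # stand-in for a differentiable latency model
    return 0.5 + a

for _ in range(400):
    opt.zero_grad()
    violation = torch.relu(latency(alpha) - LATENCY_BUDGET)   # 0 when feasible
    loss = task_loss(alpha) + 100.0 * violation ** 2          # penalty method
    loss.backward()
    opt.step()

# alpha settles near the largest feasible value (latency ~ budget).
print(float(alpha), float(latency(alpha)))
```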
no code implementations • 18 Aug 2021 • Baozhou Zhu, Peter Hofstee, Jinho Lee, Zaid Al-Ars
To solve the two problems together, we first propose an attention module for convolutional neural networks built on an AW-convolution, in which the shape of the attention maps matches that of the weights rather than that of the activations.
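A hypothetical PyTorch sketch of the weight-shaped-attention idea: the gate that produces the attention tensor, and conditioning it on pooled input statistics, are illustrative choices here, not the paper's AW-convolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AWConv2d(nn.Module):
    """Sketch of attention over weights: the attention tensor has the
    same shape as the kernel, not the activation map."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.out_ch, self.in_ch, self.k = out_ch, in_ch, k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
        # Maps pooled input statistics to one attention value per weight.
        self.gate = nn.Linear(in_ch, out_ch * in_ch * k * k)

    def forward(self, x):
        stats = x.mean(dim=(0, 2, 3))              # per-channel input statistics
        attn = torch.sigmoid(self.gate(stats))     # weight-shaped attention
        attn = attn.view(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(x, self.weight * attn, padding=self.k // 2)

x = torch.randn(2, 8, 16, 16)
print(AWConv2d(8, 16, 3)(x).shape)   # torch.Size([2, 16, 16, 16])
```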
1 code implementation • 25 May 2021 • Baozhou Zhu, Peter Hofstee, Johan Peltenburg, Jinho Lee, Zaid Al-Ars
Thus, a common approach is to compute a reconstructed training dataset before compression.
no code implementations • 15 Feb 2021 • Heesu Kim, Hanmin Park, Taehyun Kim, Kwanheum Cho, Eojin Lee, Soojung Ryu, Hyuk-Jae Lee, Kiyoung Choi, Jinho Lee
In this paper, we present GradPIM, a processing-in-memory architecture which accelerates parameter updates of deep neural networks training.
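The updates in question are memory-bound: an optimizer step streams large weight, gradient, and momentum arrays through DRAM with little arithmetic per byte, which is what makes processing-in-memory attractive for them. A plain numpy sketch of one such step (SGD with momentum is used as an assumed example of the kind of kernel being offloaded):

```python
import numpy as np

# One SGD-with-momentum update: three large arrays are read and two written,
# so the step is dominated by DRAM traffic rather than arithmetic --
# exactly the kind of work a processing-in-memory unit can absorb.
def sgd_momentum_update(w, g, v, lr=0.01, mu=0.9):
    v[:] = mu * v + g          # momentum update (read g, read/write v)
    w[:] -= lr * v             # weight update (read v, read/write w)

w = np.random.randn(10_000_000).astype(np.float32)  # ~40 MB of weights
g = np.random.randn(10_000_000).astype(np.float32)
v = np.zeros_like(w)
sgd_momentum_update(w, g, v)
```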
no code implementations • 25 Jan 2021 • Ze-Bin Wu, Daniel Putzky, Asish K. Kundu, Hui Li, Shize Yang, Zengyi Du, Sang Hyun Joo, Jinho Lee, Yimei Zhu, Gennady Logvenov, Bernhard Keimer, Kazuhiro Fujita, Tonica Valla, Ivan Bozovic, Ilya K. Drozdov
This indicates that the pseudogap and superconductivity are of different origins.
Superconductivity • Materials Science
1 code implementation • 2 Oct 2020 • Sunghyeon Kim, Hyeyoon Lee, Sunjong Park, Jinho Lee, Keunwoo Choi
In this study, we train deep neural networks to classify composers in the symbolic domain.
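Symbolic-domain classification usually means encoding scores (e.g., MIDI) as piano rolls and feeding them to a classifier. In the sketch below, random tensors stand in for real piano rolls and the tiny CNN is an illustrative stand-in, not the paper's model:

```python
import torch
import torch.nn as nn

# A piano roll is a (pitch x time) binary matrix; random data stands in
# for rolls extracted from MIDI files of different composers.
N_COMPOSERS, PITCHES, FRAMES = 5, 88, 256
x = (torch.rand(32, 1, PITCHES, FRAMES) > 0.95).float()
y = torch.randint(0, N_COMPOSERS, (32,))

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_COMPOSERS),
)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
print(float(loss))
```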
no code implementations • 14 Sep 2020 • Kanghyun Choi, Deokki Hong, Hojae Yoon, Joonsang Yu, Youngsok Kim, Jinho Lee
In such circumstances, this work presents DANCE, a differentiable approach towards the co-exploration of the hardware accelerator and network architecture design.
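The essence of differentiable co-exploration is making both task quality and hardware cost differentiable in the joint design parameters, so one optimizer can descend both at once. The proxies below are toy assumptions, not DANCE's estimators:

```python
import torch

# Continuous relaxations: theta parametrizes the network, h the accelerator.
theta = torch.tensor(0.3, requires_grad=True)   # e.g., width multiplier
h = torch.tensor(1.0, requires_grad=True)       # e.g., PE-array size

def accuracy_proxy(t):      # differentiable stand-in for validation accuracy
    return -(t - 1.5) ** 2

def hw_cost(t, hw):         # differentiable stand-in for latency/energy model
    return t / hw + 0.2 * hw

opt = torch.optim.Adam([theta, h], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = -accuracy_proxy(theta) + 0.5 * hw_cost(theta, h)
    loss.backward()
    opt.step()
print(float(theta), float(h))   # jointly optimized network and hardware
```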
no code implementations • 11 Sep 2020 • Baozhou Zhu, Peter Hofstee, Jinho Lee, Zaid Al-Ars
Inspired by shortcuts and fractal architectures, we propose two Shortcut-based Fractal Architectures (SoFAr) specifically designed for BCNNs: (1) residual-connection-based fractal architectures for binary ResNet, and (2) dense-connection-based fractal architectures for binary DenseNet.
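For intuition on why shortcuts help BCNNs: the real-valued skip path preserves information that binarization destroys. A minimal residual binary block with a clipped straight-through estimator, as an illustration rather than the SoFAr architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)
    @staticmethod
    def backward(ctx, g):
        (x,) = ctx.saved_tensors
        return g * (x.abs() <= 1).float()   # clipped straight-through gradient

class BinaryResBlock(nn.Module):
    """Binary conv with a real-valued residual shortcut."""
    def __init__(self, ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(ch, ch, 3, 3) * 0.05)
        self.bn = nn.BatchNorm2d(ch)
    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)     # binarized weights
        xb = BinarizeSTE.apply(x)               # binarized activations
        return x + self.bn(F.conv2d(xb, wb, padding=1))  # real-valued shortcut

print(BinaryResBlock(8)(torch.randn(2, 8, 16, 16)).shape)
```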
no code implementations • 10 Jul 2020 • Jinho Lee, Raehyun Kim, Seok-Won Yi, Jaewoo Kang
Generating investment strategies for stock markets with advanced deep learning methods has recently been a topic of interest.
no code implementations • 14 Jan 2020 • Inseok Hwang, Jinho Lee, Frank Liu, Minsik Cho
Our intuition is that the more similar the unknown data samples are to the portion of known data the autoencoder was trained on, the better the chance that the autoencoder can apply its learned knowledge and reconstruct outputs closer to the originals.
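A minimal sketch of that intuition, using reconstruction error as the similarity signal; the toy autoencoder, data, and scoring rule are illustrative assumptions:

```python
import torch
import torch.nn as nn

# A toy autoencoder; in practice this is trained on the known data.
ae = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
known = torch.randn(256, 32)
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(known), known)
    loss.backward()
    opt.step()

def similarity_score(x):
    """Lower reconstruction error => the sample is closer to the
    distribution the autoencoder was trained on."""
    with torch.no_grad():
        return -nn.functional.mse_loss(ae(x), x, reduction="none").mean(dim=1)

in_dist = known[:4] + 0.01 * torch.randn(4, 32)   # near the known data
out_dist = 5.0 * torch.randn(4, 32)               # far from the known data
print(similarity_score(in_dist), similarity_score(out_dist))
```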
no code implementations • 15 Oct 2019 • Mayoore S. Jaiswal, Bumboo Kang, Jinho Lee, Minsik Cho
Target encoding is an effective technique to deliver better performance for conventional machine learning methods, and recently, for deep neural networks as well.
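As a refresher, target encoding replaces each categorical value with a statistic of the target conditioned on that value, usually smoothed toward the global mean so rare categories do not overfit. A small pandas sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["a", "a", "b", "b", "b", "c"],
    "y":    [1,   0,   1,   1,   0,   1],
})

prior = df["y"].mean()                      # global target mean
stats = df.groupby("city")["y"].agg(["mean", "count"])

# Smoothed target encoding: rare categories shrink toward the prior.
m = 2.0  # smoothing strength (hyperparameter)
encoding = (stats["count"] * stats["mean"] + m * prior) / (stats["count"] + m)
df["city_te"] = df["city"].map(encoding)
print(df)
```

In practice the encoding is computed out-of-fold, so a row's own target does not leak into its feature.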
1 code implementation • 28 Feb 2019 • Jinho Lee, Raehyun Kim, Yookyung Koh, Jaewoo Kang
Moreover, the results show that future stock prices can be predicted even when the model is trained and tested on data from different countries.
no code implementations • 25 Dec 2016 • Jinho Lee, Brian Kenji Iwana, Shouta Ide, Seiichi Uchida
Thus, we propose a new, robust tracking method that uses a Fully Convolutional Network (FCN) to obtain an object probability map and Dynamic Programming (DP) to find the globally optimal path through all frames of a video.
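Given per-frame probability maps, the globally optimal path can be recovered with Viterbi-style dynamic programming. The sketch below uses 1-D positions and random maps in place of FCN outputs for brevity (real maps are 2-D):

```python
import numpy as np

rng = np.random.default_rng(0)
T, W = 20, 50                      # frames x candidate positions (1-D for brevity)
prob = rng.random((T, W))          # stand-in for FCN object probability maps
MAX_MOVE = 3                       # assumed motion constraint between frames

# DP over frames: score[t, x] = best cumulative log-prob of a path ending at x.
score = np.full((T, W), -np.inf)
back = np.zeros((T, W), dtype=int)
score[0] = np.log(prob[0] + 1e-9)
for t in range(1, T):
    for x in range(W):
        lo, hi = max(0, x - MAX_MOVE), min(W, x + MAX_MOVE + 1)
        prev = lo + int(np.argmax(score[t - 1, lo:hi]))
        back[t, x] = prev
        score[t, x] = score[t - 1, prev] + np.log(prob[t, x] + 1e-9)

# Backtrack the globally optimal path through all frames.
path = [int(np.argmax(score[-1]))]
for t in range(T - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
print(path)
```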