1 code implementation • 6 Dec 2024 • Hanqing Zhu, Zhenyu Zhang, Wenyan Cong, Xi Liu, Sem Park, Vikas Chandra, Bo Long, David Z. Pan, Zhangyang Wang, Jinwon Lee
This memory burden necessitates using more or higher-end GPUs or reducing batch sizes, limiting training scalability and throughput.
no code implementations • 17 May 2023 • Hyeonggeun Yun, Younggeol Cho, Jinwon Lee, Arim Ha, Jihyeok Yun
Then, we design and implement the simulation models based on a conditional variational autoencoder (CVAE).
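The snippet does not give the network details; as a minimal sketch only, a conditional VAE in PyTorch might look as follows, with the layer sizes, condition dimension, and loss weighting being placeholder assumptions rather than the paper's architecture.

```python
# Minimal conditional VAE sketch (hypothetical sizes; not the paper's exact model).
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=64, c_dim=8, z_dim=16, h_dim=128):
        super().__init__()
        # Encoder q(z | x, c): the condition is concatenated to the input.
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x | z, c): the condition is concatenated to the latent code.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```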
no code implementations • 9 Jan 2023 • Seungeun Lim, Changmo Yeo, Fazhi He, Jinwon Lee, Duhwan Mun
Machining feature recognition tests were conducted for two test cases using the proposed method, and all machining features included in the test cases were successfully recognized.
4 code implementations • 20 Apr 2020 • Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, Nojun Kwak
To solve this problem, we propose LSQ+, a natural extension of LSQ, wherein we introduce a general asymmetric quantization scheme with trainable scale and offset parameters that can learn to accommodate the negative activations.
Ranked #18 on Quantization on ImageNet
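The quantization scheme described above, a trainable scale and offset so that negative activations can be represented, can be sketched as a fake-quantization module; the bit-width, initialization, and the omission of LSQ+'s gradient scaling are simplifying assumptions here, not the authors' implementation.

```python
# Sketch of asymmetric fake quantization with a learnable scale and offset,
# using a straight-through estimator for the rounding step. Bit-width and
# initial values are illustrative; LSQ+'s gradient scaling and initialization
# heuristics are omitted.
import torch
import torch.nn as nn

class AsymmetricFakeQuant(nn.Module):
    def __init__(self, n_bits=4, init_scale=0.1, init_offset=0.0):
        super().__init__()
        self.qmin, self.qmax = 0, 2 ** n_bits - 1
        self.scale = nn.Parameter(torch.tensor(init_scale))
        self.offset = nn.Parameter(torch.tensor(init_offset))

    def forward(self, x):
        # Shift by the learnable offset so negative activations map onto the
        # unsigned integer grid, clamp, round, then dequantize.
        q = (x - self.offset) / self.scale
        q = torch.clamp(q, self.qmin, self.qmax)
        q = q + (q.round() - q).detach()  # straight-through estimator for round()
        return q * self.scale + self.offset
```

Because both parameters receive gradients through the straight-through estimator, the quantization grid can shift during training to cover the negative range of the activations.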
no code implementations • 4 Mar 2020 • Byung Hoon Ahn, Jinwon Lee, Jamie Menjay Lin, Hsin-Pai Cheng, Jilei Hou, Hadi Esmaeilzadeh
To address this standing issue, we present a memory-aware compiler, dubbed SERENITY, that uses dynamic programming to find an operation schedule with an optimal memory footprint.
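As a loose illustration of the dynamic-programming idea only (not SERENITY's actual state encoding, cost model, or pruning), one can memoize over the set of already-executed operators of a toy graph and minimize the peak size of live tensors:

```python
# Hypothetical sketch: DP over the set of already-executed operators, minimizing
# the peak sum of live output-tensor sizes. This only shows the DP shape; it
# ignores memory held by an operator's inputs during its own execution.
from functools import lru_cache

# Toy graph: op -> (size of its output tensor, producer ops it consumes)
GRAPH = {
    "a": (4, ()),
    "b": (2, ("a",)),
    "c": (3, ("a",)),
    "d": (1, ("b", "c")),
}
CONSUMERS = {op: [o for o, (_, deps) in GRAPH.items() if op in deps] for op in GRAPH}

def live_memory(done):
    # An output stays live until every one of its consumers has executed.
    return sum(
        size for op, (size, _) in GRAPH.items()
        if op in done and any(c not in done for c in CONSUMERS[op])
    )

@lru_cache(maxsize=None)
def best_peak(done=frozenset()):
    if len(done) == len(GRAPH):
        return 0
    best = float("inf")
    for op, (_, deps) in GRAPH.items():
        if op not in done and all(d in done for d in deps):
            nxt = done | {op}
            peak = max(live_memory(nxt), best_peak(nxt))
            best = min(best, peak)
    return best

print(best_peak())  # minimal achievable peak memory for the toy graph
```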
no code implementations • 28 Feb 2020 • Kambiz Azarian, Yash Bhalgat, Jinwon Lee, Tijmen Blankevoort
This is in contrast to other methods that search for per-layer thresholds via a computationally intensive iterative pruning and fine-tuning process.
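The alternative implied here, a per-layer threshold learned jointly with the weights through a soft magnitude mask, can be sketched as below; the sigmoid parameterization and temperature are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: a per-layer pruning threshold trained together with the weights
# via a soft (sigmoid) magnitude mask, instead of an iterative prune/fine-tune
# threshold search. Temperature and initial threshold are illustrative.
import torch
import torch.nn as nn

class SoftThresholdLinear(nn.Module):
    def __init__(self, in_features, out_features, temperature=100.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.threshold = nn.Parameter(torch.tensor(1e-2))  # learned per layer
        self.temperature = temperature

    def forward(self, x):
        w = self.linear.weight
        # Soft mask: ~1 for |w| above the layer threshold, ~0 below; it is
        # differentiable w.r.t. both the weights and the threshold.
        mask = torch.sigmoid(self.temperature * (w.abs() - self.threshold))
        return nn.functional.linear(x, w * mask, self.linear.bias)
```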
no code implementations • 28 Nov 2019 • Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, Nojun Kwak
First, the Self-studying (SS) phase fine-tunes a quantized low-precision student network without KD to obtain a good initialization.
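A rough sketch of such a self-studying warm-up, with the quantized-model wrapper, optimizer, and hyperparameters as placeholders rather than the paper's setup:

```python
# Hedged sketch of a "self-studying" warm-up: the quantized low-precision student
# is fine-tuned on the task loss alone (no distillation term) to obtain a good
# initialization before a teacher is introduced. All names are placeholders.
import torch

def self_study(quantized_student, train_loader, epochs=1, lr=1e-4, device="cpu"):
    quantized_student.to(device).train()
    opt = torch.optim.SGD(quantized_student.parameters(), lr=lr, momentum=0.9)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            loss = ce(quantized_student(x), y)  # task loss only, no KD term
            opt.zero_grad()
            loss.backward()
            opt.step()
    return quantized_student
```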
no code implementations • 17 Jan 2019 • Jay H. Park, Sunghwan Kim, Jinwon Lee, Myeongjae Jeon, Sam H. Noh
Through an analysis of the characteristics of CNNs, we find that layer placement can be done effectively.
Distributed, Parallel, and Cluster Computing
no code implementations • 14 Nov 2018 • Yang Yang, Anusha Lalitha, Jinwon Lee, Chris Lott
For a given grammar set, a set of potential grammar expressions (candidate set) for augmentation is constructed from an AM-specific statistical pronunciation dictionary that captures the consistent patterns and errors in AM decoding induced by variations in pronunciation, pitch, tempo, accent, ambiguous spellings, and noise conditions.
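As a loose, hypothetical illustration only (the dictionary format, scoring, and cutoff are assumptions, not the paper's), constructing such a candidate set could amount to keeping the AM decodings most frequently observed for each grammar expression:

```python
# Hedged sketch: build a candidate set of grammar expressions from a statistical
# pronunciation dictionary that maps each reference expression to the decodings
# the acoustic model (AM) actually produced, with observation counts.
from collections import Counter

def build_candidate_set(grammar_set, am_decodings, top_k=3):
    """grammar_set: iterable of reference expressions.
    am_decodings: dict mapping each reference expression to a Counter of
    decoded variants (reflecting pronunciation, accent, and noise effects)."""
    candidates = {}
    for expr in grammar_set:
        variants = am_decodings.get(expr, Counter())
        # Keep the most consistently observed decodings as augmentation candidates.
        candidates[expr] = [v for v, _ in variants.most_common(top_k)]
    return candidates

example = build_candidate_set(
    ["turn on the lights"],
    {"turn on the lights": Counter({"turn on the light": 7, "turn of the lights": 2})},
)
```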
6 code implementations • 22 Sep 2016 • Se Rim Park, Jinwon Lee
In hearing aids, the presence of babble noise greatly degrades the intelligibility of human speech.