no code implementations • 29 Nov 2024 • Jungbin Cho, Junwan Kim, Jisoo Kim, Minseo Kim, Mingu Kang, Sungeun Hong, Tae-Hyun Oh, Youngjae Yu
To resolve this "discord" between discrete and continuous representations, we introduce DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding, a novel method that decodes discrete motion tokens into continuous motion through rectified flow.
Ranked #4 on Motion Synthesis on HumanML3D
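No official code is listed; as a rough, illustrative sketch of rectified-flow decoding (not the authors' implementation), a learned velocity field can be integrated from noise to a continuous motion sample with a few Euler steps. The `velocity_net` interface, conditioning scheme, and step count below are assumptions.

```python
import torch

@torch.no_grad()
def rectified_flow_decode(velocity_net, token_emb, num_steps=16):
    """Hypothetical sketch: integrate a learned velocity field from
    noise (t=0) to a continuous motion sample (t=1), conditioned on
    embeddings of the discrete motion tokens."""
    x = torch.randn_like(token_emb)            # start from Gaussian noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * dt)  # current flow time
        v = velocity_net(x, t, token_emb)      # predicted velocity dx/dt
        x = x + v * dt                         # forward Euler step
    return x                                   # decoded continuous motion
```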
no code implementations • 15 Nov 2024 • Priyansh Bhatnagar, Linfeng Wen, Mingu Kang
In this paper, we propose a novel compression methodology that dynamically determines the rank of each layer using a soft thresholding mechanism, which clips singular values of small magnitude in a differentiable form.
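A minimal sketch of the general idea, assuming the standard soft-thresholding operator on singular values (the paper's exact parameterization may differ):

```python
import torch

def soft_threshold_lowrank(W, tau):
    """Shrink singular values with a differentiable soft threshold,
    max(s - tau, 0), so small ones vanish and the effective rank of
    the layer's weight matrix W drops."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    S_shrunk = torch.relu(S - tau)          # soft thresholding
    return U @ torch.diag(S_shrunk) @ Vh    # low-rank reconstruction
```

Because the soft threshold is differentiable almost everywhere, `tau` (and hence each layer's effective rank) can in principle be learned jointly with the task loss.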
no code implementations • 9 Oct 2024 • Mingu Kang, Dongseok Lee, Woojin Cho, Jaehyeon Park, Kookjin Lee, Anthony Gruber, Youngjoon Hong, Noseong Park
Large language models (LLMs), like ChatGPT, have shown that even when trained on noisy prior data, they can generalize effectively to new tasks through in-context learning (ICL) and pre-training techniques.
no code implementations • 17 Sep 2024 • Haichao Yang, Chang Eun Song, Weihong Xu, Behnam Khaleghi, Uday Mallappa, Monil Shah, Keming Fan, Mingu Kang, Tajana Rosing
This paper introduces FSL-HDnn, an energy-efficient accelerator that implements the end-to-end pipeline of feature extraction, classification, and on-chip few-shot learning (FSL) through gradient-free learning techniques in a 40 nm CMOS process.
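The chip itself is hardware, but the gradient-free learning style it accelerates can be sketched in software as a hyperdimensional-computing classifier. Everything below (the random projection, dimensions, and similarity metric) is illustrative, not the FSL-HDnn pipeline.

```python
import numpy as np

def hdc_few_shot(train_feats, train_labels, test_feats, dim=10000, seed=0):
    """Gradient-free few-shot classification in the spirit of
    hyperdimensional computing: project features into a high-dimensional
    bipolar space, bundle per-class hypervectors by summation, and
    classify by cosine similarity. No gradients are computed anywhere."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((train_feats.shape[1], dim))  # random encoder
    enc = lambda x: np.sign(x @ proj)                        # bipolar hypervectors
    classes = np.unique(train_labels)
    protos = np.stack([enc(train_feats[train_labels == c]).sum(axis=0)
                       for c in classes])                    # bundle per class
    q = enc(test_feats)
    sims = q @ protos.T / (np.linalg.norm(q, axis=1, keepdims=True)
                           * np.linalg.norm(protos, axis=1))
    return classes[np.argmax(sims, axis=1)]
```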
no code implementations • 8 Sep 2024 • Ashkan Moradifirouzabadi, Divya Sri Dodla, Mingu Kang
The attention mechanism is a key computing kernel of Transformers, calculating pairwise correlations across the entire input sequence.
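For reference, the kernel in question is standard scaled dot-product attention; the quadratic cost comes from the L x L score matrix built over the full sequence:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every query is correlated with every key, producing an L x L
    score matrix, hence quadratic compute and memory in sequence
    length L."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise correlations
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V
```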
no code implementations • 12 May 2023 • Minjae Lee, Seongmin Park, Hyungmin Kim, Minyong Yoon, Janghwan Lee, Jun Won Choi, Nam Sung Kim, Mingu Kang, Jungwook Choi
3D object detection using point cloud (PC) data is essential for perception pipelines of autonomous driving, where efficient encoding is key to meeting stringent resource and latency requirements.
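The abstract does not specify the encoder; as generic background only (not the paper's method), point-cloud encoders typically start by bucketing points into a sparse voxel grid before any learned feature extraction:

```python
import numpy as np

def voxelize(points, voxel_size=0.2, max_pts=32):
    """Illustrative first stage of PC encoding: hash each point into a
    voxel by quantizing its xyz coordinates, keeping at most max_pts
    points per occupied voxel."""
    keys = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    voxels = {}
    for p, k in zip(points, map(tuple, keys)):
        bucket = voxels.setdefault(k, [])
        if len(bucket) < max_pts:
            bucket.append(p)
    return voxels  # voxel index -> list of points
```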
1 code implementation • CVPR 2023 • Mingu Kang, Heon Song, Seonwook Park, Donggeun Yoo, Sérgio Pereira
To address this need, we conduct the largest-scale study to date of SSL pre-training on pathology image data.
Ranked #2 on Classification on MHIST (using extra training data)
no code implementations • 1 Sep 2022 • Amir Yazdanbakhsh, Ashkan Moradifirouzabadi, Zheng Li, Mingu Kang
The combined in-memory pruning and on-chip recompute of the relevant attention scores enables SPRINT to reduce the quadratic complexity of attention to merely linear.
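A software analogue of the prune-then-recompute flow (the actual mechanism is in-memory and hardware-level; the 1-bit estimator and keep ratio here are assumptions):

```python
import numpy as np

def pruned_attention(Q, K, V, keep=0.1):
    """Sketch of prune-then-recompute: a cheap low-precision pass
    estimates all scores, and only the top fraction per query is
    recomputed at full precision and softmaxed."""
    d = Q.shape[-1]
    approx = np.sign(Q) @ np.sign(K).T               # cheap 1-bit score estimate
    k = max(1, int(keep * K.shape[0]))
    top = np.argsort(-approx, axis=1)[:, :k]         # surviving keys per query
    out = np.empty((Q.shape[0], V.shape[1]))
    for i, idx in enumerate(top):
        s = Q[i] @ K[idx].T / np.sqrt(d)             # exact recompute, k terms
        w = np.exp(s - s.max()); w /= w.sum()
        out[i] = w @ V[idx]
    return out
```

With a fixed number k of surviving keys per query, the exact recompute costs O(L * k) rather than O(L^2).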
no code implementations • 7 Apr 2022 • Zheng Li, Soroush Ghodrati, Amir Yazdanbakhsh, Hadi Esmaeilzadeh, Mingu Kang
To best utilize this mathematical innovation, we devise a bit-serial architecture, dubbed LeOPArd, for transformer language models with a bit-level early-termination microarchitectural mechanism.
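A scalar software sketch of bit-level early termination (the paper's microarchitecture does this bit-serially in hardware; the quantization scheme and non-negative query assumption below are simplifications):

```python
import numpy as np

def bitserial_score(q, k_vec, threshold, bits=8):
    """Accumulate a query-key dot product MSB-first and stop as soon as
    even the best-case contribution of the remaining lower bits cannot
    lift the partial score above the pruning threshold."""
    # Quantize query to non-negative integers (simplification).
    q_int = np.clip(np.round(q * (2**(bits - 1) - 1)),
                    0, 2**(bits - 1) - 1).astype(int)
    partial = 0.0
    for b in range(bits - 1, -1, -1):                # MSB first
        bit = (q_int >> b) & 1
        partial += (2**b) * np.dot(bit, k_vec)
        bound = (2**b - 1) * np.sum(np.abs(k_vec))   # max from lower bits
        if partial + bound < threshold:
            return None                              # early terminate: pruned
    return partial
```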
no code implementations • 9 Oct 2021 • Trung Q. Tran, Mingu Kang, Daeyoung Kim
Semi-supervised learning (SSL) has played an important role in leveraging unlabeled data when labeled data is limited.
no code implementations • 15 Feb 2021 • Mingu Kang, Trung Quang Tran, Seungju Cho, Daeyoung Kim
Adversarial attacks aim to fool the target classifier with imperceptible perturbations.
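The canonical example of such a perturbation is FGSM (Goodfellow et al., 2015), sketched below for illustration; it is not necessarily the attack family this paper targets, and `model` stands for any differentiable classifier.

```python
import torch

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: one signed-gradient step of size eps,
    small enough to be visually imperceptible yet often enough to flip
    the classifier's prediction."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # perturb along the gradient sign
    return x_adv.clamp(0, 1).detach()      # keep a valid image
```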
no code implementations • 12 Feb 2021 • Trung Quang Tran, Mingu Kang, Daeyoung Kim
We obtain promising results (4.21% error rate on CIFAR-10 with 4000 labels, 22.32% error rate on CIFAR-100 with 10000 labels, and 2.19% error rate on SVHN with 1000 labels) when the amount of labeled data is sufficient to learn a semantics-oriented similarity representation.
no code implementations • 28 Feb 2020 • Seungju Cho, Tae Joon Jun, Mingu Kang, Daeyoung Kim
However, it turns out that deep learning based models are highly vulnerable to small perturbations known as adversarial attacks.