Search Results for author: Mingu Kang

Found 13 papers, 1 paper with code

DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding

no code implementations • 29 Nov 2024 • Jungbin Cho, Junwan Kim, Jisoo Kim, Minseo Kim, Mingu Kang, Sungeun Hong, Tae-Hyun Oh, Youngjae Yu

To resolve this "discord" between discrete and continuous representations, we introduce DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding, a novel method that decodes discrete motion tokens into continuous motion through rectified flow.

Tasks: Motion Synthesis, Quantization
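As a rough illustration of the rectified-flow decoding idea, here is a minimal Euler-integration sketch; `velocity_fn`, the conditioning scheme, and the step count are hypothetical stand-ins, not the authors' model.

```python
import numpy as np

def rectified_flow_decode(token_emb, velocity_fn, num_steps=16):
    """Euler-integrate a learned velocity field from noise (t=0) to a
    continuous motion frame (t=1), conditioned on a discrete motion
    token's embedding. `velocity_fn` stands in for the trained network;
    its signature here is hypothetical."""
    x = np.random.randn(*token_emb.shape)  # start from Gaussian noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t, token_emb)  # straight-line flow step
    return x

# Toy stand-in velocity field: pull the sample toward the token embedding,
# which is what an ideal rectified (straight) flow would converge to.
toy_velocity = lambda x, t, cond: cond - x
frame = rectified_flow_decode(np.ones(8), toy_velocity)
```

Because rectified flow favors near-straight trajectories, even a handful of Euler steps can suffice at decode time.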

SoftLMs: Efficient Adaptive Low-Rank Approximation of Language Models using Soft-Thresholding Mechanism

no code implementations • 15 Nov 2024 • Priyansh Bhatnagar, Linfeng Wen, Mingu Kang

In this paper, we propose a novel compression methodology that dynamically determines the rank of each layer using a soft thresholding mechanism, which clips singular values of small magnitude in a differentiable form.

Tasks: Decision Making, Decoder +1
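The soft-thresholding step the abstract describes can be sketched in a few lines of PyTorch; treat this as a minimal illustration, with `tau` standing in for the paper's learned per-layer threshold.

```python
import torch

def soft_threshold_lowrank(W, tau):
    """Differentiable low-rank compression sketch: SVD the weight,
    shrink singular values toward zero with soft-thresholding
    (relu(s - tau)), and reconstruct. Singular values pushed to zero
    drop their rank-1 components, so making tau a learnable per-layer
    parameter lets the effective rank be chosen during training."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    S_shrunk = torch.relu(S - tau)          # differentiable w.r.t. tau
    return U @ torch.diag(S_shrunk) @ Vh

W = torch.randn(64, 64)
W_compressed = soft_threshold_lowrank(W, tau=torch.tensor(1.5))
```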

MaD-Scientist: AI-based Scientist solving Convection-Diffusion-Reaction Equations Using Massive PINN-Based Prior Data

no code implementations • 9 Oct 2024 • Mingu Kang, Dongseok Lee, Woojin Cho, Jaehyeon Park, Kookjin Lee, Anthony Gruber, Youngjoon Hong, Noseong Park

Large language models (LLMs), like ChatGPT, have shown that even when trained on noisy prior data, they can generalize effectively to new tasks through in-context learning (ICL) and pre-training techniques.

Tasks: In-Context Learning
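A minimal sketch of what ICL over noisy prior data means operationally, with a kernel regressor standing in for the trained transformer; the toy task and all names are hypothetical, not the paper's setup.

```python
import numpy as np

# Hypothetical "prior dataset" of noisy (parameter -> solution) examples
# packed into the context; the model predicts the query's solution from
# those demonstrations alone, with no weight updates.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, size=(32, 1))
ys = np.sin(2 * np.pi * xs) + 0.1 * rng.standard_normal(xs.shape)  # noisy prior

def icl_predict(context_x, context_y, query, bandwidth=0.05):
    """Kernel-regression stand-in for a trained ICL transformer:
    weight context solutions by similarity to the query."""
    w = np.exp(-((context_x - query) ** 2) / bandwidth)
    return (w * context_y).sum() / w.sum()

print(icl_predict(xs, ys, query=0.25))  # ~ sin(pi/2) = 1 despite the noise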

FSL-HDnn: A 5.7 TOPS/W End-to-end Few-shot Learning Classifier Accelerator with Feature Extraction and Hyperdimensional Computing

no code implementations • 17 Sep 2024 • Haichao Yang, Chang Eun Song, Weihong Xu, Behnam Khaleghi, Uday Mallappa, Monil Shah, Keming Fan, Mingu Kang, Tajana Rosing

This paper introduces FSL-HDnn, an energy-efficient accelerator that implements the end-to-end pipeline of feature extraction, classification, and on-chip few-shot learning (FSL) through gradient-free learning techniques in a 40 nm CMOS process.

Tasks: Clustering, Few-Shot Learning
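Hyperdimensional computing is what makes the few-shot step gradient-free: class prototypes are just bundled (summed) hypervectors. A software sketch of the general idea, not the FSL-HDnn chip:

```python
import numpy as np

rng = np.random.default_rng(0)
D, F = 4096, 64                                # hypervector / feature dims
projection = rng.choice([-1, 1], size=(F, D))  # fixed random encoder

def encode(features):
    """Map a feature vector to a bipolar hypervector (sign of a
    random projection) -- no training involved."""
    return np.sign(features @ projection)

def few_shot_fit(support_x, support_y, num_classes):
    """Gradient-free 'learning': bundle (sum) the hypervectors of
    each class's few support examples into one prototype."""
    protos = np.zeros((num_classes, D))
    for x, y in zip(support_x, support_y):
        protos[y] += encode(x)
    return protos

def classify(protos, query):
    sims = protos @ encode(query)   # dot product ~ cosine similarity
    return int(np.argmax(sims))

# 2-way 1-shot toy run
support = rng.standard_normal((2, F))
protos = few_shot_fit(support, [0, 1], num_classes=2)
print(classify(protos, support[0] + 0.1 * rng.standard_normal(F)))  # -> 0
```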

An Analog and Digital Hybrid Attention Accelerator for Transformers with Charge-based In-memory Computing

no code implementations • 8 Sep 2024 • Ashkan Moradifirouzabadi, Divya Sri Dodla, Mingu Kang

The attention mechanism is a key computing kernel of Transformers, calculating pairwise correlations across the entire input sequence.
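For reference, the kernel in question is ordinary scaled dot-product attention; this plain-numpy sketch makes the quadratic n-by-n score matrix explicit (it is not the paper's analog/digital design):

```python
import numpy as np

def attention(Q, K, V):
    """Plain softmax attention: the score matrix S is n x n, so both
    compute and memory grow quadratically with sequence length n --
    the cost that in-memory / analog accelerators target."""
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                  # all pairwise correlations
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)        # row-wise softmax
    return P @ V

n, d = 128, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = attention(Q, K, V)                      # S alone holds n*n = 16384 scores
```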

SPADE: Sparse Pillar-based 3D Object Detection Accelerator for Autonomous Driving

no code implementations • 12 May 2023 • Minjae Lee, Seongmin Park, Hyungmin Kim, Minyong Yoon, Janghwan Lee, Jun Won Choi, Nam Sung Kim, Mingu Kang, Jungwook Choi

3D object detection using point cloud (PC) data is essential for perception pipelines of autonomous driving, where efficient encoding is key to meeting stringent resource and latency requirements.

Tasks: 3D Object Detection, Autonomous Driving +2
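Pillar-based encoding buckets LiDAR points into vertical bird's-eye-view cells, most of which end up empty; that sparsity is what a sparse accelerator like SPADE can exploit. A toy sketch, with point counts standing in for a real pillar feature net:

```python
import numpy as np

def pillarize(points, grid=(32, 32), extent=50.0):
    """Bucket LiDAR points into vertical BEV 'pillars'. Real pillar
    encoders (PointPillars-style) featurize each pillar with a small
    network; counting points here is enough to expose the sparsity."""
    ix = ((points[:, 0] + extent) / (2 * extent) * grid[0]).astype(int)
    iy = ((points[:, 1] + extent) / (2 * extent) * grid[1]).astype(int)
    ok = (ix >= 0) & (ix < grid[0]) & (iy >= 0) & (iy < grid[1])
    counts = np.zeros(grid, dtype=int)
    np.add.at(counts, (ix[ok], iy[ok]), 1)
    return counts

pts = np.random.randn(2000, 3) * 10           # toy point cloud
counts = pillarize(pts)
print(f"non-empty pillars: {(counts > 0).mean():.0%}")  # most cells are empty
```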

Benchmarking Self-Supervised Learning on Diverse Pathology Datasets

1 code implementation • CVPR 2023 • Mingu Kang, Heon Song, Seonwook Park, Donggeun Yoo, Sérgio Pereira

To address this need, we conduct the largest-scale study to date of SSL pre-training on pathology image data.

Ranked #2 on Classification on MHIST (using extra training data)

Tasks: Benchmarking, Classification +3
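As context for what "SSL pre-training" optimizes, here is one representative objective (SimCLR-style InfoNCE); the study benchmarks several SSL methods, so this is illustrative only, not the paper's code.

```python
import numpy as np

def info_nce(z1, z2, temp=0.1):
    """SimCLR-style InfoNCE loss: z1[i] and z2[i] are embeddings of two
    augmentations of the same image patch; each pair must score higher
    than all mismatched pairs in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp                 # similarity of every pair
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))       # matching pairs as positives

loss = info_nce(np.random.randn(16, 32), np.random.randn(16, 32))
```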

Sparse Attention Acceleration with Synergistic In-Memory Pruning and On-Chip Recomputation

no code implementations • 1 Sep 2022 • Amir Yazdanbakhsh, Ashkan Moradifirouzabadi, Zheng Li, Mingu Kang

The combined in-memory pruning and on-chip recomputation of the relevant attention scores enables SPRINT to reduce the quadratic complexity of attention to a merely linear one.
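The prune-then-recompute flow can be caricatured in software: a cheap low-precision pass (standing in for the in-memory compute) estimates every score, and exact scores are recomputed only for the few survivors. All thresholds and bit-widths below are illustrative, not the SPRINT hardware:

```python
import numpy as np

def sprint_like_attention_row(q, K, V, thresh=0.5, bits=4):
    """Sketch of prune-then-recompute for one query row: quantized keys
    give rough score estimates; only indices above a threshold survive,
    and exact scores are recomputed for that small survivor set."""
    scale = 2 ** (bits - 1) - 1
    Kq = np.round(K / np.abs(K).max() * scale)            # quantized keys
    qq = np.round(q / np.abs(q).max() * scale)
    approx = Kq @ qq                                       # cheap estimates
    keep = np.nonzero(approx > thresh * approx.max())[0]  # pruned set
    exact = K[keep] @ q / np.sqrt(len(q))                  # recompute survivors
    w = np.exp(exact - exact.max()); w /= w.sum()
    return w @ V[keep], keep

q = np.random.randn(64); K = np.random.randn(512, 64); V = np.random.randn(512, 64)
out, keep = sprint_like_attention_row(q, K, V)
print(f"recomputed {len(keep)}/512 scores exactly")
```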

Accelerating Attention through Gradient-Based Learned Runtime Pruning

no code implementations • 7 Apr 2022 • Zheng Li, Soroush Ghodrati, Amir Yazdanbakhsh, Hadi Esmaeilzadeh, Mingu Kang

To best utilize this mathematical innovation, we devise a bit-serial architecture, dubbed LeOPArd, for transformer language models with a bit-level early-termination microarchitectural mechanism.

Tasks: Sentence
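A software caricature of bit-level early termination: process key bits MSB-first and stop as soon as an optimistic bound on the remaining bit planes cannot reach the pruning threshold. This sketches the idea only, with an arbitrary threshold in place of the paper's learned one, and is not the LeOPArd microarchitecture:

```python
import numpy as np

def bitserial_score(k_row, q, threshold, bits=8):
    """Bit-serial dot product, MSB first: after each key bit plane, if
    even the most optimistic completion of the remaining planes cannot
    reach `threshold`, stop and prune this attention score."""
    mag = np.abs(k_row) / np.abs(k_row).max()         # normalize to [0, 1]
    k = np.round(mag * (2**bits - 1)).astype(int)     # unsigned bit planes
    sign = np.sign(k_row)
    partial = 0.0
    for b in range(bits - 1, -1, -1):                 # MSB first
        plane = (k >> b) & 1
        partial += (2**b) * np.sum(plane * sign * q)
        optimistic = partial + (2**b - 1) * np.sum(np.abs(q))
        if optimistic < threshold:
            return None, bits - b                     # pruned early
    return partial, bits

k, q = np.random.randn(64), np.random.randn(64)
score, planes = bitserial_score(k, q, threshold=1e6)
print("pruned" if score is None else "kept", "after", planes, "bit plane(s)")
```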

RankingMatch: Delving into Semi-Supervised Learning with Consistency Regularization and Ranking Loss

no code implementations • 9 Oct 2021 • Trung Q. Tran, Mingu Kang, Daeyoung Kim

Semi-supervised learning (SSL) has played an important role in leveraging unlabeled data when labeled data is limited.

Tasks: Computational Efficiency, Triplet

ReRankMatch: Semi-Supervised Learning with Semantics-Oriented Similarity Representation

no code implementations • 12 Feb 2021 • Trung Quang Tran, Mingu Kang, Daeyoung Kim

We obtain promising results (4.21% error rate on CIFAR-10 with 4000 labels, 22.32% error rate on CIFAR-100 with 10000 labels, and 2.19% error rate on SVHN with 1000 labels) when the amount of labeled data is sufficient to learn a semantics-oriented similarity representation.

Applying Tensor Decomposition to image for Robustness against Adversarial Attack

no code implementations • 28 Feb 2020 • Seungju Cho, Tae Joon Jun, Mingu Kang, Daeyoung Kim

However, it turns out that deep learning based models are highly vulnerable to small perturbations known as adversarial attacks.

Tasks: Adversarial Attack, Deep Learning +1
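The defense-by-reconstruction idea can be sketched with per-channel truncated SVD standing in for the paper's tensor decomposition (a simplification, since the paper operates on the image tensor directly):

```python
import numpy as np

def lowrank_denoise(img, rank=20):
    """Keep only the top singular components of each channel and
    reconstruct. Small adversarial perturbations tend to live in the
    discarded low-energy components, so the reconstruction washes
    much of the attack out before classification."""
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        U, s, Vt = np.linalg.svd(img[:, :, c].astype(float), full_matrices=False)
        out[:, :, c] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out

img = np.random.rand(224, 224, 3)               # stand-in image
adv = img + 0.01 * np.random.randn(*img.shape)  # stand-in perturbation
cleaned = lowrank_denoise(adv)
```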
