Search Results for author: DongHyun Lee

Found 7 papers, 1 paper with code

ReSpike: Residual Frames-based Hybrid Spiking Neural Networks for Efficient Action Recognition

no code implementations · 3 Sep 2024 · Shiting Xiao, Yuhang Li, Youngeun Kim, DongHyun Lee, Priyadarshini Panda

Spiking Neural Networks (SNNs) have emerged as a compelling, energy-efficient alternative to traditional Artificial Neural Networks (ANNs) for static image tasks such as image classification and segmentation.

Action Recognition · Image Classification · +1
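The "residual frames" of the title are frame-to-frame differences that isolate motion, which suits the sparse, event-driven computation of SNNs. As a hedged illustration only (the paper's hybrid ANN-SNN pipeline is not reproduced here), a minimal sketch of residual-frame computation:

```python
import numpy as np

def residual_frames(clip: np.ndarray) -> np.ndarray:
    """clip: (T, H, W, C) uint8 video -> (T-1, H, W, C) float32 residuals."""
    frames = clip.astype(np.float32)
    return frames[1:] - frames[:-1]  # near-zero wherever the scene is static

clip = np.random.randint(0, 256, size=(16, 112, 112, 3), dtype=np.uint8)
print(residual_frames(clip).shape)  # (15, 112, 112, 3)
```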

Decoupled Marked Temporal Point Process using Neural Ordinary Differential Equations

no code implementations · 10 Jun 2024 · Yujee Song, DongHyun Lee, Rui Meng, Won Hwa Kim

While most previous studies focus on inter-event dependencies and their representations, how individual events influence the overall dynamics over time remains under-explored.

Density Estimation
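As a rough, illustrative sketch of the neural-ODE ingredient only (not the paper's decoupled architecture; the module and its shapes are invented), a latent state can be evolved between events by a learned vector field, with the event intensity read out along the trajectory; plain Euler steps stand in for a real ODE solver:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentODEIntensity(nn.Module):
    def __init__(self, dim: int = 8):
        super().__init__()
        self.dynamics = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))
        self.readout = nn.Linear(dim, 1)

    def evolve(self, h: torch.Tensor, dt: float, steps: int = 20):
        """Integrate dh/dt = f(h) over dt; return final state and intensity path."""
        lam, step = [], dt / steps
        for _ in range(steps):
            h = h + step * self.dynamics(h)            # explicit Euler step
            lam.append(F.softplus(self.readout(h)))    # intensity must stay positive
        return h, torch.stack(lam)

model = LatentODEIntensity()
h, lam = model.evolve(torch.zeros(1, 8), dt=1.0)
print(lam.shape)  # (20, 1, 1): intensity trajectory between two events
```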

CATS: Contextually-Aware Thresholding for Sparsity in Large Language Models

1 code implementation · 12 Apr 2024 · Je-Yong Lee, DongHyun Lee, Genghan Zhang, Mo Tiwari, Azalia Mirhoseini

We demonstrate that CATS can be applied to various base models, including Mistral-7B and Llama2-7B, and outperforms existing sparsification techniques in downstream task performance.
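The mechanism named by the title can be sketched directly: in the gated MLP block used by Mistral/Llama-style models, gate activations with small magnitude are zeroed, so the matching rows of the down-projection can be skipped at inference time. A minimal sketch with a fixed placeholder threshold (CATS derives the cutoff from each layer's empirical activation distribution):

```python
import torch
import torch.nn.functional as F

def cats_gated_mlp(x, w_gate, w_up, w_down, threshold=0.1):
    gate = F.silu(x @ w_gate)
    mask = gate.abs() >= threshold       # contextual: the mask depends on the input
    gate = gate * mask                   # zeroed units contribute nothing downstream
    return (gate * (x @ w_up)) @ w_down  # sparse rows of w_down could be skipped

x = torch.randn(4, 64)
w_gate, w_up, w_down = torch.randn(64, 256), torch.randn(64, 256), torch.randn(256, 64)
print(cats_gated_mlp(x, w_gate, w_up, w_down).shape)  # (4, 64)
```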

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

no code implementations · 15 Jan 2024 · DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks, owing to their sparse binary activations.

Tensor Decomposition
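For orientation, a minimal NumPy sketch of a tensor-train-factorized weight matrix (shapes and ranks are illustrative; TT-SNN's factorization of SNN layers is more involved): a 256x256 weight viewed as a 4-way tensor is stored as two small cores, cutting parameters roughly 16x.

```python
import numpy as np

rng = np.random.default_rng(0)
# TT-matrix cores for a 256x256 weight seen as (16*16) x (16*16), TT-rank 8:
core1 = rng.standard_normal((1, 16, 16, 8))   # (r0, row mode 1, col mode 1, r1)
core2 = rng.standard_normal((8, 16, 16, 1))   # (r1, row mode 2, col mode 2, r2)

# W[(i,k), (j,l)] = sum_r core1[0, i, j, r] * core2[r, k, l, 0]
W = np.einsum('aijr,rklb->ikjl', core1, core2).reshape(256, 256)
print(W.shape, core1.size + core2.size, 256 * 256)  # (256, 256) 4096 65536
```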

GenQ: Quantization in Low Data Regimes with Generative Synthetic Data

no code implementations · 7 Dec 2023 · Yuhang Li, Youngeun Kim, DongHyun Lee, Souvik Kundu, Priyadarshini Panda

In the realm of deep neural network deployment, low-bit quantization presents a promising avenue for enhancing computational efficiency.

Computational Efficiency · Quantization · +1
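A hedged sketch of the generic low-bit recipe (random noise stands in for the generator output here; GenQ's contribution is precisely that the calibration batch comes from a generative model rather than noise or real data):

```python
import torch
import torch.nn.functional as F

def quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization with the scale taken from the absmax."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.clamp((x / scale).round(), -qmax, qmax) * scale

calib = torch.randn(8, 3, 224, 224)           # stand-in for synthetic calibration data
w = quantize(torch.randn(64, 3, 7, 7), bits=4)
act = F.conv2d(calib, w, stride=2, padding=3)
act_q = quantize(act, bits=8)                 # activation range calibrated on the batch
print(act_q.shape)  # (8, 64, 112, 112)
```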

Faster Maximum Inner Product Search in High Dimensions

no code implementations · 14 Dec 2022 · Mo Tiwari, Ryan Kang, Je-Yong Lee, DongHyun Lee, Chris Piech, Sebastian Thrun, Ilan Shomorony, Martin Jinye Zhang

We provide theoretical guarantees that BanditMIPS returns the correct answer with high probability, while improving the complexity in $d$ from $O(\sqrt{d})$ to $O(1)$.

Multi-Armed Bandits · Recommendation Systems · +1
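A simplified sketch of the bandit idea (the paper's exact confidence intervals and sampling scheme differ): treat each database vector as an arm, estimate its inner product with the query from a growing subsample of coordinates, and eliminate arms whose upper confidence bound falls below the best lower bound, so most vectors are read at only a few coordinates:

```python
import numpy as np

def bandit_mips(X, q, batch=32, delta=0.01, seed=0):
    """Return the index of the row of X with (approximately) the largest <x, q>."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alive = np.arange(n)
    sums = np.zeros(n)                          # running sums of sampled products
    t = 0                                       # coordinates sampled so far, per arm
    bound = np.abs(X).max() * np.abs(q).max()   # crude range proxy for the CI width
    while t < d and len(alive) > 1:
        idx = rng.choice(d, size=min(batch, d - t), replace=False)
        sums[alive] += X[alive][:, idx] @ q[idx]
        t += len(idx)
        est = d * sums[alive] / t                      # estimated inner products
        ci = d * bound * np.sqrt(2 * np.log(2 * n * t / delta) / t)
        alive = alive[est + ci >= (est - ci).max()]    # drop hopeless arms
    return alive[np.argmax(X[alive] @ q)]              # exact tie-break on survivors

X = np.random.default_rng(1).standard_normal((1000, 4096))
q = np.random.default_rng(2).standard_normal(4096)
print(bandit_mips(X, q), np.argmax(X @ q))  # agree w.h.p., with far fewer reads
```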

Data-free mixed-precision quantization using novel sensitivity metric

no code implementations · 18 Mar 2021 · DongHyun Lee, Minkyoung Cho, Seungwon Lee, Joonho Song, Changkyu Choi

Post-training quantization is a widely used technique for compressing neural networks, making them smaller and more efficient to deploy on edge devices.

Quantization
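As a hedged illustration of the sensitivity-guided recipe (the paper proposes its own novel metric; here, per-layer weight-quantization MSE stands in as a data-free proxy, and the bit budget is invented):

```python
import numpy as np

def quant_mse(w: np.ndarray, bits: int) -> float:
    """MSE between a weight tensor and its symmetric uniform quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    wq = np.clip(np.round(w / scale), -qmax, qmax) * scale
    return float(np.mean((w - wq) ** 2))

rng = np.random.default_rng(0)
layers = {f"layer{i}": rng.standard_normal(1000) * (i + 1) for i in range(4)}

bits = {name: 4 for name in layers}                    # start everything at 4 bits
sensitivity = {name: quant_mse(w, 4) for name, w in layers.items()}
for name in sorted(sensitivity, key=sensitivity.get, reverse=True)[:2]:
    bits[name] = 8                                     # promote the most sensitive
print(bits)
```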
