Search Results for author: Joo-Young Kim

Found 5 papers, 0 papers with code

Darwin: A DRAM-based Multi-level Processing-in-Memory Architecture for Data Analytics

no code implementations • 23 May 2023 • Donghyuk Kim, Jae-Young Kim, Wontak Han, Jongsoon Won, Haerang Choi, Yongkee Kwon, Joo-Young Kim

In this paper, we propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics, which fully exploits the internal bandwidth of DRAM using the bank-, bank group-, chip-, and rank-level parallelisms.

LearningGroup: A Real-Time Sparse Training on FPGA via Learnable Weight Grouping for Multi-Agent Reinforcement Learning

no code implementations • 29 Oct 2022 • Je Yang, JaeUk Kim, Joo-Young Kim

Unlike supervised learning or single-agent reinforcement learning, which actively exploit network pruning, it is unclear how pruning will work in multi-agent reinforcement learning with its cooperative and interactive characteristics.

Multi-agent Reinforcement Learning · Network Pruning · +3

DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation

no code implementations • 22 Sep 2022 • Seongmin Hong, Seungjae Moon, Junsoo Kim, Sungjae Lee, Minsub Kim, Dongsoo Lee, Joo-Young Kim

DFX is also 8.21x more cost-effective than the GPU appliance, suggesting that it is a promising solution for text generation workloads in cloud datacenters.

Language Modelling · Text Generation

Accelerating Large-Scale Graph-based Nearest Neighbor Search on a Computational Storage Platform

no code implementations • 12 Jul 2022 • Ji-Hoon Kim, Yeo-Reum Park, Jaeyoung Do, Soo-Young Ji, Joo-Young Kim

In this paper, we propose a computational storage platform that can accelerate a large-scale graph-based nearest neighbor search algorithm based on SmartSSD CSD.

FIXAR: A Fixed-Point Deep Reinforcement Learning Platform with Quantization-Aware Training and Adaptive Parallelism

no code implementations • 24 Feb 2021 • Je Yang, Seongmin Hong, Joo-Young Kim

In this paper, we present a deep reinforcement learning platform named FIXAR, which, for the first time, employs fixed-point data types and arithmetic units using a SW/HW co-design approach.

Quantization · Reinforcement Learning (RL)
