no code implementations • 28 Mar 2024 • Dahyun Kim, Yungi Kim, Wonho Song, Hyeonwoo Kim, Yunsu Kim, Sanghoon Kim, Chanjun Park
As development of large language models (LLM) progresses, aligning them with human preferences has become increasingly important.
no code implementations • 28 Mar 2024 • Hyunbyung Park, Sukyung Lee, Gyoungjin Gim, Yungi Kim, Dahyun Kim, Chanjun Park
To address the challenges associated with data processing at scale, we propose Dataverse, a unified open-source Extract-Transform-Load (ETL) pipeline for large language models (LLMs) with a user-friendly design at its core.
no code implementations • 4 Mar 2024 • Chanjun Park, Minsoo Khang, Dahyun Kim
This paper delves into the contrasting roles of data within academic and industrial spheres, highlighting the divergence between Data-Centric AI and Model-Agnostic AI approaches.
2 code implementations • 23 Dec 2023 • Dahyun Kim, Chanjun Park, Sanghoon Kim, Wonsung Lee, Wonho Song, Yunsu Kim, Hyeonwoo Kim, Yungi Kim, Hyeonju Lee, Jihoo Kim, Changbae Ahn, Seonghoon Yang, Sukyung Lee, Hyunbyung Park, Gyoungjin Gim, Mikyoung Cha, Hwalsuk Lee, Sunghun Kim
We introduce SOLAR 10.7B, a large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks.
1 code implementation • 15 Dec 2023 • Sunjae Yoon, Dahyun Kim, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo
Video-grounded Dialogue (VGD) aims to answer questions regarding a given multi-modal input comprising video, audio, and dialogue history.
no code implementations • ICCV 2023 • Sunjae Yoon, Gwanhyeong Koo, Dahyun Kim, Chang D. Yoo
These proposals are assumed to contain, as candidates, the many distinguishable scenes in a video.
1 code implementation • ICCV 2023 • Dongkwon Jin, Dahyun Kim, Chang-Su Kim
A novel algorithm to detect road lanes in videos, called recursive video lane detector (RVLD), is proposed in this paper, which propagates the state of a current frame recursively to the next frame.
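The recursive propagation described above can be illustrated with a minimal sketch: each frame's estimate is refined using the state carried over from the previous frame. The function name and the simple blending rule are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of recursive state propagation across video frames, in the
# spirit of RVLD: the state from frame t is blended with the features of
# frame t+1. The alpha-blending rule here is a toy stand-in for the
# learned propagation module.

def propagate(prev_state, frame_feat, alpha=0.5):
    """Blend the previous frame's state with the current frame's features."""
    if prev_state is None:  # first frame: no history to propagate
        return frame_feat
    return [alpha * p + (1 - alpha) * f for p, f in zip(prev_state, frame_feat)]

state = None
for frame in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    state = propagate(state, frame)
print(state)  # state now mixes information from all three frames
```

The recursion means later frames benefit from earlier detections without storing the whole video, which is what makes per-frame processing lightweight.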
2 code implementations • 20 Jun 2023 • Haeyong Kang, Jaehong Yoon, Dahyun Kim, Sung Ju Hwang, Chang D. Yoo
Motivated by continual learning, this work investigates how to accumulate and transfer neural implicit representations for multiple complex video data over sequential encoding sessions.
1 code implementation • 17 Oct 2022 • Sunjae Yoon, Ji Woo Hong, Eunseop Yoon, Dahyun Kim, Junyeong Kim, Hee Suk Yoon, Chang D. Yoo
Video moment retrieval (VMR) aims to localize target moments in untrimmed videos pertinent to a given textual query.
1 code implementation • ICLR 2022 • Hyunseo Koh, Dahyun Kim, Jung-Woo Ha, Jonghyun Choi
For better practicality, we first propose a novel continual learning setup that is online, task-free, class-incremental, of blurry task boundaries and subject to inference queries at any moment.
1 code implementation • CVPR 2022 • Dahyun Kim, Jonghyun Choi
To accelerate deployment of models with the benefit of unsupervised representation learning to such resource limited devices for various downstream tasks, we propose a self-supervised learning method for binary networks that uses a moving target network.
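A moving target network is commonly maintained as an exponential moving average (EMA) of the online network's parameters; the sketch below assumes that standard formulation (the momentum value and parameter representation are illustrative, not taken from the paper).

```python
# Hedged sketch: a moving target network updated as an exponential moving
# average (EMA) of the online network's parameters, as is common in
# self-supervised learning. Parameters are modeled as flat lists of floats.

def ema_update(target_params, online_params, momentum=0.99):
    """Move each target parameter a small step toward its online counterpart."""
    return [momentum * t + (1.0 - momentum) * o
            for t, o in zip(target_params, online_params)]

target = [0.0, 0.0]
online = [1.0, 2.0]
for _ in range(3):  # three training steps with fixed online parameters
    target = ema_update(target, online)
print(target)  # target drifts slowly toward the online parameters
```

The slowly moving target provides a stable regression objective for the online (here, binary) network, avoiding the representation collapse that training against a rapidly changing target can cause.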
1 code implementation • 16 Oct 2021 • Dahyun Kim, Kunal Pratap Singh, Jonghyun Choi
Questioning whether architectures designed for full-precision (FP) networks are the best for binary networks, we propose to search architectures for binary networks (BNAS) by defining a new search space for binary architectures and a novel search objective.
no code implementations • 24 Mar 2021 • Junyeong Kim, Sunjae Yoon, Dahyun Kim, Chang D. Yoo
A video-grounded dialogue system referred to as the Structured Co-reference Graph Attention (SCGA) is presented for decoding the answer sequence to a question regarding a given video while keeping track of the dialogue context.
1 code implementation • ECCV 2020 • Dahyun Kim, Kunal Pratap Singh, Jonghyun Choi
Specifically, based on the cell-based search method, we define a new search space of binary layer types, design a new cell template, and rediscover the utility of the Zeroise layer, proposing to use it as an actual layer rather than as a placeholder.
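In cell-based search spaces, a Zeroise layer is an operation that outputs zeros regardless of its input; the snippet below is a toy illustration of treating it as a real candidate operation rather than a placeholder (the names are illustrative, not from the paper's code).

```python
# Hedged sketch: a "Zeroise" operation alongside an identity operation, as
# candidate layers in a cell-based search space. Zeroise simply suppresses
# its input entirely, which can be useful in binary networks where noisy
# low-precision activations may hurt more than they help.

def zeroise(x):
    """Return zeros with the same length as the input."""
    return [0.0 for _ in x]

def identity(x):
    """Pass the input through unchanged."""
    return list(x)

# A searched cell picks one operation per edge from the candidate set.
candidates = {"zeroise": zeroise, "identity": identity}
out = candidates["zeroise"]([3.0, -1.0, 2.5])
print(out)  # all activations suppressed
```

Keeping Zeroise as a selectable operation lets the search prune connections whose binary activations carry more noise than signal.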
no code implementations • 3 Feb 2019 • Dahyun Kim, Jihwan Bae, Yeonsik Jo, Jonghyun Choi
Incremental learning suffers from two challenging problems: forgetting of old knowledge and intransigence in learning new knowledge.