Search Results for author: Soyeon Kim

Found 12 papers, 4 papers with code

Example-Based Concept Analysis Framework for Deep Weather Forecast Models

no code implementations • 1 Apr 2025 • Soyeon Kim, JunHo Choi, SuBeen Lee, Jaesik Choi

To fill this gap, we follow a user-centric process to develop an example-based concept analysis framework, which identifies cases that follow an inference process similar to that of the target instance in a target model and presents them in a user-comprehensible format.

Weather Forecasting
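
For the concept-analysis entry above, the following is a minimal sketch of what "identifying cases that follow a similar inference process" could look like in practice, assuming similarity is measured over intermediate activations of the target model; the function name, layer choice, and distance measure are illustrative stand-ins, not the paper's actual method:

    # Minimal sketch (not the paper's method): retrieve reference cases whose
    # intermediate activations are closest to those of the target instance, as one
    # way to surface examples that follow a "similar inference process".
    import numpy as np

    def retrieve_similar_cases(target_activation, candidate_activations, k=5):
        """Return indices of the k candidates nearest to the target in activation space.

        target_activation: 1-D array of hidden features for the instance being explained.
        candidate_activations: 2-D array (n_cases, n_features) of hidden features
                               extracted from the same layer for the reference cases.
        """
        diffs = candidate_activations - target_activation   # broadcast over cases
        dists = np.linalg.norm(diffs, axis=1)                # Euclidean distance per case
        return np.argsort(dists)[:k]                         # k most similar cases

    # Toy usage with random stand-in features:
    rng = np.random.default_rng(0)
    target = rng.normal(size=64)
    candidates = rng.normal(size=(1000, 64))
    print(retrieve_similar_cases(target, candidates, k=3))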

Diverse Rare Sample Generation with Pretrained GANs

1 code implementation • 27 Dec 2024 • SuBeen Lee, Jiyeon Han, Soyeon Kim, Jaesik Choi

This study proposes a novel approach for generating diverse rare samples from high-resolution image datasets with pretrained GANs.

Density Estimation • Diversity

Capsule Neural Networks as Noise Stabilizer for Time Series Data

no code implementations • 20 Mar 2024 • Soyeon Kim, Jihyeon Seong, Hyunkyung Han, Jaesik Choi

In this paper, we investigate the effectiveness of CapsNets in analyzing highly sensitive and noisy time series sensor data.

Adversarial Attack • Time Series +1

ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models

1 code implementation • 8 Mar 2024 • Jio Oh, Soyeon Kim, Junseok Seo, Jindong Wang, Ruochen Xu, Xing Xie, Steven Euijong Whang

Unlike knowledge graphs, which are also used to evaluate LLMs, relational databases have integrity constraints that can be used to better construct complex in-depth questions and verify answers: (1) functional dependencies can be used to pinpoint critical keywords that an LLM must know to properly answer a given question containing certain attribute values; and (2) foreign key constraints can be used to join relations and construct multi-hop questions, which can be arbitrarily long and used to debug intermediate answers.

Attribute • Hallucination +2
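
For the ERBench entry above, the following is a hedged illustration of how integrity constraints can drive question construction and answer verification; the schema, data, and question template are invented for this sketch, not taken from ERBench, but they show how a functional dependency pins down the attribute an LLM must know and how a foreign-key join yields a multi-hop question whose ground-truth answer is checkable against the database:

    # Hedged illustration (schema and question template are invented for this sketch):
    # a foreign key lets us join two relations to build a multi-hop question whose
    # ground-truth answer is verifiable from the database itself.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.executescript("""
        CREATE TABLE director (name TEXT PRIMARY KEY, birth_year INTEGER);
        CREATE TABLE movie (
            title TEXT, year INTEGER, director_name TEXT,
            PRIMARY KEY (title, year),  -- functional dependency: (title, year) -> director
            FOREIGN KEY (director_name) REFERENCES director(name)
        );
        INSERT INTO director VALUES ('Bong Joon-ho', 1969);
        INSERT INTO movie VALUES ('Parasite', 2019, 'Bong Joon-ho');
    """)

    title, year = "Parasite", 2019
    question = f"In what year was the director of the movie '{title}' ({year}) born?"

    # Joining along the foreign key yields the verifiable ground truth.
    cur.execute("""
        SELECT d.birth_year FROM movie m
        JOIN director d ON m.director_name = d.name
        WHERE m.title = ? AND m.year = ?
    """, (title, year))
    ground_truth = cur.fetchone()[0]
    print(question, "->", ground_truth)   # -> 1969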

DaG LLM ver 1.0: Pioneering Instruction-Tuned Language Modeling for Korean NLP

no code implementations • 23 Nov 2023 • Dongjun Jang, Sangah Lee, Sungjoo Byun, Jinwoong Kim, Jean Seo, Minseok Kim, Soyeon Kim, Chaeyoung Oh, Jaeyoon Kim, Hyemi Jo, Hyopil Shin

This paper presents the DaG LLM (David and Goliath Large Language Model), a language model specialized for Korean and fine-tuned through Instruction Tuning across 41 tasks within 13 distinct categories.

Language Modeling • Language Modelling +1

Conditional Temporal Neural Processes with Covariance Loss

1 code implementation • ICML 2021 • Boseon Yoo, Jiwoo Lee, Janghoon Ju, Seijun Chung, Soyeon Kim, Jaesik Choi

We introduce a novel loss function, Covariance Loss, which is conceptually equivalent to conditional neural processes and has the form of a regularizer, so that it is applicable to many kinds of neural networks.

Time Series Forecasting • Traffic Prediction
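
For the Covariance Loss entry above, the following is a minimal sketch of a covariance-style regularizer added to an ordinary regression loss, assuming it penalizes the gap between the batch-level similarity structure of the learned features and that of the targets; this is an illustrative stand-in, and the exact formulation in the paper may differ:

    # Minimal sketch (assumed form, not the paper's exact Covariance Loss): an MSE
    # term plus a penalty on the difference between the batch similarity matrices
    # of the hidden features and of the targets.
    import torch
    import torch.nn.functional as F

    def batch_gram(x):
        """n x n similarity matrix over the batch after centering each column."""
        x = x - x.mean(dim=0, keepdim=True)
        return x @ x.T / x.shape[1]

    def loss_with_covariance_regularizer(features, predictions, targets, lam=0.1):
        mse = F.mse_loss(predictions, targets)
        gap = batch_gram(features) - batch_gram(targets)   # both n x n
        return mse + lam * (gap ** 2).mean()               # Frobenius-style penalty

    # Toy usage with random tensors standing in for a network's hidden features:
    feats = torch.randn(32, 16, requires_grad=True)
    preds = feats @ torch.randn(16, 4)
    targs = torch.randn(32, 4)
    print(loss_with_covariance_regularizer(feats, preds, targs))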
