Search Results for author: Kibum Kim

Found 6 papers, 6 papers with code

Self-Guided Robust Graph Structure Refinement

1 code implementation · 19 Feb 2024 · Yeonjun In, Kanghoon Yoon, Kibum Kim, Kijung Shin, Chanyoung Park

However, we have discovered that existing GSR methods are limited by narrow assumptions, such as assuming clean node features, moderate structural attacks, and the availability of external clean graphs, resulting in restricted applicability in real-world scenarios.

Adaptive Self-training Framework for Fine-grained Scene Graph Generation

1 code implementation · 18 Jan 2024 · Kibum Kim, Kanghoon Yoon, Yeonjun In, Jinyoung Moon, Donghyun Kim, Chanyoung Park

To this end, we introduce a Self-Training framework for SGG (ST-SGG) that assigns pseudo-labels to unannotated triplets, on which the SGG models are then trained.

Graph Generation · Scene Graph Generation
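
As a rough illustration of the pseudo-labeling idea in the abstract above: the sketch below assigns a pseudo-label to an unannotated triplet only when the model's predicate confidence clears a cutoff. The `assign_pseudo_labels` helper and the fixed global `threshold` are hypothetical simplifications for illustration, not the paper's exact procedure (ST-SGG uses a more elaborate adaptive thresholding scheme).

```python
import torch

def assign_pseudo_labels(predicate_logits: torch.Tensor, threshold: float = 0.9):
    """Confidence-thresholded pseudo-labeling for unannotated triplets.

    predicate_logits: (num_triplets, num_predicates) scores from an SGG model.
    Returns a label tensor in which -1 marks triplets left unannotated.
    """
    probs = predicate_logits.softmax(dim=-1)
    confidence, labels = probs.max(dim=-1)
    labels = labels.clone()
    labels[confidence < threshold] = -1  # keep low-confidence triplets unlabeled
    return labels
```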

LLM4SGG: Large Language Models for Weakly Supervised Scene Graph Generation

1 code implementation · 16 Oct 2023 · Kibum Kim, Kanghoon Yoon, Jaehyeong Jeon, Yeonjun In, Jinyoung Moon, Donghyun Kim, Chanyoung Park

Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach that heavily relies on costly annotations.

Few-Shot Learning · Large Language Model · +2

MELT: Mutual Enhancement of Long-Tailed User and Item for Sequential Recommendation

1 code implementation · 17 Apr 2023 · Kibum Kim, Dongmin Hyun, Sukwon Yun, Chanyoung Park

The long-tailed problem is a long-standing challenge in Sequential Recommender Systems (SRS), where it manifests for both users and items.

Sequential Recommendation

Unbiased Heterogeneous Scene Graph Generation with Relation-aware Message Passing Neural Network

1 code implementation · 1 Dec 2022 · Kanghoon Yoon, Kibum Kim, Jinyoung Moon, Chanyoung Park

Recent scene graph generation (SGG) frameworks have focused on learning complex relationships among multiple objects in an image.

Graph Generation · Relation · +2

LTE4G: Long-Tail Experts for Graph Neural Networks

1 code implementation · 22 Aug 2022 · Sukwon Yun, Kibum Kim, Kanghoon Yoon, Chanyoung Park

After training an expert for each balanced subset, we adopt knowledge distillation to obtain two class-wise students, i.e., a Head class student and a Tail class student, each of which is responsible for classifying nodes in the head classes and tail classes, respectively.

Knowledge Distillation · Node Classification
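
As a rough sketch of the distillation step described above: standard soft-target knowledge distillation, i.e., a KL divergence between temperature-softened teacher and student outputs. The `distillation_loss` name and the temperature value are assumptions for illustration; in LTE4G's setup, each class-wise student would receive this signal only from the experts covering its head or tail classes.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-target knowledge distillation (Hinton et al., 2015).

    KL divergence between temperature-softened teacher and student
    class distributions, scaled by T^2 to keep gradient magnitudes
    comparable across temperatures.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```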
