1 code implementation • 19 Feb 2024 • Yeonjun In, Kanghoon Yoon, Kibum Kim, Kijung Shin, Chanyoung Park
However, we have discovered that existing GSR methods are limited by narrow assumptions, such as assuming clean node features, moderate structural attacks, and the availability of external clean graphs, resulting in restricted applicability in real-world scenarios.
1 code implementation • 18 Jan 2024 • Kibum Kim, Kanghoon Yoon, Yeonjun In, Jinyoung Moon, Donghyun Kim, Chanyoung Park
To this end, we introduce a Self-Training framework for SGG (ST-SGG) that assigns pseudo-labels for unannotated triplets based on which the SGG models are trained.
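The core self-training idea of assigning pseudo-labels to unannotated items can be illustrated with a minimal sketch. This is a generic confidence-thresholded pseudo-labeling loop, not the ST-SGG algorithm itself; the threshold value and the toy prediction lists are illustrative assumptions.

```python
# Generic self-training step: keep a model prediction as a pseudo-label only
# when the model is confident enough; leave uncertain items unlabeled.
def pseudo_label(probs, threshold=0.9):
    """Map each class-probability vector to a pseudo-label or None."""
    labels = []
    for p in probs:  # p: predicted class probabilities for one unannotated item
        conf = max(p)
        labels.append(p.index(conf) if conf >= threshold else None)
    return labels

# Three unannotated items with predicted class distributions:
preds = [[0.95, 0.03, 0.02],   # confident -> pseudo-labeled as class 0
         [0.40, 0.35, 0.25],   # uncertain -> left unlabeled
         [0.05, 0.91, 0.04]]   # confident -> pseudo-labeled as class 1
print(pseudo_label(preds))  # [0, None, 1]
```

In a full self-training loop, the pseudo-labeled items would be added to the training set and the model retrained, repeating until convergence.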
1 code implementation • 16 Oct 2023 • Kibum Kim, Kanghoon Yoon, Jaehyeong Jeon, Yeonjun In, Jinyoung Moon, Donghyun Kim, Chanyoung Park
Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach that heavily relies on costly annotations.
1 code implementation • 22 Aug 2023 • JungHoon Kim, Yeonjun In, Kanghoon Yoon, Junmo Lee, Chanyoung Park
Unsupervised GAD methods assume the absence of anomaly labels, i.e., labels indicating whether each node is anomalous or not.
1 code implementation • 24 Jun 2023 • Yeonjun In, Kanghoon Yoon, Chanyoung Park
Recent works demonstrate that GNN models are vulnerable to adversarial attacks, which refer to imperceptible perturbations of the graph structure and node features.
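A structural attack of this kind can be pictured as flipping a small number of adjacency-matrix entries. The sketch below only applies a given set of edge flips; real attacks select which edges to flip (e.g., by gradient-based objectives), so the chosen edges here are an illustrative assumption.

```python
def flip_edges(adj, edges_to_flip):
    """Return a perturbed copy of a symmetric adjacency matrix with the
    given undirected edges flipped (added if absent, removed if present)."""
    out = [row[:] for row in adj]  # copy so the clean graph is untouched
    for i, j in edges_to_flip:
        out[i][j] = 1 - out[i][j]
        out[j][i] = 1 - out[j][i]
    return out

A = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 0]]
P = flip_edges(A, [(0, 2)])  # one edge flip: a small, "imperceptible" budget
print(P)  # [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
```

Keeping the number of flips small relative to the edge count is what makes the perturbation hard to notice while still degrading GNN predictions.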
1 code implementation • 29 May 2023 • Namkyeong Lee, Kanghoon Yoon, Gyoung S. Na, Sein Kim, Chanyoung Park
To do so, we first assume a causal relationship based on the domain knowledge of molecular sciences and construct a structural causal model (SCM) that reveals the relationship between variables.
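The idea of a structural causal model can be shown with a toy example: each variable is a deterministic function of its parents plus exogenous noise. The graph and mechanisms below are illustrative assumptions, not the molecular-science SCM from the paper.

```python
# Toy SCM: a causal feature c drives both a correlated shortcut feature s
# and the label y, so s predicts y spuriously while only c is causal.
def scm_sample(noise_c, noise_s, noise_y):
    c = noise_c              # causal variable (exogenous)
    s = 0.5 * c + noise_s    # shortcut variable, correlated with y via c
    y = 2.0 * c + noise_y    # outcome depends only on the causal variable
    return c, s, y

c, s, y = scm_sample(1.0, 0.1, 0.0)
print(c, s, y)  # 1.0 0.6 2.0
```

Writing the assumed relationships down this way is what lets one reason about which variables a predictor should rely on and which correlations are spurious.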
1 code implementation • 1 Dec 2022 • Kanghoon Yoon, Kibum Kim, Jinyoung Moon, Chanyoung Park
Recent scene graph generation (SGG) frameworks have focused on learning complex relationships among multiple objects in an image.
1 code implementation • 22 Aug 2022 • Sukwon Yun, Kibum Kim, Kanghoon Yoon, Chanyoung Park
After training an expert for each balanced subset, we adopt knowledge distillation to obtain two class-wise students, i.e., a Head class student and a Tail class student, each of which is responsible for classifying nodes in the head classes and tail classes, respectively.
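The distillation step itself can be sketched with the standard soft-target objective: a student matches the teacher's temperature-softened class distribution under a KL divergence. The temperature and logits below are illustrative assumptions; how the head/tail experts and students are defined is specific to the paper.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

loss = kd_loss([2.0, 0.5, 0.1], [2.2, 0.4, 0.2])
print(loss)  # small positive value; zero when student matches teacher
```

Each class-wise student would minimize this loss against its corresponding expert's outputs, restricted to the classes that student is responsible for.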