Search Results for author: Youngwook Kim

Found 6 papers, 3 papers with code

Generalizable Implicit Hate Speech Detection Using Contrastive Learning

1 code implementation COLING 2022 Youngwook Kim, Shinwoo Park, Yo-Sub Han

Identifying implicit hate speech is challenging when lexical cues are insufficient and the hateful meaning is carried by nuance or context.

Contrastive Learning · Hate Speech Detection
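The core idea named here, contrastive learning over labeled examples, can be sketched as a supervised contrastive loss that pulls same-label sentence embeddings together and pushes different-label ones apart. This is a generic sketch, not the authors' actual objective; the embeddings and labels below are hypothetical placeholders:

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss sketch: for each anchor, positives are
    the other samples sharing its label; all remaining samples act as
    negatives in the softmax denominator."""
    # L2-normalize so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = np.sum([np.exp(sim[i, j]) for j in range(n) if j != i])
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / max(count, 1)
```

With embeddings clustered by class, the loss is lower when labels match the clusters than when they are shuffled, which is the signal the training objective exploits.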

Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification

2 code implementations CVPR 2023 Youngwook Kim, Jae Myung Kim, Jieun Jeong, Cordelia Schmid, Zeynep Akata, Jungwoo Lee

Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.

Classification · Multi-Label Classification
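The snippet's idea of boosting attribution scores can be illustrated with a minimal sketch: scale up the positive entries of a class activation map so the partially-labeled model's explanation looks more like the fully-labeled one. The boosting factor and the element-wise form below are assumptions for illustration, not necessarily the paper's exact operation:

```python
import numpy as np

def boost_attribution(cam, alpha=5.0):
    """Scale positive attribution scores in a class activation map (CAM)
    by a factor alpha while leaving non-positive scores untouched.
    `alpha` is a hypothetical hyperparameter."""
    return np.where(cam > 0, alpha * cam, cam)
```

Applied at inference time, such a boost sharpens the regions the model already attends to without introducing new evidence.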

Large Loss Matters in Weakly Supervised Multi-Label Classification

1 code implementation CVPR 2022 Youngwook Kim, Jae Myung Kim, Zeynep Akata, Jungwoo Lee

In this work, we first regard unobserved labels as negative labels, casting the WSML task into noisy multi-label classification.

Classification · Memorization +1
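The "assume unobserved labels are negative, then treat the result as noisy multi-label learning" scheme can be sketched as follows: since false negatives among the assumed negatives tend to incur the largest losses, the largest per-element losses are rejected (masked out of the objective). The rejection rate and NaN-encoding of unobserved labels are illustrative assumptions:

```python
import numpy as np

def bce(p, y):
    """Element-wise binary cross-entropy with clipping for stability."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def large_loss_rejection(probs, observed, reject_rate=0.2):
    """Assume-negative weakly supervised multi-label loss sketch:
    NaN entries in `observed` (unobserved labels) are treated as negatives,
    then the `reject_rate` fraction of assumed negatives with the largest
    losses -- the likeliest false negatives -- are masked out."""
    targets = np.where(np.isnan(observed), 0.0, observed)
    losses = bce(probs, targets)
    mask = np.ones_like(losses)
    assumed_neg = np.isnan(observed)
    k = int(reject_rate * assumed_neg.sum())
    if k > 0:
        # rank only assumed-negative positions; -inf keeps observed ones out
        neg_losses = np.where(assumed_neg, losses, -np.inf)
        idx = np.argsort(neg_losses, axis=None)[-k:]
        mask.flat[idx] = 0.0
    return (losses * mask).sum() / max(mask.sum(), 1)
```

Rejecting the large losses lowers the averaged objective precisely where the assumed-negative targets conflict most with the model's confident predictions.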

Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning

no code implementations NAACL 2022 Yu Jin Kim, Beong-woo Kwak, Youngwook Kim, Reinald Kim Amplayo, Seung-won Hwang, Jinyoung Yeo

Towards this goal, we propose to mitigate the loss of knowledge from the interference among the different knowledge sources, by developing a modular variant of the knowledge aggregation as a new zero-shot commonsense reasoning framework.

Knowledge Graphs · Transfer Learning
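The modular aggregation idea can be illustrated in miniature: rather than jointly training on mixed knowledge sources (where they can interfere), keep one expert per knowledge graph and combine only their output scores. The expert names, score shapes, and uniform weighting below are purely illustrative assumptions:

```python
import numpy as np

def modular_aggregate(expert_scores, weights=None):
    """Combine answer scores from experts, each trained on a different
    knowledge graph (e.g. one per source such as ConceptNet or ATOMIC).
    Aggregating outputs keeps the knowledge modules separate, avoiding
    interference between sources during training."""
    scores = np.stack(expert_scores)  # (num_experts, num_answers)
    if weights is None:
        weights = np.full(len(expert_scores), 1.0 / len(expert_scores))
    return weights @ scores           # weighted average score per answer
```

A zero-shot prediction is then just the argmax over aggregated answer scores.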

Dual Task Framework for Improving Persona-grounded Dialogue Dataset

no code implementations 11 Feb 2022 Minju Kim, Beong-woo Kwak, Youngwook Kim, Hong-in Lee, Seung-won Hwang, Jinyoung Yeo

This paper introduces a simple yet effective data-centric approach for the task of improving persona-conditioned dialogue agents.

Benchmarking

TrustAL: Trustworthy Active Learning using Knowledge Distillation

no code implementations 26 Jan 2022 Beong-woo Kwak, Youngwook Kim, Yu Jin Kim, Seung-won Hwang, Jinyoung Yeo

A traditional view of data acquisition is that, through iterations, knowledge from human labels and models is implicitly distilled to monotonically increase the accuracy and label consistency.

Active Learning · Knowledge Distillation
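Knowledge distillation inside an active-learning loop can be sketched, under heavy assumptions, as training each new learner against both the human label and the previous iteration's softened predictions. The mixing weight `lam` and the loss form below are hypothetical, not TrustAL's actual formulation:

```python
import numpy as np

def distill_loss(student_probs, human_label, teacher_probs, lam=0.5):
    """Blend of (1) cross-entropy to the acquired human label and
    (2) a distillation term: cross-entropy to the previous model's
    predictive distribution. `lam` is a hypothetical mixing weight."""
    eps = 1e-7
    s = np.clip(student_probs, eps, 1.0)
    ce = -np.log(s[human_label])             # fit the human label
    kd = -(teacher_probs * np.log(s)).sum()  # fit the teacher distribution
    return (1 - lam) * ce + lam * kd
```

With `lam=0` this reduces to ordinary supervised training on the acquired labels; larger `lam` preserves more of the previous model's knowledge across acquisition rounds.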
