Search Results for author: Nakyeong Yang

Found 7 papers, 1 paper with code

Persona is a Double-edged Sword: Mitigating the Negative Impact of Role-playing Prompts in Zero-shot Reasoning Tasks

no code implementations • 16 Aug 2024 • Junseok Kim, Nakyeong Yang, Kyomin Jung

Jekyll & Hyde then collects two potential solutions, one from a role-playing prompt and one from a neutral prompt, and selects the better solution using an LLM evaluator.

Position
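As a rough illustration of the selection step described in the snippet above, here is a minimal sketch. It is a reconstruction, not the authors' code: `call_llm` is a hypothetical stand-in for any chat-completion client, and the prompt wording is illustrative rather than the paper's template.

```python
# Minimal sketch of the Jekyll & Hyde selection step, assuming a
# generic chat-completion interface. `call_llm` is hypothetical.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def jekyll_and_hyde(question: str, persona: str) -> str:
    # 1. Collect two candidate solutions: one from a role-playing
    #    (persona) prompt, one from a neutral prompt.
    role_playing = call_llm(f"You are {persona}. Solve step by step:\n{question}")
    neutral = call_llm(f"Solve step by step:\n{question}")

    # 2. Use the LLM itself as an evaluator to pick the better solution.
    verdict = call_llm(
        "Which solution answers the question more correctly? "
        "Reply with 'A' or 'B' only.\n"
        f"Question: {question}\nSolution A: {role_playing}\nSolution B: {neutral}"
    )
    return role_playing if verdict.strip().upper().startswith("A") else neutral
```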

Skeleton: A New Framework for Accelerating Language Models via Task Neuron Localized Prompt Tuning

no code implementations • 18 Apr 2024 • Nakyeong Yang, Jiwon Moon, Junseok Kim, Yunah Jang, Kyomin Jung

Prompt tuning methods have shown performance comparable to conventional fine-tuning as parameter-efficient fine-tuning (PEFT) approaches across various natural language understanding tasks.

Language Modeling • Language Modelling • +2
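For context on the PEFT setting the snippet refers to, here is a minimal soft prompt tuning sketch in PyTorch, assuming a generic frozen transformer that accepts input embeddings. It illustrates plain prompt tuning, not the paper's task-neuron localization.

```python
# Soft prompt tuning sketch: freeze the backbone, train only a small
# set of prompt embeddings prepended to the input. Assumes `model`
# takes embeddings of shape (batch, seq, d_model).
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, model: nn.Module, n_tokens: int, d_model: int):
        super().__init__()
        self.model = model
        for p in self.model.parameters():
            p.requires_grad = False          # freeze the backbone
        # Only these prompt embeddings receive gradient updates.
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.model(torch.cat([prompt, input_embeds], dim=1))
```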

Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination

no code implementations • 16 Nov 2023 • Nakyeong Yang, Taegwan Kang, JungKyu Choi, Honglak Lee, Kyomin Jung

Furthermore, we propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings.

Instruction Following • Language Modelling
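The elimination step the snippet mentions can be sketched as follows, assuming per-neuron bias attribution scores have already been computed; CRISPR's actual scoring procedure is in the paper, so this only illustrates the idea of zeroing out high-attribution neurons.

```python
# Hedged sketch of bias-neuron elimination: silence the FFN rows with
# the highest attribution to a bias. Scoring is assumed done elsewhere.
import torch

def eliminate_bias_neurons(ffn_weight: torch.Tensor,
                           bias_scores: torch.Tensor,
                           k: int) -> torch.Tensor:
    # ffn_weight: (hidden_dim, d_model); one row per intermediate neuron.
    # bias_scores: (hidden_dim,) attribution of each neuron to the bias.
    top_k = torch.topk(bias_scores, k).indices
    pruned = ffn_weight.clone()
    pruned[top_k] = 0.0                      # zero out the bias neurons
    return pruned
```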

MVMR: A New Framework for Evaluating Faithfulness of Video Moment Retrieval against Multiple Distractors

1 code implementation • 15 Aug 2023 • Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, Kyomin Jung

However, the existing VMR framework evaluates video moment retrieval performance under the assumption that the ground-truth video is given, which may not reveal whether models exhibit overconfidence when presented with a false video.

Contrastive Learning • Misinformation • +4
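A minimal sketch of the distractor-set evaluation this motivates: score a query against the positive video plus sampled distractors and check that the top-ranked moment comes from the positive video. `score_moments` is a hypothetical VMR model interface, not the paper's released code.

```python
# Faithfulness check against distractor videos. `score_moments` is a
# hypothetical callable: score_moments(query, video_id) -> best moment
# score found within that video.
from typing import Callable, List

def faithful_at_1(query: str,
                  positive: str,
                  distractors: List[str],
                  score_moments: Callable[[str, str], float]) -> bool:
    pool = [positive] + distractors
    scores = [(score_moments(query, v), v) for v in pool]
    best_video = max(scores)[1]              # video holding the top moment
    return best_video == positive
```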

Multi-View Zero-Shot Open Intent Induction from Dialogues: Multi Domain Batch and Proxy Gradient Transfer

no code implementations • 23 Mar 2023 • Hyukhun Koh, Haesung Pyun, Nakyeong Yang, Kyomin Jung

In a Task-Oriented Dialogue (TOD) system, detecting and inducing new intents are two main challenges in applying the system to the real world.
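For readers unfamiliar with intent induction, a generic clustering baseline looks like the sketch below: group utterance representations and treat each cluster as an induced intent. This is only a baseline for orientation, not the paper's multi-view or proxy gradient transfer method.

```python
# Generic intent-induction baseline: cluster utterance vectors and
# treat each cluster as one induced intent. TF-IDF keeps the example
# self-contained; a sentence encoder would typically be used instead.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def induce_intents(utterances, n_intents: int = 5):
    embeddings = TfidfVectorizer().fit_transform(utterances)
    labels = KMeans(n_clusters=n_intents, n_init=10).fit_predict(embeddings)
    clusters = {}
    for utt, label in zip(utterances, labels):
        clusters.setdefault(label, []).append(utt)
    return clusters  # each cluster is one induced intent
```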

Task-specific Compression for Multi-task Language Models using Attribution-based Pruning

no code implementations • 9 May 2022 • Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung

However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task.

Natural Language Understanding
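The attribution-based pruning in the title can be sketched roughly as follows: score each neuron by the magnitude of activation times gradient on task data, then keep only the highest-scoring fraction. The paper's exact attribution measure and pruning granularity may differ; this only illustrates the general recipe.

```python
# Hedged sketch of attribution-based pruning. Activations and their
# gradients are assumed captured via forward/backward hooks.
import torch

def attribution_scores(activations: torch.Tensor,
                       grads: torch.Tensor) -> torch.Tensor:
    # activations, grads: (batch, hidden_dim) for one FFN layer.
    return (activations * grads).abs().mean(dim=0)

def keep_mask(scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    k = max(1, int(scores.numel() * keep_ratio))
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[torch.topk(scores, k).indices] = True
    return mask  # apply to FFN rows to drop low-attribution neurons
```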
