Search Results for author: Yunah Jang

Found 5 papers, 1 paper with code

Skeleton: A New Framework for Accelerating Language Models via Task Neuron Localized Prompt Tuning

no code implementations • 18 Apr 2024 • Nakyeong Yang, Jiwon Moon, Junseok Kim, Yunah Jang, Kyomin Jung

Prompt tuning, a parameter-efficient fine-tuning (PEFT) approach, has shown performance comparable to general training methods in various natural language understanding tasks.

Language Modelling • Natural Language Understanding • +1
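The entry above concerns prompt tuning as a parameter-efficient fine-tuning method. Below is a minimal sketch of generic soft prompt tuning, not the paper's Skeleton framework; the backbone model, prompt length, and classification head are illustrative assumptions:

```python
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification

class SoftPromptModel(nn.Module):
    """Generic soft prompt tuning: train only a small prompt embedding,
    keep the backbone frozen. Illustrative sketch, not Skeleton."""

    def __init__(self, model_name="bert-base-uncased", prompt_len=20, num_labels=2):
        super().__init__()
        self.backbone = AutoModelForSequenceClassification.from_pretrained(
            model_name, num_labels=num_labels)
        for p in self.backbone.parameters():      # freeze all backbone weights
            p.requires_grad = False
        hidden = self.backbone.config.hidden_size
        # Trainable soft prompt: the only parameters updated during tuning.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask, labels=None):
        embeds = self.backbone.get_input_embeddings()(input_ids)
        batch = embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        embeds = torch.cat([prompt, embeds], dim=1)            # prepend soft prompt
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 device=attention_mask.device,
                                 dtype=attention_mask.dtype)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.backbone(inputs_embeds=embeds,
                             attention_mask=attention_mask, labels=labels)
```

Because only self.prompt receives gradients, the optimizer state and saved checkpoints stay small relative to full fine-tuning.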

IterCQR: Iterative Conversational Query Reformulation with Retrieval Guidance

1 code implementation • 16 Nov 2023 • Yunah Jang, Kang-il Lee, Hyunkyung Bae, Hwanhee Lee, Kyomin Jung

To address these challenges, we propose Iterative Conversational Query Reformulation (IterCQR), a methodology that conducts query reformulation without relying on human rewrites.

Conversational Search • Information Retrieval • +1
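The entry above describes reformulating conversational queries iteratively under retrieval guidance rather than human rewrites. A hedged sketch of that general idea follows; it is not the authors' IterCQR training procedure, and generate_rewrites and retrieval_score are hypothetical callables the caller must supply:

```python
from typing import Callable, List

def iterative_query_reformulation(
    conversation: List[str],
    generate_rewrites: Callable[[List[str]], List[str]],  # hypothetical: propose candidate standalone queries
    retrieval_score: Callable[[str], float],              # hypothetical: relevance score from a retriever
    iterations: int = 3,
) -> str:
    """Illustrative loop: generate candidate reformulations of the
    conversational query, keep the one the retriever scores highest,
    and use it to guide the next round. Not the IterCQR algorithm."""
    best_query = conversation[-1]              # start from the raw last user turn
    for _ in range(iterations):
        candidates = generate_rewrites(conversation + [best_query])
        candidates.append(best_query)          # never discard the current best
        best_query = max(candidates, key=retrieval_score)
    return best_query
```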

Task-specific Compression for Multi-task Language Models using Attribution-based Pruning

no code implementations • 9 May 2022 • Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung

However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task.

Natural Language Understanding
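The entry above concerns pruning a multi-task language model down to the parameters that matter for a single task. A generic first-order attribution pruning sketch is shown below; it is not the paper's exact attribution method, and the weight-times-gradient scoring rule and sparsity level are assumptions:

```python
import torch
import torch.nn as nn

def attribution_prune(model: nn.Module, loss: torch.Tensor, sparsity: float = 0.5):
    """Generic attribution-style pruning sketch: score each weight by
    |weight * gradient of the task loss| on task data and zero out the
    lowest-scoring fraction. Assumes `loss` was computed on a task batch."""
    loss.backward()                       # populate .grad with task gradients
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None or p.dim() < 2:
                continue                  # skip biases and unpruned parameters
            score = (p * p.grad).abs()    # first-order attribution per weight
            k = int(score.numel() * sparsity)
            if k == 0:
                continue
            threshold = score.flatten().kthvalue(k).values
            p.mul_((score > threshold).to(p.dtype))  # zero low-attribution weights
    model.zero_grad()
```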
