Search Results for author: Ting-Yun Chang

Found 12 papers, 7 papers with code

When Parts Are Greater Than Sums: Individual LLM Components Can Outperform Full Models

1 code implementation • 19 Jun 2024 • Ting-Yun Chang, Jesse Thomason, Robin Jia

This paper studies in-context learning by decomposing the output of large language models into the individual contributions of attention heads and MLPs (components).

In-Context Learning
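
The decomposition described above relies on attention heads and MLPs writing additively into the residual stream, so the full model's logits split exactly into per-component contributions. The toy NumPy sketch below illustrates that arithmetic with made-up shapes and variable names; it is not the paper's released code and ignores the final layer norm.

```python
# Toy illustration: a transformer's output logits as a sum of per-component logits.
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 16, 50       # toy dimensions
n_components = 8              # e.g., attention heads + MLPs

# Each row is what one component added to the residual stream at the final position.
component_outputs = rng.normal(size=(n_components, d_model))
W_U = rng.normal(size=(d_model, vocab))            # unembedding matrix

full_logits = component_outputs.sum(axis=0) @ W_U  # logits of the full model
per_component_logits = component_outputs @ W_U     # one logit vector per component

# The full model's logits equal the sum of component contributions, so each
# component can also be evaluated on its own (e.g., for in-context learning).
assert np.allclose(full_logits, per_component_logits.sum(axis=0))
```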

Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks

2 code implementations • 15 Nov 2023 • Ting-Yun Chang, Jesse Thomason, Robin Jia

Even successful localization methods identify neurons that are not specific to a single memorized sequence.

Benchmarking, Network Pruning

Data Curation Alone Can Stabilize In-context Learning

1 code implementation • 20 Dec 2022 • Ting-Yun Chang, Robin Jia

Across five tasks and two LLMs, sampling from stable subsets selected by CondAcc and Datamodels improves average accuracy over sampling from the entire training set by 7.7% and 6.3%, respectively.

Diversity, In-Context Learning, +1
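
As a rough, hypothetical reading of the selection idea in the excerpt above, one can score each candidate demonstration by the average dev-set accuracy of random prompts that contain it, then draw demonstrations only from a top-scoring "stable subset". The sketch below is a generic Python illustration; `eval_prompt` and all other names are stand-ins, not the paper's released implementation.

```python
import random
from collections import defaultdict

def cond_acc_scores(pool, eval_prompt, k=4, n_trials=200, seed=0):
    """Score each example by the mean accuracy of random size-k prompts containing it.
    `eval_prompt(demos)` is caller-supplied and should return dev-set accuracy."""
    rng = random.Random(seed)
    sums, counts = defaultdict(float), defaultdict(int)
    for _ in range(n_trials):
        demos = rng.sample(pool, k)
        acc = eval_prompt(demos)
        for ex in demos:                 # credit the prompt's accuracy to each example used
            sums[ex] += acc
            counts[ex] += 1
    return {ex: sums[ex] / counts[ex] for ex in pool if counts[ex]}

def stable_subset(scores, frac=0.25):
    """Keep the top-scoring fraction of examples; sample demonstrations from it."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[: max(1, int(frac * len(ranked)))]

# Toy usage with a dummy evaluator (a real run would query the LLM on a dev set).
pool = [f"example-{i}" for i in range(20)]
scores = cond_acc_scores(pool, eval_prompt=lambda demos: random.random())
print(stable_subset(scores)[:5])
```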

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

1 code implementation • 18 Jun 2022 • Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason

Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks.

Continual Learning, Transfer Learning

Rethinking Why Intermediate-Task Fine-Tuning Works

1 code implementation • Findings (EMNLP) 2021 • Ting-Yun Chang, Chi-Jen Lu

Supplementary Training on Intermediate Labeled-data Tasks (STILTs) is a widely applied technique that first fine-tunes a pretrained language model on an intermediate task before fine-tuning it on the target task of interest.
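
A minimal PyTorch sketch of the STILTs recipe as described: fine-tune a shared encoder on the intermediate labeled task first, then attach a fresh head and fine-tune again on the target task. The encoder, data loaders, and label counts below are assumed placeholders, not the paper's actual setup.

```python
import torch
from torch import nn

def fine_tune(encoder, head, loader, epochs=3, lr=2e-5):
    """Jointly train encoder + head on one task; return the adapted encoder."""
    model = nn.Sequential(encoder, head)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return encoder  # encoder now carries task-adapted weights

# Stage 1: intermediate task; Stage 2: target task with a new classification head.
# `encoder`, `intermediate_loader`, `target_loader`, and label counts are assumptions.
# encoder = fine_tune(encoder, nn.Linear(hidden, n_intermediate_labels), intermediate_loader)
# encoder = fine_tune(encoder, nn.Linear(hidden, n_target_labels), target_loader)
```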

Go Beyond Plain Fine-tuning: Improving Pretrained Models for Social Commonsense

no code implementations • 12 May 2021 • Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur

Towards improving language models' social intelligence, we focus on the Social IQA dataset, a task requiring social and emotional commonsense reasoning.

TinyGAN: Distilling BigGAN for Conditional Image Generation

1 code implementation • 29 Sep 2020 • Ting-Yun Chang, Chi-Jen Lu

Generative Adversarial Networks (GANs) have become a powerful approach for generative image modeling.

Conditional Image Generation, Knowledge Distillation

xSense: Learning Sense-Separated Sparse Representations and Textual Definitions for Explainable Word Sense Networks

1 code implementation • 10 Sep 2018 • Ting-Yun Chang, Ta-Chung Chi, Shang-Chi Tsai, Yun-Nung Chen

This paper focuses on interpreting word embeddings from several aspects, including sense separation in the vector dimensions and definition generation.

Word Embeddings, Word Sense Disambiguation
