Activation patching studies a model's computation by altering its latent representations (e.g., the token embeddings or intermediate hidden states in transformer-based language models) during inference.
Source: Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models
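As an illustration, here is a minimal sketch of one common way to implement activation patching with PyTorch forward hooks, assuming GPT-2 from Hugging Face `transformers`; the layer index, token position, and prompts are hypothetical choices for demonstration, not from the source paper.

```python
# Minimal activation-patching sketch (assumptions: GPT-2, layer 6, last token).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")

LAYER, POS = 6, -1  # hypothetical: patch the last token's hidden state at block 6

# 1) Run the "source" prompt and cache the latent at (LAYER, POS).
cache = {}
def save_hook(module, inputs, output):
    # GPT-2 blocks return a tuple; output[0] is the hidden states.
    cache["resid"] = output[0][:, POS, :].detach().clone()

handle = model.transformer.h[LAYER].register_forward_hook(save_hook)
with torch.no_grad():
    model(**tok("The Eiffel Tower is in Paris", return_tensors="pt"))
handle.remove()

# 2) Run the "target" prompt, overwriting that latent with the cached one.
def patch_hook(module, inputs, output):
    hidden = output[0]
    hidden[:, POS, :] = cache["resid"]  # the intervention: swap in the cached latent
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    logits = model(**tok("The Colosseum is in", return_tensors="pt")).logits
handle.remove()

# Inspect how the patched latent changes the next-token prediction.
print(tok.decode(logits[0, -1].argmax()))
```

Comparing the patched prediction against an unpatched run of the same target prompt shows how much the cached representation steers the output.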
| Task | Papers | Share |
|---|---|---|
| Time Series Forecasting | 6 | 10.71% |
| Language Modelling | 3 | 5.36% |
| Vulnerability Detection | 2 | 3.57% |
| Question Answering | 2 | 3.57% |
| Safety Alignment | 2 | 3.57% |
| Image Classification | 2 | 3.57% |
| Multivariate Time Series Forecasting | 2 | 3.57% |
| C++ code | 1 | 1.79% |
| Code Translation | 1 | 1.79% |