Search Results for author: Natalia Frumkin

Found 3 papers, 1 paper with code

Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers

1 code implementation • ICCV 2023 • Natalia Frumkin, Dibakar Gope, Diana Marculescu

Evol-Q improves the top-1 accuracy of a fully quantized ViT-Base by $10.30\%$, $0.78\%$, and $0.15\%$ for $3$-bit, $4$-bit, and $8$-bit weight quantization levels.

Quantization
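For context on what the bit widths above mean, here is a minimal sketch of uniform symmetric weight quantization at a given bit width. This is only an illustration of the baseline operation, not Evol-Q itself, which additionally searches quantization scales with evolutionary optimization; the function name and shapes are hypothetical.

```python
# Minimal sketch of uniform symmetric weight quantization (not Evol-Q's method).
import torch

def quantize_weights(w: torch.Tensor, num_bits: int) -> torch.Tensor:
    """Fake-quantize a weight tensor to `num_bits` with a per-tensor scale."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax               # symmetric per-tensor scale
    q = torch.clamp(torch.round(w / scale), qmin, qmax)
    return q * scale                           # dequantize back to float for simulation

w = torch.randn(768, 768)                      # e.g. a ViT-Base projection matrix
for bits in (3, 4, 8):
    err = (w - quantize_weights(w, bits)).abs().mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")
```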

MobileTL: On-device Transfer Learning with Inverted Residual Blocks

no code implementations • 5 Dec 2022 • Hung-Yueh Chiang, Natalia Frumkin, Feng Liang, Diana Marculescu

MobileTL trains the shifts for internal normalization layers to avoid storing activation maps for the backward pass.

Transfer Learning
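The idea described above, training only the shift (bias) terms of normalization layers so their input activation maps need not be stored for the backward pass, can be sketched as below. This is a hedged illustration using a standard torchvision backbone, not MobileTL's actual code; the model choice and layer selection are assumptions.

```python
# Hypothetical sketch: freeze all parameters except normalization-layer shifts.
import torch.nn as nn
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights=None)

for p in model.parameters():
    p.requires_grad = False                    # freeze everything by default

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.bias.requires_grad = True            # train only the shift term

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} trainable tensors, e.g. {trainable[0]}")
```

The gradient of a normalization layer's shift depends only on the upstream gradient, not on the layer's input, which is why limiting training to the shifts avoids storing those activation maps.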

CPT-V: A Contrastive Approach to Post-Training Quantization of Vision Transformers

no code implementations • 17 Nov 2022 • Natalia Frumkin, Dibakar Gope, Diana Marculescu

Borrowing the idea of contrastive loss from self-supervised learning, we find a robust way to jointly minimize a loss function using just 1,000 calibration images.

Quantization, Self-Supervised Learning
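As a rough sketch of the contrastive idea mentioned above, one can contrast features of the quantized model against features of the full-precision model on the same calibration images with an InfoNCE-style objective. This is an assumption-laden illustration in the spirit of CPT-V, not the paper's exact loss; the function and temperature are hypothetical.

```python
# Hedged sketch of an InfoNCE-style calibration loss between full-precision
# and quantized features of the same images (not CPT-V's exact formulation).
import torch
import torch.nn.functional as F

def contrastive_calibration_loss(fp_feats: torch.Tensor,
                                 q_feats: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """fp_feats, q_feats: (batch, dim) outputs of the FP and quantized models."""
    fp = F.normalize(fp_feats, dim=-1)
    q = F.normalize(q_feats, dim=-1)
    logits = q @ fp.t() / temperature          # similarity of each quantized feature to all FP features
    targets = torch.arange(q.size(0))          # positive pair: the same image's FP feature
    return F.cross_entropy(logits, targets)

# Random features stand in for model outputs on a small calibration batch.
loss = contrastive_calibration_loss(torch.randn(32, 768), torch.randn(32, 768))
print(loss.item())
```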
