Search Results for author: Klaudia Bałazy

Found 8 papers, 8 papers with code

Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA

1 code implementation • 17 Feb 2025 • Patryk Marszałek, Klaudia Bałazy, Jacek Tabor, Tomasz Kuśmierczyk

Low-Rank Adaptation (LoRA) enables parameter-efficient fine-tuning of large language models by decomposing weight updates into low-rank matrices, significantly reducing storage and computational overhead.

Computational Efficiency • parameter-efficient fine-tuning • +1
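The abstract above describes the core LoRA idea: the frozen pretrained weight matrix is augmented with a trainable product of two low-rank matrices. The snippet below is a minimal, illustrative sketch of that decomposition, not the code released with this paper; the class name `LoRALinear`, the rank `r=8`, and the scaling `alpha/r` are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        # Low-rank factors: delta_W = B @ A, only r * (in + out) trainable parameters
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init => no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))             # behaves like the base layer plus the low-rank update
```

With rank 8 on a 768x768 layer, the update adds roughly 12k trainable parameters instead of ~590k, which is the storage and compute saving the abstract refers to.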

LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters

1 code implementation • 27 May 2024 • Klaudia Bałazy, Mohammadreza Banaei, Karl Aberer, Jacek Tabor

The rapid expansion of large language models (LLMs) has underscored the need for parameter-efficient fine-tuning methods, with LoRA (Low-Rank Adaptation) emerging as a popular solution.

Benchmarking • GSM8K • +2

Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

1 code implementation • 10 Feb 2023 • Piotr Gaiński, Klaudia Bałazy

We propose a novel gradient-based attack against transformer-based language models that searches for an adversarial example in a continuous space of token probabilities.

Adversarial Text • Quantization
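The abstract mentions searching for adversarial examples in a continuous space of token probabilities. The sketch below only illustrates that relaxation idea on a toy model, not the paper's multi-step quantization procedure: tokens are represented as rows on the probability simplex, the input embedding becomes a probability-weighted average so gradients flow to the token distribution, and a final argmax quantizes back to discrete tokens. The tiny embedding/classifier stand-in for the victim model and all sizes are assumptions.

```python
import torch
import torch.nn.functional as F

# Toy victim model: 100-token vocab, 16-dim embeddings, frozen linear classifier.
vocab_size, emb_dim, seq_len = 100, 16, 6
embedding = torch.nn.Embedding(vocab_size, emb_dim)
classifier = torch.nn.Linear(emb_dim, 2)
for p in list(embedding.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)

def model_logits(soft_tokens):
    # soft_tokens: (seq_len, vocab_size), each row a distribution over the vocabulary.
    # The expected embedding makes the input differentiable w.r.t. token probabilities.
    expected_emb = soft_tokens @ embedding.weight          # (seq_len, emb_dim)
    return classifier(expected_emb.mean(dim=0, keepdim=True))

original_ids = torch.randint(0, vocab_size, (seq_len,))
target_label = torch.tensor([1])

# Start from one-hot vectors of the original tokens and optimise a relaxed distribution.
token_logits = torch.log(F.one_hot(original_ids, vocab_size).float() + 1e-6).requires_grad_(True)
optimizer = torch.optim.Adam([token_logits], lr=0.1)

for step in range(50):
    soft_tokens = F.softmax(token_logits, dim=-1)          # continuous token distribution
    loss = F.cross_entropy(model_logits(soft_tokens), target_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

adversarial_ids = token_logits.argmax(dim=-1)              # quantize back to discrete tokens
```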

Zero Time Waste: Recycling Predictions in Early Exit Neural Networks

1 code implementation • NeurIPS 2021 • Maciej Wołczyk, Bartosz Wójcik, Klaudia Bałazy, Igor Podolak, Jacek Tabor, Marek Śmieja, Tomasz Trzciński

The problem of reducing processing time of large deep learning models is a fundamental challenge in many real-world applications.

Finding the Optimal Network Depth in Classification Tasks

1 code implementation • 17 Apr 2020 • Bartosz Wójcik, Maciej Wołczyk, Klaudia Bałazy, Jacek Tabor

We develop a fast end-to-end method for training lightweight neural networks using multiple classifier heads.

Classification • General Classification
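The abstract describes training a network end to end with multiple classifier heads. The sketch below shows one plausible way to attach a head after every backbone block and train all heads jointly; it is only an assumption-based illustration of that setup, not the paper's method, and the class name `MultiHeadDepthNet` and all layer sizes are invented for the example.

```python
import torch
import torch.nn as nn

class MultiHeadDepthNet(nn.Module):
    """Backbone blocks with a classifier head after each block (illustrative sketch).

    Comparing the heads' losses gives a signal about how many blocks are actually
    needed, so unnecessary depth can be identified during a single training run."""

    def __init__(self, in_dim=32, hidden=64, num_blocks=4, num_classes=10):
        super().__init__()
        dims = [in_dim] + [hidden] * num_blocks
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU()) for i in range(num_blocks)
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, num_classes) for _ in range(num_blocks))

    def forward(self, x):
        outputs = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            outputs.append(head(x))          # one prediction per depth
        return outputs

model = MultiHeadDepthNet()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
losses = [nn.functional.cross_entropy(out, y) for out in model(x)]
total_loss = sum(losses)                     # all heads trained jointly, end to end
total_loss.backward()
```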
