Search Results for author: Teresa Dorszewski

Found 3 papers, 0 papers with code

Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks

no code implementations · 10 Sep 2024 · Teresa Dorszewski, Lenka Tětková, Lorenz Linhardt, Lars Kai Hansen

Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems.

How Redundant Is the Transformer Stack in Speech Representation Models?

no code implementations · 10 Sep 2024 · Teresa Dorszewski, Albert Kjøller Jacobsen, Lenka Tětková, Lars Kai Hansen

Our findings reveal a block-like structure of high similarity, suggesting two main processing steps and significant redundancy across layers.

Tasks: Knowledge Distillation, Speaker Identification, +2
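
To make the layer-similarity finding above concrete, here is a minimal, hypothetical sketch of how a block structure in a transformer stack could be probed by comparing per-layer activations with linear CKA. The random data, dimensions, and the choice of linear CKA are illustrative assumptions, not the paper's exact analysis.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape (n_samples, dim)."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    self_x = np.linalg.norm(X.T @ X, ord="fro")
    self_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (self_x * self_y)

# Hypothetical stand-in data: layer_outputs[i] holds layer i's activations for
# the same inputs, e.g. extracted from a speech model's transformer stack.
rng = np.random.default_rng(0)
layer_outputs = [rng.normal(size=(200, 64)) for _ in range(6)]

n = len(layer_outputs)
similarity = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        similarity[i, j] = linear_cka(layer_outputs[i], layer_outputs[j])

# Blocks of high similarity along the diagonal would indicate groups of
# layers doing largely redundant processing.
print(np.round(similarity, 2))
```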

Convexity-based Pruning of Speech Representation Models

no code implementations16 Aug 2024 Teresa Dorszewski, Lenka Tětková, Lars Kai Hansen

Recent work has shown that there is significant redundancy in transformer models for NLP and that massive layer pruning is feasible (Sajjad et al., 2023).

Tasks: Keyword Spotting, Self-Supervised Learning, +1
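
The abstract above concerns removing redundant transformer layers from speech representation models. Below is a minimal, hypothetical PyTorch sketch of truncating an encoder stack; the TinyTransformerStack class, layer count, and cut-off point are illustrative assumptions, and the paper's convexity-based selection criterion is not implemented here.

```python
import torch
import torch.nn as nn

class TinyTransformerStack(nn.Module):
    """Minimal stand-in for a speech model's transformer encoder stack."""
    def __init__(self, dim=64, num_layers=12, nhead=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = TinyTransformerStack(num_layers=12)

# Keep only the first k layers, e.g. when later layers are judged redundant
# by some selection criterion (the cut-off here is a placeholder, not the
# paper's convexity score).
keep = 6
model.layers = model.layers[:keep]

x = torch.randn(2, 50, 64)  # (batch, frames, features)
print(model(x).shape)       # torch.Size([2, 50, 64])
```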
