no code implementations • 10 Sep 2024 • Teresa Dorszewski, Albert Kjøller Jacobsen, Lenka Tětková, Lars Kai Hansen
Our findings reveal a block-like structure of high similarity, suggesting two main processing steps and significant redundancy of layers.
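Block-like layer similarity of this kind is typically probed with a representational similarity index. Below is a minimal sketch using linear CKA (centered kernel alignment) on the activation matrices of two layers; the function names and array shapes are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices.

    X: (n_samples, d1), Y: (n_samples, d2) -- activations of two layers
    on the same inputs. Returns a similarity in [0, 1]; 1 means the
    representations are identical up to rotation and scaling.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Linear-kernel HSIC numerator and normalizers.
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

Computing this for every pair of layers yields the similarity matrix in which contiguous blocks of high values indicate groups of layers doing closely related processing.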
no code implementations • 10 Sep 2024 • Teresa Dorszewski, Lenka Tětková, Lorenz Linhardt, Lars Kai Hansen
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems.
no code implementations • 16 Aug 2024 • Teresa Dorszewski, Lenka Tětková, Lars Kai Hansen
Recent work has shown that there is significant redundancy in transformer models for NLP and that massive layer pruning is feasible (Sajjad et al., 2023).
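The idea behind layer pruning can be sketched abstractly. Assuming a model represented as a list of layer functions (a simplification of a real transformer stack; all names here are illustrative), pruning amounts to dropping a contiguous block of layers and re-running the forward pass:

```python
def drop_layers(layers, start, stop):
    """Return a pruned copy of the model with layers[start:stop] removed."""
    return layers[:start] + layers[stop:]

def forward(layers, x):
    """Apply the layer functions in sequence."""
    for layer in layers:
        x = layer(x)
    return x

# Toy residual stack: each layer nudges x only slightly, so removing a
# middle block changes the output little -- the intuition behind
# redundancy-based layer pruning.
layers = [lambda x, i=i: x + 0.01 * i for i in range(12)]
pruned = drop_layers(layers, 4, 8)  # remove four middle layers
```

In practice one would prune the blocks whose removal least degrades a validation metric, rather than an arbitrary middle span as in this toy example.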
no code implementations • 14 Jun 2024 • Lenka Tětková, Erik Schou Dreier, Robin Malm, Lars Kai Hansen
In this work, we use grain data, with the goal of detecting diseases and damage.
1 code implementation • 10 Apr 2024 • Lenka Tětková, Teresa Karen Scheidt, Maria Mandrup Fogh, Ellen Marie Gaunby Jørgensen, Finn Årup Nielsen, Lars Kai Hansen
Concept-based explainable AI is promising as a tool to improve the understanding of complex models at the premises of a given user, viz. as a tool for personalized explainability.
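Concept-based explanations generally rest on finding a concept direction in a model's activation space. A minimal sketch (not the authors' specific method) computes a simple concept activation vector as the normalized mean difference between activations of concept examples and random examples; all names and shapes are illustrative assumptions.

```python
import numpy as np

def concept_vector(concept_acts, random_acts):
    """Concept direction from (n, d) activation matrices of some layer."""
    v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def concept_score(acts, v):
    """Project activations onto the concept direction; higher means
    the input expresses the concept more strongly at this layer."""
    return acts @ v
```

Personalization then amounts to letting each user supply their own concept example sets, so the resulting directions reflect the concepts that user cares about.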
no code implementations • 26 May 2023 • Lenka Tětková, Thea Brüsch, Teresa Karen Scheidt, Fabian Martin Mager, Rasmus Ørtoft Aagaard, Jonathan Foldager, Tommy Sonne Alstrøm, Lars Kai Hansen
Gärdenfors' conceptual spaces is a prominent framework for understanding human representations.
1 code implementation • 18 Apr 2023 • Lenka Tětková, Lars Kai Hansen
As the use of deep neural networks continues to grow, understanding their behaviour has become more crucial than ever.