no code implementations • 30 Oct 2024 • Søren Vejlgaard Holm, Lars Kai Hansen, Martin Carsten Nielsen
The language technology moonshot moment of Generative Large Language Models (GLLMs) was not limited to English: these models brought a surge of technological applications, investments and hype to low-resource languages as well.
1 code implementation • 3 Oct 2024 • Gustav Wagner Zakarias, Lars Kai Hansen, Zheng-Hua Tan
BiSSL formulates the pretext and downstream task objectives as the lower- and upper-level objectives in a bilevel optimization problem and serves as an intermediate training stage within the self-supervised learning pipeline.
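As a rough illustration of the bilevel structure (not the BiSSL algorithm itself), the sketch below alternates a lower-level pretext update, tied to the upper-level parameters through a proximity term, with an upper-level downstream update; all names (backbone, pretext_loss, downstream_loss, the loaders) are placeholders.

# Crude first-order sketch of an alternating bilevel training stage; placeholder
# names throughout, and only a loose approximation of the method described above.
import copy
import torch

def bilevel_stage(backbone, pretext_loss, downstream_loss,
                  pretext_loader, downstream_loader,
                  lower_steps=10, lam=1e-2, lr=1e-3):
    lower = copy.deepcopy(backbone)      # lower-level (pretext) parameters
    upper = backbone                     # upper-level (downstream) parameters
    opt_lower = torch.optim.SGD(lower.parameters(), lr=lr)
    opt_upper = torch.optim.SGD(upper.parameters(), lr=lr)

    for x_down, y_down in downstream_loader:
        # Lower level: pretext objective plus a proximity term that keeps the
        # pretext solution close to the current upper-level parameters.
        for _ in range(lower_steps):
            x_pre = next(iter(pretext_loader))
            prox = sum(((pl - pu.detach()) ** 2).sum()
                       for pl, pu in zip(lower.parameters(), upper.parameters()))
            loss = pretext_loss(lower, x_pre) + lam * prox
            opt_lower.zero_grad(); loss.backward(); opt_lower.step()

        # Upper level: downstream objective, here evaluated after simply copying
        # the lower-level solution (a first-order shortcut, not implicit gradients).
        with torch.no_grad():
            for pu, pl in zip(upper.parameters(), lower.parameters()):
                pu.copy_(pl)
        loss = downstream_loss(upper, x_down, y_down)
        opt_upper.zero_grad(); loss.backward(); opt_upper.step()
    return upper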
no code implementations • 10 Sep 2024 • Teresa Dorszewski, Lenka Tětková, Lorenz Linhardt, Lars Kai Hansen
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems.
no code implementations • 10 Sep 2024 • Teresa Dorszewski, Albert Kjøller Jacobsen, Lenka Tětková, Lars Kai Hansen
Our findings reveal a block-like structure of high similarity, suggesting two main processing steps and significant redundancy of layers.
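A minimal sketch of how such a layer-by-layer similarity map can be computed, here using linear CKA on per-layer activation matrices; this illustrates the general recipe rather than the paper's exact measure or pipeline.

# Pairwise layer similarity via linear CKA, assuming a list of activation
# matrices, one (n_samples, n_features) array per layer.
import numpy as np

def linear_cka(X, Y):
    Xc = X - X.mean(0, keepdims=True)
    Yc = Y - Y.mean(0, keepdims=True)
    hsic = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    return hsic / (np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro"))

def layer_similarity_matrix(activations):
    L = len(activations)
    S = np.ones((L, L))
    for i in range(L):
        for j in range(i + 1, L):
            S[i, j] = S[j, i] = linear_cka(activations[i], activations[j])
    return S  # block structure along the diagonal suggests groups of redundant layers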
no code implementations • 16 Aug 2024 • Teresa Dorszewski, Lenka Tětková, Lars Kai Hansen
Recent work has shown that there is significant redundancy in the transformer models for NLP and massive layer pruning is feasible (Sajjad et al., 2023).
1 code implementation • 15 Aug 2024 • Anders Gjølbye, Lina Skerath, William Lehn-Schiøler, Nicolas Langer, Lars Kai Hansen
Electroencephalography (EEG) research typically focuses on tasks with narrowly defined objectives, but recent studies are expanding into the use of unlabeled data within larger models, aiming for a broader range of applications.
no code implementations • 14 Jun 2024 • Lenka Tětková, Erik Schou Dreier, Robin Malm, Lars Kai Hansen
In this work, we use grain data with the goal of detecting diseases and damage.
1 code implementation • 10 Apr 2024 • Lenka Tětková, Teresa Karen Scheidt, Maria Mandrup Fogh, Ellen Marie Gaunby Jørgensen, Finn Årup Nielsen, Lars Kai Hansen
Concept-based explainable AI is promising as a tool to improve the understanding of complex models at the premises of a given user, viz. as a tool for personalized explainability.
1 code implementation • 30 Nov 2023 • Beatrix M. G. Nielsen, Lars Kai Hansen
We find that when hubness is high, we can reduce error rate and hubness using hubness reduction methods.
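As a point of reference, hubness is commonly quantified as the skewness of the k-occurrence distribution; the sketch below computes that score, assuming scikit-learn and SciPy are available (the hubness reduction methods themselves, e.g. mutual proximity, are not shown).

# Hubness score: skewness of how often each point appears among the k nearest
# neighbours of other points; a high positive skew indicates strong hubness.
import numpy as np
from scipy.stats import skew
from sklearn.neighbors import NearestNeighbors

def k_occurrence_skewness(X, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    counts = np.bincount(idx[:, 1:].ravel(), minlength=len(X))  # drop self-neighbour
    return skew(counts)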
1 code implementation • 24 Jul 2023 • Anders Gjølbye, William Lehn-Schiøler, Áshildur Jónsdóttir, Bergdís Arnardóttir, Lars Kai Hansen
Deep learning models are complex due to their size, structure, and inherent randomness in training procedures.
2 code implementations • 5 Jun 2023 • Germans Savcisens, Tina Eliassi-Rad, Lars Kai Hansen, Laust Mortensen, Lau Lilleholt, Anna Rogers, Ingo Zettler, Sune Lehmann
We can also represent human lives in a way that shares this structural similarity to language.
no code implementations • 1 Jun 2023 • Sarthak Yadav, Sergios Theodoridis, Lars Kai Hansen, Zheng-Hua Tan
In this work, we propose a Multi-Window Masked Autoencoder (MW-MAE) fitted with a novel Multi-Window Multi-Head Attention (MW-MHA) module that facilitates the modelling of local-global interactions in every decoder transformer block through attention heads of several distinct local and global windows.
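A stripped-down illustration of per-head windowing (not the MW-MHA module itself): each attention head is restricted to a different local window, with None standing in for a global head.

# Toy attention with one window size per head; shapes and masking only, no
# projections or positional encodings.
import math
import torch

def windowed_attention(q, k, v, window_sizes):
    # q, k, v: (batch, heads, seq_len, dim); window_sizes: one entry per head
    b, h, t, d = q.shape
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)      # (b, h, t, t)
    pos = torch.arange(t)
    dist = (pos[:, None] - pos[None, :]).abs()           # pairwise token distances
    for head, w in enumerate(window_sizes):
        if w is not None:                                 # local head: mask far positions
            scores[:, head] = scores[:, head].masked_fill(dist > w, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v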
no code implementations • 26 May 2023 • Lenka Tětková, Thea Brüsch, Teresa Karen Scheidt, Fabian Martin Mager, Rasmus Ørtoft Aagaard, Jonathan Foldager, Tommy Sonne Alstrøm, Lars Kai Hansen
G{\"a}rdenfors' conceptual spaces is a prominent framework for understanding human representations.
1 code implementation • 18 Apr 2023 • Lenka Tětková, Lars Kai Hansen
As the use of deep neural networks continues to grow, understanding their behaviour has become more crucial than ever.
no code implementations • 14 Jan 2023 • Jonathan Foldager, Mikkel Jordahn, Lars Kai Hansen, Michael Riis Andersen
In this work, we provide an extensive study of the relationship between the BO performance (regret) and uncertainty calibration for popular surrogate models and compare them across both synthetic and real-world experiments.
no code implementations • 25 Sep 2021 • Raluca Alexandra Fetic, Mikkel Jordahn, Lucas Chaves Lima, Rasmus Arpe Fogh Egebæk, Martin Carsten Nielsen, Benjamin Biering, Lars Kai Hansen
We then observe how the cosine similarities decrease as transcription noise increases and conclude that even when automatic speech recognition transcripts are erroneous, it is still possible to obtain high-quality topic embeddings from the transcriptions.
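The underlying measurement is a simple one; the sketch below traces cosine similarity between the topic embedding of a clean transcript and embeddings of progressively noisier ASR transcripts, with embed() standing in for whatever embedding model is used.

# Robustness curve of topic embeddings under transcription noise; embed() is
# an assumed placeholder for the embedding model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def robustness_curve(clean_text, noisy_texts, embed):
    ref = embed(clean_text)
    return [cosine(ref, embed(t)) for t in noisy_texts]  # expected to decay with noise level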
no code implementations • 5 Jul 2021 • Petr Taborsky, Lars Kai Hansen
Instead, we show that good generalization may be instigated by bounded spectral products over layers, leading to a novel geometric regularizer.
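As a loose illustration of controlling spectral products over layers (not the geometric regularizer derived in the paper), one can penalize the log-product of per-layer spectral norms:

# Log of the product of spectral norms of the weight matrices; add
# lambda * spectral_product_penalty(model) to the training loss.
import torch

def spectral_product_penalty(model):
    log_prod = 0.0
    for p in model.parameters():
        if p.dim() == 2:                                  # weight matrices only
            log_prod = log_prod + torch.linalg.matrix_norm(p, ord=2).log()
    return log_prod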
no code implementations • 13 Jul 2020 • Laura Rieger, Lars Kai Hansen
With machine learning models being used for more sensitive applications, we rely on interpretability methods to prove that no discriminating attributes were used for classification.
no code implementations • 16 Jun 2020 • Jeppe Nørregaard, Lars Kai Hansen
In this paper we develop a principled, probabilistic, unified approach to non-standard classification tasks, such as semi-supervised, positive-unlabelled, multi-positive-unlabelled and noisy-label learning.
no code implementations • 26 Apr 2020 • Christoffer Riis, Damian Konrad Kowalczyk, Lars Kai Hansen
In this paper, we revisit the popularity prediction on Instagram.
1 code implementation • 9 Mar 2020 • Laura Rieger, Lars Kai Hansen
The adoption of machine learning in health care hinges on the transparency of the algorithms used, necessitating explanation methods.
1 code implementation • 12 Feb 2020 • Jia Qian, Lars Kai Hansen, Xenofon Fafoutis, Prayag Tiwari, Hari Mohan Pandey
Second, we study a classifier aggregated from a collection of local classifiers trained on data gathered through active sampling at the edge.
no code implementations • 25 Sep 2019 • Laura Rieger, Lars Kai Hansen
Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation.
no code implementations • 25 Jun 2019 • Jia Qian, Sayantan Sengupta, Lars Kai Hansen
All the requirements to realize convergence are integrated on the Fog Platform.
no code implementations • 29 May 2019 • Jeppe Nørregaard, Lars Kai Hansen
We investigate probabilistic decoupling of labels supplied for training, from the underlying classes for prediction.
1 code implementation • 2 May 2019 • Niels Bruun Ipsen, Lars Kai Hansen
It has been shown that learning signal structure in terms of principal components is dependent on the ratio of sample size and dimensionality and that a critical number of observations is needed before learning starts (Biehl and Mietzner, 1993).
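A small numerical illustration of this retarded-learning effect, using a planted one-spike model and standard NumPy SVD; the parameters (D, snr, the alpha grid) are arbitrary choices for the demo, not values from the paper.

# Overlap between the leading empirical principal component and a planted
# signal direction as the sample-to-dimension ratio alpha = N/D grows.
import numpy as np

def leading_pc_overlap(D=200, alphas=(0.25, 0.5, 1.0, 2.0, 4.0), snr=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(D); u /= np.linalg.norm(u)    # planted direction
    out = {}
    for a in alphas:
        N = int(a * D)
        X = rng.standard_normal((N, D)) + snr * rng.standard_normal((N, 1)) * u
        _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
        out[a] = abs(Vt[0] @ u)                           # overlap rises past a critical alpha
    return out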
no code implementations • 1 Mar 2019 • Laura Rieger, Lars Kai Hansen
Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation.
no code implementations • 7 Feb 2018 • Simon Kamronn, Andreas Trier Poulsen, Lars Kai Hansen
Correlated component analysis as proposed by Dmochowski et al. (2012) is a tool for investigating brain process similarity in the responses to multiple views of a given stimulus.
1 code implementation • ICLR 2018 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg
Deep generative models provide a systematic way to learn nonlinear data distributions, through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space.
no code implementations • 2 Oct 2017 • Albert Vilamala, Kristoffer Hougaard Madsen, Lars Kai Hansen
Functional Magnetic Resonance Imaging (fMRI) relies on multi-step data processing pipelines to accurately determine brain activity; among them is the crucial step of spatial smoothing.
no code implementations • 17 Apr 2017 • Rasmus S. Andersen, Anders U. Eliasen, Nicolai Pedersen, Michael Riis Andersen, Sofie Therese Hansen, Lars Kai Hansen
In this work we explore the generality of Edelman et al.'s hypothesis by considering decoding of face recognition.
no code implementations • 13 Oct 2016 • Albert Vilamala, Kristoffer Hougaard Madsen, Lars Kai Hansen
The study of neurocognitive tasks requiring accurate localisation of activity often relies on functional Magnetic Resonance Imaging, a widely adopted technique that makes use of a pipeline of data processing modules, each involving a variety of parameters.
no code implementations • NeurIPS 2016 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg
The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric.
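For context, the density in question can be written explicitly in terms of the Mahalanobis distance, which makes the monotonicity and the role of the metric visible (standard notation, D-dimensional input):

p(\mathbf{x}) = (2\pi)^{-D/2}\,|\boldsymbol{\Sigma}|^{-1/2}\exp\!\Big(-\tfrac{1}{2}\,d^{2}_{\boldsymbol{\Sigma}}(\mathbf{x},\boldsymbol{\mu})\Big),
\qquad d^{2}_{\boldsymbol{\Sigma}}(\mathbf{x},\boldsymbol{\mu}) = (\mathbf{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu}).

Replacing this fixed Mahalanobis distance with a distance induced by a data-dependent Riemannian metric is the natural generalization that this observation suggests.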
no code implementations • 9 Oct 2015 • Søren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John W. Fisher III, Lars Kai Hansen
We then learn class-specific probabilistic generative models of the transformations in a Riemannian submanifold of the Lie group of diffeomorphisms.
no code implementations • 15 Sep 2015 • Michael Riis Andersen, Aki Vehtari, Ole Winther, Lars Kai Hansen
In this work, we address the problem of solving a series of underdetermined linear inverse problems subject to a sparsity constraint.
no code implementations • 19 Aug 2015 • Michael Riis Andersen, Ole Winther, Lars Kai Hansen
We are interested in solving the multiple measurement vector (MMV) problem for instances where the underlying sparsity pattern exhibits spatio-temporal structure, motivated by the electroencephalogram (EEG) source localization problem.
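For context, the MMV problem is conventionally written as follows (standard notation, not necessarily the paper's):

\mathbf{Y} = \mathbf{A}\mathbf{X} + \mathbf{E}, \qquad \mathbf{Y}\in\mathbb{R}^{M\times T},\quad \mathbf{A}\in\mathbb{R}^{M\times N},\quad M \ll N,

where the columns of \mathbf{Y} are the T measurement vectors and \mathbf{X} is row-sparse, i.e. the measurement vectors share a common support. In the EEG setting, the support corresponds to active sources (spatial structure) and the columns to consecutive time samples (temporal structure).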
no code implementations • 27 May 2014 • Rasmus Troelsgård, Bjørn Sand Jensen, Lars Kai Hansen
Calculating similarities between objects defined by many heterogeneous data modalities is an important challenge in many multimedia applications.
no code implementations • 27 Nov 2013 • Bjarne Ørum Fruergaard, Toke Jansen Hansen, Lars Kai Hansen
In this work, we propose to use dimensionality reduction of the user-website interaction graph in order to produce simplified features of users and websites that can be used as predictors of clickthrough rate.
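A minimal sketch of this kind of graph-based feature construction, using truncated SVD of a sparse user-website interaction matrix; function and variable names are illustrative, and the paper's actual decomposition may differ.

# Low-rank features for users and websites from a bipartite interaction graph,
# usable as predictors in a downstream clickthrough-rate model.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def interaction_embeddings(user_idx, site_idx, counts, n_users, n_sites, rank=32):
    G = csr_matrix((counts, (user_idx, site_idx)), shape=(n_users, n_sites))
    U, s, Vt = svds(G.asfptype(), k=rank)                 # truncated SVD of the interaction graph
    user_feats = U * s                                     # (n_users, rank)
    site_feats = Vt.T * s                                  # (n_sites, rank)
    return user_feats, site_feats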
no code implementations • 18 Oct 2013 • Jerónimo Arenas-García, Kaare Brandt Petersen, Gustavo Camps-Valls, Lars Kai Hansen
Feature extraction and dimensionality reduction are important tasks in many fields of science dealing with signal processing and analysis.