no code implementations • 5 Jan 2023 • Luis Sa-Couto, Andreas Wichert
Yet, associative memory can only work with logarithmically sparse representations, which makes it extremely difficult to apply the model to real data.
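The sparse regime referred to here is characteristic of Willshaw-type binary associative memory, where reliable storage requires each pattern to have only about log(n) active units. A minimal, illustrative sketch of such a memory (the parameters are our own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100           # units per pattern
k = 7             # active units per pattern, roughly log2(n): the sparse regime
num_patterns = 20

# Random sparse binary patterns with exactly k active units each.
patterns = np.zeros((num_patterns, n), dtype=np.uint8)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = 1

# Willshaw learning: a binary weight is set once two units are ever co-active.
W = np.zeros((n, n), dtype=np.uint8)
for p in patterns:
    W |= np.outer(p, p)

def retrieve(cue):
    """One-step retrieval: a unit fires iff every active cue unit projects to it."""
    s = W.astype(int) @ cue.astype(int)
    return (s >= int(cue.sum())).astype(np.uint8)

recalled = retrieve(patterns[0])   # auto-association: the pattern recalls itself
```

With this threshold rule every stored unit is always recovered; crosstalk between patterns can only add spurious units, which is why capacity collapses once patterns stop being sparse.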
no code implementations • 25 Nov 2022 • Luis Sa-Couto, Jose Miguel Ramos, Andreas Wichert
We call it the output sharpness, and it is based on the fact that, in reality, boundaries between concepts are generally unsharp.
no code implementations • 18 Nov 2022 • Luis Sa-Couto, Jose Miguel Ramos, Miguel Almeida, Andreas Wichert
The theory of the bias-variance trade-off used to serve as a guide for model selection when applying Machine Learning algorithms.
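The classical guide can be made concrete with a Monte Carlo estimate of bias² and variance at a single test point; the toy target, noise level, and polynomial degrees below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(x)

def bias_variance(degree, n=30, trials=200, x0=1.0, noise=0.3):
    """Monte Carlo estimate of bias^2 and variance of a polynomial fit at x0."""
    preds = []
    for _ in range(trials):
        x = rng.uniform(0.0, np.pi, n)              # fresh training set each trial
        y = true_f(x) + rng.normal(0.0, noise, n)
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x0))
    preds = np.array(preds)
    bias2 = (preds.mean() - true_f(x0)) ** 2
    return bias2, preds.var()

b_low, v_low = bias_variance(degree=1)    # rigid model: high bias, low variance
b_high, v_high = bias_variance(degree=9)  # flexible model: low bias, high variance
```

In the classical picture, expected test error decomposes as bias² + variance + irreducible noise, so model complexity is tuned to balance the two estimated terms.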
no code implementations • 26 Oct 2022 • Jose Miguel Ramos, Luis Sa-Couto, Andreas Wichert
The vast majority of current Machine Learning research relies on algorithms with strong arguments against their biological plausibility, such as backpropagation, shifting the field's focus from understanding its original organic inspiration to a compulsive search for optimal performance.
no code implementations • 20 Jul 2022 • Maria Osório, Luís Sa-Couto, Andreas Wichert
In tasks for which there is a vast amount of labeled data, Deep Networks seem to solve this issue with many layers and a non-Hebbian backpropagation algorithm.
no code implementations • 11 Jul 2022 • Rodrigo Simas, Luis Sa-Couto, Andreas Wichert
Using our recently proposed sparse coding prescription for visual patterns, this model can store and retrieve an impressive amount of real-world data in a fault-tolerant manner.
no code implementations • 30 Apr 2021 • Luis Sa-Couto, Andreas Wichert
For these reasons, it remains a key research question to identify computational principles in the brain that can guide models to learn good representations in an unsupervised manner, representations which can then be used for tasks like classification.
no code implementations • 21 Feb 2020 • Catarina Moreira, Renuka Sindhgatta, Chun Ouyang, Peter Bruza, Andreas Wichert
We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well.
no code implementations • 11 May 2019 • Catarina Moreira, Lauren Fell, Shahram Dehdashti, Peter Bruza, Andreas Wichert
We propose an alternative and unifying framework for decision-making that, by using quantum mechanics, provides more generalised cognitive and decision models with the ability to represent more information than classical models.
no code implementations • 16 Jul 2018 • Catarina Moreira, Andreas Wichert
The general idea is to take advantage of the quantum interference terms produced in the quantum-like Bayesian Network to influence the probabilities used to compute the expected utility of some action.
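The mechanism can be sketched in its simplest two-path form: amplitudes rather than probabilities are summed, and squaring introduces a cosine cross term that perturbs the classical law of total probability. The function and phase below are an illustrative sketch, not the paper's full Bayesian-network model (which also requires normalisation across outcomes):

```python
import numpy as np

def quantum_like_total(p1, p2, theta):
    """Two-path quantum-like probability: classical sum plus an interference term.

    Each path carries amplitude sqrt(p) * exp(i * phase); squaring the summed
    amplitude yields p1 + p2 + 2*sqrt(p1*p2)*cos(theta), where theta is the
    phase difference between the two paths.
    """
    return p1 + p2 + 2.0 * np.sqrt(p1 * p2) * np.cos(theta)

classical = quantum_like_total(0.3, 0.2, np.pi / 2)   # cos = 0: classical law
suppressed = quantum_like_total(0.3, 0.2, np.pi)      # destructive interference
```

At theta = pi/2 the interference term vanishes and the classical total probability is recovered; other phases inflate or suppress it, which is the degree of freedom used to bias the expected utility.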
no code implementations • 2 Oct 2017 • Catarina Moreira, Emmanuel Haven, Sandro Sozzo, Andreas Wichert
In this work, we analyse and model a real life financial loan application belonging to a sample bank in the Netherlands.
no code implementations • 26 Aug 2015 • Catarina Moreira, Andreas Wichert
We analyse a quantum-like Bayesian Network that puts together cause/effect relationships and semantic similarities between events.
no code implementations • 12 Feb 2015 • Andreas Wichert, Catarina Moreira
We investigate exact indexing for high dimensional Lp norms based on the 1-Lipschitz property and projection operators.
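The core trick can be illustrated concretely: any 1-Lipschitz map f satisfies |f(x) − f(q)| ≤ d(x, q), so a precomputed one-dimensional key can prune candidates without ever discarding a true answer, which is what makes the index exact. A sketch for the L2 norm using a unit-vector projection (the data, projection direction, and query radius are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 64))

# Projection onto a unit vector is 1-Lipschitz for the L2 norm:
# |x.u - q.u| <= ||x - q||_2, so the key difference lower-bounds the distance.
u = np.ones(64) / np.sqrt(64.0)
keys = data @ u                      # one scalar key per point, built offline

def range_query(q, r):
    """Exact range search: filter with the 1-Lipschitz lower bound, then verify."""
    candidates = np.flatnonzero(np.abs(keys - q @ u) <= r)  # cannot lose true hits
    dists = np.linalg.norm(data[candidates] - q, axis=1)
    return candidates[dists <= r]

q = rng.normal(size=64)
r = 9.0
exact = np.flatnonzero(np.linalg.norm(data - q, axis=1) <= r)
fast = range_query(q, r)
```

The filter step touches only the scalar keys; full distances are computed for the surviving candidates alone, yet the result provably equals the brute-force answer.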
no code implementations • 6 Feb 2015 • Luís Tarrataca, Andreas Wichert
The production system is a theoretical model of computation, relevant to the field of artificial intelligence, that allows for problem-solving procedures such as hierarchical tree search.
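In its classical form, a production system pairs a set of condition-action rules with a control strategy that searches the tree of rule applications. A toy sketch (the rules and goal are our own, chosen only to show breadth-first search over productions):

```python
from collections import deque

# A minimal production system: productions map a state to successor states,
# and the control strategy is breadth-first tree search for a goal state.
# Toy problem: reach a target integer from 1 via n -> n + 1 and n -> 2 * n.
RULES = [lambda n: n + 1, lambda n: 2 * n]

def solve(start, goal):
    """Return the sequence of states from start to goal found by BFS."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for rule in RULES:
            nxt = rule(state)
            if nxt <= goal and nxt not in seen:   # prune states past the goal
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

path = solve(1, 10)
```

Breadth-first control guarantees a shortest rule sequence; the paper's contribution concerns how such search procedures map onto a quantum model of computation.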
no code implementations • 30 Sep 2014 • Catarina Moreira, Andreas Wichert
This means that probabilistic graphical models based on classical probability theory are too limited to fully simulate and explain various aspects of human decision making.
no code implementations • 12 Jun 2013 • Catarina Moreira, Andreas Wichert
To deal with these conflicts, we applied the Dempster-Shafer theory of evidence combined with Shannon's entropy to fuse this information and produce a more accurate and reliable final ranking list.
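The fusion step rests on Dempster's rule of combination; the Shannon-entropy weighting the abstract mentions is omitted here. A minimal sketch over mass functions keyed by frozensets, with two hypothetical ranked sources:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass assigned to incompatible sets
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalise by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical sources expressing belief over candidate documents {d1, d2}:
m1 = {frozenset({"d1"}): 0.6, frozenset({"d1", "d2"}): 0.4}
m2 = {frozenset({"d2"}): 0.5, frozenset({"d1", "d2"}): 0.5}
fused = dempster_combine(m1, m2)
```

Conflicting mass (here 0.3, from the disjoint singletons) is discarded and the remainder renormalised, so the fused beliefs reflect only the evidence the two sources can agree on.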