Search Results for author: Matko Bosnjak

Found 7 papers, 2 papers with code

Formalising Concepts as Grounded Abstractions

no code implementations • 13 Jan 2021 • Stephen Clark, Alexander Lerchner, Tamara von Glehn, Olivier Tieleman, Richard Tanburn, Misha Dashevskiy, Matko Bosnjak

The mathematics of partial orders and lattices is a standard tool for modelling conceptual spaces (Ch. 2, Mitchell (1997), Ganter and Obiedkov (2016)); however, there is no formal work that we are aware of which defines a conceptual lattice on top of a representation that is induced using unsupervised deep learning (Goodfellow et al., 2016).

Representation Learning

Neural Variational Inference For Estimating Uncertainty in Knowledge Graph Embeddings

1 code implementation • 12 Jun 2019 • Alexander I. Cowen-Rivers, Pasquale Minervini, Tim Rocktäschel, Matko Bosnjak, Sebastian Riedel, Jun Wang

Recent advances in Neural Variational Inference have enabled a renaissance of latent variable models across a variety of domains involving high-dimensional data.

Knowledge Graph Embeddings • Knowledge Graphs +2

Scalable Neural Theorem Proving on Knowledge Bases and Natural Language

no code implementations • ICLR 2019 • Pasquale Minervini, Matko Bosnjak, Tim Rocktäschel, Edward Grefenstette, Sebastian Riedel

Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering.

Automated Theorem Proving • Link Prediction +2

Towards Neural Theorem Proving at Scale

no code implementations • 21 Jul 2018 • Pasquale Minervini, Matko Bosnjak, Tim Rocktäschel, Sebastian Riedel

Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest.

Automated Theorem Proving • Representation Learning

SCAN: Learning Hierarchical Compositional Visual Concepts

no code implementations • ICLR 2018 • Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P. Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner

SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner.
