Search Results for author: Mateja Jamnik

Found 18 papers, 6 papers with code

Do Concept Bottleneck Models Learn as Intended?

no code implementations · 10 May 2021 · Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller

Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.
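The two-stage mapping described above (raw inputs → concepts → targets) can be sketched in a few lines. This is a minimal illustrative sketch using NumPy with randomly initialised weights, not the paper's implementation; the dimensions and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical dimensions: 8 raw features, 3 concepts, 1 target.
W_concept = rng.normal(size=(8, 3))  # raw input -> concept scores
W_target = rng.normal(size=(3, 1))   # concepts -> target

def concept_bottleneck_forward(x):
    # Stage 1: predict human-interpretable concepts from the raw input.
    concepts = sigmoid(x @ W_concept)
    # Stage 2: predict the target from the concepts alone, so every
    # prediction is mediated by the interpretable bottleneck.
    target = sigmoid(concepts @ W_target)
    return concepts, target

x = rng.normal(size=(2, 8))        # two example inputs
c, y = concept_bottleneck_forward(x)
```

Because the target depends only on `c`, one can inspect or intervene on the concept predictions to probe what the model has learned.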

Failing Conceptually: Concept-Based Explanations of Dataset Shift

1 code implementation · 18 Apr 2021 · Maleakhi A. Wijaya, Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik

Using two case studies (dSprites and 3dshapes), we demonstrate how CBSD can accurately detect underlying concepts that are affected by shifts and achieve higher detection accuracy compared to state-of-the-art shift detection methods.

Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches

1 code implementation · 14 Apr 2021 · Dmitry Kazhdan, Botty Dimanov, Helena Andres Terre, Mateja Jamnik, Pietro Liò, Adrian Weller

Concept-based explanations have emerged as a popular way of extracting human-interpretable representations from deep discriminative models.

Improving Interpretability in Medical Imaging Diagnosis using Adversarial Training

1 code implementation · 2 Dec 2020 · Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik, Adrian Weller

We investigate the influence of adversarial training on the interpretability of convolutional neural networks (CNNs), specifically applied to diagnosing skin cancer.

Using ontology embeddings for structural inductive bias in gene expression data analysis

no code implementations · 22 Nov 2020 · Maja Trębacz, Zohreh Shams, Mateja Jamnik, Paul Scherer, Nikola Simidjievski, Helena Andres Terre, Pietro Liò

Stratifying cancer patients based on their gene expression levels allows improving diagnosis, survival analysis and treatment planning.

Survival Analysis

Pairwise Relations Discriminator for Unsupervised Raven's Progressive Matrices

1 code implementation · 2 Nov 2020 · Nicholas Quek Wei Kiat, Duo Wang, Mateja Jamnik

PRD reframes the RPM problem as a relation comparison task, which can be solved without requiring labelled RPM data.

Now You See Me (CME): Concept-based Model Extraction

1 code implementation · 25 Oct 2020 · Dmitry Kazhdan, Botty Dimanov, Mateja Jamnik, Pietro Liò, Adrian Weller

Deep Neural Networks (DNNs) have achieved remarkable performance on a range of tasks.

Model Extraction

Incorporating network based protein complex discovery into automated model construction

no code implementations · 29 Sep 2020 · Paul Scherer, Maja Trębacz, Nikola Simidjievski, Zohreh Shams, Helena Andres Terre, Pietro Liò, Mateja Jamnik

We propose a method for gene expression based analysis of cancer phenotypes incorporating network biology knowledge through unsupervised construction of computational graphs.

Learned Low Precision Graph Neural Networks

no code implementations · 19 Sep 2020 · Yiren Zhao, Duo Wang, Daniel Bates, Robert Mullins, Mateja Jamnik, Pietro Liò

LPGNAS learns the optimal architecture coupled with the best quantisation strategy for different components in the GNN automatically using back-propagation in a single search round.

Abstract Diagrammatic Reasoning with Multiplex Graph Networks

no code implementations · ICLR 2020 · Duo Wang, Mateja Jamnik, Pietro Liò

We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM).

Visual Reasoning

Extrapolatable Relational Reasoning With Comparators in Low-Dimensional Manifolds

no code implementations · 15 Jun 2020 · Duo Wang, Mateja Jamnik, Pietro Liò

We show that neural nets with this inductive bias achieve considerably better out-of-distribution (o.o.d.) generalisation performance across a range of relational reasoning tasks.

Relational Reasoning

Probabilistic Dual Network Architecture Search on Graphs

no code implementations · 21 Mar 2020 · Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Liò, Mateja Jamnik

We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs).

Structural Inductive Biases in Emergent Communication

no code implementations · 4 Feb 2020 · Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden, Christopher Pal

In order to communicate, humans flatten a complex representation of ideas and their attributes into a single word or a sentence.

Representation Learning

Towards Graph Representation Learning in Emergent Communication

no code implementations · 24 Jan 2020 · Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, Mateja Jamnik, Sean B. Holden

Recent findings in neuroscience suggest that the human brain represents information in a geometric structure (for instance, through conceptual spaces).

Graph Representation Learning

Decoupling feature propagation from the design of graph auto-encoders

no code implementations · 18 Oct 2019 · Paul Scherer, Helena Andres-Terre, Pietro Liò, Mateja Jamnik

We present L-GAE and L-VGAE, two instances of the variational graph auto-encoding (VGAE) family, based on separating the feature propagation operations typically found in graph convolution layers from the learning step: propagation is reduced to a single linear matrix computation applied to the input before it enters a standard auto-encoder architecture.

Graph Learning · Graph Representation Learning +1
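The decoupling idea described in the abstract (feature propagation as a one-off linear precomputation) can be sketched as below. This is a minimal sketch assuming GCN-style symmetric normalisation of the adjacency matrix; the function name and the toy graph are illustrative, not taken from the paper's code.

```python
import numpy as np

def propagate_features(adj, X, k=2):
    # Add self-loops and symmetrically normalise the adjacency
    # (GCN-style), then apply it k times to the node features.
    # Done once, up front, this replaces in-network graph convolutions.
    A = adj + np.eye(adj.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt
    X_prop = X.copy()
    for _ in range(k):
        X_prop = A_hat @ X_prop
    return X_prop

# Toy path graph on 3 nodes with identity (one-hot) features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
X_prop = propagate_features(adj, np.eye(3), k=2)
```

The propagated features `X_prop` would then be fed to a standard (variational) auto-encoder, which no longer needs any graph-specific layers.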

Unsupervised and interpretable scene discovery with Discrete-Attend-Infer-Repeat

no code implementations · 14 Mar 2019 · Duo Wang, Mateja Jamnik, Pietro Liò

In this work we present Discrete Attend Infer Repeat (Discrete-AIR), a Recurrent Auto-Encoder with structured latent distributions containing discrete categorical distributions, continuous attribute distributions, and factorised spatial attention.
