Search Results for author: Kieran A. Murphy

Found 7 papers, 4 papers with code

Machine-learning optimized measurements of chaotic dynamical systems via the information bottleneck

no code implementations · 8 Nov 2023 · Kieran A. Murphy, Dani S. Bassett

Deterministic chaos permits a precise notion of a "perfect measurement" as one that, when obtained repeatedly, captures all of the information created by the system's evolution with minimal redundancy.

Time Series
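
For orientation, a minimal variational information-bottleneck sketch in PyTorch, illustrating the kind of objective this paper describes: a stochastic measurement channel compresses the current state while retaining what is predictive of the next state. The network sizes, the beta weight, and the stand-in trajectory tensors are all assumptions for illustration, not the authors' implementation.

```python
# Minimal variational information bottleneck sketch (illustrative only).
# A stochastic "measurement" of the current state x_t is penalized for the
# information it keeps, while being trained to predict the next state x_{t+1}.
import torch
import torch.nn as nn

class VIBMeasurement(nn.Module):
    def __init__(self, state_dim=3, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, state_dim))

    def forward(self, x_t):
        mu, log_var = self.encoder(x_t).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterize
        # KL(q(z|x) || N(0, I)) upper-bounds the information kept about x_t
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1)
        return self.decoder(z), kl

model = VIBMeasurement()
beta = 1e-2                                       # compression vs. prediction trade-off
x_t, x_next = torch.randn(128, 3), torch.randn(128, 3)  # stand-in trajectory pairs
pred, kl = model(x_t)
loss = ((pred - x_next) ** 2).sum(-1).mean() + beta * kl.mean()
loss.backward()
```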

Intrinsically motivated graph exploration using network theories of human curiosity

1 code implementation · 11 Jul 2023 · Shubhankar P. Patankar, Mathieu Ouellet, Juan Cervino, Alejandro Ribeiro, Kieran A. Murphy, Dani S. Bassett

The theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by nodes visited in the environment.

Recommendation Systems · reinforcement-learning
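
As a rough illustration of a topology-driven intrinsic reward (not the authors' code), the sketch below scores a newly visited node by how it changes a topological feature of the subgraph induced by the visited nodes. The specific feature used here, average clustering, is a placeholder for the curiosity-inspired features studied in the paper.

```python
# Illustrative intrinsic reward from the topology of the visited subgraph.
import networkx as nx

def intrinsic_reward(graph: nx.Graph, visited: set, new_node) -> float:
    """Reward = change in a topological feature when new_node is visited."""
    before = graph.subgraph(visited)
    after = graph.subgraph(visited | {new_node})
    feature_before = nx.average_clustering(before) if len(before) > 2 else 0.0
    feature_after = nx.average_clustering(after) if len(after) > 2 else 0.0
    return feature_after - feature_before

G = nx.karate_club_graph()
visited = {0, 1, 2}
print(intrinsic_reward(G, visited, 3))
```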

Information decomposition in complex systems via machine learning

1 code implementation · 10 Jul 2023 · Kieran A. Murphy, Dani S. Bassett

Guided by the distributed information bottleneck as a learning objective, the information decomposition identifies the variation in the measurements of the system state most relevant to specified macroscale behavior.
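
A hedged sketch of a distributed information-bottleneck objective, assuming one stochastic encoder per measurement channel whose summed KL terms penalize the total information passed to a predictor of the macroscale behavior. The architecture, dimensions, and beta value are placeholders, not the released code.

```python
# Distributed information bottleneck sketch (illustrative only): each measurement
# of the system state gets its own stochastic encoder; the sum of per-channel KL
# terms bounds the information reaching the predictor of macroscale behavior.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    def __init__(self, latent_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                 nn.Linear(32, 2 * latent_dim))

    def forward(self, x):                       # x: (batch, 1), one measurement channel
        mu, log_var = self.net(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1)
        return z, kl

n_features, latent_dim = 5, 4
encoders = nn.ModuleList(FeatureEncoder(latent_dim) for _ in range(n_features))
decoder = nn.Sequential(nn.Linear(n_features * latent_dim, 64), nn.ReLU(),
                        nn.Linear(64, 1))

x = torch.randn(256, n_features)                # measurements of the system state
y = torch.randn(256, 1)                         # macroscale behavior to predict
zs, kls = zip(*(enc(x[:, i:i + 1]) for i, enc in enumerate(encoders)))
pred = decoder(torch.cat(zs, dim=-1))
beta = 1e-3
loss = ((pred - y) ** 2).mean() + beta * torch.stack(kls, dim=-1).sum(-1).mean()
loss.backward()
```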

Interpretability with full complexity by constraining feature information

no code implementations · 30 Nov 2022 · Kieran A. Murphy, Dani S. Bassett

Borrowing from information theory, we use the Distributed Information Bottleneck to find optimal compressions of each feature that maximally preserve information about the output.

feature selection · Interpretable Machine Learning
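
Building on the distributed-IB sketch above, one illustrative interpretability readout converts each feature's KL term into an approximate number of bits used and ranks features by it. This is a hypothetical helper assuming per-feature stochastic encoders like those sketched earlier, not the paper's procedure.

```python
# Hypothetical helper (not the paper's code): approximate bits of information
# each feature's encoder passes downstream, read off from its KL term.
import math
import torch

def bits_per_feature(encoders, x):
    """Return approximate bits used per feature, given per-feature encoders."""
    with torch.no_grad():
        bits = []
        for i, enc in enumerate(encoders):
            _, kl = enc(x[:, i:i + 1])          # kl is in nats, shape (batch,)
            bits.append(kl.mean().item() / math.log(2))
    return bits

# Usage (assuming `encoders` and `x` from the previous sketch):
# for i, b in sorted(enumerate(bits_per_feature(encoders, x)), key=lambda t: -t[1]):
#     print(f"feature {i}: ~{b:.2f} bits")
```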

The Distributed Information Bottleneck reveals the explanatory structure of complex systems

no code implementations · 15 Apr 2022 · Kieran A. Murphy, Dani S. Bassett

The Distributed Information Bottleneck throttles the downstream complexity of interactions between the components of the input, deconstructing a relationship into meaningful approximations found through deep learning without requiring custom-made datasets or neural network architectures.

Learning ABCs: Approximate Bijective Correspondence for isolating factors of variation with weak supervision

1 code implementation · CVPR 2022 · Kieran A. Murphy, Varun Jampani, Srikumar Ramalingam, Ameesh Makadia

We propose a novel algorithm that utilizes a weak form of supervision where the data is partitioned into sets according to certain inactive (common) factors of variation which are invariant across elements of each set.

Data Augmentation · Pose Transfer
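
A loosely sketched cross-set correspondence loss in the spirit of this setup (not the released implementation): when two sets share their inactive factors, soft correspondences A → B → A should return each element to itself, which pushes the embedding to ignore set-specific factors. The temperature, shapes, and random stand-in embeddings below are placeholders.

```python
# Illustrative cycle-correspondence loss between two sets sharing inactive factors.
import torch
import torch.nn.functional as F

def cycle_correspondence_loss(emb_a, emb_b, temperature=0.1):
    """emb_a, emb_b: (n, d) embeddings of two sets that share inactive factors."""
    emb_a, emb_b = F.normalize(emb_a, dim=-1), F.normalize(emb_b, dim=-1)
    sim_ab = emb_a @ emb_b.t() / temperature        # soft correspondence A -> B
    sim_ba = emb_b @ emb_a.t() / temperature        # soft correspondence B -> A
    round_trip = F.softmax(sim_ab, dim=-1) @ F.softmax(sim_ba, dim=-1)
    # An approximately bijective correspondence returns each element of A to itself.
    target = torch.arange(emb_a.size(0))
    return F.nll_loss(torch.log(round_trip + 1e-8), target)

emb_a = torch.randn(16, 32, requires_grad=True)
emb_b = torch.randn(16, 32, requires_grad=True)
loss = cycle_correspondence_loss(emb_a, emb_b)
loss.backward()
```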
